OpenOffice.org
OpenOffice.org (OOo), commonly known as OpenOffice, is a discontinued open-source office suite. It was an open-sourced version of the earlier StarOffice, which Sun Microsystems acquired in 1999 for internal use. Sun open-sourced the OpenOffice suite in July 2000 as a competitor to Microsoft Office, releasing version 1.0 on 1 May 2002.
OpenOffice included a word processor (Writer), a spreadsheet (Calc), a presentation application (Impress), a drawing application (Draw), a formula editor (Math), and a database management application (Base). Its default file format was the OpenDocument Format (ODF), an ISO/IEC standard, which originated with OpenOffice.org. It could also read a wide variety of other file formats, with particular attention to those from Microsoft Office. OpenOffice.org was primarily developed for Linux, Microsoft Windows and Solaris, and later for OS X, with ports to other operating systems. It was distributed under the GNU Lesser General Public License version 3 (LGPL); early versions were also available under the Sun Industry Standards Source License (SISSL).
In 2011, Oracle Corporation, the then-owner of Sun, announced that it would no longer offer a commercial version of the suite and donated the project to the Apache Software Foundation. Apache renamed the software Apache OpenOffice. Other active successor projects include LibreOffice (the most actively developed) and NeoOffice (commercial, and available only for macOS).
History
OpenOffice.org originated as StarOffice, a proprietary office suite developed by German company Star Division from 1985 on. In August 1999, Star Division was acquired by Sun Microsystems for US$59.5 million, as it was supposedly cheaper than licensing Microsoft Office for 42,000 staff.
On 19 July 2000 at OSCON, Sun Microsystems announced it would make the source code of StarOffice available for download with the intention of building an open-source development community around the software and of providing a free and open alternative to Microsoft Office. The new project was known as OpenOffice.org, and the code was released as open source on 13 October 2000. The first public preview release was Milestone Build 638c, released in October 2001 (which quickly achieved 1 million downloads); the final release of OpenOffice.org 1.0 was on 1 May 2002.
OpenOffice.org became the standard office suite on many Linux distributions and spawned many derivative versions. It quickly became noteworthy competition to Microsoft Office, achieving 14% penetration in the large enterprise market by 2004.
The OpenOffice.org XML file format – XML in a ZIP archive, easily machine-processable – was intended by Sun to become a standard interchange format for office documents, to replace the different binary formats for each application that had been usual until then. Sun submitted the format to the Organization for the Advancement of Structured Information Standards (OASIS) in 2002 and it was adapted to form the OpenDocument standard in 2005, which was ratified as ISO/IEC 26300 in 2006. It was made OpenOffice.org's native format from version 2 on. Many governments and other organisations adopted OpenDocument, particularly given that a free implementation of it was readily available.
Development of OpenOffice.org was sponsored primarily by Sun Microsystems, which used the code as the basis for subsequent versions of StarOffice. Developers who wished to contribute code were required to sign a Contributor Agreement granting joint ownership of any contributions to Sun (and then Oracle), in support of the StarOffice business model. This was controversial for many years. An alternative Public Documentation Licence (PDL) was also offered for documentation not intended for inclusion or integration into the project code base.
After acquiring Sun in January 2010, Oracle Corporation continued developing OpenOffice.org and StarOffice, which it renamed Oracle Open Office, though with a reduction in assigned developers. Oracle's lack of activity on, or visible commitment to, OpenOffice.org had also been noted by industry observers. In September 2010, the majority of outside OpenOffice.org developers left the project, due to concerns over Sun's and then Oracle's management of the project and Oracle's handling of its open-source portfolio in general, to form The Document Foundation (TDF). TDF released the fork LibreOffice in January 2011, to which most Linux distributions soon moved. In April 2011, Oracle stopped development of OpenOffice.org and fired the remaining Star Division development team. Its reasons for doing so were not disclosed; some speculated that it was due to the loss of mindshare as much of the community moved to LibreOffice, while others suggested it was a commercial decision.
In June 2011, Oracle contributed the trademarks to the Apache Software Foundation. It also contributed Oracle-owned code to Apache for relicensing under the Apache License, at the suggestion of IBM (to whom Oracle had contractual obligations concerning the code), as IBM did not want the code put under a copyleft license. This code drop formed the basis for the Apache OpenOffice project.
Governance
During Sun's sponsorship, the OpenOffice.org project was governed by the Community Council, comprising OpenOffice.org community members. The Community Council suggested project goals and coordinated with producers of derivatives on long-term development planning issues.
Both Sun and Oracle are claimed to have made decisions without consulting the Council, or in contravention of its recommendations, leading to the majority of outside developers leaving for LibreOffice. In October 2010, Oracle demanded that all Council members involved with the Document Foundation step down, leaving the Community Council composed only of Oracle employees.
Naming
The project and software were informally referred to as OpenOffice from the Sun release onward, but because that term had been a trademark held by Open Office Automatisering in Benelux since 1999, OpenOffice.org was its formal name.
Due to a similar trademark issue (a Rio de Janeiro company that owned that trademark in Brazil), the Brazilian Portuguese version of the suite was distributed under the name BrOffice.org from 2004, with BrOffice.Org being the name of the associated local nonprofit from 2006. (BrOffice.org moved to LibreOffice in December 2010.)
Features
OpenOffice.org 1.0 was launched under the following mission statement: "To create, as a community, the leading international office suite that will run on all major platforms and provide access to all functionality and data through open-component based APIs and an XML-based file format."
Components
The suite contained no personal information manager, email client or calendar application analogous to Microsoft Outlook, despite one having been present in StarOffice 5.2. Such functionality was frequently requested. The OpenOffice.org Groupware project, intended to replace Outlook and Microsoft Exchange Server, spun off in 2003 as OpenGroupware.org, which is now SOGo. The project considered bundling Mozilla Thunderbird and Mozilla Lightning for OpenOffice.org 3.0.
Supported operating systems
The last version, 3.4 Beta 1, was available in IA-32 versions for Windows 2000 Service Pack 2 or later, Linux (IA-32 and x64), Solaris and OS X 10.4 or later, as well as in a SPARC version for Solaris.
The latest versions of OpenOffice.org on other operating systems were:
IRIX (MIPS IV): v1.0.3
Linux 2.2: v2.x
Linux 2.4: v3.3.x
Mac OS X v10.2: v1.1.2
Mac OS X v10.3: v2.1
Mac OS X v10.4 – Mac OS X v10.6: v4.0
Windows 95: v1.1.5
Windows NT 4.0 SP6: v1.1.x
Windows 98 and Windows ME: v2.4.3
Windows 2000 Service Pack 2 or later: v3.3.x
Solaris 7: v1.0.x
Solaris 8, Solaris 9: v2.x
Solaris 10: v3.4 Beta 1
Fonts
OpenOffice.org included OpenSymbol, DejaVu, the Liberation fonts (from 2.4) and the Gentium fonts (from 3.2). Versions up to 2.3 included the Bitstream Vera fonts. OpenOffice.org also used the default fonts of the running operating system.
Fontwork was a feature that allowed users to create stylized text with special effects differing from ordinary text, with gradient colour fills, shaping, letter height, and character spacing. It was similar to the WordArt feature of Microsoft Word. When OpenOffice.org saved documents in a Microsoft Office file format, all Fontwork was converted into WordArt.
Extensions
From version 2.0.4, OpenOffice.org supported third-party extensions. As of April 2011, the OpenOffice Extension Repository listed more than 650 extensions. Another list was maintained by the Free Software Foundation.
OpenOffice Basic
OpenOffice.org included OpenOffice Basic, a programming language similar to Microsoft Visual Basic for Applications (VBA). OpenOffice Basic was available in Writer, Calc and Base. OpenOffice.org also had some Microsoft VBA macro support.
Connectivity
OpenOffice.org could interact with databases (local or remote) using ODBC (Open Database Connectivity), JDBC (Java Database Connectivity) or SDBC (StarOffice Database Connectivity).
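The ODBC path can be illustrated with a short Python sketch. This is only an illustration of the ODBC model, not code shipped with OpenOffice.org; it uses the third-party pyodbc library, and the data source name, credentials and table are hypothetical.

import pyodbc  # third-party ODBC bindings, used here purely for illustration

# Hypothetical data source registered with the ODBC driver manager.
connection = pyodbc.connect("DSN=invoices;UID=report;PWD=secret")
cursor = connection.cursor()
cursor.execute("SELECT COUNT(*) FROM orders")  # hypothetical table
print(cursor.fetchone()[0])
connection.close()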
File formats
From version 2.0 onward, OpenOffice.org used ISO/IEC 26300:2006 OpenDocument as its native format. Versions 2.0–2.3.0 defaulted to the ODF 1.0 file format; versions 2.3.1–2.4.3 defaulted to ODF 1.1; versions from 3.0 onward defaulted to ODF 1.2.
OpenOffice.org 1 used OpenOffice.org XML as its native format. This was contributed to OASIS and OpenDocument was developed from it.
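Both OpenDocument and the earlier OpenOffice.org XML format package XML streams inside an ordinary ZIP archive, so the documents are easy to process with standard tools. A minimal Python sketch of this structure follows; the file name is hypothetical, while content.xml is the stream that holds the document body in an ODF package.

import zipfile
import xml.etree.ElementTree as ET

# Open a hypothetical Writer document and parse its content stream.
with zipfile.ZipFile("example.odt") as package:
    with package.open("content.xml") as stream:  # document body of the package
        root = ET.parse(stream).getroot()
print(root.tag)  # the namespaced root element of the document content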
OpenOffice.org also claimed support for a wide range of other formats, most notably those of Microsoft Office.
Development
OpenOffice.org converted all external formats to and from an internal XML representation.
The OpenOffice.org API was based on a component technology known as Universal Network Objects (UNO). It consisted of a wide range of interfaces defined in a CORBA-like interface description language.
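UNO could be driven from several languages, including Python through the PyUNO bridge. The following minimal sketch shows the widely documented connection boilerplate; it assumes a locally running office instance started with the -accept socket option on port 2002, and is an illustration rather than the only way to use the API.

import uno

# Assumes an office instance started with something like:
#   soffice "-accept=socket,host=localhost,port=2002;urp;"
local_ctx = uno.getComponentContext()
resolver = local_ctx.ServiceManager.createInstanceWithContext(
    "com.sun.star.bridge.UnoUrlResolver", local_ctx)
ctx = resolver.resolve(
    "uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext")
desktop = ctx.ServiceManager.createInstanceWithContext(
    "com.sun.star.frame.Desktop", ctx)
# Open a new, empty Writer document and put some text into it.
doc = desktop.loadComponentFromURL("private:factory/swriter", "_blank", 0, ())
doc.getText().setString("Hello from UNO")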
Native desktop integration
OpenOffice.org 1.0 was criticized for not having the look and feel of applications developed natively for the platforms on which it ran. Starting with version 2.0, OpenOffice.org used the native widget toolkits, icons, and font-rendering libraries on GNOME, KDE and Windows.
The issue had been particularly pronounced on Mac OS X. Early versions of OpenOffice.org required the installation of X11.app or XDarwin (though the NeoOffice port supplied a native interface). Versions since 3.0 ran natively using Apple's Aqua GUI.
Use of Java
Although originally written in C++, OpenOffice.org became increasingly reliant on the Java Runtime Environment, even including a bundled JVM. OpenOffice.org was criticized by the Free Software Foundation for its increasing dependency on Java, which was not free software.
The issue came to the fore in May 2005, when Richard Stallman appeared to call for a fork of the application in a posting on the Free Software Foundation website. OpenOffice.org adopted a development guideline that future versions would run on free implementations of Java, and it fixed the issues that had previously prevented OpenOffice.org 2.0 from using free-software Java implementations.
On 13 November 2006, Sun committed to releasing Java under the GNU General Public License and had released a free software Java, OpenJDK, by May 2007.
Security
In 2006, Lt. Col. Eric Filiol of the Laboratoire de Virologie et de Cryptologie de l'ESAT demonstrated security weaknesses, in particular within macros. Also in 2006, Kaspersky Lab demonstrated a proof-of-concept virus, "Stardust", for OpenOffice.org. This showed that OpenOffice.org viruses were possible, although no virus was known "in the wild".
As of October 2011, Secunia reported no known unpatched security flaws for the software. A vulnerability in the inherited OpenOffice.org codebase was found and fixed in LibreOffice in October 2011 and Apache OpenOffice in May 2012.
Version history
OpenOffice.org 1
The preview, Milestone 638c, was released October 2001. OpenOffice.org 1.0 was released under both the LGPL and the SISSL for Windows, Linux and Solaris on 1 May 2002. The version for Mac OS X (with X11 interface) was released on 23 June 2003.
OpenOffice.org 1.1 introduced one-click export to PDF, export of presentations to Flash (.SWF), and macro recording. It also allowed third-party add-ons.
OpenOffice.org was used in 2005 by The Guardian to illustrate what it saw as the limitations of open-source software.
OpenOffice.org 2
Work on version 2.0 began in early 2003 with the following goals (the "Q Product Concept"): better interoperability with Microsoft Office; improved speed and lower memory usage; greater scripting capabilities; better integration, particularly with GNOME; a more usable database; digital signatures; and improved usability. It would also be the first version to default to OpenDocument. Sun released the first beta version on 4 March 2005.
On 2 September 2005, Sun announced that it was retiring SISSL to reduce license proliferation, though some press analysts felt it was so that IBM could not reuse OpenOffice.org code without contributing back. Versions after 2.0 beta 2 would use only the LGPL.
On 20 October 2005, OpenOffice.org 2.0 was released. Version 2.0.1 was released eight weeks later, fixing minor bugs and introducing new features. With the 2.0.3 release, OpenOffice.org changed its release cycle from 18 months to three months.
The OpenOffice.org 2 series attracted considerable press attention. A PC Pro review awarded it 6 stars out of 6 and stated: "Our pick of the low-cost office suites has had a much-needed overhaul, and now battles Microsoft in terms of features, not just price." Federal Computer Week listed OpenOffice.org as one of the "5 stars of open-source products", noting in particular the importance of OpenDocument. ComputerWorld reported that for large government departments, migration to OpenOffice.org 2.0 cost one tenth of the price of upgrading to Microsoft Office 2007.
OpenOffice.org 3
On 13 October 2008, version 3.0 was released, featuring the ability to import (though not export) Office Open XML documents, support for ODF 1.2, improved VBA macros, and a native interface port for OS X. It also introduced the new Start Center and upgraded to LGPL version 3 as its license.
Version 3.2 included support for PostScript-based OpenType fonts. It warned users when ODF 1.2 Extended features had been used. An improvement to the document integrity check determined if an ODF document conformed to the ODF specification and offered a repair if necessary. Calc and Writer both reduced "cold start" time by 46% compared to version 3.0. 3.2.1 was the first Oracle release.
Version 3.3, the last Oracle version, was released in January 2011. New features included an updated print form, a FindBar and interface improvements for Impress. The commercial version, Oracle Open Office 3.3 (StarOffice renamed), based on the beta, was released on 15 December 2010, as was the single release of Oracle Cloud Office (a proprietary product from an unrelated codebase).
OpenOffice.org 3.4 Beta 1
A beta version of OpenOffice.org 3.4 was released on 12 April 2011, including new SVG import, improved ODF 1.2 support, and new spreadsheet functionality.
Before the final version of OpenOffice.org 3.4 could be released, Oracle cancelled its sponsorship of development and fired the remaining Star Division development team.
Market share
Problems arise in estimating the market share of OpenOffice.org because it could be freely distributed via download sites (including mirror sites), peer-to-peer networks, CDs, Linux distributions and so forth. The project tried to capture key adoption data in a market-share analysis, listing known distribution totals, known deployments and conversions and analyst statements and surveys.
According to Valve, as of July 2010, 14.63% of Steam users had OpenOffice.org installed on their machines.
A market-share analysis conducted by a web analytics service in 2010, based on over 200,000 Internet users, showed a wide range of adoption in different countries: 0.2% in China, 9% in the US and the UK and over 20% in Poland, the Czech Republic, and Germany.
As of August 2007, Microsoft Office retained 95% of the general market as measured by revenue, but OpenOffice.org and StarOffice had secured 15–20% of the business market as of 2004. A 2010 University of Colorado at Boulder study reported that OpenOffice.org had reached a point where it had an "irreversible" installed user base and that it would continue to grow.
The project claimed more than 98 million downloads as of September 2007, and 300 million in total by the release of version 3.2 in February 2010. The project claimed over one hundred million downloads for the OpenOffice.org 3 series within a year of release.
Notable users
Large-scale users of OpenOffice.org included Singapore's Ministry of Defence, and Banco do Brasil. OpenOffice.org was the official office suite for the French Gendarmerie.
In India, several government organizations that used Linux, such as ESIC, IIT Bombay, the National Bank for Agriculture and Rural Development, the Supreme Court of India, ICICI Bank, and the Allahabad High Court, relied completely on OpenOffice.org for their administration.
In Japan, conversions from Microsoft Office to OpenOffice.org included many municipal offices: Sumoto, Hyōgo in 2004; Ninomiya, Tochigi in 2006; Aizuwakamatsu, Fukushima in 2008 (and to LibreOffice as of 2012); Shikokuchūō, Ehime in 2009; Minoh, Osaka in 2009; Toyokawa, Aichi, Fukagawa, Hokkaido and Katano, Osaka in 2010; and Ryūgasaki, Ibaraki in 2011. Corporate conversions included Assist in 2007 (and to LibreOffice on Ubuntu in 2011), Sumitomo Electric Industries in 2008 (and to LibreOffice in 2012), Toho Co., Ltd. in 2009 and Shinsei Financial Co., Ltd. in 2010. Assist also provided support services for OpenOffice.org.
Retail
In July 2007, Everex, a division of First International Computer and the 9th-largest PC supplier in the U.S., began shipping systems preloaded with OpenOffice.org 2.2 into Wal-Mart, Kmart and Sam's Club outlets in North America.
Forks and derivative software
A number of open source and proprietary products derive at least some code from OpenOffice.org, including AndrOpen Office, Apache OpenOffice, ChinaOffice, Co-Create Office, EuroOffice 2005, Go-oo, KaiOffice, IBM Lotus Symphony, IBM Workplace, Jambo OpenOffice (the first office suite in Swahili), LibreOffice, MagyarOffice, MultiMedia Office, MYOffice 2007, NeoOffice, NextOffice, OfficeOne, OfficeTLE, OOo4Kids, OpenOfficePL, OpenOffice.org Portable, OpenOfficeT7, OpenOffice.ux.pl, OxOffice, OxygenOffice Professional, Pladao Office, PlusOffice Mac, RedOffice, RomanianOffice, StarOffice/Oracle Open Office, SunShine Office, ThizOffice, UP Office, White Label Office, WPS Office Storm (the 2004 edition of Kingsoft Office) and 602Office.
The OpenOffice.org website also listed a large variety of complementary products, including groupware systems.
Major derivatives include:
Active
Apache OpenOffice
In June 2011, Oracle contributed the OpenOffice.org code and trademarks to the Apache Software Foundation. The developer pool for the Apache project was proposed to be seeded by IBM employees, Linux distribution companies and public-sector agencies. IBM employees did the majority of the development, including hiring ex-Star Division developers. The Apache project removed or replaced as much as possible of the code under licenses unacceptable to Apache, including fonts, from OpenOffice.org 3.4 beta 1, and released version 3.4.0 in May 2012.
The codebase for IBM's Lotus Symphony was donated to the Apache Software Foundation in 2012 and merged for Apache OpenOffice 4.0, and Symphony was deprecated in favour of Apache OpenOffice.
While the project considers itself the unbroken continuation of OpenOffice.org, others regard it as a fork, or at the least a separate project.
In October 2014, Bruce Byfield, writing for Linux Magazine, said the project had "all but stalled [possibly] due to IBM's withdrawal from the project." At that time the project had no release manager, and it itself reported a lack of volunteer involvement and code contributions. After ongoing problems with unfixed security vulnerabilities from 2015 onward, in September 2016 the project started discussions on possibly retiring AOO.
Collabora Online
Collabora Online has LibreOffice at its core and can be integrated into any web application. It enables collaborative real-time editing, with applications for word-processing documents, spreadsheets, presentations, drawing and vector graphics. It is developed by Collabora Productivity, a division of Collabora and a commercial partner of LibreOffice's parent organisation, The Document Foundation (TDF); the majority of LibreOffice software development is done by TDF's commercial partners Collabora, Red Hat, CIB, and Allotropia.
LibreOffice
Sun had stated in the original OpenOffice.org announcement in 2000 that the project would be run by a neutral foundation, and put forward a more detailed proposal in 2001. There were many calls to put this into effect over the ensuing years. On 28 September 2010, in frustration at years of perceived neglect of the codebase and community by Sun and then Oracle, members of the OpenOffice.org community announced a non-profit called The Document Foundation and a fork of OpenOffice.org named LibreOffice. Go-oo improvements were merged, and that project was retired in favour of LibreOffice. The goal was to produce a vendor-independent office suite with ODF support and without any copyright assignment requirements.
Oracle was invited to become a member of the Document Foundation and was asked to donate the OpenOffice.org brand. Oracle instead demanded that all members of the OpenOffice.org Community Council involved with the Document Foundation step down, leaving the Council composed only of Oracle employees.
Most Linux distributions promptly replaced OpenOffice.org with LibreOffice; Oracle Linux 6 also features LibreOffice rather than OpenOffice.org or Apache OpenOffice. The project rapidly accumulated developers, development effort and new features, the majority of outside OpenOffice.org developers having moved to LibreOffice. In March 2015, an LWN.net development comparison of LibreOffice with Apache OpenOffice concluded that "LibreOffice has won the battle for developer participation".
NeoOffice
NeoOffice, an independent commercial port for Macintosh that tracked the main line of development, offered a native OS X Aqua user interface before OpenOffice.org did. Later versions are derived from Go-oo, rather than directly from OpenOffice.org. All versions from NeoOffice 3.1.1 to NeoOffice 2015 were based on OpenOffice.org 3.1.1, though later versions included stability fixes from LibreOffice and Apache OpenOffice. NeoOffice 2017 and later versions are fully based on LibreOffice.
Discontinued
Go-oo
The ooo-build patch set was started at Ximian in 2002, because Sun was slow to accept outside work on OpenOffice.org, even from corporate partners, and to make the build process easier on Linux. It tracked the main line of development and was not intended to constitute a fork. Most Linux distributions used, and worked together on, ooo-build.
Sun's contributions to OpenOffice.org had been declining for a number of years and some developers were unwilling to assign copyright in their work to Sun, particularly given the deal between Sun and IBM to license the code outside the LGPL. On 2 October 2007, Novell announced that ooo-build would be available as a software package called Go-oo, not merely a patch set. (The go-oo.org domain name had been in use by ooo-build as early as 2005.) Sun reacted negatively, with Simon Phipps of Sun terming it "a hostile and competitive fork". Many free software advocates worried that Go-oo was a Novell effort to incorporate Microsoft technologies, such as Office Open XML, that might be vulnerable to patent claims. However, the office suite branded "OpenOffice.org" in most Linux distributions, having previously been ooo-build, soon in fact became Go-oo.
Go-oo also encouraged outside contributions, with rules similar to those later adopted for LibreOffice. When LibreOffice forked, Go-oo was deprecated in favour of that project.
OpenOffice Novell edition was a supported version of Go-oo.
IBM Lotus Symphony
The Workplace Managed Client in IBM Workplace 2.6 (23 January 2006) incorporated code from OpenOffice.org 1.1.4, the last version under the SISSL. This code was broken out into a separate application as Lotus Symphony (30 May 2008), with a new interface based on Eclipse. Symphony 3.0 (21 October 2010) was rebased on OpenOffice.org 3.0, with the code licensed privately from Sun. IBM's changes were donated to the Apache Software Foundation in 2012, Symphony was deprecated in favour of Apache OpenOffice and its code was merged into Apache OpenOffice 4.0.
StarOffice
Sun used OpenOffice.org as a base for its commercial proprietary StarOffice application software, which was OpenOffice.org with some added proprietary components. Oracle bought Sun in January 2010 and quickly renamed StarOffice to Oracle Open Office. Oracle discontinued development in April 2011.
References
Further reading
External links
2002 software
Cross-platform free software
Formerly proprietary software
Free PDF software
Free software programmed in C++
Free software programmed in Java (programming language)
Office suites for macOS
Office suites for Windows
Open-source office suites
Discontinued software
Portable software
Unix software
Office suites
ICL VME
VME (Virtual Machine Environment) is a mainframe operating system developed by the UK company International Computers Limited (ICL, now part of the Fujitsu group). Originally developed in the 1970s (as VME/B, later VME 2900) to drive ICL's then new 2900 Series mainframes, the operating system is now known as OpenVME, incorporating a Unix subsystem, and runs on ICL Series 39 and Trimetra mainframe computers, as well as industry-standard x64 servers.
Origins
The development program for the New Range system started on the merger of International Computers and Tabulators (ICT) and English Electric Computers in 1968. One of the fundamental decisions was that it would feature a new operating system. A number of different feasibility and design studies were carried out within ICL, the three most notable being:
VME/B (originally System B), targeted at large processors such as the 2970/2980 and developed in Kidsgrove, Staffordshire and West Gorton, Manchester
VME/K (originally System T), targeted at the mid-range systems such as the 2960 and developed at Bracknell after the original design for these small processors, System D, was dropped. VME/K was developed and introduced to the market but was eventually replaced by VME/B
VME/T, which was never actually launched, but warrants a mention as it was conceived to support "fault tolerance", and predated the efforts of the successful American startup company Tandem Computers in this area.
The chief architect of VME/B was Brian Warboys, who subsequently became professor of software engineering at the University of Manchester. A number of influences can be seen in its design, for example Multics and ICL's earlier George 3 operating system; however it was essentially designed from scratch.
VME/B was viewed as primarily competing with the System/370 IBM mainframe as a commercial operating system, and adopted the EBCDIC character encoding.
History
When New Range was first launched in October 1974, its operating system was referred to as "System B". By the time it was first delivered it had become "VME/B".
VME/K was developed independently (according to Campbell-Kelly, "on a whim of Ed Mack"), and was delivered later with the smaller mainframes such as the 2960.
Following a financial crisis in 1980, new management was brought into ICL (Christopher Laidlaw as chairman, and Robb Wilmot as managing director). An early decision of the new management was to drop VME/K. Thus in July 1981 "VME2900" was launched: although presented to the customer base as a merger of VME/B and VME/K, it was in reality the VME/B base with a few selected features from VME/K grafted on. This provided the opportunity to drop some obsolescent features, which remained available to customers who needed them in the form of the "BONVME" option.
The "2900" suffix was dropped at System Version 213 (SV213) when ICL launched Series 39 in 1985 as the successor to the original 2900 series; and the "Open" prefix was added after SV294. VME became capable of hosting applications written originally for Unix through a UNIX System V Release 3 based subsystem, called VME/X, adapted to run under VME and using the ASCII character encoding.
In 2007 Fujitsu announced superNova, a version of VME that runs as a hosted subsystem within Microsoft Windows, SUSE Linux or Red Hat Enterprise Linux on x86-64 hardware.
In 2012 the VME user group, AXiS, announced that after almost 40 years it would be disbanding because of the reduced user base.
Fujitsu intended to support VME on customer computers until 2020.
In 2020 Fujitsu transferred 13 HM Revenue and Customs applications from their computers onto Fujitsu's virtual managed VME hosting platform.
As of 2022, the Department for Work and Pensions was still using VME based systems to support state pension payments.
Architecture
VME is structured as a set of layers, each layer having access to resources at different levels of abstraction. Virtual resources provided by one layer are constructed from the virtual resources offered by the layer below. Access to the resources of each layer is controlled through a set of Access Levels: in order for a process to use a resource at a particular access level, it must have an access key offering access to that level. The concept is similar to the "rings of protection" in Multics. The architecture allows 16 access levels, of which the outer 6 are reserved for user-level code.
Orthogonally to the access levels, the operating system makes resources available to applications in the form of a Virtual Machine. A Virtual Machine can run multiple processes. In practice, a VME Virtual Machine is closer to the concept of a process on other operating systems, while a VME process is more like a thread. The allocation of resources to a virtual machine uses a stack model: when the stack is popped, all resources allocated at that stack level are released. Calls from an application to the operating system are therefore made by a call that retains the same process stack, but with a change in protection level; the resulting efficiency of system calls is one of the features that makes the architecture competitive.
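A toy model of this stack discipline, written in Python purely for illustration (VME itself was implemented in S3, and all names here are invented), shows resources being released when their stack level is popped.

class VmStack:
    def __init__(self):
        self.levels = []  # one list of resources per stack level

    def push_level(self):
        self.levels.append([])

    def acquire(self, resource):
        # Resources are owned by the current (innermost) stack level.
        self.levels[-1].append(resource)

    def pop_level(self):
        # Popping a level releases everything allocated at that level.
        for resource in reversed(self.levels.pop()):
            print("released:", resource)

vm = VmStack()
vm.push_level()
vm.acquire("file: KERMIT.SOURCE")
vm.acquire("network connection")
vm.pop_level()  # both resources are released here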
Communication between Virtual Machines is achieved by means of Events (named communication channels) and shared memory areas. The hardware architecture also provides semaphore instructions INCT (increment-and-test) and TDEC (test-and-decrement).
Files and other persistent objects are recorded in a repository called the Catalogue. Unlike in other operating systems, the file naming hierarchy is independent of the location of a file on a particular tape or disk volume. In days when there was more need for offline storage, this made it easy to keep track of files regardless of their location, and to move files between locations without renaming them. As well as files, the Catalogue keeps track of users and user groups, volumes, devices, network connections, and many other resources. Metadata for files can be held in an object called a File Description. The Catalogue was probably the first example of what would later be called an entity-relationship database.
Interrupts are handled by creating a new stack frame on the stack for the relevant process, handling the interrupt using this new environment, and then popping the stack to return to the interrupted process.
Run-time exceptions, referred to as contingencies, are captured by the Object Program Error Handler (OPEH), which can produce a report (equivalent to a stack trace), either interactively or written to a journal.
OMF
Compiled object code is maintained in a format called OMF (Object Module Format). Unlike in many other operating systems, this is also the format used by the loader. Various compilers are available, as well as utilities, notably the Collector, which links the code in several OMF modules into a single module, for more efficient loading at run-time, and the Module Amender, which allows patching of the instructions in an OMF module to fix bugs, using assembly language syntax.
SCL
The command language for VME is known as SCL (System Control Language).
This is much more recognizably a typed high-level programming language than the job control or shell languages found in most other operating systems: it can be likened to scripting languages such as JavaScript, though its surface syntax is derived from Algol 68.
SCL is designed to allow both line-at-a-time interactive use from a console or from a command file, and creation of executable scripts or programs (when the language is compiled into object module format in the same way as any other VME programming language). The declaration of a procedure within SCL also acts as the definition of a simple form or template allowing the procedure to be invoked from an interactive terminal, with fields validated according to the data types of the underlying procedure parameters or using the default procedure parameter values.
The built-in command vocabulary uses a consistent naming convention with an imperative verb followed by a noun: for example DELETE_FILE or DISPLAY_LIBRARY_DETAILS. The command can be written in full, or in an abbreviated form that combines standard abbreviations for the verb and noun: for example XF (X for DELETE, F for FILE) or DLBD (D for DISPLAY, LB for LIBRARY, D for DETAILS).
SCL is block-structured, with begin/end blocks serving the dual and complementary roles of defining the lexical scope of variable declarations, and defining the points at which resources acquired from the operating system should be released. Variables in the language (which are accessible from applications in the form of environment variables) can have a number of simple types such as strings, superstrings (sequences of strings), booleans, and integers, and are also used to contain references to system resources such as files and network connections.
It is possible to "disassemble" an SCL program from OMF back into SCL source code using the READ_SCL (or RSCL) command. However the output is not always perfect, and will often include errors that would stop re-compilation without user intervention.
A simple code example can be seen on the 99 bottles of beer website.
A more realistic example, where SCL is used to compile a program written in S3, is shown below. This example is taken from the Columbia University Archive of implementations of Kermit.
BEGIN
WHENEVER
RESULT GT 0 +_
THEN +_
SEND_RESULT_MESSAGE (RES = RESULT,
ACT = "QUIT()")
FI
INT KMT_SRC, KMT_OMF, KMT_REL
ASSIGN_LIBRARY (NAM = KERMIT.SOURCE,
LNA = KMT_SRC)
ASSIGN_LIBRARY (NAM = KERMIT.OMF,
LNA = KMT_OMF)
ASSIGN_LIBRARY (NAM = KERMIT.REL,
LNA = KMT_REL)
BEGIN
DELETE_FILE (NAM = *KMT_OMF.KMT_DATA_MODULE(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_DH_MODULE(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_EH_MODULE(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_FH_MODULE(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_HELP_MTM(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_MAIN_MODULE(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_PH_MODULE(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_PP_MODULE(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_SP_MODULE(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_SP_MTM(101))
DELETE_FILE (NAM = *KMT_OMF.KMT_UI_MODULE(101))
DELETE_FILE (NAM = *KMT_REL.KERMIT(101))
DELETE_FILE (NAM = *KMT_REL.KERMIT_MODULE(101))
END
S3_COMPILE_DEFAULTS (LIS = OBJECT & XREF,
DIS = ERRORLINES)
S3_COMPILE (INP = *KMT_SRC.KMT_DATA_MODULE(101),
OMF = *KMT_OMF.KMT_DATA_MODULE(101))
S3_COMPILE (INP = *KMT_SRC.KMT_DH_MODULE(101),
OMF = *KMT_OMF.KMT_DH_MODULE(101))
S3_COMPILE (INP = *KMT_SRC.KMT_EH_MODULE(101),
OMF = *KMT_OMF.KMT_EH_MODULE(101))
S3_COMPILE (INP = *KMT_SRC.KMT_FH_MODULE(101),
OMF = *KMT_OMF.KMT_FH_MODULE(101))
NEW_MESSAGE_TEXT_MODULE (CON = *KMT_SRC.KMT_HELP_MTM(101),
OMF = *KMT_OMF.KMT_HELP_MTM(101))
S3_COMPILE (INP = *KMT_SRC.KMT_MAIN_MODULE(101),
OMF = *KMT_OMF.KMT_MAIN_MODULE(101))
S3_COMPILE (INP = *KMT_SRC.KMT_PH_MODULE(101),
OMF = *KMT_OMF.KMT_PH_MODULE(101))
S3_COMPILE (INP = *KMT_SRC.KMT_PP_MODULE(101),
OMF = *KMT_OMF.KMT_PP_MODULE(101))
S3_COMPILE (INP = *KMT_SRC.KMT_SP_MODULE(101),
OMF = *KMT_OMF.KMT_SP_MODULE(101))
NEW_MESSAGE_TEXT_MODULE (CON = *KMT_SRC.KMT_SP_MTM(101),
OMF = *KMT_OMF.KMT_SP_MTM(101))
S3_COMPILE (INP = *KMT_SRC.KMT_UI_MODULE(101),
OMF = *KMT_OMF.KMT_UI_MODULE(101))
COLLECT ()
----
INPUT(*KMT_OMF.KMT_DATA_MODULE(101) &
*KMT_OMF.KMT_DH_MODULE(101) &
*KMT_OMF.KMT_EH_MODULE(101) &
*KMT_OMF.KMT_FH_MODULE(101) &
*KMT_OMF.KMT_HELP_MTM(101) &
*KMT_OMF.KMT_MAIN_MODULE(101) &
*KMT_OMF.KMT_PH_MODULE(101) &
*KMT_OMF.KMT_PP_MODULE(101) &
*KMT_OMF.KMT_SP_MODULE(101) &
*KMT_OMF.KMT_SP_MTM(101) &
*KMT_OMF.KMT_UI_MODULE(101))
NEWMODULE(*KMT_REL.KERMIT_MODULE(101))
SUPPRESS
RETAIN(KERMIT_THE_FROG)
LISTMODULE
PERFORM
++++
COMPILE_SCL (INP = *KMT_SRC.KERMIT(101),
OUT = *KMT_REL.KERMIT(101),
COD = NOTIFWARNINGS,
OPT = FIL)
END
Commands illustrated in this fragment include WHENEVER (declares error-handling policy), ASSIGN_LIBRARY (binds a local name for a file directory), DELETE_FILE (makes a permanent file temporary, which is then deleted at the END of the block), S3_COMPILE (compiles a program written in S3: this command breaks the usual verb-noun convention), NEW_MESSAGE_TEXT_MODULE (creates a module containing parameterized error messages suitable for localization) and COMPILE_SCL, which compiles an SCL program into object code.
The COLLECT command combines different object code modules into a single module, and is driven by its own local command file which is incorporated inline in the SCL between the delimiters "----" and "++++". The sub-commands INPUT and NEWMODULE identify the names of the input and output modules; SUPPRESS and RETAIN determine the external visibility of named procedures within the collected module; and LISTMODULE requests a report describing the output module.
Note that "." is used to separate the parts of a hierarchic file name. A leading asterisk denotes a local name for a library, bound using the ASSIGN_LIBRARY command. The number in parentheses after a file name is a generation number. The operating system associates a generation number with every file, and requests for a file get the latest generation unless specified otherwise. Creating a new file will by default create the next generation and leave the previous generation intact; this program however is deliberately choosing to create generation 101, to identify a public release.
Enhanced security variants
As a result of ICL's heavy involvement with delivery of computer services to the UK Public Sector, in particular those with special security requirements such as OPCON CCIS, it was an early entrant into the market for Secure Systems.
VME formed a core of ICL's activities in the Secure Systems arena. It had the advantage that, as the last large-scale operating system ever designed, and one built from scratch, its underlying architecture encompassed many of the primitives needed to develop a Secure System, in particular the hardware-assisted Access Control Registers (ACR) to limit the privileges that could be taken by any process (including Users).
This led to the UK Government's Central Computing and Telecommunications Agency (CCTA) funding Project Spaceman in the mid 1980s for ICL Defence Technology Centre (DTC) to develop an enhanced security variant of VME. ICL launched this as a pair of complementary products, with the commercial release being called High Security Option (HSO), and the public sector release, including Government Furnished Encryption (GFE) technologies, being called Government Security Option (GSO).
HSO and GSO were formally tested under the CESG UK (Security) Evaluation Scheme, one of the predecessors to ITSEC and Common Criteria, and in doing so became the first mainstream operating system to be formally Certified.
Series 39
The Series 39 range introduced Nodal Architecture, a novel implementation of distributed shared memory that can be seen as a hybrid of a multiprocessor system and a cluster design. Each machine consists of a number of nodes, and each node contains its own order-code processor (CPU) and main memory. Virtual machines are typically located (at any one time) on one node, but have the capability to run on any node and to be relocated from one node to another. Discs and other peripherals are shared between nodes. Nodes are connected using a high-speed optical bus, which is used to provide applications with a virtual shared memory. Memory segments that are marked as shared (public or global segments) are replicated to each node, with updates being broadcast over the inter-node network. Processes which use unshared memory segments (nodal or local) run in complete isolation from other nodes and processes.
Development process
VME was originally written almost entirely in S3, a specially-designed system programming language based on Algol 68R (however, VME/K was written primarily in the SFL assembly language). Although a high-level language is used, the operating system is not designed to be independent of the underlying hardware architecture: on the contrary, the software and hardware architecture are closely integrated.
From the early 1990s onwards, some entirely new VME subsystems were written partly or wholly in the C programming language.
From its earliest days, VME was developed with the aid of a software engineering repository system known as CADES, originally designed and managed by David Pearson and built for the purpose using an underlying IDMS database. CADES is not merely a version control system for code modules: it manages all aspects of the software lifecycle, from requirements capture, design methodology and specification through to field maintenance.
CADES was used in VME module development to hold separate definitions of data structures (Modes), constants (Literals), procedural interfaces and the core algorithms. Multiple versions ('Lives') of each of these components could exist. The algorithms were written in System Development Language (SDL), which was then converted to S3 source by a pre-processor. Multiple versions of the same modules could be generated.
Application development tools
The application development tools offered with VME fall into two categories:
third-generation programming languages
fourth-generation QuickBuild toolset.
The toolset on VME is unusually homogeneous, with most customers using the same core set of languages and tools. As a result, the tools are also very well integrated. Third-party tools have made relatively little impression.
For many years the large majority of VME users wrote applications in COBOL, usually making use of the IDMS database and the TPMS transaction processing monitor. Other programming languages included Fortran, Pascal, ALGOL 68RS, Coral 66 and RPG2, but these served minority interests. Later, in the mid 1980s, compilers for C became available, both within and outside the Unix subsystem, largely to enable porting of software such as relational database systems. Notably, a PL/I subset compiler was written by the EEC to assist in porting programs from IBM to ICL hardware.
The compilers developed within ICL share a common architecture, and in some cases share components such as code generators. Many of the compilers used a module named ALICE (Assembly Language Internal Common Environment) and produced an early form of precompiled code (P-code) termed ROSE, making compiled Object Module Format (OMF) libraries loadable on any machine in the range.
System Programming Languages: S3 and SFL
The primary language used for developing both the VME operating system itself and other system software such as compilers and transaction processing monitors is S3. This is a high level language based in many ways on Algol 68, but with data types and low-level functions and operators aligned closely with the architecture of the 2900 series.
An assembly language SFL (System Function Language) is also available. This was used for the development of VME/K, whose designers were not confident that a high-level language could give adequate performance, and also for the IDMS database system on account of its origins as a third-party product. SFL was originally called Macro Assembler Programming LanguagE (MAPLE), but as the 2900 architecture was being positioned as consisting of high level language machines the name was changed at the request of ICL Marketing. It had been developed as a part of the toolkit for System D, which was subsequently cancelled. Related families of assemblers for other architectures (CALM-xx running under VME, PALM-xx developed in Pascal and running on various hosts) were developed for internal use.
Neither S3 nor SFL was ever promoted as a commercial development tool for end-user applications: neither was normally delivered as a standard part of the operating system, nor was either explicitly marketed as a product in its own right. Both SFL and S3 were, however, available as options to user organisations and third parties who had a specific need for them.
QuickBuild
The QuickBuild application development environment on VME has been highly successful despite the fact that applications are largely locked into the VME environment. This environment is centred on the Data Dictionary System (DDS, also called OpenDDS), an early and very successful attempt to build a comprehensive repository supporting all the other tools, with full support for the development lifecycle. As well as database schemas and file and record descriptions, the dictionary keeps track of objects such as reports and queries, screen designs, and 4GL code; it also supports a variety of models at the requirements capture level, such as entity-relationship models and process models.
The QuickBuild 4GL is packaged in two forms:
ApplicationMaster for the creation of online TP applications
ReportMaster for batch reporting.
Both are high-level declarative languages, using Jackson Structured Programming as their design paradigm. ApplicationMaster is unusual in its approach to application design in that it focuses on the user session as if it were running in a single conversational process, completely hiding the complexity of maintaining state across user interactions. Because the 4GL and other tools such as the screen designer work only with the DDS dictionary, which also holds the database schemas, there is considerable reuse of metadata that is rarely achieved with other 4GLs.
References
Sources
The Architecture of OpenVME. Nic Holt. ICL publication 55480001. Undated (probably around 1995)
External links
VME - Into the Future, Fujitsu UK.
Virtual Machine Environment
Computer-related introductions in 1974
Multics-like
Fermi Linux
Fermi Linux is the generic name for Linux distributions that are created and used at Fermi National Accelerator Laboratory (Fermilab). These releases have gone through different names: Fermi Linux, Fermi Linux LTS, LTS, Fermi Linux STS, STS, Scientific Linux Fermi, SLF. For the purposes of this entry they can be used interchangeably to designate a version of Linux specific to Fermilab.
Currently, the only officially supported Fermi Linux is Scientific Linux Fermi, which is based on Scientific Linux.
History
Fermi Linux started out as an extension of the PC Farms Pilot Project spearheaded by Connie Sieh, a Fermilab initiative to seek out cost-effective computing for the Tevatron. Continuing to update the SGI and AIX hardware for the computing needs of that experiment was very expensive.
Initial builds of Fermi Linux were merely Red Hat Linux with some things turned off or some extra packages added. With the release of Scientific Linux, Fermi Linux became a 'site' specific build of Scientific Linux.
Releases
Support policy
Fermi Linux follows the Scientific Linux life cycle regarding support and updates.
There is a vibrant Linux community at Fermilab. This includes dedicated email lists and regular meetings provided by the Scientific Linux development team.
Fermi Linux LTS
Fermi Linux LTS is in essence Red Hat Enterprise Linux, recompiled.
Workers at Fermilab take the source code of Red Hat Enterprise Linux, distributed in SRPM form, and recompile it, resulting in binaries in RPM form whose only restrictions are the licenses of the original source code. They bundle these binaries into a Linux distribution that is as close to Red Hat Enterprise Linux as they can get. The goal is to ensure that if a program runs and is certified on Red Hat Enterprise Linux, then it will run on the corresponding Fermi Linux release.
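The recompilation step can be sketched in a few lines of Python. This is a hedged illustration rather than Fermilab's actual build tooling, and the SRPMS directory is hypothetical, but rpmbuild --rebuild is the standard command for turning a source package into binary RPMs.

import glob
import subprocess

# Rebuild every upstream source package into binary RPMs.
for srpm in sorted(glob.glob("SRPMS/*.src.rpm")):  # hypothetical directory
    subprocess.run(["rpmbuild", "--rebuild", srpm], check=True)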
See also
Fermi National Accelerator Laboratory (Fermilab)
Scientific Linux
Linux
Red Hat Linux
Red Hat Enterprise Linux (RHEL), commercial Linux distribution on which Fermi Linux is based
CentOS, another Linux distribution based on Red Hat Enterprise Linux
External links
References
Fermilab
RPM-based Linux distributions
X86-64 Linux distributions
Linux distributions
Visi On
VisiCorp Visi On was a short-lived but influential graphical user interface-based operating environment program for IBM compatible personal computers running MS-DOS. Although Visi On was never popular, as it had steep minimum system requirements for its day, it was a major influence on the later development of Microsoft Windows.
History
Background
In the spring of 1981, Personal Software was cash-flush from the ever-increasing sales of VisiCalc, and the corporate directors sat down and planned out their future directions. Ed Esber introduced the concept of a "family" of products that could be sold together, but from a technical perspective none of their products were similar in anything but name. For instance, to use VisiPlot with VisiCalc data, the numbers to be plotted had to be exported in a "raw" format and then re-imported.
Dan Fylstra led a technical discussion on what sorts of actions the user would need to be able to accomplish in order for their products to be truly integrated. They decided that there were three key concepts. One was universal data exchange, which would be supported by a set of common data structures used in all of their programs. Another was a common, consistent interface so users would not have to re-learn the UI as they moved from one program to another. Finally, Fylstra was concerned that the time needed to move from one program to another was too long to be useful – a user needing to quickly look something up in VisiDex would have to save and exit VisiCalc, look up the information, and then quit that and re-launch VisiCalc again. This process had to be made quicker and simpler.
Creation
In July 1981 Xerox announced the Xerox Star, an advanced workstation computer featuring a graphical user interface, and by that point it was a well known "secret" that Apple Computer was working on a low-cost computer with a GUI that would later be released as the Apple Lisa. Personal Software's president, Terry Opdendyk, knew of a two-man team in Texas that was working on a GUI, and arranged for Scott Warren and Dennis Abbe to visit Personal Software's headquarters in Sunnyvale, California. They demonstrated a version of the Smalltalk programming language running on the TRS-80 microcomputer, a seriously underpowered machine for the task. Personal Software was extremely impressed.
A contract was soon signed, and work on project "Quasar" started almost immediately. The name was shortly thereafter changed to Visi On, a play on "vision" that retained their "Visi" naming. An experimental port to the ill-fated Apple III was completed in November, and after that, development work shifted to the DEC VAX, which had cross-compilers for a number of different machines. In early 1982 Personal Software changed their name to VisiCorp, and was betting much of the future success of the company on Visi On.
Visi On had many features of a modern GUI, and included a few that did not become common until many years later. It was fully mouse-driven, used a bit-mapped display for both text and graphics, included on-line help, and allowed the user to open a number of programs at once, each in its own window. Visi On did not, however, include a graphical file manager. Visi On also demanded a hard drive in order to implement its virtual memory system used for "fast switching", and at the time hard drives were a very expensive piece of equipment.
COMDEX demo
Tom Powers, VisiCorp's new VP of marketing, pushed for the system to be demonstrated at the fall COMDEX show in 1982. Others in the company were worried that the product was not ready for shipping, and that showing it so early would leave potential customers and distributors upset if it wasn't ready soon after. Another concern was that VisiWord was being released at the same show, and there was some worry that it might be lost in the shuffle.
The demonstrations at COMDEX were a huge success. Many viewers had to be told it was not simply a movie they were watching, and Bill Gates speculated that the PC was in fact simply a terminal for a "real" machine like a VAX. It became one of the most talked-about products in the industry. However this huge success led to a number of very serious problems.
In separate June and July 1983 Byte articles, the company mentioned a late summer 1983 release.
Corporate civil war
While Visi On development continued, VisiCorp as an entity was in the process of self-destruction. Terry Opdendyk, the president hand-picked by the early venture-capital investors, had an extremely autocratic management style that led to the departure of many key executives. From late 1981 to the eventual release of Visi On, most of the company's product management left, notably Mitch Kapor (in charge of VisiCalc development), Ed Esber, and Roy Folk (Visi On's product marketing manager), among others. This was referred to as "corporate civil war".
It was Mitch Kapor's departure that would prove most devastating to the company, however. Kapor, developer of VisiPlot and VisiTrend, had been pressing for the development of a greatly improved spreadsheet to succeed VisiCalc, but Opdendyk was uninterested. This was during a time when VisiCorp and VisiCalc's developers were at an impasse, and VisiCalc was growing increasingly outdated. When Kapor decided to leave, the other executives pressed for a clause forbidding Kapor to work on an "integrated spreadsheet", but Opdendyk couldn't be bothered, dismissing Kapor as a "spaghetti programmer" and denigrating his abilities.
Kapor would go on to release Lotus 1-2-3, which became a major competitor to VisiCalc in 1983. By the end of the year, sales had been cut in half. Combined with the exodus of major portions of the senior executive staff and the ongoing battle with VisiCalc's developers, VisiCorp was soon in serious financial difficulty. All hopes for the company's future were placed on Visi On.
The October 31, 1983 InfoWorld, in an article titled "Finally, Visi On is here," flatly stated: "the... publisher is putting the product on computer store shelves... Visi On was scheduled to be available during the last week in October". The November 14, 1983 issue said: "VisiCorp has just released Visi On." However, the July 2, 1984 issue said: "By the time Visi On was actually shipped on December 16, 1983,..." and PC Magazine reported in the February 7, 1984 issue that it still had not received the product in its commercially available form.
Release
The operating system, known as the Visi On Applications Manager, was released in December 1983 and sold for $495, with the required mouse costing another $250.
Reception
The main disadvantage of Visi On was its extremely high system requirements by 1982 standards. It needed 512 kilobytes of RAM and a hard disk at a time when PCs shipped with 64k-128k and IBM did not yet offer a hard disk with the PC (IBM's first model with a hard drive, the PC XT, didn't ship until March 1983). Third-party drives were however available at the time, typically 5MB units that connected to the floppy controller and were treated by the operating system as an oversized floppy disk (there was no subdirectory support). This brought the total cost of running Visi On to $7500, three-quarters the cost of the Apple Lisa.
The press continued to laud the product, going so far as to claim it represented the end of operating systems. The end-users were less impressed, however, not only due to the high cost of the required hardware, but also the general slowness of the system. In a market where computers were generally used for only one or two tasks, usually business related, the whole purpose of Visi On was seriously diluted.
In January 1984, Apple Computer released the Macintosh with much fanfare. Although the Macintosh initially lacked software, it was faster, cheaper, and included one feature Visi On lacked: a graphical file manager (the Finder). Although it did not compete directly with Visi On, which was really a "PC product", it nevertheless demonstrated that a GUI could be fast and relatively inexpensive, both qualities Visi On had failed to deliver.
Adding to the release's problems was Bill Gates, who took a page from VisiCorp's book and announced that Microsoft's own product, Microsoft Windows, would be available in May 1984. This muddied the waters significantly, notably when he further claimed it would have a similar feature set, would not require a hard disk, and would cost only $250. Windows was ultimately released with an even longer delay than Visi On, shipping in November 1985, and lacked the features that had forced Visi On to demand a hard drive.
End of life
Only eight VisiCorp employees were still developing Visi On when VisiCorp sold the source code to Control Data in mid-1984 to raise cash as it sued Software Arts, while continuing to sell the software itself. Sales were apparently very slow; in February 1985, VisiCorp responded by lowering the price of the basic OS to $99, knowing that anyone purchasing it would also need to buy the applications, which were bundled, all three for $990. This improved the situation somewhat, but sales were still far below projections, and it did little to help the company stave off the problems caused by Lotus 1-2-3.
Following declining VisiCalc sales and low revenues from Visi On, in November 1985, the company merged with Paladin Software. The new company kept the Paladin name. VisiCorp, and its line of "VisiProducts", were history.
Technical information
Official system requirements for Visi On were:
512K of User Memory
RS232 Serial Port
5 Megabyte Hard Disk (FAT12 file system)
1 Floppy Disk Drive, DS/DD, 40 Track, 48 tpi
VisiCorp Mouse (Mouse Systems-compatible mice)
MS-DOS 2.0
Graphics Adapter compatible with CGA 640x200 monochrome mode
Graphics Monitor capable of displaying CGA 640x200
Visi On will work on newer PCs, but it requires a compatible mouse and a hard disk partition under 15 MB, as only the FAT12 file system is supported. In addition, because it revectors some IRQs used by PC/ATs and later machines, VISIONXT.EXE requires modifications, which prevent Graph and other applications from functioning properly.
Visi On required Mouse Systems-compatible mice; Microsoft-compatible PC mice, which over time became the standard, were introduced later (in May 1983). Visi On used two mouse drivers. The first, loaded in text mode, made the mouse registers accessible to the embedded driver, which translated coordinates to a cursor position. This internal driver, built into VISIONXT.EXE as a subroutine, required a Mouse Systems PC-Mouse pointing device and is not compatible with the Microsoft Mouse standard.
Writing Visi On applications required a Unix development environment; Visi On was targeted toward high-end (expensive) PC workstations. Applications were written in VisiC, a subset of C, and a third party could have ported the core software (VisiHost, the VisiMachine virtual machine, VISIONXT.EXE in the IBM PC DOS version) to Unix, but that never occurred. In 1984, VisiCorp's assets were sold off to Control Data Corporation.
Making working copies of the original floppy disks using modern methods is difficult, as they are protected using pre-created bad sectors and other methods of floppy disk identification.
See also
VisiCalc
VisiCorp
References
External links
Visi On PCE emulator
DOS software
Operating system APIs
Windowing systems
IBM OfficeVision
OfficeVision was an IBM proprietary office support application that primarily ran on IBM's VM operating system and its user interface CMS.
Other platform versions were available, notably OV/MVS and OV/400. OfficeVision provided e-mail, shared calendars, and shared document storage and management, and it provided the ability to integrate word processing applications such as Displaywrite/370 and/or the Document Composition Facility (DCF/SCRIPT). IBM introduced OfficeVision in its May 1989 announcement, followed by several other key releases later.
The advent of the personal computer and the client–server paradigm changed the way organizations looked at office automation. In particular, office users wanted graphical user interfaces. Thus e-mail applications with PC clients became more popular.
OfficeVision/2
IBM's initial answer was OfficeVision/2, a server-requestor system designed to be the strategic implementation of IBM's Systems Application Architecture. The server could run on OS/2, VM, MVS (XA or ESA), or OS/400, while the requester required OS/2 Extended Edition running on IBM PS/2 personal computers, or DOS. IBM also developed OfficeVision/2 LAN for workgroups, which failed to find market acceptance and was withdrawn in 1992. IBM began to resell Lotus Notes and Lotus cc:Mail as an OfficeVision/2 replacement. Ultimately, IBM solved its OfficeVision problems through the hostile takeover of Lotus Software for its Lotus Notes product, one of the two most popular products for business e-mail and calendaring.
IBM originally intended to deliver the Workplace Shell as part of the OfficeVision/2 LAN product, but in 1991 announced plans to release it as part of OS/2 2.0 instead:
IBM last week said some features originally scheduled to ship in OfficeVision/2 LAN will be bundled into the current release of the product, while others will be either integrated into OS/2 or delayed indefinitely... IBM's Workplace Shell, an enhanced graphical user interface, is being lifted from OfficeVision/2 LAN to be included in OS/2 2.0... The shell offers the capability to trigger processes by dragging and dropping icons on the desktop, such as dropping a file into an electronic wastebasket. Porting that feature to the operating system will let any application take advantage of the interface.
Users of IBM OfficeVision included the New York State Legislature and the European Patent Office.
Migration
IBM discontinued support of OfficeVision/VM as of October 6, 2003. IBM recommended that its OfficeVision/VM customers migrate to Lotus Notes and Lotus Domino environments, and IBM offered migration tools and services to assist. Guy Dehond, one of the beta-testers of the AS/400, developed the first migration tool. However, OfficeVision/MVS remained available for sale until March 2014, and was still supported until May 2015, and thus for a time was another migration option for OfficeVision/VM users. OfficeVision/MVS runs on IBM's z/OS operating system.
Earlier PROFS, DISOSS and Office/36
OfficeVision/VM was originally named PROFS (for PRofessional OFfice System) and was initially made available in 1981. Before that it was a PRPQ (Programming Request for Price Quotation), an IBM administrative term for non-standard software offerings with unique features, support and pricing. The first release of PROFS was developed by IBM in Poughkeepsie, NY, in conjunction with Amoco, from a prototype developed years earlier in Poughkeepsie by Paul Gardner and others. Subsequent development took place in Dallas. The editor XEDIT was the basis of the word processing function in PROFS.
PROFS itself was descended from an in-house system developed by IBM's Poughkeepsie laboratory. Poughkeepsie developed a primitive in-house solution for office automation over the period 1970–1972; OFS (Office System), which evolved into PROFS, was developed by the Poughkeepsie laboratory as a replacement for that earlier system and was first installed in October 1974. Compared to Poughkeepsie's original in-house system, the distinctive new features of OFS were a centralised database virtual machine (data base manager or DBM) for shared permanent storage of documents, instead of storing all documents in users' personal virtual machines, and a centralised virtual machine (mailman master machine or distribution virtual machine) to manage mail transfer between individuals, instead of relying on direct communication between the personal virtual machines of individual users. By 1981, IBM's Poughkeepsie site had over 500 PROFS users.
In 1983, IBM introduced release 2 of PROFS, along with auxiliary software to enable document interchange between PROFS, DISOSS, Displaywriter, IBM 8100 and IBM 5520 systems.
PROFS and its e-mail component, known colloquially as PROFS Notes, featured prominently in the investigation of the Iran-Contra scandal. Oliver North believed he had deleted his correspondence, but the system archived it anyway. Congress subsequently examined the e-mail archives.
OfficeVision/MVS originated from IBM DISOSS, and OfficeVision/400 from IBM Office/36.
IBM's European Networking Center (ENC) in Heidelberg, Germany, developed prototype extensions to OfficeVision/VM to support Open Document Architecture (ODA), in particular a converter between ODA and Document Content Architecture (DCA) document formats.
Earlier ODPS in Far East
OfficeVision/VM for the Far Eastern languages of Japanese, Korean and Chinese originated from the IBM Office and Document Control System (ODPS), a DBCS-enabled port of PROFS with added document edit, store and search functions similar to Displaywrite/370. It was an integrated office system for the Asian languages that ran on IBM's mainframe computers under VM, offering such functions as email, calendar, and document processing and storage. IBM ODPS was later renamed IBM OfficeVision/VM, and an MVS version (using DISOSS) was not offered. After IBM's buyout of Lotus Development in 1995, ODPS users were recommended to migrate to Lotus Notes.
IBM ODPS was developed in IBM Tokyo Programming Center, located in Kawasaki, Japan, later absorbed into IBM Yamato Development Laboratory, in conjunction with IBM Dallas Programming Center in Westlake, Texas, U.S., where PROFS was developed, and other programming centers. It first became available in 1986 for Japanese, and then was translated into Korean by IBM Korea and into Traditional Chinese by IBM Taiwan. It was not translated into Simplified Chinese for mainland China.
IBM ODPS consisted of four software components:
The Office Support Program, or OFSP, was PROFS enabled to process the double-byte character sets of the Asian languages, with some added functions. It could handle email, addresses, scheduling, and the storing, search and distribution of documents, and could switch to PROFS in English.
The Document Composition Program, or DCP, was a port of the Document Composition Facility, enabled for processing the double-byte character sets, with additional functions. It allowed the preparation and printing of documents, with a SCRIPT-type editing method.
The Document Composition Program/Workstation allowed preparation of documents on IBM 5550, PS/55 and other "workstations" (personal computers) that offered IBM Kanji System functions.
The Facsimile Program offered sending/receiving of facsimile data.
References
Further reading
OfficeVision
Email systems
IBM mainframe software
VM (operating system)
Discontinued software
IMS
IMS may refer to:
Organizations
Companies
Integrated Micro Solutions, a GPU manufacturer
Intelligent Micro Software, a former developer of Multiuser DOS, REAL/32, and REAL/NG
International Military Services, a UK Ministry of Defence owned company for selling weapons overseas
IMS Health, Information Medical Statistics, a US company providing data services for healthcare, ticker symbol NYSE:IMS
IMS Associates, Inc., an early computer manufacturer
IMS Bildbyrå, a Swedish stock photograph agency
Institutes
IMS International Medical Services, of the State University Medical Center Freiburg, Germany
Indian Mathematical Society
Indian Medical Service, a military medical service in British India
Indian Missionary Society, an Indian Catholic Religious Institute (Varanasi)
Insight Meditation Society, a Buddhist organization in Barre, Massachusetts
Institute of Mathematical Statistics
International Magicians Society
International Military Staff, the executive body of the NATO Military Committee
International Mountain Society, for mountain research, Bern, Switzerland
International Musicological Society, a learned society for musicology in Basel, Switzerland
Irish Marching Society, Rockford, Illinois, US
Irish Mathematical Society
Iranian Mathematical Society
Education
Institute of Management Studies, Devi Ahilya University, India
Institute of Medical Sciences, Banaras Hindu University, Varanasi, India
Iowa Mennonite School, Iowa, US
International
International Monitoring System, a division of the Comprehensive Nuclear-Test-Ban Treaty Organization
Science and technology
Immunomagnetic separation, a laboratory technique
Indian Mini Satellite bus, Indian Space Research Organisation (ISRO)
Industrial methylated spirit
Insulated metal substrate, for power electronics
Intelligent Maintenance Systems, to predict machine failure
Intelligent Munitions System, American smart mine
Intermediate syndrome, in organophosphate poisoning
IEEE MTT-S International Microwave Symposium, an annual conference
Intramuscular stimulation or dry needling
Ion-mobility spectrometry
Irritable male syndrome, an annual behavior in some mammals
Intermediate shaft, of a car transmission; see Porsche Intermediate Shaft Bearing issue
Computing and Internet
IBM Information Management System
Internet Map Server
Internet Medieval Sourcebook
IP Multimedia Subsystem, a framework for delivering services over mobile networks
Other uses
Indianapolis Motor Speedway, Speedway, Indiana, US
Integrated master schedule, a US DoD planning tool
International Measurement System, a handicapping system used in sailboat racing
International Music Summit, a yearly dance conference held in Ibiza
See also
Ims, Norwegian surname
Comparison of Intel processors
The x86 architecture is used in most high-end compute-intensive computers, including cloud computing, servers, workstations, and many less powerful computers, including personal computer desktops and laptops. The ARM architecture is used in most other product categories, especially high-volume battery-powered mobile devices such as smartphones and tablet computers.
Some Xeon Phi processors support four-way hyper-threading, effectively quadrupling the number of threads. Before the Coffee Lake architecture, most Xeons and all desktop and mobile Core i3s and i7s supported hyper-threading, while only dual-core mobile i5s supported it. With Coffee Lake, increased core counts meant hyper-threading was no longer needed for the Core i3, which then replaced the i5 with four physical cores on the desktop platform. The desktop Core i7 no longer supported hyper-threading; instead, the higher-performing Core i9 supported hyper-threading on both mobile and desktop platforms. Before 2007 and after Kaby Lake, some Intel Pentiums supported hyper-threading; Celeron and Atom processors never did.
Intel processors table
See also
Intel Corporation
List of Intel microprocessors
List of Intel Atom microprocessors
List of Intel Itanium microprocessors
List of Intel Celeron microprocessors
List of Intel Pentium processors
List of Intel Pentium Pro microprocessors
List of Intel Pentium II microprocessors
List of Intel Pentium III microprocessors
List of Intel Pentium 4 processors
List of Intel Pentium D microprocessors
List of Intel Pentium M microprocessors
List of Intel Xeon microprocessors
List of Intel Core processors
List of Intel Core 2 microprocessors
List of Intel Core i3 microprocessors
List of Intel Core i5 processors
List of Intel Core i7 microprocessors
List of Intel Core i9 processors
List of Intel CPU microarchitectures
List of AMD processors
List of AMD CPU microarchitectures
Table of AMD processors
List of AMD graphics processing units
List of Intel graphics processing units
List of Nvidia graphics processing units
External links
Intel – source for specifications of Intel processors
Comparison Charts for Intel Core Desktop Processor Family
Intel - Microprocessor Quick Reference Guide
Intel
Comparison
Intel processors
OpenPicus
OpenPicus is an Italian hardware company that designs and produces Internet of Things system-on-modules called Flyport. Flyport is open hardware, and the openPicus framework and IDE are open-source software.
Flyport is a stand-alone system-on-module; no external processor is needed to create IoT applications.
History
OpenPicus was founded by Claudio Carnevali and Gabriele Allegria during 2011. The idea was to create a hardware and software open platform to speed up the development of professional IoT devices and services.
By the end of 2018, the OpenPicus wiki and all related open-hardware information had disappeared from the Internet, as the founders of OpenPicus now promote the brand name IOmote, converting their know-how into a commercial business. Some old information (wiki, tutorials, etc.) for OpenPicus boards can be recovered via the Internet Archive Wayback Machine (https://web.archive.org/).
Product
Flyport is a smart, connected system-on-module for the Internet of Things. It is powered by a powerful yet lightweight open-source framework (based on FreeRTOS) that manages the TCP/IP software stack, the user application and the integrated web server.
Flyport is available in three pin-compatible versions:
FlyportPRO Wi-Fi 802.11g
FlyportPRO GPRS quadband
FlyportPRO Ethernet
The Flyport system-on-module is based on a Microchip Technology PIC24 low-power processor. It is used to connect and control systems over the Internet through an embedded customizable web server or the standard TCP/IP services; a generic sketch of this idea follows. The integrated microcontroller runs the customer application, so no host processor is needed. The pinout is very flexible, since it is customizable in software.
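As a rough, generic illustration of the embedded-web-server concept (written in Python for brevity; the actual Flyport framework is C on FreeRTOS, and the endpoint and pin names below are invented for illustration), a device can expose named I/O pins through simple HTTP requests:

```python
# Generic sketch of "an embedded web server controls device I/O".
# Illustrative only: not the openPicus framework, which is C on FreeRTOS.
from http.server import BaseHTTPRequestHandler, HTTPServer

PINS = {"led1": False, "relay1": False}  # hypothetical output pins

class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /toggle?pin=led1 flips a named output pin
        if self.path.startswith("/toggle?pin="):
            pin = self.path.split("=", 1)[1]
            if pin in PINS:
                PINS[pin] = not PINS[pin]
                status, body = 200, f"{pin} -> {'on' if PINS[pin] else 'off'}"
            else:
                status, body = 404, "unknown pin"
        else:
            status, body = 200, repr(PINS)  # report all pin states
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()
```

On real hardware the handler would drive GPIO registers rather than a dictionary, but the control flow is the same.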
Flyport can connect with several cloud servers such as Evrthng, Xively, ThingSpeak and many more.
Licensing
Hardware: Schematics are released under CC-BY 3.0
Software: Framework is released under LGPL 3.0
See also
openPicus website
openPicus Wiki
Free hardware
References
Single-board computers
Networking hardware
Internet of things companies
Home automation
Canonical account
A canonical account (or built-in account), in the context of computer software and systems, is an account that is included by default with a program or firmware. Such accounts usually also have a default password and may have certain access rights by default.
Such accounts, their passwords and their default permissions are usually common knowledge, given that anyone possessing a copy of the software, the device or their documentation will likely know of the account. A common security measure is therefore to change the account's password and to double-check or modify the groups (if any) it is included in, or simply to disable or delete the account if it is not required.
Examples
Zyxel routers typically have admin as their default firmware administration account and 1234 as the default password. The password can and should be changed as soon as possible.
Microsoft Windows 2000 and XP, and possibly other versions, have an account named Guest by default, which has no password and grants a very basic access to the operating system. Even though it is disabled by default, some administrators may choose to activate it, change the password and disable it once more for good measure. This account cannot be deleted.
If not blank, canonical passwords are usually simple (the sketch after this list probes a few such defaults) and may often be:
A simple sequence: 1234, 4321, abcd
The same as the account: if the account is bob, the password will also be bob
A word relating to the account or software: support, finance, windows
Simply password, pass
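The following minimal Python sketch illustrates the kind of audit this section recommends: it tries a short list of well-known default credential pairs against a device's login endpoint. The endpoint URL, form field names and credential list are illustrative assumptions, not the interface of any particular product.

```python
# Minimal sketch: probing a device for default (canonical) credentials.
# The endpoint URL and form field names are hypothetical examples.
import urllib.parse
import urllib.request

DEFAULT_CREDENTIALS = [
    ("admin", "1234"),      # common router default
    ("admin", "admin"),     # account name reused as password
    ("guest", ""),          # blank-password guest account
    ("support", "support"), # role-named account
]

def try_login(base_url: str, user: str, password: str) -> bool:
    """Return True if the hypothetical login endpoint accepts the pair."""
    data = urllib.parse.urlencode({"user": user, "pass": password}).encode()
    req = urllib.request.Request(base_url + "/login", data=data)
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except OSError:  # covers URLError/HTTPError and network failures
        return False

def audit(base_url: str) -> list:
    """List any default credential pairs the device still accepts."""
    return [(u, p) for u, p in DEFAULT_CREDENTIALS if try_login(base_url, u, p)]

if __name__ == "__main__":
    for user, password in audit("http://192.168.1.1"):
        print(f"Default account still active: {user!r} / {password!r}")
```

Such a check should, of course, only be run against devices one is authorized to test.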
References
External links
Default Router Password List
Alecto - Default Password List Project
Password authentication
Instructions per second
Instructions per second (IPS) is a measure of a computer's processor speed. For complex instruction set computers (CISCs), different instructions take different amounts of time, so the value measured depends on the instruction mix; even for comparing processors in the same family the IPS measurement can be problematic. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches and no cache contention, whereas realistic workloads typically lead to significantly lower IPS values. Memory hierarchy also greatly affects processor performance, an issue barely considered in IPS calculations. Because of these problems, synthetic benchmarks such as Dhrystone are now generally used to estimate computer performance in commonly used applications, and raw IPS has fallen into disuse.
The term is commonly used in association with a metric prefix (k, M, G, T, P, or E) to form kilo instructions per second (kIPS), million instructions per second (MIPS), and billion instructions per second (GIPS) and so on. Formerly TIPS was used occasionally for "thousand ips".
Computing
IPS can be calculated using this equation:

IPS = sockets × (cores per socket) × (clock rate) × (instructions per cycle)

However, the instructions-per-cycle measurement depends on the instruction sequence, the data and external factors.
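As a worked example of the equation, here is a minimal Python sketch; the socket count, core count, clock rate and average IPC below are illustrative values, not measurements of any particular processor.

```python
# Minimal sketch of the IPS equation with illustrative values.
def instructions_per_second(sockets: int, cores_per_socket: int,
                            clock_hz: float, ipc: float) -> float:
    """IPS = sockets x cores/socket x clock rate x instructions per cycle."""
    return sockets * cores_per_socket * clock_hz * ipc

# Hypothetical 4-core, 3 GHz processor averaging 2 instructions per cycle:
ips = instructions_per_second(sockets=1, cores_per_socket=4,
                              clock_hz=3.0e9, ipc=2.0)
print(f"{ips / 1e6:,.0f} MIPS")  # -> 24,000 MIPS
```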
Thousand instructions per second (TIPS/kIPS)
Before standard benchmarks were available, average speed rating of computers was based on calculations for a mix of instructions with the results given in kilo Instructions Per Second (kIPS). The most famous was the Gibson Mix, produced by Jack Clark Gibson of IBM for scientific applications in 1959.
Other ratings, such as the ADP mix which does not include floating point operations, were produced for commercial applications. The thousand instructions per second (kIPS) unit is rarely used today, as most current microprocessors can execute at least a million instructions per second.
The Gibson Mix
Gibson divided computer instructions into 12 classes, based on the IBM 704 architecture, adding a 13th class to account for indexing time. Weights were primarily based on analysis of seven scientific programs run on the 704, with a small contribution from some IBM 650 programs. The overall score was then the weighted sum of the average execution speed for instructions in each class.
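A minimal Python sketch of how such an instruction-mix rating is computed appears below; the class names, weights and per-class timings are illustrative stand-ins, not Gibson's published figures.

```python
# Sketch of an instruction-mix rating in the style of the Gibson Mix.
# Weights (summing to 1.0) and per-class execution times are illustrative.
ILLUSTRATIVE_MIX = {
    # class name: (weight, average execution time in microseconds)
    "load/store":        (0.31, 12.0),
    "fixed-point add":   (0.18, 12.0),
    "compare/branch":    (0.22, 12.0),
    "indexing":          (0.18, 12.0),
    "floating add/sub":  (0.07, 84.0),
    "floating multiply": (0.04, 204.0),
}

def mix_rating_kips(mix: dict) -> float:
    """Weighted-average instruction time, converted to thousands of IPS."""
    avg_time_us = sum(w * t for w, t in mix.values())
    # 1e6 us/s divided by avg_time_us gives IPS; dividing by 1e3 gives kIPS.
    return 1_000.0 / avg_time_us

print(f"{mix_rating_kips(ILLUSTRATIVE_MIX):.1f} kIPS")  # ~40.5 kIPS
```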
Millions of instructions per second (MIPS)
The speed of a given CPU depends on many factors, such as the type of instructions being executed, the execution order and the presence of branch instructions (problematic in CPU pipelines). CPU instruction rates are different from clock frequencies, usually reported in Hz, as each instruction may require several clock cycles to complete or the processor may be capable of executing multiple independent instructions simultaneously. MIPS can be useful when comparing performance between processors made with similar architecture (e.g. Microchip branded microcontrollers), but they are difficult to compare between differing CPU architectures. This led to the term "Meaningless Indicator of Processor Speed," or less commonly, "Meaningless Indices of Performance," being popular amongst technical people by the mid-1980s.
For this reason, MIPS has become not a measure of instruction execution speed, but task performance speed compared to a reference. In the late 1970s, minicomputer performance was compared using VAX MIPS, where computers were measured on a task and their performance rated against the VAX 11/780 that was marketed as a 1 MIPS machine. (The measure was also known as the VAX Unit of Performance or VUP.) This was chosen because the 11/780 was roughly equivalent in performance to an IBM System/370 model 158–3, which was commonly accepted in the computing industry as running at 1 MIPS.
Many minicomputer performance claims were based on the Fortran version of the Whetstone benchmark, giving Millions of Whetstone Instructions Per Second (MWIPS). The VAX 11/780 with FPA (1977) runs at 1.02 MWIPS.
Effective MIPS speeds are highly dependent on the programming language used. The Whetstone Report has a table showing MWIPS speeds of PCs via early interpreters and compilers up to modern languages. The first PC compiler was for BASIC (1982) when a 4.8 MHz 8088/87 CPU obtained 0.01 MWIPS. Results on a 2.4 GHz Intel Core 2 Duo (1 CPU 2007) vary from 9.7 MWIPS using BASIC Interpreter, 59 MWIPS via BASIC Compiler, 347 MWIPS using 1987 Fortran, 1,534 MWIPS through HTML/Java to 2,403 MWIPS using a modern C/C++ compiler.
For most early 8-bit and 16-bit microprocessors, performance was measured in thousands of instructions per second (1,000 kIPS = 1 MIPS).
zMIPS refers to the MIPS measure used internally by IBM to rate its mainframe servers (zSeries, IBM System z9, and IBM System z10).
Weighted million operations per second (WMOPS) is a similar measurement, used for audio codecs.
Timeline of instructions per second
See also
TOP500
FLOPS - floating-point operations per second
SUPS
Benchmark (computing)
BogoMips (measurement of CPU speed made by the Linux kernel)
Instructions per cycle
Cycles per instruction
Dhrystone (benchmark) - DMIPS integer benchmark
Whetstone (benchmark) - floating-point benchmark
Million service units (MSU)
Orders of magnitude (computing)
Performance per watt
Data-rate units
References
Computer performance
Units of frequency
ISO/IEC JTC 1/SC 36
ISO/IEC JTC 1/SC 36 Information Technology for Learning, Education and Training is a standardization subcommittee (SC), which is part of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), that develops and facilitates standards within the field of information technology (IT) for learning, education and training (LET). ISO/IEC JTC 1/SC 36 was established at the November 1999 ISO/IEC JTC 1 plenary in Seoul, Korea. The subcommittee held its first plenary meeting in March 2000 in London, United Kingdom. The international secretariat of ISO/IEC JTC 1/SC 36 is the Korean Agency for Technology and Standards (KATS), located in the Republic of Korea.
Scope
The scope of ISO/IEC JTC 1/SC 36 is “Standardization in the field of information technologies for learning, education, and training (ITLET) to support individuals, groups, or organizations, and to enable interoperability and reusability of resources and tools.”
The following exclusions apply to the ISO/IEC JTC 1/SC 36 scope:
ISO/IEC JTC 1/SC 36 shall not create standards or technical reports that define educational standards, cultural conventions, learning objectives, or specific learning content.
In the area of work of this subcommittee, standards and technical reports will not duplicate work done by other ISO or IEC technical committees, subcommittees, or working groups with respect to their component, specialty, or domain. Instead, when appropriate, normative or informative references to other standards shall be included. Examples include documents on special topics such as multimedia, web content, cultural adaptation, and security.
The Chairperson and Committee Manager of SC36
Chairperson
Mr. Frank Farance: 1999 – 2005
Dr. Bruce Peoples: 2005 – 2012
Mr. Erlend Overby: 2013 – 2021
Dr. Jon Mason: 2022 – 2024
Committee Manager
United States: 1999 – 2003
United Kingdom: 2004 – 2007
Republic of Korea: 2008 – present
Structure
ISO/IEC JTC 1/SC 36 is made up of 5 Working Groups (WGs), 3 Advisory Groups (AGs) and 1 Ad-Hoc Group (AHG). Each Working Group carries out specific tasks in standards development within the field of ITLET, where the focus of each working group is described in the group's terms of reference. The Working Groups of ISO/IEC JTC 1/SC 36 are:
Collaborations
ISO/IEC JTC 1/SC 36 works in close collaboration with a number of other organizations or subcommittees, both internal and external to ISO or IEC, in order to avoid conflicting or duplicative work.
Organizations internal to ISO or IEC that collaborate with or are in liaison to ISO/IEC JTC 1/SC 36 include:
ISO/IEC JTC 1/SC 17, Cards and personal identification
ISO/IEC JTC 1/SC 27, IT Security techniques
ISO/IEC JTC 1/SC 32, Data management and interchange
ISO/IEC JTC 1/SC 34, Document description and processing languages
ISO/IEC JTC 1/SC 35, User interfaces
ISO/IEC JTC 1/SC 39, Sustainability for and by information technology
ISO/TC 37, Terminology and other language and content resources
ISO/TC 46, Information and documentation
ISO/TC 176, Quality management and quality assurance
ISO/TC 215, Health informatics
ISO/TC 232, Learning services outside formal education
ISO/PC 288, Educational organizations management systems – Requirements with guidance for use
ISO/PC 288/WG1, Educational organizations management systems
Organizations external to ISO or IEC that collaborate with, or are in liaison to, ISO/IEC JTC 1/SC 36 include:
Current (2021)
Agence universitaire de la Francophonie (AUF)
International Information Centre for Terminology(Infoterm)
IEEE LTSC, Learning Technology Standards Committee
Past
Advanced Distributed Learning (ADL)
Aviation Industry Computer-Based Training Committee (AICC)
Cartago Alliance
CEN TC 353, Information and Communication Technologies for Learning, Education and Training
Dublin Core Metadata Initiative (DCMI)
IMS Global Learning Consortium
International Federation for Learning, Education, and Training Systems Interoperability (LETSI)
International Digital Publishing Forum (IDPF)
W3C:Indie UI, W3C Web Accessibility Independent User Interface
Schema.org
Member countries
Countries pay a fee to ISO to be members of subcommittees.
The 20 "P" (participating) members of ISO/IEC JTC 1/SC 36 are: Australia (SA), Canada (SCC), China (SAC), Finland (SFS), France (AFNOR), Germany (DIN), India (BIS), Italy (UNI), Japan (JISC), Kazakhstan(KAZMEST), Republic of Korea (KATS), Netherlands (NEN), Norway (SN), Portugal (IPQ), Russian Federation (GOST R), Slovakia (SOSMT), South Africa (SABS), Spain (AENOR), Ukraine (DSTU), United Kingdom (BSI).
The 27 "O" (observer) members of ISO/IEC JTC 1/SC 36 are: Algeria (IANOR), Argentina (IRAM), Austria (ASI), Belgium (NBN), Bosnia and Herzegovina (BAS), Colombia (ICONTEC), Czech Republic (UNMZ), Ethiopia(ESA), Ghana (GSA), Greece(NQIS ELOT), Hong Kong (ITCHKSAR), Hungary (MSZT), Indonesia (BSN), Iran, Islamic Republic of (ISIRI), Ireland (NSAI), Kenya (KEBS), New Zealand (SNZ), Pakistan (PSQCA), Philippines(BPS), Romania (ASRO), Saudi Arabia (SASO), Serbia (ISS), Sweden (SIS), Switzerland (SNV), Tunisia (INNORPI), Turkey (TSE). Uganda (UNBS)
※ Note: Name of country (name of national body)
Published standards
As of 2022, ISO/IEC JTC 1/SC 36 has 55 published standards within the field of Information technology for learning, education and training, including:
Meetings
Since the November 1999 JTC 1 plenary in Seoul, SC 36 held plenary meetings every six months, in March and September, until 2011; since 2012, it has held one plenary meeting per year, in June.
See also
ISO/IEC JTC1
List of ISO Standards
Korean Agency for Technology and Standards
International Organization for Standardization
International Electrotechnical Commission
References
External links
ISO/IEC JTC 1/SC 36 page at ISO
Libre Computer Project
The Libre Computer Project is an effort initiated by Shenzhen Libre Technology Co., Ltd., with the goal of producing standards-compliant single-board computers (SBCs) and an upstream software stack to power them.
Hardware
The Libre Computer Project uses crowdfunding on Indiegogo and Kickstarter to market its SBC designs. Delivery and after-sales support have been poor, resulting in many complaints and dissatisfied backers.
Active Libre Computer SBC designs include:
ROC-RK3328-CC (Renegade)
The ROC-RK3328-CC "Renegade" board was funded on Indiegogo and features the following specifications:
Rockchip RK3328 SoC
4 ARM Cortex-A53 @ 1.4GHz
Cryptography Extensions
2G + 2P ARM Mali-450 @ 500MHz
OpenGL ES 1.1 / 2.0
OpenVG 1.1
Multi-Media Processor
Decoders
VP9 P2 4K60
H.265 MP10 @ L5.1 4K60
H.264 H10P @ L5.1 4K60
JPEG
Encoders
H.265 1080P30 or 2x H.264 720P30
H.264 1080P30 or 2x H.264 720P30
Up to 4GB DDR4-2133 SDRAM
2 USB 2.0 Type A
1 USB 3.0 Type A
Gigabit Ethernet
3.5mm TRRS AV Jack
HDMI 2.0
MicroUSB Power In
MicroSD Card Slot with UHS support
eMMC Interface with 5.x support
IR Receiver
U-Boot Button
40 Pin Low Speed Header (PWM, I2C, SPI, GPIO)
ADC Header
Power Enable/On Header
AML-S905X-CC (Le Potato)
The AML-S905X-CC "Le Potato" board was funded on Kickstarter on 24 July 2017 and features the following specifications:
Amlogic S905X SoC
4 ARM Cortex-A53 @ 1.512GHz
Cryptography Extension
2G + 3P ARM Mali-450 @ 750MHz
OpenGL ES 1.1 / 2.0
OpenVG 1.1
Amlogic Video Engine 10
Decoders
VP9 P2 4K60
H.265 MP10@L5.1 4K60
H.264 HP@L5.1 4K30
JPEG / MJPEG
Encoders
H.264 1080P60
JPEG
Up to 2GB DDR3 SDRAM
4 USB 2.0 Type A
100 Mb Fast Ethernet
3.5mm TRRS AV Jack
HDMI 2.0
MicroUSB Power In
MicroSD Card Slot
eMMC Interface
IR Receiver
U-Boot Button
40 Pin Low Speed Header (PWM, I2C, SPI, GPIO)
Audio Headers (I2S, ADC, SPDIF)
UART Header
NOTE: GPIO header pin 11 or HDMI CEC is selectable by an onboard jumper. They cannot be used at the same time, since they share the same pad.
ALL-H3-CC (Tritium)
The "Tritium" board was funded on Kickstarter on 13 January 2018 with the following specifications:
Software
Operating systems
Open Source
Software
Libre Computer is focused on upstream support in open-source software using standardized API interfaces. This includes Linux, u-boot, LibreELEC, RetroArch, and more. A variety of open-source operating systems may be used on Libre Computer boards, including Linux and Android. Few to no binary blobs are used to boot and operate the boards.
Hardware
Schematics and 2D silkscreen are available for all hardware. Design files are based on non-disclosure materials from SoC vendors. CAD files are not available.
See also
Comparison of single-board computers
List of open-source hardware projects
OLinuXino
BeagleBoard
Raspberry Pi
References
Open-source computing hardware
Microcomputers
Motherboard companies
Single-board computers
Hercules (emulator)
Hercules is a computer emulator allowing software written for IBM mainframe computers (System/370, System/390, and zSeries/System z) and for plug compatible mainframes (such as Amdahl machines) to run on other types of computer hardware, notably on low-cost personal computers. Development started in 1999 by Roger Bowler, a mainframe systems programmer.
Hercules runs under multiple parent operating systems including Linux, Microsoft Windows, FreeBSD, NetBSD, Solaris, and Mac OS X and is released under the open source software license QPL. It is analogous to Bochs and QEMU in that it emulates CPU instructions and select peripheral devices only. A vendor (or distributor) must still provide an operating system, and the user must install it. Hercules was the first mainframe emulator to incorporate 64-bit z/Architecture support.
Design
The emulator is written almost entirely in C. Its developers ruled out using machine-specific assembly code to avoid problems with portability even though such code could significantly improve performance. There are two exceptions: Hercules uses hardware assists to provide inter-processor consistency when emulating multiple CPUs on SMP host systems, and Hercules uses assembler assists to convert between little-endian and big-endian data on platforms where the operating system provides such services and on x86/x86-64 processors.
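As a rough illustration of the byte-order problem those assists address (a generic Python sketch, not Hercules's actual C or assembler code), an emulator of a big-endian architecture running on a little-endian host must convert on every storage access:

```python
# Generic sketch: big-endian guest storage accessed from any host byte order.
# Illustrative Python only, not Hercules's actual C/assembler assists.
import struct

def fetch_fullword(storage, addr: int) -> int:
    """Read a 4-byte big-endian fullword from emulated storage."""
    return struct.unpack_from(">I", storage, addr)[0]

def store_fullword(storage: bytearray, addr: int, value: int) -> None:
    """Write a host integer back as a 4-byte big-endian fullword."""
    struct.pack_into(">I", storage, addr, value)

storage = bytearray(b"\x00\x00\x12\x34\x00\x00\x00\x00")
value = fetch_fullword(storage, 0)      # 0x1234 regardless of host byte order
store_fullword(storage, 4, value + 1)
assert fetch_fullword(storage, 4) == 0x1235
```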
Operating systems status
Hercules is technically compatible with all IBM mainframe operating systems, even older versions which no longer run on newer mainframes. However, many mainframe operating systems require vendor licenses to run legally. Newer licensed operating systems, such as OS/390, z/OS, VSE/ESA, z/VSE, VM/ESA, z/VM, TPF/ESA, and z/TPF are technically compatible but cannot legally run on the Hercules emulator except in very limited circumstances, and they must always be licensed from IBM. IBM's Coupling Facility control code, which enables Parallel Sysplex, and UTS also require licenses to run.
Operating systems which may legally be run, without license costs, on Hercules include:
Older IBM operating systems, including OS/360, DOS/360, DOS/VS, MVS, VM/370, and TSS/370, which are either public domain or "copyrighted software provided without charge."
The MUSIC/SP operating system may be available for educational and demonstration purposes upon request to its copyright holder, McGill University. Some of MUSIC/SP's features, notably networking, require z/VM (and thus an IBM license). However, a complete demonstration version of MUSIC/SP, packaged with the alternative Sim390 mainframe emulator, is available.
The Michigan Terminal System (MTS) version 6.0A has been tailored to run under Hercules.
There is no known legal restriction to running open-source operating systems Linux on IBM Z and OpenSolaris for System z on the Hercules emulator. They run well on Hercules, and many Linux on IBM Z developers do their work using Hercules. Several distributors provide 64-bit z/Architecture versions of Linux, and some also provide ESA/390-compatible versions. Mainframe Linux distributions include SUSE Linux Enterprise Server, Red Hat Enterprise Linux, Debian, CentOS, and Slackware. Sine Nomine Associates brought OpenSolaris to System z, relying on features provided by z/VM. Emulation of those specific z/VM features for OpenSolaris is included starting with Hercules Version 3.07.
Certain unencumbered editors and utilities which can run on a mainframe without a parent operating system may be available to run on Hercules as well.
PDOS/3X0 (Public Domain Operating System, mainframe version)
Usage
Hercules can be used as a development environment to verify that code is portable (across Linux processor architectures, for example), supports symmetric multiprocessing (SMP), and is 64-bit "clean."
There is also a large community of current and former mainframe operators and programmers, as well as those with no prior experience, who use Hercules and the public domain IBM operating systems as a hobby and for learning purposes. Most of the skills acquired when exploring classic IBM mainframe operating system versions are still relevant when transitioning to licensed IBM machines running the latest versions.
The open source nature of Hercules means that anyone can produce their own customized version of the emulator. For example, a group of developers independent of the Hercules project implemented a hybrid mainframe architecture which they dubbed "S/380" using modifications to both Hercules and to freely available classic versions of MVS (and later VM and DOS/VS), enhancing the operating systems with some degree of 31-bit (and as of 2016, 64-bit) binary compatibility with later operating system versions (and as of 2018, 32-bit is also supported).
Performance
It is difficult to determine exactly how Hercules emulation performance corresponds to real mainframe hardware, but the performance characteristics are understandably quite different. This is partially due to the difficulty of comparing real mainframe hardware to other PCs and servers as well as the lack of concrete, controlled performance comparisons. Performance comparisons are likely legally impossible for licensed IBM operating systems, and those operating systems are quite different from other operating systems, such as Linux.
Hercules expresses its processing performance in MIPS. Due to the age of the earlier System/360 and System/370 hardware, it is a relatively safe assumption that Hercules will outperform them when running on moderately powerful hardware, despite the considerable overhead of emulating a computer architecture in software. However, newer, partially or fully configured System z machines outperform Hercules by a wide margin. A relatively fast dual processor X86 machine running Hercules is capable of sustaining about 50 to 60 MIPS for code that utilizes both processors in a realistic environment, with sustained rates rising to a reported 300 MIPS on leading-edge (early 2009) PC-class systems. Hercules can produce peaks of over 1200 MIPS when running in a tight loop, such as in a synthetic instruction benchmark or with other small, compute-intensive programs.
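The gap between tight-loop peaks and realistic workloads is easy to demonstrate with a toy interpreter; the two-instruction "architecture" in this Python sketch is invented purely for illustration and says nothing about Hercules's actual rates.

```python
# Toy sketch: measuring an interpreter's emulated-instruction rate in MIPS.
import time

ADD, NOP = 0, 1  # invented two-instruction "architecture"

def run(program, iterations: int) -> float:
    """Execute the program repeatedly; return emulated MIPS."""
    acc = 0
    start = time.perf_counter()
    for _ in range(iterations):
        for op in program:
            if op == ADD:
                acc += 1
            # NOP: no work, but it still counts as one emulated instruction
    elapsed = time.perf_counter() - start
    return len(program) * iterations / elapsed / 1e6

tight_loop = [ADD, NOP] * 50  # small, cache-friendly instruction stream
print(f"tight-loop rate: {run(tight_loop, 10_000):.1f} MIPS")
```

Even this trivial dispatcher spends most of its time on fetch-and-decode overhead rather than on the emulated work itself, which is why emulators report far lower instruction rates than the host CPU's native peak.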
Tom Lehmann, co-founder of TurboHercules, wrote:
Hercules generally outperforms IBM's PC-based mainframes from the mid-1990s, which have an advertised peak performance of around 29 MIPS. Compared to the more powerful but still entry-level IBM Multiprise 2000 and 3000 mainframes (also from the 1990s), Hercules on typical x86 hardware would be considered a mid-range server in performance terms. For every mainframe after the 9672 Generation 1, Hercules would generally be the lowest-end system. For comparison, current high-end IBM zEnterprise 196 systems can deliver over 52,000 MIPS per machine, and they have considerable I/O performance advantages. With the same number of emulated Sys Z processors, z/PDT is about 3 times faster than Hercules.
Note that there are other non-functional system attributes beyond performance which are typically relevant to mainframe operators.
TurboHercules
In 2009, Roger Bowler founded TurboHercules SAS, based in France, to commercialize the Hercules technology. In July 2009, TurboHercules SAS asked IBM to license z/OS to its customers for use on systems sold by TurboHercules. IBM declined the company's request. In March 2010, TurboHercules SAS filed a complaint with European Commission regulators, alleging that IBM infringed EU antitrust rules through its alleged tying of mainframe hardware to its mainframe operating system, and the EC opened a preliminary investigation. In November 2010, TurboHercules announced that it had received an investment from Microsoft Corporation. In September 2011, EC regulators closed their investigation without action.
See also
PC-based IBM-compatible mainframes – z/Architecture and today
References
External links
Hercules, Son of Z's (Review on Tech-news.com)
Public domain OS library (MVS version 3.8, VM/CMS release 6, DOS/VS release 34, TSS/370 version 3)
Public domain software archive (includes Turnkey MVS CD image)
Linux emulation software
Free emulation software
MacOS emulation software
Windows emulation software
IBM System/360 mainframe line
Xbox 360
The Xbox 360 is a home video game console developed by Microsoft. As the successor to the original Xbox, it is the second console in the Xbox series. It competed with Sony's PlayStation 3 and Nintendo's Wii as part of the seventh generation of video game consoles. It was officially unveiled on MTV on May 12, 2005, with detailed launch and game information announced later that month at the 2005 Electronic Entertainment Expo.
The Xbox 360 features an online service, Xbox Live, which was expanded from its previous iteration on the original Xbox and received regular updates during the console's lifetime. Available in free and subscription-based varieties, Xbox Live allows users to: play games online; download games (through Xbox Live Arcade) and game demos; purchase and stream music, television programs, and films through the Xbox Music and Xbox Video portals; and access third-party content services through media streaming applications. In addition to online multimedia features, it allows users to stream media from local PCs. Several peripherals have been released, including wireless controllers, expanded hard drive storage, and the Kinect motion sensing camera. The release of these additional services and peripherals helped the Xbox brand grow from gaming-only to encompassing all multimedia, turning it into a hub for living-room computing entertainment.
Launched worldwide across 2005–2006, the Xbox 360 was initially in short supply in many regions, including North America and Europe. The earliest versions of the console suffered from a high failure rate, indicated by the so-called "Red Ring of Death", necessitating an extension of the device's warranty period. Microsoft released two redesigned models of the console: the Xbox 360 S in 2010, and the Xbox 360 E in 2013. Xbox 360 is the sixth-highest-selling home video game console in history, and the highest-selling console made by an American company. Although not the best-selling console of its generation, the Xbox 360 was deemed by TechRadar to be the most influential through its emphasis on digital media distribution and multiplayer gaming on Xbox Live.
The Xbox 360's successor, the Xbox One, was released on November 22, 2013. On April 20, 2016, Microsoft announced that it would end the production of new Xbox 360 hardware, although the company will continue to support the platform.
History
Development
Known during development as Xbox Next, Xenon, Xbox 2, Xbox FS or NextBox, the Xbox 360 was conceived in early 2003. In February 2003, planning for the Xenon software platform began, and was headed by Microsoft's Vice President J Allard. That month, Microsoft held an event for 400 developers in Bellevue, Washington to recruit support for the system. Also that month, Peter Moore, former president of Sega of America, joined Microsoft. On August 12, 2003, ATI signed on to produce the graphic processing unit for the new console, a deal which was publicly announced two days later. Before the launch of the Xbox 360, several Alpha development kits were spotted using Apple's Power Mac G5 hardware. This was because the system's PowerPC 970 processor ran the same PowerPC architecture that the Xbox 360 would eventually run on with IBM's Xenon processor. The cores of the Xenon processor were developed using a slightly modified version of the PlayStation 3's Cell Processor PPE architecture. According to David Shippy and Mickie Phipps, the IBM employees were "hiding" their work from Sony and Toshiba, IBM's partners in developing the Cell Processor. Jeff Minter created the music visualization program Neon, which is included with the Xbox 360.
Launch
The Xbox 360 was released on November 22, 2005, in the United States and Canada; December 2, 2005, in Europe and December 10, 2005, in Japan. It was later launched in Mexico, Brazil, Chile, Colombia, Hong Kong, Singapore, South Korea, Taiwan, Australia, New Zealand, South Africa, India, and Russia. In its first year in the market, the system was launched in 36 countries, more countries than any other console has launched in a single year.
Critical reception
In 2009, IGN named the Xbox 360 the sixth-greatest video game console of all time, out of a field of 25. Although not the best-selling console of the seventh-generation, the Xbox 360 was deemed by TechRadar to be the most influential, by emphasizing digital media distribution and online gaming through Xbox Live, and by popularizing game achievement awards. PC Magazine considered the Xbox 360 the prototype for online gaming as it "proved that online gaming communities could thrive in the console space". Five years after the Xbox 360's original debut, the well-received Kinect motion capture camera was released, which set the record of being the fastest selling consumer electronic device in history, and extended the life of the console. Edge ranked Xbox 360 the second-best console of the 1993–2013 period, stating "It had its own social network, cross-game chat, new indie games every week, and the best version of just about every multiformat game ... Killzone is no Halo and nowadays Gran Turismo is no Forza, but it's not about the exclusives—there's nothing to trump Naughty Dog's PS3 output, after all. Rather, it's about the choices Microsoft made back in the original Xbox's lifetime. The PC-like architecture meant the early EA Sports games ran at 60fps compared to only 30 on PS3, Xbox Live meant every dedicated player had an existing friends list, and Halo meant Microsoft had the killer next-generation exclusive. And when developers demo games on PC now they do it with a 360 pad—another industry benchmark, and a critical one."
Sales
The Xbox 360 began production only 69 days before launch, and Microsoft was not able to supply enough systems to meet initial consumer demand in Europe or North America, selling out completely upon release in all regions except in Japan. Forty thousand units were offered for sale on auction site eBay during the initial week of release, 10% of the total supply. By year's end, Microsoft had shipped 1.5 million units, including 900,000 in North America, 500,000 in Europe, and 100,000 in Japan.
In May 2008, Microsoft announced that 10 million Xbox 360s had been sold and that it was the "first current generation gaming console" to surpass the 10 million figure in the US. In the US, the Xbox 360 was the leader in current-generation home console sales until June 2008, when it was surpassed by the Wii. By the end of March 2011, Xbox 360 sales in the US had reached 25.4 million units. Between January 2011 and October 2013, the Xbox 360 was the best-selling console in the United States for these 32 consecutive months. By the end of 2014, Xbox 360 sales had surpassed sales of the Wii, making the Xbox 360 the best-selling 7th-generation console in the US once again. In Canada, the Xbox 360 has sold a total of 870,000 units as of August 1, 2008.
In Europe, the Xbox 360 has sold seven million units as of November 20, 2008. In the United Kingdom, the Xbox 360 had sold 3.2 million units by January 2009, according to GfK Chart-Track. The 8 million unit mark was crossed in the UK by February 2013. Sales of the Xbox 360 would overtake sales of the Wii later that year, making the Xbox 360 the best-selling 7th-generation console in the UK, with lifetime sales over 9 million units. Over 1 million units were sold in Spain across the console's lifecycle.
The Xbox 360 crossed the 1 million units sold in Japan in March 2009, and the 1.5 million units sold in June 2011. While the Xbox 360 has sold poorly in Japan, selling 1.63 million units,
it improved upon the sales of the original Xbox, which had sold only 450,000 units. Furthermore, the Xbox 360 managed to outsell both the PlayStation 3 and Wii the week ending September 14, 2008, as well as the week ending February 22, 2009, when the Japanese Xbox 360 exclusives Infinite Undiscovery and Star Ocean: The Last Hope, were released those weeks, respectively. Ultimately, Edge magazine would report that Microsoft had been unable to make serious inroads into the dominance of domestic rivals Sony and Nintendo; adding that lackluster sales in Japan had led to retailers scaling down and in some cases, discontinuing sales of the Xbox 360 completely. The significance of Japan's poor sales might be overstated in the media in comparison to overall international sales.
Legacy
The Xbox 360 sold much better than its predecessor, and although not the best-selling console of the seventh generation, it is regarded as a success since it strengthened Microsoft as a major force in the console market at the expense of well-established rivals. The inexpensive Nintendo Wii did sell the most console units but eventually saw a collapse of third-party software support in its later years, and it has been viewed by some as a fad since the succeeding Wii U had a poor debut in 2012. The PlayStation 3 struggled for a time due to being too expensive and initially lacking quality games, making it far less dominant than its predecessor, the PlayStation 2, and it took until late in the PlayStation 3's lifespan for its sales and games to reach parity with the Xbox 360. TechRadar proclaimed that "Xbox 360 passes the baton as the king of the hill – a position that puts all the more pressure on its successor, Xbox One".
The Xbox 360's advantage over its competitors was due to the release of high-profile games from both first-party and third-party developers. The 2007 Game Critics Awards honored the platform with 38 nominations and 12 wins – more than any other platform. By March 2008, the Xbox 360 had reached a software attach rate of 7.5 games per console in the US; the rate was 7.0 in Europe, while its competitors were 3.8 (PS3) and 3.5 (Wii), according to Microsoft. At the 2008 Game Developers Conference, Microsoft announced that it expected over 1,000 games available for Xbox 360 by the end of the year. As well as enjoying exclusives such as additions to the Halo franchise and Gears of War, the Xbox 360 has managed to gain a simultaneous release of games that were initially planned to be PS3 exclusives, including Devil May Cry 4, Ace Combat 6, Virtua Fighter 5, Grand Theft Auto IV, Final Fantasy XIII, Tekken 6, Metal Gear Solid: Rising, and L.A. Noire. In addition, Xbox 360 versions of cross-platform games were generally considered superior to their PS3 counterparts in 2006 and 2007, due in part to the difficulties of programming for the PS3.
TechRadar deemed the Xbox 360 as the most influential game system through its emphasis of digital media distribution, Xbox Live online gaming service, and game achievement feature. During the console's lifetime, the Xbox brand has grown from gaming-only to encompassing all multimedia, turning it into a hub for "living-room computing environment". Five years after the Xbox 360's original debut, the well-received Kinect motion capture camera was released, which became the fastest selling consumer electronic device in history, and extended the life of the console.
Microsoft announced the successor to the Xbox 360, the Xbox One, on May 21, 2013. On April 20, 2016, Microsoft announced the end of production of new Xbox 360 hardware, though the company will continue to provide hardware and software support for the platform as selected Xbox 360 games are playable on Xbox One. The Xbox 360 continued to be supported by major publishers with new games well into the Xbox One's lifecycle. New titles were still being released in 2018. The Xbox 360 continues to have an active player base years after the system's discontinuation. Speaking to Engadget at E3 2019 after the announcement of Project Scarlett, the next-generation of Xbox consoles after the Xbox One, Phil Spencer stated that there were still "millions and millions of players" active on the Xbox 360. After the launch of the Xbox Series X and S by the end of 2020, the Xbox 360 still had a 17.7% market share of all consoles in use in Mexico; comparatively, newer systems like the Xbox One and PlayStation 4 stood at 36.9% and 18.0% market share, respectively.
Hardware
The main unit of the Xbox 360 itself has slight double concavity in matte white or black. The official color of the white model is Arctic Chill. It features a port on the top when vertical (left side when horizontal) to which a custom-housed hard disk drive unit can be attached.
On the Slim and E models, the hard drive bay is on the bottom when vertical (right side when horizontal) and requires the opening of a concealed door to access it. (This does not void the warranty.) The Xbox 360 Slim/E hard drives are standard 2.5" SATA laptop drives, but have a custom enclosure and firmware so that the Xbox 360 can recognize it.
Technical specifications
Various hard disk drives have been produced, including options at 20, 60, 120, 250, 320, or 500 GB. Inside, the Xbox 360 uses the triple-core IBM designed Xenon as its CPU, with each core capable of simultaneously processing two threads, and can therefore operate on up to six threads at once. Graphics processing is handled by the ATI Xenos, which has 10 MB of eDRAM. Its main memory pool is 512 MB in size.
Accessories
Many accessories are available for the console, including both wired and wireless controllers, faceplates for customization, headsets for chatting, a webcam for video chatting, dance mats and Gamercize for exercise, three sizes of memory units and five sizes of hard drives (20, 60, 120, 250 (initially Japan only, but later also available elsewhere) and 320 GB), among other items, all of which are styled to match the console.
In 2006, Microsoft released the Xbox 360 HD DVD Player. The accessory was discontinued in 2008 after the format war had ended in Blu-ray's favor.
Kinect
Kinect is a "controller-free gaming and entertainment experience" for the Xbox 360. It was first announced on June 1, 2009, at the Electronic Entertainment Expo, under the codename, Project Natal. The add-on peripheral enables users to control and interact with the Xbox 360 without a game controller by using gestures, spoken commands and presented objects and images. The Kinect accessory is compatible with all Xbox 360 models, connecting to new models via a custom connector, and to older ones via a USB and mains power adapter. During their CES 2010 keynote speech, Robbie Bach and Microsoft CEO Steve Ballmer went on to say that Kinect would be released during the holiday period (November–January) and work with every 360 console. It was released on November 4, 2010.
AV output
Built-in
HDMI (only on models made after 2007)
S/PDIF (Slim model only)
Stereo audio and composite video via 3.5 mm jack (E model only)
Through AV connector (excluding E models which have no AV connector)
Composite Video
S-Video
SCART RGB
VGA
YPbPr
D-Terminal
S/PDIF
RCA – stereo audio
Retail configurations
At launch, the Xbox 360 was available in two configurations: the "Xbox 360" package (unofficially known as the 20 GB Pro or Premium), priced at US$399 or GB£279.99, and the "Xbox 360 Core", priced at US$299 and GB£209.99. The original shipment of the Xbox 360 version included a cut-down version of the Media Remote as a promotion. The Elite package was launched later at US$479. The "Xbox 360 Core" was replaced by the "Xbox 360 Arcade" in October 2007 and a 60 GB version of the Xbox 360 Pro was released on August 1, 2008. The Pro package was discontinued and marked down to US$249 on August 28, 2009, to be sold until stock ran out, while the Elite was also marked down in price to US$299.
Two major hardware revisions of the Xbox 360 have succeeded the original models; the Xbox 360 S (also referred to as the "Slim") replaced the original "Elite" and "Arcade" models in 2010. The S model carries a smaller, streamlined appearance with an angular case, and utilizes a redesigned motherboard designed to alleviate the hardware and overheating issues experienced by prior models. It also includes a proprietary port for use with the Kinect sensor. The Xbox 360 E, a further streamlined variation of the 360 S with a two-tone rectangular case inspired by Xbox One, was released in 2013. In addition to its revised aesthetics, the Xbox 360 E also has one fewer USB port, no AV connector (and thus is HDMI-only), and no longer supports S/PDIF.
Timeline
United States
November 22, 2005
Launch of Xbox 360 Premium (20 GB) – $399.99
Launch of Xbox 360 Core – $299.99
April 29, 2007
Launch Xbox 360 Elite (120 GB) – $479.99
August 6, 2007
Price cut on Xbox 360 Premium (20 GB) – $349.99
Price cut on Xbox 360 Core – $279.99
Price cut on Xbox 360 Elite – $449.99
October 27, 2007
Launch of Xbox 360 Arcade – $279.99
Discontinuation of Xbox 360 Core
July 13, 2008
Discontinuation of Xbox 360 (20 GB) (price cut to $299.99 for remaining stock)
August 1, 2008
Launch of Xbox 360 Premium (60 GB) – $349.99
September 5, 2008
Price cut on Xbox 360 Elite – $399.99
Price cut on Xbox 360 (60 GB) – $299.99
Price cut on Xbox 360 Arcade – $199.99
August 28, 2009
Discontinuation of Xbox 360 (60 GB) (price cut to $249.99 for remaining stock)
Price cut on Xbox 360 Elite – $299.99
June 19, 2010
Launch of Xbox 360 S 250 GB – $299.99
Discontinuation of Xbox 360 Elite (price cut to $249.99 for remaining stock)
Discontinuation of Xbox 360 Arcade (price cut to $149.99 for remaining stock)
August 3, 2010
Launch of Xbox 360 S 4 GB – $199.99
June 10, 2013
Launch of Xbox 360 E 4 GB – $199.99
Launch of Xbox 360 E 250 GB – $299.99
April 20, 2016
Discontinuation of all Xbox 360 models
Technical problems
The original model of the Xbox 360 has been subject to a number of technical problems. Since the console's release in 2005, users have reported concerns over its reliability and failure rate.
To aid customers with defective consoles, Microsoft extended the Xbox 360's manufacturer's warranty to three years for hardware failure problems that generate a "General Hardware Failure" error report. A "General Hardware Failure" is recognized on all models released before the Xbox 360 S by three quadrants of the ring around the power button flashing red. This error is often known as the "Red Ring of Death". In April 2009, the warranty was extended to also cover failures related to the E74 error code. The warranty extension is not granted for any other types of failures that do not generate these specific error codes.
After these problems surfaced, Microsoft attempted to modify the console to improve its reliability. Modifications included changes to the number, size, and placement of components, dabs of epoxy on the corners and edges of the CPU and GPU to glue them to the board and prevent movement during heat expansion, and a second GPU heatsink to dissipate more heat. With the release of the redesigned Xbox 360 S, the warranty for the newer models no longer includes the three-year extended coverage for "General Hardware Failures". The newer S and E models indicate system overheating by flashing the power button red and warning the user of imminent shutdown until the system has cooled, whereas on previous models the first and third quadrants of the ring around the power button lit up red. On the newer models, a power button alternating between green and red indicates a "General Hardware Failure", where older models lit three quadrants of the ring red.
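The light patterns described above amount to a small diagnostic code read off the ring of light and power button. The following minimal Python sketch paraphrases that mapping purely for illustration; the function and the pattern strings are invented for this example and are not an official Microsoft specification.

# Illustrative decoder for the fault patterns described above.
def diagnose(model, pattern):
    if model == "original":                 # models before the Xbox 360 S
        if pattern == "three quadrants flashing red":
            return "General Hardware Failure ('Red Ring of Death')"
        if pattern == "quadrants one and three red":
            return "Overheating warning"
    else:                                   # the S and E revisions
        if pattern == "power button flashing red":
            return "Overheating warning (shutdown until cooled)"
        if pattern == "power button alternating green and red":
            return "General Hardware Failure"
    return "No documented fault pattern"

print(diagnose("original", "three quadrants flashing red"))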
Software
Games
The Xbox 360 launched with 14 games in North America and 13 in Europe. The console's best-selling game for 2005, Call of Duty 2, sold over a million copies. Five other games sold over a million copies in the console's first year on the market: Ghost Recon Advanced Warfighter, The Elder Scrolls IV: Oblivion, Dead or Alive 4, Saints Row, and Gears of War. Gears of War would become the best-selling game on the console with 3 million copies in 2006, before being surpassed in 2007 by Halo 3 with over 8 million copies.
Six games were initially available in Japan, while eagerly anticipated games such as Dead or Alive 4 and Enchanted Arms were released in the weeks following the console's launch. Games targeted specifically at the region, such as Chromehounds, Ninety-Nine Nights, and Phantasy Star Universe, were also released in the console's first year. Microsoft also had the support of Japanese developer Mistwalker, founded by Final Fantasy creator Hironobu Sakaguchi. Mistwalker's first game, Blue Dragon, was released in 2006 and had a limited-edition bundle that sold out quickly with over 10,000 pre-orders. Blue Dragon is one of three Xbox 360 games to surpass 200,000 units in Japan, along with Tales of Vesperia and Star Ocean: The Last Hope. Mistwalker's second game, Lost Odyssey, also sold over 100,000 copies.
The 2007 Game Critics Awards honored the Xbox 360 platform with 38 nominations and 11 wins.
By 2015, game releases started to decline as most publishers instead focused on the Xbox One. The last official game released for the system was Just Dance 2019, released on October 23, 2018, in North America, and October 25 in Europe and Australia.
As one of the last updates to the system software following the console's discontinuation, Microsoft added the ability for Xbox 360 users to use cloud saves without an Xbox Live Gold subscription, ahead of the launch of the Xbox Series X and Series S in November 2020. The new consoles have backward compatibility for all Xbox 360 games that were already backward compatible on the Xbox One and can use any Xbox 360 game's cloud saves through this update, easing the transition to the new consoles.
Interface
The Xbox 360's original graphical user interface was the Xbox 360 Dashboard, a tabbed interface featuring five "blades" (originally four), designed by AKQA and Audiobrain. It launched automatically when the console booted without a disc in the tray or when the tray was ejected, and the user could choose what the console did at start-up if a game was inserted. A simplified version was also accessible at any time via the Xbox Guide button on the gamepad. This simplified version showed the user's gamercard, Xbox Live messages and friends list; it also allowed access to personal and music settings, voice or video chats, and a way to return to the Dashboard from a game.
On November 19, 2008, the Xbox 360's dashboard was changed from the "Blade" interface to a dashboard reminiscent of that present on the Zune and Windows Media Center, known as the "New Xbox Experience" or NXE.
Since the console's release, Microsoft has released several updates for the Dashboard software. These updates have included adding new features to the console, enhancing Xbox Live functionality and multimedia playback capabilities, adding compatibility for new accessories, and fixing bugs in the software. Such updates are mandatory for users wishing to use Xbox Live, as access to Xbox Live is disabled until the update is performed.
New Xbox Experience
At Microsoft's E3 2008 media briefing, Microsoft's Aaron Greenberg and Marc Whitten announced the new Xbox 360 interface, the "New Xbox Experience" (NXE). The update was intended to ease console menu navigation. Its GUI uses the Twist UI previously used in Windows Media Center and the Zune. Its new Xbox Guide retains all Dashboard functionality (including the Marketplace browser and disc ejection) and the original "Blade" interface (although the color scheme was changed to match that of the NXE Dashboard).
The NXE also provides many new features. Users can now install games from disc to the hard drive to play them with reduced load times and less disc-drive noise, but each game's disc must remain in the system in order to run. A new, built-in Community system allows the creation of digital Avatars that can be used for multiple activities, such as sharing photos or playing Arcade games like 1 vs. 100. The update was released on November 19, 2008.
While previous system updates have been stored on internal memory, the NXE update was the first to require a storage device—at least a 128 MB memory card or a hard drive.
Microsoft released a further update to the Xbox 360 Dashboard starting on December 6, 2011. It included a completely new user interface which utilizes Microsoft's Metro design language, and added new features such as cloud storage for game saves and profiles, live television, Bing voice search, access to YouTube videos and better support for Kinect voice commands.
Multimedia
The Xbox 360 supports videos in Windows Media Video (WMV) format (including high-definition and PlaysForSure videos), as well as H.264 and MPEG-4 media. The December 2007 dashboard update added support for the playback of MPEG-4 ASP format videos. The console can also display pictures and perform slideshows of photo collections with various transition effects, and supports audio playback, with music player controls accessible through the Xbox 360 Guide button. Users may play back their own music while playing games or using the dashboard, and can play music with an interactive visual synthesizer.
Music, photos and videos can be played from standard USB mass storage devices, Xbox 360 proprietary storage devices (such as memory cards or Xbox 360 hard drives), and servers or computers with Windows Media Center or Windows XP with Service Pack 2 or higher within the local-area network in streaming mode. As the Xbox 360 uses a modified version of the UPnP AV protocol, some alternative UPnP servers such as uShare (part of the GeeXboX project) and MythTV can also stream media to the Xbox 360, allowing for similar functionality from non-Windows servers.
This is possible with video files up to HD-resolution and with several codecs (MPEG-2, MPEG-4, WMV) and container formats (WMV, MOV, TS).
As of October 27, 2009, UK and Ireland users are also able to access live and on-demand streams of Sky television programming.
At the 2007, 2008, and 2009 Consumer Electronics Shows, Microsoft announced that IPTV services would soon be made available through the Xbox 360. In 2007, Microsoft chairman Bill Gates stated that IPTV on Xbox 360 was expected to be available to consumers by the holiday season, using the Microsoft TV IPTV Edition platform. In 2008, Gates and president of Entertainment & Devices Robbie Bach announced a partnership with BT in the United Kingdom, in which the BT Vision advanced TV service, using the newer Microsoft Mediaroom IPTV platform, would be accessible via Xbox 360, planned for the middle of the year. BT Vision's DVR-based features would not be available on Xbox 360 due to limited hard drive capacity. In 2010, while announcing version 2.0 of Microsoft Mediaroom, Microsoft CEO Steve Ballmer mentioned that AT&T's U-verse IPTV service would enable Xbox 360s to be used as set-top boxes later in the year. As of January 2010, IPTV on the Xbox 360 had yet to be deployed beyond limited trials.
In 2012, Microsoft released the Live Event Player, allowing for events such as video game shows, beauty pageants, award shows, concerts, news and sporting events to be streamed on the console via Xbox Live. The first live events streamed on Live were the 2012 Revolver Golden Gods, Microsoft's E3 2012 media briefing and the Miss Teen USA 2012 beauty pageant.
XNA community
XNA Community is a feature whereby Xbox 360 owners can receive community-created games, made with Microsoft XNA Game Studio, from the XNA Creators Club. The games are written, published, and distributed through a community-managed portal. XNA Community provides a channel for digital video game delivery over Xbox Live that can be free of royalties, publishers and licenses. XNA game sales, however, did not meet original expectations, though Xbox Live Indie Games (XBLIG) has had some "hits".
Services
Xbox Live
When the Xbox 360 was released, Microsoft's online gaming service Xbox Live was shut down for 24 hours and underwent a major upgrade, adding a basic non-subscription service called Xbox Live Silver (later renamed Xbox Live Free) to its already established premium subscription-based service (which was renamed Gold). Xbox Live Free is included with all SKUs of the console. It allows users to create a user profile, join message boards, access Microsoft's Xbox Live Arcade and Marketplace, and talk to other members. A Free account does not generally support multiplayer gaming; however, games with rather limited online functions (such as Viva Piñata) and games featuring their own subscription service (e.g. EA Sports games) can be played with a Free account. Xbox Live also supports voice chat, and video chat using the Xbox Live Vision camera.
Xbox Live Gold includes the same features as Free and adds integrated online game playing capabilities outside of third-party subscriptions. Microsoft has allowed previous Xbox Live subscribers to maintain their profile information, friends list, and games history when they make the transition to Xbox Live Gold. To transfer an Xbox Live account to the new system, users need to link a Windows Live ID to their gamertag on Xbox.com. When users add an Xbox Live-enabled profile to their console, they are required to provide the console with their Passport account information and the last four digits of their credit card number, which are used for verification purposes and billing. An Xbox Live Gold account has an annual cost of US$59.99, C$59.99, NZ$90.00, GB£39.99, or €59.99. On January 5, 2011, Xbox Live reached over 30 million subscribers.
Xbox Live Marketplace
The Xbox Live Marketplace is a virtual market designed for the console that allows Xbox Live users to download purchased or promotional content. The service offers movie and game trailers, game demos, Xbox Live Arcade games and Xbox 360 Dashboard themes, as well as add-on game content (items, costumes, levels, etc.). These features are available to both Free and Gold members on Xbox Live. A hard drive or memory unit is required to store products purchased from Xbox Live Marketplace. In order to download priced content, users are required to purchase Microsoft Points for use as scrip, though some products (such as trailers and demos) are free to download. Microsoft Points can be obtained through prepaid cards in 1,600 and 4,000-point denominations, or purchased through Xbox Live with a credit card in 500, 1,000, 2,000 and 5,000-point denominations. Users are able to view items available to download on the service through a PC via the Xbox Live Marketplace website. An estimated 70 percent of Xbox Live users have downloaded items from the Marketplace.
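For context on the denominations above: Microsoft's published United States exchange rate was 80 Points per US dollar, so a 1,600-point card corresponded to roughly US$20. A small Python helper makes the conversion concrete (the names here are invented for this sketch):

POINTS_PER_USD = 80                         # Microsoft's US exchange rate

def points_to_usd(points):
    # Convert a Microsoft Points amount to its approximate US dollar price.
    return points / POINTS_PER_USD

for pack in (500, 1000, 1600, 2000, 4000, 5000):
    print(f"{pack:>5} points ~= US${points_to_usd(pack):.2f}")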
Xbox Live Arcade
Xbox Live Arcade is an online service operated by Microsoft that is used to distribute downloadable video games to Xbox and Xbox 360 owners. In addition to classic arcade games such as Ms. Pac-Man, the service offers some new original games like Assault Heroes. Xbox Live Arcade also features games from other consoles, such as the PlayStation game Castlevania: Symphony of the Night, and PC games such as Zuma. The service first launched on November 3, 2004, using a DVD to load, and offered games for about US$5 to $15. Items are purchased using Microsoft Points, a proprietary currency used to reduce credit card transaction charges. On November 22, 2005, Xbox Live Arcade was re-launched with the release of the Xbox 360, now integrated into the Xbox 360's dashboard. The games are generally aimed toward more casual gamers; examples of the more popular games are Geometry Wars, Street Fighter II' Hyper Fighting, and Uno. On March 24, 2010, Microsoft introduced the Game Room to Xbox Live. Game Room is a gaming service for Xbox 360 and Microsoft Windows that lets players compete in classic arcade and console games in a virtual arcade.
Movies and TV
On November 6, 2006, Microsoft announced the Xbox Video Marketplace, an exclusive video store accessible through the console. Launched in the United States on November 22, 2006, the first anniversary of the Xbox 360's launch, the service allows users in the United States to download high-definition and standard-definition television shows and movies onto an Xbox 360 console for viewing. With the exception of short clips, content is not available for streaming and must be downloaded. Movies are also available for rental: they expire 14 days after download or 24 hours after playback first begins, whichever comes first. Television episodes can be purchased to own and are transferable to an unlimited number of consoles. Downloaded files use 5.1 surround audio and are encoded using VC-1 for video at 720p, with a bitrate of 6.8 Mbit/s. Television content is offered from MTV, VH1, Comedy Central, Turner Broadcasting, and CBS; movie content is offered from Warner Bros., Paramount, and Disney, along with other publishers.
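The rental window described above is a simple "whichever comes first" rule and can be expressed compactly. A minimal Python sketch follows, with names invented for illustration rather than taken from any actual Xbox API:

from datetime import datetime, timedelta

def rental_expiry(downloaded_at, first_played_at=None):
    # 14 days after download, or 24 hours after playback begins,
    # whichever comes first.
    expiry = downloaded_at + timedelta(days=14)
    if first_played_at is not None:
        expiry = min(expiry, first_played_at + timedelta(hours=24))
    return expiry

dl = datetime(2006, 11, 22, 12, 0)
print(rental_expiry(dl))                            # unwatched: full 14 days
print(rental_expiry(dl, dl + timedelta(days=2)))    # watched on day 2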
After the Spring 2007 update, the following video codecs are supported:
H.264 video support: Up to 15 Mbit/s, Baseline, Main, and High (up to level 4.1) Profiles with 2 channel AAC LC and Main Profiles.
MPEG-4 Part 2 video support: Up to 8 Mbit/s, Simple Profile with 2 channel AAC LC and Main Profiles.
As a late addition to the December 2007 Xbox 360 update, 25 movies were added to the European Xbox 360 Video Marketplace on December 11, 2007, costing 250 Microsoft Points for the SD version of a movie and 380 Points for the HD version. Xbox Live members in Canada also gained access to the Marketplace on December 11, 2007, with around 30 movies available for the same numbers of Microsoft Points.
On May 26, 2009, Microsoft announced the Zune HD (released in the fall of 2009), then the next addition to the Zune product range. This had an impact on the Xbox Live Video Store, as it was also announced that the Zune Video Marketplace and the Xbox Live Video Store would be merged to form the Zune Marketplace, arriving on Xbox Live in seven countries initially: the United Kingdom, the United States, France, Italy, Germany, Ireland and Spain. Further details were released at the Microsoft press conference at E3 2009.
On October 16, 2012, Xbox Video and Xbox Music were released, replacing the Zune Marketplace. Xbox Video is a digital video service that offers full HD movies and TV series for purchase or rental on Xbox 360, Windows 8 and Windows RT PCs and tablets, and Windows Phones.
On August 18, 2015, Microsoft rolled out an update renaming it Movies & TV, in line with the Windows 10 app.
Groove Music
Xbox Music provides 30 million music tracks available for purchase or for access through subscription. It was announced at the Electronic Entertainment Expo 2012 and integrates with Windows 8 and Windows Phone as well.
In August 2015, Microsoft rolled out an update renaming it Groove Music, in line with the Windows 10 app.
Xbox SmartGlass
Xbox SmartGlass allows for integration between the Xbox 360 console and mobile devices such as tablets and smartphones. An app is available on Android, Windows Phone 8 and iOS. Users of the feature can view additional content to accompany the game they are playing, or the TV shows and movies they are watching. They can also use their mobile device as a remote to control the Xbox 360. The SmartGlass functionality can also be found in the Xbox 360's successor, the Xbox One.
Game development
PartnerNet, the developers-only alternative Xbox Live network used to beta-test game content developed for Xbox Live Arcade, runs on Xbox 360 debug kits, which are used both by developers and by the gaming press. In a podcast released on February 12, 2007, a developer breached the PartnerNet non-disclosure agreement (NDA) by commenting that he had found a playable version of Alien Hominid and an unplayable version of Ikaruga on PartnerNet. A few video game journalists, misconstruing the breach of the NDA as an invalidation of it, immediately began reporting on other games being tested via PartnerNet, including a remake of Jetpac. (Alien Hominid for the Xbox 360 was released on February 28 of that year, and Ikaruga over a year later on April 9, 2008. Jetpac was released for the Xbox 360 on March 28, 2007, as Jetpac Refuelled.) There have also been numerous video and screenshot leaks of game footage from PartnerNet, as well as a complete version of Sonic the Hedgehog 4: Episode I, which caused the whole PartnerNet service to be shut down overnight on April 3, 2010. In the following days, Microsoft reminded developers and journalists that they were in breach of the NDA by sharing information about PartnerNet content and asked websites to remove lists of games in development that had been discovered on the service. Sega used fan feedback on the leaked version of Sonic the Hedgehog 4: Episode I to refine the game before its eventual release. Additionally, a pair of hackers played modded Halo 3 games on PartnerNet and used the service to find unreleased and untested software, passing this information along to their friends before they were eventually caught by Bungie. Consequently, Bungie left a message for the hackers on PartnerNet reading "Winners Don't Break Into PartnerNet." Other games leaked through PartnerNet include Shenmue and Shenmue 2.
See also
List of Xbox 360 games
List of Xbox 360 applications
List of original programs distributed by Xbox Entertainment Studios
Further reading
References
External links
Microsoft Support for Xbox 360
Xbox development team blog
2000s toys
2010s toys
Articles which contain graphical timelines
Backward-compatible video game consoles
Home video game consoles
Microsoft video game consoles
Products introduced in 2005
Products and services discontinued in 2016
Seventh-generation video game consoles | Operating System (OS) | 1,313 |
NetWare
NetWare is a discontinued computer network operating system developed by Novell, Inc. It initially used cooperative multitasking to run various services on a personal computer, using the IPX network protocol.
The original NetWare product in 1983 supported clients running both CP/M and MS-DOS, ran over a proprietary star network topology and was based on a Novell-built file server using the Motorola 68000 processor. The company soon moved away from building its own hardware, and NetWare became hardware-independent, running on any suitable Intel-based IBM PC compatible system, and able to utilize a wide range of network cards. From the beginning NetWare implemented a number of features inspired by mainframe and minicomputer systems that were not available in its competitors' products.
In 1991, Novell introduced cheaper peer-to-peer networking products for DOS and Windows, unrelated to its server-centric NetWare: NetWare Lite 1.0 (NWL), followed by Personal NetWare 1.0 (PNW) in 1993.
In 1993, the main NetWare product line took a dramatic turn when version 4 introduced NetWare Directory Services (NDS, later renamed eDirectory), a global directory service based on ISO X.500 concepts (seven years later, Microsoft released Active Directory, which lacked the tree structure and time synchronization of NDS). The directory service, along with a new e-mail system (GroupWise), application configuration suite (ZENworks), and security product (BorderManager) were all targeted at the needs of large enterprises.
By 2000, however, Microsoft was taking more of Novell's customer base and Novell increasingly looked to a future based on a Linux kernel. The successor to NetWare, Open Enterprise Server (OES), released in March 2005, offers all the services previously hosted by NetWare 6.5, but on a SUSE Linux Enterprise Server; the NetWare kernel remained an option until OES 11 in late 2011.
The final update release was version 6.5SP8 of May 2009; NetWare is no longer on Novell's product list. NetWare 6.5SP8 General Support ended in 2010; Extended Support was available until the end of 2015, and Self Support until the end of 2017. The replacement is Open Enterprise Server.
History
NetWare evolved from a very simple concept: file sharing instead of disk sharing. By controlling access at the level of individual files, instead of entire disks, files could be locked and better access control implemented. In 1983 when the first versions of NetWare originated, all other competing products were based on the concept of providing shared direct disk access. Novell's alternative approach was validated by IBM in 1984, which helped promote the NetWare product.
Novell NetWare shares disk space in the form of NetWare volumes, comparable to logical volumes. Client workstations running DOS run a special terminate and stay resident (TSR) program that allows them to map a local drive letter to a NetWare volume. Clients log into a server in order to be allowed to map volumes, and access can be restricted according to the login name. Similarly, they can connect to shared printers on the dedicated server, and print as if the printer is connected locally.
At the end of the 1990s, with Internet connectivity booming, the Internet's TCP/IP protocol became dominant on LANs. Novell had introduced limited TCP/IP support in NetWare 3.x (circa 1992) and 4.x (circa 1995), consisting mainly of FTP services and UNIX-style LPR/LPD printing (available in NetWare 3.x), and a Novell-developed webserver (in NetWare 4.x). Native TCP/IP support for the client file and print services normally associated with NetWare was introduced in NetWare 5.0 (released in 1998). There was also a short-lived product, NWIP, that encapsulated IPX in TCP/IP, intended to ease transition of an existing NetWare environment from IPX to IP.
During the early to mid-1980s, Microsoft introduced its own LAN system in LAN Manager, based on the competing NBF protocol. Early attempts to compete with NetWare failed, but this changed with the inclusion of improved networking support in Windows for Workgroups, and then the successful Windows NT and Windows 95. NT, in particular, offered a subset of NetWare's services, but on a system that could also be used on a desktop, and thanks to its vertical integration there was no need for a third-party client.
Early years
NetWare originated from consulting work by SuperSet Software, a group founded by the friends Drew Major, Dale Neibaur, Kyle Powell and later Mark Hurst. This work stemmed from their classwork at Brigham Young University in Provo, Utah, starting in October 1981.
In 1981, Raymond Noorda engaged the SuperSet team. The team was originally assigned to create a CP/M disk-sharing system to help network the CP/M Motorola 68000 hardware that Novell sold at the time. The first S-Net was CP/M-68K-based and shared a hard disk. In 1983, the team, privately convinced that CP/M was a doomed platform, instead came up with a successful file-sharing system for the newly introduced IBM-compatible PC. They also wrote an application called Snipes, a text-mode game, and used it to test the new network and demonstrate its capabilities. Snipes (also known as "NSnipes", for "Network Snipes") is the first network application ever written for a commercial personal computer, and it is recognized as a precursor of many popular multiplayer games such as Doom and Quake.
First called ShareNet or S-Net, this network operating system (NOS) was later called Novell NetWare. NetWare is based on the NetWare Core Protocol (NCP), which is a packet-based protocol that enables a client to send requests to and receive replies from a NetWare server. Initially, NCP was directly tied to the IPX/SPX protocol, and NetWare communicated natively using only IPX/SPX.
The first product to bear the NetWare name was released in 1983. There were two distinct versions of NetWare at that time: one designed to run on the Intel 8086 processor, and another, called NetWare 68 (a.k.a. S-Net), for the Motorola 68000. NetWare 68 ran on a proprietary Novell-built file server (rather than write an original network operating system from scratch, Novell licensed a Unix kernel and based NetWare on it) and used a star network topology. It was soon joined by NetWare 86 4.x, written for the Intel 8086. This was replaced in 1985 with Advanced NetWare 86 version 1.0a, which allowed more than one server on the same network. In 1986, after the Intel 80286 processor became available, Novell released Advanced NetWare 286 1.0a. Two versions were offered for sale: the basic version was sold as ELS I and the more enhanced version as ELS II. The acronym ELS identified this new product line as NetWare's Entry Level System.
NetWare 286 2.x
Advanced NetWare version 2.x, launched in 1986, was written for the then-new 80286 CPU. The 80286 CPU features a new 16-bit protected mode that provides access to up to 16 MiB RAM as well as new mechanisms to aid multi-tasking. (Prior to the 80286, PC CPU servers used the Intel 8088/8086 8-/16-bit processors, which are limited to an address space of 1 MiB with not more than 640 KiB of directly addressable RAM.) The combination of a higher 16 MiB RAM limit, 80286 processor feature utilization, and 256 MB NetWare volume size limit (compared to the 32 MB that DOS allowed at that time) allowed the building of reliable, cost-effective server-based local area networks for the first time. The 16 MiB RAM limit was especially important, since it makes enough RAM available for disk caching to significantly improve performance. This became the key to Novell's performance while also allowing larger networks to be built.
In a significant innovation, NetWare 286 is also hardware-independent, unlike competing network server systems. Novell servers can be assembled using any brand system with an Intel 80286 CPU, any MFM, RLL, ESDI, or SCSI hard drive and any 8- or 16-bit network adapter for which NetWare drivers are available – and 18 different manufacturer's network cards were supported at launch.
The server could support up to four network cards, which could mix technologies such as ARCNET, Token Ring and Ethernet. The operating system was provided as a set of compiled object modules that required configuration and linking; any change to the operating system required re-linking the kernel. Installation also required the use of a proprietary low-level format program for MFM hard drives called COMPSURF.
The file system used by NetWare 2.x was NetWare File System 286 (NWFS 286), supporting volumes of up to 256 MB. NetWare 286 recognized 80286 protected mode, extending NetWare's support of RAM from 1 MiB to the full 16 MiB addressable by the 80286. A minimum of 2 MiB was required to start up the operating system; any additional RAM was used for FAT, DET and file caching. Since 16-bit protected mode is implemented in the 80286 and every subsequent Intel x86 processor, NetWare 286 version 2.x will run on any 80286 or later compatible processor.
NetWare 2.x implemented a number of features inspired by mainframe and minicomputer systems that were not available in other operating systems of the day. The System Fault Tolerance (SFT) features included standard read-after-write verification (SFT-I) with on-the-fly bad-block re-mapping (at the time, disks did not have that feature built in) and software RAID 1 (disk mirroring, SFT-II). The Transaction Tracking System (TTS) optionally protected files against incomplete updates: for single files, this required only setting a file attribute, while transactions over multiple files and controlled roll-backs were possible by programming to the TTS API.
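Conceptually, TTS behaved like an undo log: the old contents of a tracked file were preserved until an update completed, so an interrupted update could be rolled back. The Python sketch below illustrates only this rollback idea; the class and method names are invented and do not reflect the actual TTS API, which was a NetWare-specific programming interface.

class TrackedFile:
    # Conceptual model of a transaction-tracked file.
    def __init__(self, data):
        self.data = data
        self._undo = None

    def begin_transaction(self):
        self._undo = dict(self.data)        # snapshot the old contents

    def write(self, key, value):
        self.data[key] = value

    def commit(self):
        self._undo = None                   # changes become permanent

    def rollback(self):                     # e.g. after a crash mid-update
        if self._undo is not None:
            self.data = self._undo
            self._undo = None

f = TrackedFile({"balance": 100})
f.begin_transaction()
f.write("balance", 250)
f.rollback()                                # the incomplete update is undone
print(f.data)                               # {'balance': 100}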
NetWare 286 2.x normally required a dedicated PC to act as the server, with DOS used only as a boot loader to start the operating system; all memory was allocated to NetWare and no DOS ran on the server. However, a "non-dedicated" version was also available for price-conscious customers, in which DOS 3.3 or higher remained in memory and the processor time-sliced between the DOS and NetWare programs, allowing the server computer to be used simultaneously as a network file server and as a user workstation. Because all extended memory (RAM above 1 MiB) was allocated to NetWare, DOS was limited to only 640 KiB; expanded-memory managers that used the MMU of 80386 and higher processors, such as EMM386, did not work, though 8086-style expanded memory on dedicated plug-in cards was possible. Time slicing was accomplished using the keyboard interrupt, which required strict compliance with the IBM PC design model; otherwise performance was affected.
Server licensing on early versions of NetWare 286 was accomplished with a key card. The key card, designed for an 8-bit ISA bus, had a serial number encoded on a ROM chip that had to match the serial number of the NetWare software running on the server. To broaden the hardware base, particularly to machines using IBM's MCA bus, later versions of NetWare 2.x dropped the key card requirement; serialised license floppy disks were used in its place.
Licensing was normally for 100 users, but two ELS versions were also available: a 5-user ELS in 1987, followed by the 8-user ELS 2.12 II in 1988.
NetWare 3.x
NetWare's 3.x range was a major step forward. It began with version 3.0 in 1990, followed quickly by version 3.10 and 3.11 in 1991.
A key feature was support for 32-bit protected mode, eliminating the 16 MiB memory limit of NetWare 286 and therefore allowing larger hard drives to be supported (since NetWare 3.x cached the entire file allocation table and directory entry table into memory for improved performance).
NetWare version 3.x was also much simpler to install, with disk and network support provided by software modules called NetWare Loadable Modules (NLMs), loaded either at start-up or when needed. NLMs could also add functionality such as anti-virus software, backup software, database and web servers; support for long filenames was likewise provided by an NLM.
NetWare 3.x introduced a new file system, NetWare File System 386 (NWFS 386), which significantly extended volume capacity (1 TB volumes, 4 GB files) and could handle up to 16 volume segments spanning multiple physical disk drives. Volume segments could be added while the server was in use and the volume mounted, allowing a server to be expanded without interruption.
In NetWare 386 3.x, all NLMs ran on the server at the same level of processor memory protection, known as "ring 0". This provided the best possible performance, but it sacrificed reliability because there was no memory protection; furthermore, NetWare 3.x used a co-operative multitasking model, meaning that an NLM had to yield to the kernel regularly. For either of these reasons, a badly behaved NLM could cause a fatal (ABEND) error.
NetWare continued to be administered using console-based utilities.
For a while, Novell also marketed an OEM version of NetWare 3, called Portable NetWare, together with OEMs such as Hewlett-Packard, DEC and Data General, who ported Novell source code to run on top of their Unix operating systems. Portable NetWare did not sell well.
While NetWare 3.x was current, Novell introduced its first high-availability clustering system, named NetWare SFT-III, which allowed a logical server to be completely mirrored to a separate physical machine. Implemented as a shared-nothing cluster, under SFT-III the OS was logically split into an interrupt-driven I/O engine and the event-driven OS core. The I/O engines serialized their interrupts (disk, network, etc.) into a combined event stream that was fed to two identical copies of the system engine through a fast (typically 100 Mbit/s) inter-server link. Because of its non-preemptive nature, the OS core, stripped of non-deterministic I/O, behaved deterministically, like a large finite state machine. The outputs of the two system engines were compared to ensure proper operation, and two copies were fed back to the I/O engines. Using the existing SFT-II software RAID functionality present in the core, disks could be mirrored between the two machines without special hardware. The two machines could be separated as far as the server-to-server link would permit. In case of a server or disk failure, the surviving server could take over client sessions transparently after a short pause, since it had full state information. SFT-III was the first NetWare version able to make use of SMP hardware – the I/O engine could optionally be run on its own CPU. NetWare SFT-III, ahead of its time in several ways, was a mixed success.
NetWare 3 also introduced an improved routing protocol, the NetWare Link Services Protocol, which scaled better than the Routing Information Protocol and allowed the building of large networks.
NetWare 4.x
Version 4 in 1993 introduced NetWare Directory Services, later re-branded as Novell Directory Services (NDS), based on X.500, which replaced the Bindery with a global directory service, in which the infrastructure was described and managed in a single place. Additionally, NDS provided an extensible schema, allowing the introduction of new object types. This allowed a single user authentication to NDS to govern access to any server in the directory tree structure. Users could therefore access network resources no matter on which server they resided, although user license counts were still tied to individual servers. (Large enterprises could opt for a license model giving them essentially unlimited per-server users if they let Novell audit their total user count.)
Version 4 also introduced a number of useful tools and features, such as transparent compression at file system level and RSA public/private encryption.
Another new feature was the NetWare Asynchronous Services Interface (NASI). It allowed network sharing of multiple serial devices, such as modems. Client port redirection occurred via a DOS or Windows driver allowing companies to consolidate modems and analog phone lines.
NetWare for OS/2
Promised as early as 1988, when the Microsoft–IBM collaboration was still ongoing and OS/2 1.x was still a 16-bit product, the product did not become commercially available until after IBM and Microsoft had parted ways and OS/2 2.0 had become a 32-bit OS with pre-emptive multitasking and multithreading.
By August 1993, Novell released its first version of "NetWare for OS/2". This first release supported OS/2 2.1 (1993) as the base OS, and required that users first buy and install IBM OS/2, then purchase NetWare 4.01, and then install the NetWare for OS/2 product. It retailed for $200.
By around 1995, coinciding with IBM's renewed marketing push for its 32-bit OS/2 Warp OS, both as a desktop client and as a LAN server (OS/2 Warp Server), NetWare for OS/2 began receiving some good press coverage. "NetWare 4.1 for OS/2" allowed Novell's network stack and server modules to run on top of IBM's 32-bit kernel and network stack. It was basically NetWare 4.x running as a service on top of OS/2, and was compatible with third-party client and server utilities and NetWare Loadable Modules.
Since IBM's 32-bit OS/2 included NetBIOS, IPX/SPX and TCP/IP support, sysadmins could run all three of the most popular network stacks on a single box, and use the OS/2 machine as a workstation too. NetWare for OS/2 shared memory with OS/2 seamlessly. The book Client/Server Survival Guide with OS/2 described it as "glue code that lets the unmodified NetWare 4.x server program think it owns all resources on an OS/2 system", and claimed that a NetWare server running on top of OS/2 suffered only a 5% to 10% overhead compared with NetWare running on bare-metal hardware, while gaining OS/2's pre-emptive multitasking and object-oriented GUI.
Novell continued releasing bugfixes and updates to NetWare for OS/2 up to 1998.
Strategic mistakes
Novell's strategy with NetWare 286 2.x and 3.x proved very successful; before the arrival of Windows NT Server, Novell claimed 90% of the market for PC based servers.
The design of NetWare 3.x and later relied on a DOS partition to load the NetWare server files. While of little technical import (DOS merely loaded NetWare into memory and turned execution over to it, and in later versions DOS could be unloaded from RAM), this feature became a marketing liability. Additionally, the NetWare console remained text-based, which was also a marketing rather than a technical issue once the Windows graphical interface gained widespread acceptance. Novell could have eliminated this liability by retaining the design of NetWare 286, which installed the server into a Novell partition and allowed the server to boot from that partition without a bootable DOS partition; Novell finally added support for this in a Support Pack for NetWare 6.5.
As Novell initially used IPX/SPX instead of TCP/IP, they were poorly positioned to take advantage of the Internet in 1995. This resulted in Novell servers being bypassed for routing and Internet access in favor of hardware routers, Unix-based operating systems such as FreeBSD, and SOCKS and HTTP Proxy Servers on Windows and other operating systems.
A decision by Novell's management also took away the ability of independent resellers and engineers to recommend and sell the product; the resulting reduction of its effective sales force created a downward spiral in sales.
NetWare 4.1x and NetWare for Small Business
Novell priced NetWare 4.10 similarly to NetWare 3.12, allowing customers who resisted NDS (typically small businesses) to try it at no cost.
Novell released NetWare version 4.11 in 1996; it included many enhancements that made the operating system easier to install and operate, faster, and more stable. It also included the first full 32-bit client for Microsoft Windows-based workstations, SMP support, and NetWare Administrator (NWADMIN or NWADMN32), a GUI-based administration tool for NetWare. Previous administration tools had used the character-based C-Worthy interface, seen in tools such as SYSCON and PCONSOLE with their blue text-based backgrounds. Some of these tools survived for many years afterwards; for instance, MONITOR.NLM.
Novell packaged NetWare 4.11 with its Web server, TCP/IP support and the Netscape browser into a bundle dubbed IntranetWare (also written as intraNetWare). A version designed for networks of 25 or fewer users was named IntranetWare for Small Business and contained a limited version of NDS and tried to simplify NDS administration. The intranetWare name was dropped in NetWare 5.
During this time Novell also began to leverage its directory service, NDS, by tying their other products into the directory. Their e-mail system, GroupWise, was integrated with NDS, and Novell released many other directory-enabled products such as ZENworks and BorderManager.
NetWare still required IPX/SPX as NCP used it, but Novell started to acknowledge the demand for TCP/IP with NetWare 4.11 by including tools and utilities that made it easier to create intranets and link networks to the Internet. Novell bundled tools, such as the IPX/IP gateway, to ease the connection between IPX workstations and IP networks. It also began integrating Internet technologies and support through features such as a natively hosted web server.
NetWare 5.x
With the release of NetWare 5 in October 1998 Novell switched its primary NCP interface from the IPX/SPX network protocol to TCP/IP to meet market demand. Products continued to support IPX/SPX, but the emphasis shifted to TCP/IP. New features included:
a GUI for NetWare
Novell Storage Services (NSS), a file system to replace the traditional NetWare File System (which Novell continued to support)
Java virtual machine for NetWare
Novell Distributed Print Services (NDPS), an infrastructure for printing over networks
ConsoleOne, a Java-based GUI administration console
directory-enabled Public key infrastructure services (PKIS)
directory-enabled DNS and DHCP servers
support for Storage Area Networks (SANs)
Novell Cluster Services (NCS), a replacement for SFT-III
Oracle 8i with a 5-user license
The Cluster Services improved on SFT-III, as NCS did not require specialized hardware or identical server configurations.
Novell released NetWare 5 during a time when NetWare's market share had started dropping precipitously; many companies and organizations replaced their NetWare servers with servers running Microsoft's Windows NT operating system.
Around this time Novell also released their last upgrade to the NetWare 4 operating system, NetWare 4.2.
NetWare 5 and above supported Novell NetStorage for Internet-based access to files stored within NetWare.
Novell released NetWare 5.1 in January 2000. It introduced a number of tools, such as:
IBM WebSphere Application Server
NetWare Management Portal (later called Novell Remote Manager), web-based management of the operating system
FTP, NNTP and streaming-media servers
NetWare Web Search Server
WebDAV support
NetWare 6.0
NetWare 6 was released in October 2001, less than two years after its predecessor. This version had a simplified licensing scheme based on users rather than server connections, allowing unlimited connections per user to any number of NetWare servers in the network. Novell Cluster Services was also improved to support 32-node clusters; the base NetWare 6.0 product included a two-node clustering license.
NetWare 6.5
NetWare 6.5 was released in August 2003. Some of the new features in this version included:
more open-source products such as PHP, MySQL and OpenSSH
a port of the Bash shell and many traditional Unix utilities, such as wget, grep, awk and sed, to provide additional scripting capabilities
iSCSI support (both target and initiator)
Virtual Office – an "out of the box" web portal for end users providing access to e-mail, personal file storage, company address book, etc.
Domain controller functionality
Universal password
DirXML Starter Pack – synchronization of user accounts with another eDirectory tree, a Windows NT domain or Active Directory.
exteNd Application Server – a Java EE 1.3-compatible application server
support for customized printer driver profiles and printer usage auditing
NX bit support
support for USB storage devices
support for encrypted volumes
The latest – and apparently last – Service Pack for NetWare 6.5 is SP8, released May 2009.
Open Enterprise Server
1.0
In 2003, Novell announced the successor product to NetWare: Open Enterprise Server (OES). First released in March 2005, OES completes the separation of the services traditionally associated with NetWare (such as Directory Services, and file-and-print) from the platform underlying the delivery of those services. OES is essentially a set of applications (eDirectory, NetWare Core Protocol services, iPrint, etc.) that can run atop either a Linux or a NetWare kernel platform. Clustered OES implementations can even migrate services from Linux to NetWare and back again, making Novell one of the very few vendors to offer a multi-platform clustering solution.
Consequent to Novell's acquisitions of Ximian and the German Linux distributor SuSE, Novell moved away from NetWare and shifted its focus towards Linux. Marketing was focused on getting faithful NetWare users to move to the Linux platform for future releases. The clearest indication of this direction was Novell's controversial decision to release Open Enterprise Server on Linux only, not NetWare. Novell later watered down this decision and stated that NetWare's 90 million users would be supported until at least 2015. Meanwhile, many former NetWare customers rejected the confusing mix of licensed software running on an open-source Linux operating system in favor of moving to complete Open Source solutions such as those offered by Red Hat.
2.0
OES 2 was released on 8 October 2007. It includes NetWare 6.5 SP7, which supports running as a paravirtualized guest inside the Xen hypervisor, and a new Linux-based version using SLES 10.
New features include:
64-bit support
Virtualization
Dynamic Storage Technology, which provides shadow volumes
Domain Services for Windows (added in OES 2 Service Pack 1)
From the 1990s
Some organizations continued to use Novell NetWare, but the product had started to lose popularity from the mid-1990s; until then, NetWare had been the de facto standard for file- and printer-sharing software for the Intel x86 server platform.
Microsoft successfully took market share from NetWare products from the late-1990s. Microsoft's more aggressive marketing was aimed directly at non-technical management through major magazines, while Novell NetWare's was through more technical magazines read by IT personnel.
Novell did not adapt its pricing structure to current market conditions, and NetWare sales suffered.
NetWare Lite / Personal NetWare
NetWare Lite and Personal NetWare were a series of peer-to-peer networks developed by Novell for DOS- and Windows-based computers aimed at personal users and small businesses between 1991 and 1995.
Performance
NetWare dominated the network operating system (NOS) market from the mid-1980s through the mid- to late-1990s due to its extremely high performance relative to other NOS technologies. Most benchmarks during this period demonstrated a 5:1 to 10:1 performance advantage over products from Microsoft, Banyan, and others. One noteworthy benchmark pitted NetWare 3.x running NFS services over TCP/IP (not NetWare's native IPX protocol) against a dedicated Auspex NFS server and an SCO Unix server running NFS service. NetWare NFS outperformed both 'native' NFS systems and claimed a 2:1 performance advantage over SCO Unix NFS on the same hardware.
The reasons for NetWare's performance advantage are given below.
File service instead of disk service
When first developed, nearly all LAN storage was based on the disk server model. This meant that if a client computer wanted to read a particular block from a particular file, it would have to issue the following requests across the relatively slow LAN:
Read first block of directory
Continue reading subsequent directory blocks until the directory block containing the information on the desired file was found (possibly many directory blocks)
Read through multiple file entry blocks until the block containing the location of the desired file block was found (possibly many blocks)
Read the desired data block
NetWare, since it was based on a file service model, interacted with the client at the file API level:
Send file open request (if this hadn't already been done)
Send a request for the desired data from the file
All of the work of searching the directory to figure out where the desired data was physically located on the disk was performed at high speed locally on the server.
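The saving can be made concrete by counting LAN round trips in each model. The toy Python comparison below is purely illustrative; the block counts are invented.

def disk_service_read(directory_blocks, file_entry_blocks):
    # The client fetches raw blocks over the LAN until it locates the
    # file, then fetches the data block itself.
    return directory_blocks + file_entry_blocks + 1

def file_service_read():
    # The server searches its own disk locally; the client sends only
    # an open request and a read request.
    return 2

print("disk service round trips:", disk_service_read(4, 3))   # 8
print("file service round trips:", file_service_read())       # 2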
By the mid-1980s, most NOS products had shifted from the disk service to the file service model. The disk service model has since made a comeback in the form of storage area networks (SANs).
Aggressive caching
From the start, the NetWare design focused on servers with copious amounts of RAM. The entire file allocation table (FAT) was read into RAM when a volume was mounted, thereby requiring a minimum amount of RAM proportional to online disk space; adding a disk to a server would often require a RAM upgrade as well. Unlike most competing network operating systems prior to Windows NT, NetWare automatically used all otherwise unused RAM for caching active files, employing delayed write-backs to facilitate re-ordering of disk requests (elevator seeks). An unexpected shutdown could therefore corrupt data, making an uninterruptible power supply practically a mandatory part of a server installation.
The default dirty cache delay time was fixed at 2.2 seconds in NetWare 286 versions 2.x. Starting with NetWare 386 3.x, the dirty disk cache delay time and dirty directory cache delay time settings controlled the amount of time the server would cache changed ("dirty") data before saving (flushing) it to the hard drive. The default of 3.3 seconds could be decreased to 0.5 seconds but not reduced to zero, while the maximum delay was 10 seconds; raising the delay to 10 seconds provided a significant performance boost. Windows 2000 and Windows Server 2003 do not allow adjustment of the cache delay time; instead, they use an algorithm that adjusts the delay dynamically.
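The mechanism behind these settings is delayed write-back: changed blocks sit in RAM for up to the configured delay, so repeated writes to the same block coalesce into a single disk write. The Python sketch below illustrates the idea only; it is not NetWare's actual implementation.

import time

class WriteBackCache:
    def __init__(self, dirty_delay=3.3):    # NetWare 3.x default, in seconds
        self.dirty_delay = dirty_delay
        self.dirty = {}                     # block -> (data, first_write_time)

    def write(self, block, data):
        first = self.dirty.get(block, (None, time.monotonic()))[1]
        self.dirty[block] = (data, first)   # absorb rewrites in RAM

    def flush_due(self, disk):
        now = time.monotonic()
        for block, (data, first) in list(self.dirty.items()):
            if now - first >= self.dirty_delay:
                disk[block] = data          # one write despite many updates
                del self.dirty[block]

disk = {}
cache = WriteBackCache(dirty_delay=0.0)     # flush immediately for the demo
cache.write(7, b"hello")
cache.write(7, b"hello world")              # coalesced with the first write
cache.flush_due(disk)
print(disk)                                 # {7: b'hello world'}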
Efficiency of NetWare Core Protocol (NCP)
Most network protocols in use at the time NetWare was developed didn't trust the network to deliver messages. A typical client file read would work something like this:
Client sends read request to server
Server acknowledges request
Client acknowledges acknowledgement
Server sends requested data to client
Client acknowledges data
Server acknowledges acknowledgement
In contrast, NCP was based on the idea that networks worked perfectly most of the time, so the reply to a request served as the acknowledgement. Here is an example of a client read request using this model:
Client sends read request to server
Server sends requested data to client
All requests contained a sequence number, so if the client did not receive a response within an appropriate amount of time, it would re-send the request with the same sequence number. If the server had already processed the request, it would resend the cached response; if it had not yet had time to process the request, it would send only a "positive acknowledgement". The bottom line of this "trust the network" approach was a two-thirds reduction in network transactions and the associated latency.
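A few lines of Python can model this scheme, with the reply doubling as the acknowledgement and a cached reply answering any retransmission. This is a conceptual model only, not the real NCP wire format.

class Server:
    # Toy model of NCP-style duplicate suppression via sequence numbers.
    def __init__(self):
        self.processed = 0                  # requests that did real work
        self.last_seq = None
        self.last_reply = None

    def handle(self, seq, request):
        if seq == self.last_seq:
            return self.last_reply          # duplicate: resend cached reply
        self.processed += 1
        reply = f"data for {request!r}"
        self.last_seq, self.last_reply = seq, reply
        return reply

s = Server()
print(s.handle(1, "read block 42"))         # normal case
print(s.handle(1, "read block 42"))         # client retry after a lost reply
print("requests actually processed:", s.processed)   # 1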
Non-preemptive OS designed for network services
One of the raging debates of the 1990s was whether it was more appropriate for network file service to be performed by a software layer running on top of a general-purpose operating system, or by a special-purpose operating system. NetWare was a special-purpose operating system, not a timesharing OS. It was written from the ground up as a platform for client–server processing services. Initially it focused on file and print services, but it later demonstrated its flexibility by running database, email, web and other services as well. It also performed efficiently as a router, supporting IPX, TCP/IP, and AppleTalk, though it never offered the flexibility of a hardware router.
In 4.x and earlier versions, NetWare did not support preemption, virtual memory, graphical user interfaces, and so on. Processes and services running under the NetWare OS were expected to be cooperative, that is, to process a request and return control to the OS in a timely fashion. On the downside, this trust in application processes to manage themselves could lead to a misbehaving application bringing down the server.
See also
Novell NetWare Access Server (NAS)
Comparison of operating systems
Btrieve
NCOPY
References
Further reading
External links
NetWare Cool Solutions – Tips & tricks, guides, tools and other resources submitted by the NetWare community
Another brief history of NetWare
Epic uptime of NetWare 3 server, arstechnica.com
1983 software
Network operating systems
NetWare
Proprietary software
X86 operating systems
PowerPC operating systems
MIPS operating systems
Discontinued operating systems | Operating System (OS) | 1,314 |
List of Macintosh software
The following is a list of Macintosh Software – notable computer applications for current macOS operating systems.
For software designed for the classic Mac OS, see List of old Macintosh software.
Anti-malware software
The software listed in this section is antivirus software and malware removal software.
BitDefender Antivirus for Mac – antivirus software
Intego VirusBarrier – antivirus software
MacScan – malware removal program
Norton Antivirus for Mac – an antivirus program specially made for Mac
Sophos – antivirus software
VirusScan – antivirus software
Archiving, backup, restore, recovery
This section lists software for file archiving, backup and restore, data compression and data recovery.
Archive Utility – built-in archive file handler
Backup – built-in
Compact Pro – data compression
Disk Drill Basic – data recovery software for macOS
iArchiver – handles archives, commercial
Stellar Phoenix Mac Data Recovery – data recovery software for Mac computers
Stellar Phoenix Video Repair – repairs corrupt or damaged videos
Stuffit – data compression
The Unarchiver
Time Machine (macOS) – built-in backup software
BetterZip – file archiver and compressor utility
WinZip – file archiver and compressor utility
Audio-specific software
Ableton Live – digital audio workstation
Adobe Soundbooth – music and soundtrack editing
Ardour – hard disk recorder and digital audio workstation program
Audacity – digital audio editor
Audion – media player (development ceased)
Audio Hijack – audio recorder
baudline – signal analyzer
BIAS Peak – mastering
Cog – open source audio player, supports multiple formats
Cubase – music production program
djay – digital music mixing software
Digital Performer – MIDI sequencer with audio tracking
Final Cut Express/Pro – movie editor
Finale – scorewriter program
fre:ac – open source audio converter and CD ripper
GarageBand – music/podcast production
Impro-Visor – educational notation and playback for music improvisation
iTunes – audio/video Jukebox
ixi software – free improvisation and sketching tools
Jaikoz – mass tagger
LilyPond – scorewriter program
Logic Express – prosumer music production
Logic Studio – music writing studio package by Apple Inc.
Apple Loops Utility – production and organisation of Apple Loops
Apple Qmaster and Qadministrator
Logic Pro – digital audio workstation
Mainstage – program to play software synthesizers live
QuickTime Pro – pro version of QuickTime
Soundtrack Pro – post production audio editor
WaveBurner – CD mastering and production software
Mixxx – DJ mix software
Max – Cycling 74's visual programming language for MIDI, audio, video; with MSP, Jitter
Nuendo – audio and post production editor
Overture – scorewriter program
ReBirth – virtual synth program that simulates the Roland TR-808 and TB-303
REAPER – digital audio workstation
Reason Studios – digital audio workstation
Recycle – sample editor
Renoise – contemporary digital audio workstation, based upon the heritage and development of tracker software.
RiffWorks – guitar recording and online song collaboration software
CD and DVD authoring
Disco
DVD Studio Pro – DVD authoring application
Adobe Encore
iDVD – a basic DVD-authoring application
Roxio Toast – DVD authoring application
Chat (text, voice, image, video)
Active
Adium – multi-protocol IM client
aMSN
ChitChat
Colloquy – freeware advanced IRC and SILC client
Discord
Fire – open source, multiprotocol IM client
FaceTime – videoconferencing between Mac, iPhone, iPad and iPod touch
iMessage – instant messaging between Mac, and iDevices
Ircle
Irssi – IrssiX and MacIrssi
Kopete
LiveChat
Microsoft Messenger for Mac
Microsoft Teams
Palringo
Psi (instant messenger)
Skype
Snak
Ventrilo – audio chatroom application
X-Chat Aqua
Yahoo! Messenger
Telegram
Discontinued
AOL Instant Messenger – discontinued as of December 15, 2017
iChat – instant messaging and videoconferencing (discontinued since OS X 10.8 Mountain Lion in favour of FaceTime and iMessage)
Children's software
Kid Pix Deluxe 3X – bitmap drawing program
Stagecast Creator – programming and internet authoring for kids
Developer tools and IDEs
Apache Web Server
AppCode – an Objective-C IDE by JetBrains for macOS and iOS development
Aptana – an open source integrated development environment (IDE) for building Ajax web applications
Clozure CL – an open source integrated development environment (IDE) for building Common Lisp applications
Code::Blocks – open source IDE for C++
CodeWarrior – development environment, framework
Coldstone game engine
Dylan
Eclipse – open source Java-based IDE for developing rich-client applications, includes SWT library, replaces Swing by using underlying OS native windowing abilities
Fink – Debian package manager for ported Unix software
Free Pascal – Object Pascal compiler, XCode plugin available
GNU Compiler Collection
Glasgow Haskell Compiler
Helix – relational database IDE
Homebrew – package manager for installing many open-source, mostly terminal-based, utilities, including Apache, PHP, Python and many more
HotSpot – Sun's Java Virtual Machine
HyperNext – freeware software development
IntelliJ IDEA – a Java IDE by JetBrains (free limited community edition)
Komodo – commercial multi-language IDE from ActiveState
Lazarus – cross-platform IDE to develop software with Free Pascal, specialized in graphical software
LiveCode – high-level cross-platform IDE
MacApp – application development framework Pascal and C++
Macintosh Programmer's Workshop (MPW)
MacPorts – a package management system that simplifies the installation of free/open source software on macOS
Macromedia Authorware – application (CBT, eLearning) development, no Mac development environment since version 4, though can still package applications with the 'Mac Packager' for OS 8 through 10 playback
Mono – open source implementation of Microsoft .NET Framework with a C# compiler
NetBeans – modular, open source, multi-language platform and IDE for Java written in pure Java
Omnis Studio – cross-platform development environment for creating enterprise and web applications for macOS, Windows, Linux, Solaris
Panorama
Perl
PHP
Python
Qt Creator – an IDE for C++ GUI applications, by Trolltech
Real Studio – cross-platform compiled REALbasic BASIC programming language IDE
ResEdit – resource editor
Script Debugger – an AppleScript and Open Scripting Architecture IDE
SuperCard – high-level IDE
Tcl/Tk – scripting shell and GUI toolkit that allows cross-platform development; included with macOS
TextMate – multipurpose text editor that supports Ruby, PHP, and Python
Torque (game engine) – game creation software
WebKit – open source application framework for Safari (web browser)
WebObjects
wxPython – API merging Python and wxWidgets
Xcode – IDE made by Apple, which comes as part of macOS and is available as a download; formerly called Project Builder
Email
Email clients
Apple Mail – the bundled email client
Claris Emailer – classic Mac OS only, no longer available
Entourage – email client by Microsoft; analogous to Microsoft Outlook
Eudora
Foxmail
Lotus Notes
Mailbird
Mailplane – a WebKit-based client for Gmail
Microsoft Outlook
Mozilla Thunderbird
Mulberry – open-source software for e-mail, calendars and contacts
Opera Mail
Outlook Express
Postbox
Sparrow – as well as Sparrow Lite
Other email software
Gmail Notifier
FTP clients
Classic FTP
Cyberduck
Fetch
Fugu
FileZilla
ForkLift
Interarchy
Transmit
WebDrive – FTP and cloud client
Yummy FTP
Games
Steam – digital distribution software for video games and related media
Graphics, layout, desktop publishing
CAD, 3D graphics
3D-Coat
Autodesk Alias
Ashlar-Vellum – 2D/3D drafting, 3D modeling
ArchiCAD
AutoCAD
Blender
BricsCAD
Cheetah3D
Cinema 4D
SketchUp – 3D modeling software
Houdini
Lightwave
Maya
Modo
PowerCADD
ZBrush
Distributed document authoring
Adobe Acrobat
Preview
Icon editors, viewers
Icon Composer – part of Apple Developer Tools
IconBuilder
Microsoft Office
File conversion and management
Active
Adobe Bridge – digital asset management app
BibDesk – free bibliographic database app that organizes linked files
Font Book – font management tool
GraphicConverter – graphics editor, open/converts a wide range of file formats
Photos – photo management application
Discontinued
iPhoto – discontinued photo management application
Layout and desktop publishing
Active
Adobe InDesign – page layout
iCalamus – page layout
iStudio Publisher – page layout
Pages – part of iWork
QuarkXPress – page layout
Ready, Set, Go! – page layout
Scribus – page layout
TeX – publishing
MacTeX – TeX redistribution of TeX Live for Mac
Comparison of TeX Editors
The Print Shop – page layout
Discontinued
iBooks Author – created interactive books for Apple Books
Raster and vector graphics
This section lists bitmap graphics editors and vector graphics editors.
Active
Adobe Fireworks – supports GIF animation
Adobe Illustrator – vector graphics editor
Adobe Photoshop – also offers some vector graphics features
Affinity Designer – vector graphics editor for Apple macOS and Microsoft Windows
Anime Studio – 2D based vector animation
Collabora Online – enterprise-ready edition of LibreOffice
Corel Painter
EazyDraw – vector graphics editor; versions available that can convert old formats such as MacDraw files
Fontographer
GIMP – free bitmap graphics editor
GIMPShop – free open source cross-platform bitmap graphics editor
GraphicConverter – displays and edits raster graphics files
Inkscape – free vector graphics editor
Luminar
Macromedia FreeHand – vector graphics editor
Paintbrush – free simple bitmap graphics program
Photos – official photo management and editing application developed by Apple
Photo Booth – photo camera, video recorder
Pixelmator – hardware-accelerated integrated photo editor
Polarr – photo editing app
Seashore – open source, based around the GIMP's technology, but with native macOS (Cocoa) UI
Discontinued
Aperture – Apple's pro photo management, editing, publishing application
MacPaint – painting software by Apple (discontinued)
Integrated software technologies
Finder
Path Finder
QuickTime
Terminal
X11.app
Language and reference tools
Cram (software)
Dictionary (software)
Encyclopædia Britannica
Rosetta Stone (software) – proprietary language learning software
Ultralingua – proprietary electronic dictionaries and language tools
World Book Encyclopedia – multimedia
Mathematics software
Fityk
Grapher
Maple (software)
Mathematica
MATLAB
MathMagic
Octave (software) – open source
R (programming language)
Sysquake
SciLab – open source
Media center
Boxee – Mac and Apple TV
Front Row
Mira
MythTV
SageTV
Plex
Kodi
Multimedia authoring
Adobe Director – animation/application development
Adobe Flash – vector animation
Adobe LiveMotion – a discontinued competitor to Flash, until Adobe bought Macromedia
Apple Media Tool – a discontinued multimedia authoring tool published by Apple
Dragonframe – stop-motion animation and time-lapse
iBooks Author – created interactive books for Apple Books (discontinued)
iLife – media suite by Apple
Unity – 3D authoring
Networking and telecommunications
Apple Remote Desktop
Google Earth
iStumbler – find wireless networks and devices
Karelia Watson (defunct)
KisMAC
Little Snitch – network monitor and outgoing connection firewall
NetSpot – software tool for wireless network assessment, scanning, and surveys, analyzing Wi-Fi coverage and performance
Timbuktu – remote control
WiFi Explorer – a wireless network scanner tool
News aggregators
Feedly – news aggregator, and news aggregator reading application
NetNewsWire – news aggregator reading application
NewsFire – news aggregator reading application
RSSOwl – news aggregator reading application
Safari (web browser) – news aggregation via built-in RSS support
Apple Mail – news aggregation via built-in RSS support (discontinued)
Office and productivity
AbiWord
Adobe Acrobat
Address Book – bundled with macOS
AppleWorks – word processor, spreadsheet, and presentation applications (discontinued)
Banktivity – personal finance, budgeting, investments
Bean (word processor) – free TXT/RTF/DOC word processor
Celtx
Collabora Online – enterprise-ready edition of LibreOffice
CricketGraph – graphmaker
Delicious Library
FileMaker
FlowVella
Fortora Fresh Finance
Helix (database)
iBank – personal finance application
iCal – calendar management, bundled with macOS
iWork – suite:
Pages – word processor application
Numbers – spreadsheet application
Keynote – presentation application
Journler – diary and personal information manager with personal wiki features
KOffice
LibreOffice
MacLinkPlus Deluxe – file format translation tool for PowerPC-era Mac OS X, converting and opening files created in other operating systems
Mellel
Microsoft Office – office suite:
Microsoft Word – word processor application
Microsoft Excel – spreadsheet application
Microsoft PowerPoint – presentation application
Microsoft Entourage – email application (replaced by Microsoft Outlook)
Microsoft Outlook – email application
Microsoft OneNote – note-taking application
MoneyWiz – personal finance application
Montage – screenwriting software
NeoOffice
Nisus Writer
OmniFocus
OpenOffice.org
WriteNow
Taste – word processor (discontinued)
Operating systems
Darwin – the BSD-licensed core of macOS
macOS – originally named "Mac OS X" until 2012 and then "OS X" until 2016
macOS Server – the server computing variant of macOS
Outliners and mind-mapping
FreeMind
Mindjet
OmniOutliner
OmniGraffle
XMind
Peer-to-peer file sharing
aMule
BitTorrent client
FrostWire
LimeWire
Poisoned
rTorrent
Transmission (BitTorrent)
μTorrent
Vuze – Bittorrent client, was Azureus
Science
Celestia – 3D astronomy program
SimThyr – Simulation system for thyroid homeostasis
Stellarium – 3D astronomy program
Text editors
ACE
Aquamacs
BBEdit
BBEdit Lite
Coda
Emacs
jEdit
iA Writer
Komodo Edit
Nano
SimpleText
Smultron
SubEthaEdit
TeachText
TextEdit
TextMate
TextWrangler
vim
XEmacs
Ulysses
Utilities
Activity Monitor – default system monitor for hardware and software
AppZapper – uninstaller (shareware)
Automator – built-in, utility to automate repetitive tasks
Butler – free, launcher and utility to automate repetitive tasks
CleanGenius – system optimization tool for macOS: disk cleaner, uninstaller, device ejector, disk monitor (freeware)
CandyBar – system customization software (commercial)
CDFinder – disk cataloging software (commercial)
DaisyDisk – disk visualization tool
Dashboard – built-in macOS widgets
Grab (software) – built-in macOS screenshot utility
Growl – global notifications system, free
iSync – syncing software, bundled with Mac OS X up to 10.6
LaunchBar – provides instant access to local data, search engines and more by entering abbreviations of search item names, commercial
Mavis Beacon Teaches Typing – proprietary, typing tutor
OnyX – a freeware system maintenance and optimization tool for macOS
Quicksilver – a framework for accessing and manipulating many forms of data
Screen Sharing
SheepShaver – PowerPC emulator, allows, among other things, running Mac OS 9 on Intel Macs
Sherlock – file searching (version 2), web services (version 3)
Stickies – put Post-It Note-like notes on the desktop
System Preferences – default Mac system option application
UUTool – uuencoded/uudecode and other transcoding
Xsan – storage network utility
Yahoo! Widget Engine – JavaScript-based widget system
Support for non-Macintosh software
Bochs
Boot Camp – a multi-boot utility built into macOS from 10.5
CrossOver – commercial implementation of Wine
DOSBox – DOS emulator
Hercules emulator
pcAnywhere – VNC-style remote control
Parallels Workstation – commercial full virtualization software for desktop and server
Q – emulates an IBM-compatible PC on a Mac, allows running PC operating systems
VMware – virtualization software
Wine – Windows API reimplementation
Virtual PC – full virtualization software that allows running other operating systems, such as Windows and Linux, on PowerPC Macs (discontinued in 2007)
VirtualBox
vMac – emulates a Macintosh Plus and can run Apple Macintosh System versions 1.1 to 7.5.5.
Video
Adobe After Effects
Adobe Premiere Pro
Adobe Presenter Video Express
ArKaos – VJ software
Avid
DaVinci Resolve – video editing suite
DivX
DivX Player
DVD Player (Apple) – DVD player software built into macOS
FFmpeg – audio/video converter
Final Cut Express
Final Cut Studio – audio-video editing suite:
Apple Qmaster
Cinema Tools
Compressor
DVD Studio Pro
Final Cut Pro
LiveType
Motion 2
Soundtrack Pro
HandBrake – DVD to MPEG-4 and other formats converter
iMovie – basic video editing application
Miro Media Player
MPlayer
Perian
QuickTime – including its Player and QuickTime Pro
RealPlayer
Shotcut – an open-source video editor
Shake
Windows Media Player
VLC media player
4K Video Downloader – free video downloader
Web browsers
Amaya – free
Camino – open source
Flock – free, Mozilla Firefox based
Google Chrome – free, proprietary
iCab – free
Konqueror – open source
Lynx – free
Mozilla – open source, combines browser, email client, WYSIWYG editor
Mozilla Firefox – open source
Microsoft Edge – free
Netscape Navigator – free, proprietary
OmniWeb – free, proprietary
Opera – free
Safari (web browser) – built-in from Mac OS X 10.3, available as a separate download for Mac OS X 10.2
SeaMonkey – open source Internet application suite
Shiira – open source
Sleipnir – free, by Fenrir Inc
Tor (anonymity network) – free, open source
Torch (web browser) – free, by Torch Media Inc.
Internet Explorer for Mac – free, by Microsoft
WebKit – Safari application framework, also in the form of an application
Web design and content management
Adobe Contribute
Adobe Dreamweaver
Adobe GoLive
Claris Homepage
Coda
Freeway
iWeb
NVu
RapidWeaver – a template-based website editor
Weblog clients
ecto
MarsEdit
See also
List of Macintosh software published by Microsoft
List of Unix commands
References
List of Macintosh software
Macintosh
Orb (software)
Orb was a freeware streaming software that enabled users to remotely access all their personal digital media files, including pictures, music, videos and television. It could be used from any Internet-enabled device, including laptops, Pocket PCs, smartphones, and the PS3, Xbox 360 and Wii video game consoles.
In 2013, Orb Networks, Inc. announced that it had been acquired by a strategic partner and would be shutting down operations. Also in 2013, co-founder Luc Julia indicated that Orb Networks' technology had been acquired by Qualcomm, although no accompanying press release had been issued.
Orb's website (accessed May, 2014) announced: "...about a year ago Orb's team and technology were acquired by Qualcomm Connected Experiences, Inc." and "Orb Networks will no longer be offering any Orb software downloads or support for our web based products such as OrbLive and Mycast." The statement invited people to "check out Qualcomm's AllPlay media platform" but did not specify how Orb software may have been utilized.
What it did
Orb was available for Intel Macintosh, Windows or Media Center PCs. Users created an online account to remotely access their computer.
Access to videos, audio, images
All the music, pictures and video files stored on a home computer were made available for streaming, provided that the computer was connected to the Internet. The media files were transcoded and streamed directly from that PC, or made available for download with the use of a file explorer plug-in.
Orb could also be used as a replacement for Microsoft's Windows Media Connect software on computers running Windows. This allowed the Xbox 360 and PlayStation 3 consoles to natively access the videos, audio, and images on the computer with Orb installed, and let the user watch videos not in the Windows Media Video format without having re-transcoded them beforehand. Orb would transcode the video files from the computer on the fly as they were requested, as long as the computer running Orb had the correct codec.
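The general technique of on-demand transcoding can be sketched as follows, here using the ffmpeg command-line tool from Python (an illustration only; Orb's actual transcoder, codecs and streaming protocol are not documented here, and ffmpeg must be installed for the sketch to run):

    import subprocess

    def stream_transcoded(path, container="mpegts"):
        # transcode on demand and yield output as it is produced,
        # instead of converting the whole file in advance
        proc = subprocess.Popen(
            ["ffmpeg", "-i", path, "-f", container, "pipe:1"],
            stdout=subprocess.PIPE)
        while chunk := proc.stdout.read(64 * 1024):
            yield chunk   # hand each chunk to the network client

Because the output is consumed as it is produced, playback can begin almost immediately and no converted copy of the file is ever stored.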
Access to a tuner card or Webcam and Web-based DVR
If there was a TV tuner card installed on that PC, live TV was also available for streaming. Furthermore, Orb offered DVR functionality, like a TiVo: one could schedule TV recordings and stream the recorded video files remotely via the Web-based interface.
The same was true for a webcam (there was also a webcam surveillance feature).
Sharing
Orb also allowed the user to share photos and videos without having to upload them to an online service. To do that, one selected the files to share and entered the email address of the person who should have access to that data. That person then received an email with a link to view the files.
Mobile phone usage
Supported mobile phones
Orb supported streaming to Windows Media, RealPlayer, 3GP and Flash formats, covering a wide range of 3G cell phones, including the iPhone.
Adaptive streaming inside a LAN or to a mobile phone
Orb would detect the type of device being used and adapt the stream to the device's player availability and the network quality.
For example, on the Nokia N80, Orb would use the native web browser and RealPlayer.
When the device used to connect was on the same local network as the PC where Orb was running, it would connect directly to the stream from the PC without going through the Internet (provided no firewall blocked the connection). This provided a much higher bit rate and quality when streaming inside a LAN.
New features in Orb 2.0
Orb 2.0 beta was Ajax-based and offered a home page similar to Google Personalized Homepage or Netvibes. It was organized into tabs, with each tab containing user-defined modules, such as a TV guide, personal media files, RSS/Atom feeds such as YouTube videos, and the local weather forecast.
Similar products
Similar solutions are Synology and Slingbox, which do not require a PC but come with their own hardware.
Another similar solution for remote access is ifunpix, which additionally includes tools to create and upload content to a personal mini-site, supporting video as well as audio and music. Unlike Orb, iFunPix uses a proxy storage server between the user and the home computer, which can help prevent attacks on the source computer. iFunPix is not able to stream live television.
There is also TVersity, another freeware program much like Orb. Other similar solutions for remote access via mobile are Younity and Orbweb.me, both of which enable mobile access to digital files on a home computer.
References
External links
Orb Version History and Downloads provided by Digital Digest
Web services
Companies based in Oakland, California
Media players
Media servers
Qualcomm software
VM2000
VM2000 is a hypervisor from Fujitsu (formerly Siemens Nixdorf Informationssysteme) designed specifically for use with BS2000, an EBCDIC-based operating system. It allows multiple images of BS2000 and Linux to operate on an S-series computer, which is based on the IBM System/390 architecture. It also supports BS2000, Linux and Microsoft Windows on x86-based SQ-series mainframes. Additionally, it can virtualize BS2000 guests on SR- and SX-series mainframes, based on MIPS and SPARC respectively.
See also
Paravirtualization
References
External links
Virtualization VM2000
Virtualization software
MIPS operating systems
Sytek
Sytek, known as Hughes LAN Systems (HLS) after being acquired by Hughes Electronics and as Whittaker Communications (WCI) from April 24, 1995, created the NetBIOS API, which Microsoft used to build its early networks.
Sytek was founded in Silicon Valley and was last headquartered in its own building on Charleston Road in Mountain View. During this crucial period in LAN development, there were two factions within IBM competing over the basic LAN architecture. One group, the telco switch people from Geneva, favored a central hub with a network of distributed twisted-pair wiring, as is used in phone systems. The other group, the Entry Systems Division in Boca Raton, favored a distributed bus architecture.
Building on prior work done by such companies as Intech Labs (aka American Modem & AMDAX), Sytek built an RF transceiver that operated on cable TV frequencies. It received in the high VHF band and transmitted in the low VHF band; these bands were referred to as the forward and reverse channels. Within any given frequency band, it was possible to have many virtual circuits between devices. To increase the advantage of using FDM on the cable, Sytek added a channel-access algorithm based on CSMA/CD. There were few, if any, IEEE standards at that time.
All of the devices, called T-Boxes, were equipped with an EIA RS-232 serial interface, which had a 'legal' maximum data rate of 19.2 kbit/s. While this may seem slow by today's standards, it was considered very fast in 1982. Having determined a need for PC-to-PC and (especially) PC-to-mainframe communications, IBM asked Sytek to manufacture LAN adapter cards based on their FDM/TDM technology for IBM PCs, which they did.
IBM saw the value of PCs as being a catalyst to sell more mainframe computers and understood that a LAN of the type Sytek made was superior to dedicated runs of RG-62 coaxial cable, which were required for 327X terminals and controllers. These IBM PC Network cards were available from IBM for about $700 each.
In the mid-1980s, IBM moved its focus to Token Ring, and much of the rest of the market moved to Ethernet.
References
External links
Setting Up OS/2 Peer-to-Peer Networking & Coexistence of Warp & NT Machines on the MITNet
Sytek brochures at bitsavers.org
Software companies of the United States
1979 establishments in California
Microsoft Safety Scanner
Microsoft Safety Scanner is a free virus scanner similar to Windows Malicious Software Removal Tool that can be used to scan a system for computer viruses and other forms of malware. It was released on 15 April 2011, following the discontinuation of Windows Live OneCare Safety Scanner.
It is not meant to be used as a day-to-day tool, as it does not provide real-time protection against viruses, cannot update its virus definitions and expires after ten days. However, it can be run on a computer that already has an antivirus product without any potential interference, and can therefore be used to scan a potentially infected computer as a second check from another antivirus program. It uses the same detection engine and malware definitions as Microsoft Security Essentials and Microsoft Forefront Endpoint Protection.
License restriction
Part of Microsoft Safety Scanner's end-user license agreement, which restricts its use, reads:
Notes
References
Further reading
External links
2011 software
Windows-only freeware
Accredited Symbian Developer
Accredited Symbian Developer (ASD) was a (now defunct) accreditation program for software developers using Symbian OS, a mobile phone operating system; it was terminated in April 2011 after the closure of the Symbian Foundation. The scheme was operated independently on the Foundation's behalf by Majinate Limited, which also closed for business when the Foundation closed.
Qualifications required
The primary qualification for being accredited as an ASD was a pass in an online multiple-choice examination that adhered to the Principles of Symbian OS curriculum. This curriculum was reviewed on an annual basis to ensure that the accreditation kept up to date with developments in the Symbian operating system. The final release of the curriculum was made in 2009, although it still adhered closely to the ASD Primer, a learning aid published by Wiley under the Symbian Press imprint.
Curriculum
The final version of the curriculum contained the following major topics:
C++ Language Fundamentals
Classes And Objects
Class Design And Inheritance
Symbian OS Types & Declarations
Cleanup Stack
Object Construction
Descriptors
Dynamic Arrays
Active Objects
System Structure
Client Server
File Server, Store & Streams
Sockets
Tool Chain
Platform Security
Binary Compatibility
Each topic was assessed and marked separately in the examination and a pass required both a high score and coverage of the majority of topics.
See also
Software development
Software engineering
Software development process
Computer and video game development
External links
Symbian developer website
The Symbian Foundation website
Information technology qualifications
Symbian OS
PERCS
PERCS (Productive, Easy-to-use, Reliable Computing System) is IBM's answer to DARPA's High Productivity Computing Systems (HPCS) initiative. The program resulted in commercial development and deployment of the Power 775, a supercomputer design with extremely high performance ratios in fabric and memory bandwidth, as well as very high performance density and power efficiency.
IBM officially announced the Power 775 on July 12, 2011 and started to ship systems in August 2011.
Background
The HPCS program was a three phase research and development effort. IBM was one of three companies, along with Cray and Sun Microsystems, that received the HPCS grant for Phase II. In this phase, IBM collaborated with a consortium of 12 universities and the Los Alamos National Lab to pursue an adaptable computing system with the goal of commercial viability of new chip technology, new computer architecture, operating systems, compiler and programming environments.
IBM was chosen for Phase III in November 2006, and granted $244 million in funds for continuing development of PERCS technology and delivering prototype systems by 2010.
Deployment
The first supercomputer using PERCS technology was intended to be the Blue Waters system, however the high costs and complexity of the system resulted in its contract being canceled. The machine was subsequently delivered by Cray Inc, using a combination of GPUs and CPUs for processing, and a network with reduced global bandwidth capabilities.
Power 775 / PERCS systems were subsequently deployed at roughly two dozen institutions in the U.S. and other countries, in installations ranging from 2,000 to over 64,000 POWER7 processing cores. Major deployments have been for network-intensive and memory-intensive applications (as opposed to FLOPS-intensive ones), such as weather and climate modeling (ECMWF, UKMO, Environment Canada, Japan Meteorological Agency) and scientific research (University of Warsaw, Slovak Academy of Sciences, and several other government laboratories in the U.S. and other countries).
Technology
PERCS used IBM's large-scale server and supercomputer technologies, such as the POWER7 microprocessor, the AIX operating system, the X10 programming language and the General Parallel File System.
Power 775
Sometimes known as the POWER7-IH or P7-IH, the Power 775 is the commercial product developed under PERCS as part of the IBM Power Systems line. The Power 775 was released by IBM in 2011 as a commercial product after IBM ended its participation in the Blue Waters petaflops project at the University of Illinois, but marketed the 775 based on the growth of its high-performance computing business.
Unlike the IBM Blue Gene series, which uses low-power processors to avoid heat density issues, the Power 775 was a water-cooled rack module system, and each module was 34 inches wide, 54 inches deep and 3.5 inches high (2U).
Each drawer comprises 8 cache-coherent nodes (each of which can host one or more OS images), each with an MCM containing four POWER7 CPUs and 16 DDR3 SDRAM slots per MCM, for a total of 256 POWER7 cores and 2 TB of RAM. Each drawer has 8 optical connect controller hub chips, connecting neighboring MCMs, PCIe peripherals and other compute nodes in a dragonfly network topology. One rack can house up to a dozen Power 775 drawers for a total performance of 96 TFLOPS.
The system supports up to 24 terabytes of memory and 230 terabytes of storage per rack. It is estimated to achieve over 94 teraflops per rack.
See also
High Productivity Computing Systems
DARPA
IBM Watson - built on a similar (Power 750) air-cooled platform
References
External links
Power 775 product page
IBM wins DARPA funding – IBM.com
High Productivity Computing Systems (HPCS) – DARPA.mil
IBM supercomputer platforms
Parallel computing
X86
x86 is a family of complex instruction set computer (CISC) instruction set architectures initially developed by Intel based on the Intel 8086 microprocessor and its 8088 variant. The 8086 was introduced in 1978 as a fully 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486 processors.
Many additions and extensions have been added to the x86 instruction set over the years, almost consistently with full backward compatibility. The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA Technologies and many other companies; there are also open implementations, such as the Zet SoC platform (currently inactive). Nevertheless, of those, only Intel, AMD, VIA Technologies, and DM&P Electronics hold x86 architectural licenses, and from these, only the first two are actively producing modern 64-bit designs.
The term is not synonymous with IBM PC compatibility, as this implies a multitude of other computer hardware. Embedded systems and general-purpose computers used x86 chips before the PC-compatible market started, some of them before the IBM PC (1981) debut.
Most desktop computers, laptops and game consoles (with the exception of the Nintendo Switch) sold today are based on the x86 architecture, while mobile categories such as smartphones or tablets are dominated by ARM; at the high end, x86 continues to dominate compute-intensive workstation and cloud computing segments, while the fastest supercomputer is ARM-based and the top 4 are no longer x86-based.
Overview
In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 usually represented any 8086-compatible CPU. Today, however, x86 usually implies a binary compatibility also with the 32-bit instruction set of the 80386. This is due to the fact that this instruction set has become something of a lowest common denominator for many modern operating systems and probably also because the term became common after the introduction of the 80386 in 1985.
A few years after the introduction of the 8086 and 8088, Intel added some complexity to its naming scheme and terminology as the "iAPX" of the ambitious but ill-fated Intel iAPX 432 processor was tried on the more successful 8086 family of chips, applied as a kind of system-level prefix. An 8086 system, including coprocessors such as 8087 and 8089, and simpler Intel-specific system chips, was thereby described as an iAPX 86 system. There were also terms iRMX (for operating systems), iSBC (for single-board computers), and iSBX (for multimodule boards based on the 8086-architecture), all together under the heading Microsystem 80. However, this naming scheme was quite temporary, lasting for a few years during the early 1980s.
Although the 8086 was primarily developed for embedded systems and small multi-user or single-user computers, largely as a response to the successful 8080-compatible Zilog Z80, the x86 line soon grew in features and processing power. Today, x86 is ubiquitous in both stationary and portable personal computers, and is also used in midrange computers, workstations, servers, and most new supercomputer clusters of the TOP500 list. A large amount of software, including a wide range of operating systems, runs on x86-based hardware.
Modern x86 is relatively uncommon in embedded systems, however; small low-power applications (using tiny batteries) and low-cost microprocessor markets, such as home appliances and toys, lack any significant x86 presence. Simple 8- and 16-bit based architectures are common here, although the x86-compatible VIA C7, VIA Nano, AMD's Geode, Athlon Neo and Intel Atom are examples of 32- and 64-bit designs used in some relatively low-power and low-cost segments.
There have been several attempts, including by Intel itself, to end the market dominance of the "inelegant" x86 architecture descended directly from the first simple 8-bit microprocessors. Examples of this are the iAPX 432 (a project originally named the Intel 8800), the Intel i960, the Intel i860 and the Intel/Hewlett-Packard Itanium architecture. However, the continuous refinement of x86 microarchitectures, circuitry and semiconductor manufacturing would make it hard to replace x86 in many segments. AMD's 64-bit extension of x86 (which Intel eventually responded to with a compatible design) and the scalability of x86 chips in the form of modern multi-core CPUs underline x86 as an example of how continuous refinement of established industry standards can resist the competition from completely new architectures.
Chronology
The table below lists processor models and model series implementing various architectures in the x86 family, in chronological order. Each line item is characterized by significantly improved or commercially successful processor microarchitecture designs.
History
Other manufacturers
At various times, companies such as IBM, VIA, NEC, AMD, TI, STM, Fujitsu, OKI, Siemens, Cyrix, Intersil, C&T, NexGen, UMC, and DM&P started to design or manufacture x86 processors (CPUs) intended for personal computers and embedded systems. Such x86 implementations are seldom simple copies but often employ different internal microarchitectures and different solutions at the electronic and physical levels. Quite naturally, early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386 and i486 compatible processors, often named similarly to Intel's original chips. Other companies, which designed or manufactured x86 or x87 processors, include ITT Corporation, National Semiconductor, ULSI System Technology, and Weitek.
Following the fully pipelined i486, Intel introduced the Pentium brand name (which, unlike numbers, could be trademarked) for their new set of superscalar x86 designs. With the x86 naming scheme now legally cleared, other x86 vendors had to choose different names for their x86-compatible products, and initially some chose to continue with variations of the numbering scheme: IBM partnered with Cyrix to produce the 5x86 and then the very efficient 6x86 (M1) and 6x86MX (MII) lines of Cyrix designs, which were the first x86 microprocessors implementing register renaming to enable speculative execution. AMD meanwhile designed and manufactured the advanced but delayed 5k86 (K5), which, internally, was closely based on AMD's earlier 29K RISC design; similar to NexGen's Nx586, it used a strategy such that dedicated pipeline stages decode x86 instructions into uniform and easily handled micro-operations, a method that has remained the basis for most x86 designs to this day.
Some early versions of these microprocessors had heat dissipation problems. The 6x86 was also affected by a few minor compatibility problems, the Nx586 lacked a floating-point unit (FPU) and (the then crucial) pin-compatibility, while the K5 had somewhat disappointing performance when it was (eventually) introduced. Customer ignorance of alternatives to the Pentium series further contributed to these designs being comparatively unsuccessful, despite the fact that the K5 had very good Pentium compatibility and the 6x86 was significantly faster than the Pentium on integer code. AMD later managed to grow into a serious contender with the K6 set of processors, which gave way to the very successful Athlon and Opteron. There were also other contenders, such as Centaur Technology (formerly IDT), Rise Technology, and Transmeta. VIA Technologies' energy efficient C3 and C7 processors, which were designed by the Centaur company, have been sold for many years. Centaur's newest design, the VIA Nano, is their first processor with superscalar and speculative execution. It was introduced at about the same time as Intel's first "in-order" processor since the P5 Pentium, the Intel Atom.
Extensions of word size
The instruction set architecture has twice been extended to a larger word size. In 1985, Intel released the 32-bit 80386 (later known as i386) which gradually replaced the earlier 16-bit chips in computers (although typically not in embedded systems) during the following years; this extended programming model was originally referred to as the i386 architecture (like its first implementation) but Intel later dubbed it IA-32 when introducing its (unrelated) IA-64 architecture.
In 1999–2003, AMD extended this 32-bit architecture to 64 bits and referred to it as x86-64 in early documents and later as AMD64. Intel soon adopted AMD's architectural extensions under the name IA-32e, later using the name EM64T and finally using Intel 64. Microsoft and Sun Microsystems/Oracle also use the term "x64", while many Linux distributions and the BSDs use the "amd64" term. Microsoft Windows, for example, designates its 32-bit versions as "x86" and 64-bit versions as "x64", while installation files of 64-bit Windows versions are required to be placed into a directory called "AMD64".
Basic properties of the architecture
The x86 architecture is a variable instruction length, primarily "CISC" design with emphasis on backward compatibility. The instruction set is not typical CISC, however, but basically an extended version of the simple eight-bit 8008 and 8080 architectures. Byte-addressing is enabled and words are stored in memory with little-endian byte order. Memory access to unaligned addresses is allowed for almost all instructions. The largest native size for integer arithmetic and memory addresses (or offsets) is 16, 32 or 64 bits depending on architecture generation (newer processors include direct support for smaller integers as well). Multiple scalar values can be handled simultaneously via the SIMD unit present in later generations, as described below. Immediate addressing offsets and immediate data may be expressed as 8-bit quantities for the frequently occurring cases or contexts where a -128..127 range is enough. Typical instructions are therefore 2 or 3 bytes in length (although some are much longer, and some are single-byte).
To further conserve encoding space, most registers are expressed in opcodes using three or four bits, the latter via an opcode prefix in 64-bit mode, while at most one operand to an instruction can be a memory location. However, this memory operand may also be the destination (or a combined source and destination), while the other operand, the source, can be either register or immediate. Among other factors, this contributes to a code size that rivals eight-bit machines and enables efficient use of instruction cache memory. The relatively small number of general registers (also inherited from its 8-bit ancestors) has made register-relative addressing (using small immediate offsets) an important method of accessing operands, especially on the stack. Much work has therefore been invested in making such accesses as fast as register accesses—i.e., a one cycle instruction throughput, in most circumstances where the accessed data is available in the top-level cache.
Floating point and SIMD
A dedicated floating-point processor with 80-bit internal registers, the 8087, was developed for the original 8086. This microprocessor subsequently developed into the extended 80387, and later processors incorporated a backward compatible version of this functionality on the same microprocessor as the main processor. In addition to this, modern x86 designs also contain a SIMD-unit (see SSE below) where instructions can work in parallel on (one or two) 128-bit words, each containing two or four floating-point numbers (each 64 or 32 bits wide respectively), or alternatively, 2, 4, 8 or 16 integers (each 64, 32, 16 or 8 bits wide respectively).
The presence of wide SIMD registers means that existing x86 processors can load or store up to 128 bits of memory data in a single instruction and also perform bitwise operations (although not integer arithmetic) on full 128-bit quantities in parallel. Intel's Sandy Bridge processors added the Advanced Vector Extensions (AVX) instructions, widening the SIMD registers to 256 bits. The Intel Initial Many Core Instructions implemented by the Knights Corner Xeon Phi processors, and the AVX-512 instructions implemented by the Knights Landing Xeon Phi processors and by Skylake-X processors, use 512-bit wide SIMD registers.
Current implementations
During execution, current x86 processors employ a few extra decoding steps to split most instructions into smaller pieces called micro-operations. These are then handed to a control unit that buffers and schedules them in compliance with x86-semantics so that they can be executed, partly in parallel, by one of several (more or less specialized) execution units. These modern x86 designs are thus pipelined, superscalar, and also capable of out of order and speculative execution (via branch prediction, register renaming, and memory dependence prediction), which means they may execute multiple (partial or complete) x86 instructions simultaneously, and not necessarily in the same order as given in the instruction stream.
Some Intel CPUs (Xeon Foster MP, some Pentium 4, and some Nehalem and later Intel Core processors) and AMD CPUs (starting from Zen) are also capable of simultaneous multithreading with two threads per core (Xeon Phi has four threads per core). Some Intel CPUs support transactional memory (TSX).
When introduced, in the mid-1990s, this method was sometimes referred to as a "RISC core" or as "RISC translation", partly for marketing reasons, but also because these micro-operations share some properties with certain types of RISC instructions. However, traditional microcode (used since the 1950s) also inherently shares many of the same properties; the new method differs mainly in that the translation to micro-operations now occurs asynchronously. Not having to synchronize the execution units with the decode steps opens up possibilities for more analysis of the (buffered) code stream, and therefore permits detection of operations that can be performed in parallel, simultaneously feeding more than one execution unit.
The latest processors also do the opposite when appropriate; they combine certain x86 sequences (such as a compare followed by a conditional jump) into a more complex micro-op which fits the execution model better and thus can be executed faster or with fewer machine resources involved.
Another way to try to improve performance is to cache the decoded micro-operations, so the processor can directly access the decoded micro-operations from a special cache, instead of decoding them again. Intel followed this approach with the Execution Trace Cache feature in their NetBurst microarchitecture (for Pentium 4 processors) and later in the Decoded Stream Buffer (for Core-branded processors since Sandy Bridge).
Transmeta used a completely different method in their Crusoe x86 compatible CPUs. They used just-in-time translation to convert x86 instructions to the CPU's native VLIW instruction set. Transmeta argued that their approach allows for more power efficient designs since the CPU can forgo the complicated decode step of more traditional x86 implementations.
Segmentation
Minicomputers during the late 1970s were running up against the 16-bit 64-KB address limit, as memory had become cheaper. Some minicomputers like the PDP-11 used complex bank-switching schemes, or, in the case of Digital's VAX, redesigned much more expensive processors which could directly handle 32-bit addressing and data. The original 8086, developed from the simple 8080 microprocessor and primarily aiming at very small and inexpensive computers and other specialized devices, instead adopted simple segment registers which increased the memory address width by only 4 bits. By multiplying a 16-bit segment value by 16 and adding a 16-bit offset, the resulting 20-bit address could cover a total of one megabyte (1,048,576 bytes), which was quite a large amount for a small computer at the time. The concept of segment registers was not new to many mainframes, which used segment registers to swap quickly to different tasks. In practice, on the x86 it was (is) a much-criticized implementation which greatly complicated many common programming tasks and compilers. However, the architecture soon allowed linear 32-bit addressing (starting with the 80386 in late 1985), but major actors (such as Microsoft) took several years to convert their 16-bit based systems. The 80386 (and 80486) was therefore largely used as a fast (but still 16-bit based) 8086 for many years.
Data and code could be managed within "near" 16-bit segments within 64 KB portions of the total 1 MB address space, or a compiler could operate in a "far" mode using 32-bit segment:offset pairs reaching (only) 1 MB. While that would also prove to be quite limiting by the mid-1980s, it was working for the emerging PC market, and made it very simple to translate software from the older 8008, 8080, 8085, and Z80 to the newer processor. During 1985, the 16-bit segment addressing model was effectively factored out by the introduction of 32-bit offset registers, in the 386 design.
In real mode, segmentation is achieved by shifting the segment address left by 4 bits and adding an offset in order to receive a final 20-bit address. For example, if DS is A000h and SI is 5677h, DS:SI will point at the absolute address DS × 10h + SI = A5677h. Thus the total address space in real mode is 2^20 bytes, or 1 MB, quite an impressive figure for 1978. All memory addresses consist of both a segment and offset; every type of access (code, data, or stack) has a default segment register associated with it (for data the register is usually DS, for code it is CS, and for stack it is SS). For data accesses, the segment register can be explicitly specified (using a segment override prefix) to use any of the four segment registers.
In this scheme, two different segment/offset pairs can point at a single absolute location. Thus, if DS is A111h and SI is 4567h, DS:SI will point at the same A5677h as above. This scheme makes it impossible to use more than four segments at once. CS and SS are vital for the correct functioning of the program, so that only DS and ES can be used to point to data segments outside the program (or, more precisely, outside the currently executing segment of the program) or the stack.
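Both segment:offset calculations above can be checked directly against the 20-bit formula, for example in Python:

    def real_mode_address(segment, offset):
        # physical address = segment * 16 + offset, truncated to 20 bits
        return ((segment << 4) + offset) & 0xFFFFF

    assert real_mode_address(0xA000, 0x5677) == 0xA5677
    assert real_mode_address(0xA111, 0x4567) == 0xA5677   # two pairs, one location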
In protected mode, introduced in the 80286, a segment register no longer contains the physical address of the beginning of a segment, but contains a "selector" that points to a system-level structure called a segment descriptor. A segment descriptor contains the physical address of the beginning of the segment, the length of the segment, and access permissions to that segment. The offset is checked against the length of the segment, with offsets referring to locations outside the segment causing an exception. Offsets referring to locations inside the segment are combined with the physical address of the beginning of the segment to get the physical address corresponding to that offset.
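A simplified Python sketch of the limit check described above (ignoring access rights, granularity and paging, all of which a real CPU also evaluates):

    def translate(descriptor, offset):
        # limit check first; an out-of-range offset raises a protection fault
        if offset > descriptor["limit"]:
            raise MemoryError("offset beyond segment limit (protection fault)")
        return descriptor["base"] + offset

    data_seg = {"base": 0x00400000, "limit": 0xFFFF}
    print(hex(translate(data_seg, 0x1234)))   # 0x401234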
The segmented nature can make programming and compiler design difficult because the use of near and far pointers affects performance.
Addressing modes
Addressing modes for 16-bit processor modes can be summarized by the formula:

    [CS: or DS: or SS: or ES:] [BX or BP] + [SI or DI] + displacement

where each term is optional and the displacement may be 0, 8 or 16 bits wide.
Addressing modes for 32-bit x86 processor modes can be summarized by the formula:

    [segment:] base register + index register × scale + displacement

where the base may be any general-purpose register, the index may be any general-purpose register except ESP, the scale is 1, 2, 4 or 8, and the displacement may be 0, 8 or 32 bits wide.
Addressing modes for the 64-bit processor mode can be summarized by the same formula using 64-bit registers, with the same scale factors and an optional 32-bit displacement.
Instruction relative addressing in 64-bit code (RIP + displacement, where RIP is the instruction pointer register) simplifies the implementation of position-independent code (as used in shared libraries in some operating systems).
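The 32-bit formula above can be expressed as a short function (a sketch of the effective-address arithmetic only, ignoring segmentation):

    def effective_address(base=0, index=0, scale=1, disp=0, width=32):
        # scale is limited to 1, 2, 4 or 8 by the two-bit field in the SIB byte
        assert scale in (1, 2, 4, 8)
        return (base + index * scale + disp) & ((1 << width) - 1)

    # e.g. [ebx + esi*4 + 8] with EBX = 0x1000 and ESI = 3
    print(hex(effective_address(base=0x1000, index=3, scale=4, disp=8)))  # 0x1014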
The 8086 had 64 KB of eight-bit (or alternatively 32 K-words of 16-bit) I/O space, and a 64 KB (one segment) stack in memory supported by computer hardware. Only words (two bytes) can be pushed to the stack. The stack grows toward numerically lower addresses, with SS:SP pointing to the most recently pushed item. There are 256 interrupts, which can be invoked by both hardware and software. The interrupts can cascade, using the stack to store the return address.
x86 registers
16-bit
The original Intel 8086 and 8088 have fourteen 16-bit registers. Four of them (AX, BX, CX, DX) are general-purpose registers (GPRs), although each may have an additional purpose; for example, only CX can be used as a counter with the loop instruction. Each can be accessed as two separate bytes (thus BX's high byte can be accessed as BH and low byte as BL). Two pointer registers have special roles: SP (stack pointer) points to the "top" of the stack, and BP (base pointer) is often used to point at some other place in the stack, typically above the local variables (see frame pointer). The registers SI, DI, BX and BP are address registers, and may also be used for array indexing.
Four segment registers (CS, DS, SS and ES) are used to form a memory address. The FLAGS register contains flags such as carry flag, overflow flag and zero flag. Finally, the instruction pointer (IP) points to the next instruction that will be fetched from memory and then executed; this register cannot be directly accessed (read or written) by a program.
The Intel 80186 and 80188 are essentially an upgraded 8086 or 8088 CPU, respectively, with on-chip peripherals added, and they have the same CPU registers as the 8086 and 8088 (in addition to interface registers for the peripherals).
The 8086, 8088, 80186, and 80188 can use an optional floating-point coprocessor, the 8087. The 8087 appears to the programmer as part of the CPU and adds eight 80-bit wide registers, st(0) to st(7), each of which can hold numeric data in one of seven formats: 32-, 64-, or 80-bit floating point, 16-, 32-, or 64-bit (binary) integer, and 80-bit packed decimal integer. It also has its own 16-bit status register, accessible through the FNSTSW instruction, and it is common to simply use some of its bits for branching by copying it into the normal FLAGS.
In the Intel 80286, to support protected mode, three special registers hold descriptor table addresses (GDTR, LDTR, IDTR), and a fourth task register (TR) is used for task switching. The 80287 is the floating-point coprocessor for the 80286 and has the same registers as the 8087 with the same data formats.
32-bit
With the advent of the 32-bit 80386 processor, the 16-bit general-purpose registers, base registers, index registers, instruction pointer, and FLAGS register, but not the segment registers, were expanded to 32 bits. The nomenclature represented this by prefixing an "E" (for "extended") to the register names in x86 assembly language. Thus, the AX register corresponds to the lowest 16 bits of the new 32-bit EAX register, SI corresponds to the lowest 16 bits of ESI, and so on. The general-purpose registers, base registers, and index registers can all be used as the base in addressing modes, and all of those registers except for the stack pointer can be used as the index in addressing modes.
Two new segment registers (FS and GS) were added. With a greater number of registers, instructions and operands, the machine code format was expanded. To provide backward compatibility, segments with executable code can be marked as containing either 16-bit or 32-bit instructions. Special prefixes allow inclusion of 32-bit instructions in a 16-bit segment or vice versa.
The 80386 had an optional floating-point coprocessor, the 80387; it had eight 80-bit wide registers: st(0) to st(7), like the 8087 and 80287. The 80386 could also use an 80287 coprocessor. With the 80486 and all subsequent x86 models, the floating-point processing unit (FPU) is integrated on-chip.
The Pentium MMX added eight 64-bit MMX integer registers (MMX0 to MMX7, which share lower bits with the 80-bit-wide FPU stack). With the Pentium III, Intel added a 32-bit Streaming SIMD Extensions (SSE) control/status register (MXCSR) and eight 128-bit SSE floating-point registers (XMM0 to XMM7).
64-bit
Starting with the AMD Opteron processor, the x86 architecture extended the 32-bit registers into 64-bit registers in a way similar to how the 16 to 32-bit extension took place. An R-prefix (for "register") identifies the 64-bit registers (RAX, RBX, RCX, RDX, RSI, RDI, RBP, RSP, RFLAGS, RIP), and eight additional 64-bit general registers (R8-R15) were also introduced in the creation of x86-64. However, these extensions are only usable in 64-bit mode, which is one of the two modes only available in long mode. The addressing modes were not dramatically changed from 32-bit mode, except that addressing was extended to 64 bits, virtual addresses are now sign extended to 64 bits (in order to disallow mode bits in virtual addresses), and other selector details were dramatically reduced. In addition, an addressing mode was added to allow memory references relative to RIP (the instruction pointer), to ease the implementation of position-independent code, used in shared libraries in some operating systems.
128-bit
SIMD registers XMM0–XMM15.
256-bit
SIMD registers YMM0–YMM15.
512-bit
SIMD registers ZMM0–ZMM31.
Miscellaneous/special purpose
x86 processors that have a protected mode, i.e. the 80286 and later processors, also have three descriptor registers (GDTR, LDTR, IDTR) and a task register (TR).
32-bit x86 processors (starting with the 80386) also include various special/miscellaneous registers such as control registers (CR0 through 4, CR8 for 64-bit only), debug registers (DR0 through 3, plus 6 and 7), test registers (TR3 through 7; 80486 only), and model-specific registers (MSRs, appearing with the Pentium).
AVX-512 has eight extra 64-bit mask registers (k0–k7) for selecting elements in a ZMM.
Purpose
Although the main registers (with the exception of the instruction pointer) are "general-purpose" in the 32-bit and 64-bit versions of the instruction set and can be used for anything, it was originally envisioned that they be used for the following purposes:
AL/AH/AX/EAX/RAX: Accumulator
CL/CH/CX/ECX/RCX: Counter (for use with loops and strings)
DL/DH/DX/EDX/RDX: Extend the precision of the accumulator (e.g. combine 32-bit EAX and EDX for 64-bit integer operations in 32-bit code)
BL/BH/BX/EBX/RBX: Base index (for use with arrays)
SP/ESP/RSP: Stack pointer for top address of the stack.
BP/EBP/RBP: Stack base pointer for holding the address of the current stack frame.
SI/ESI/RSI: Source index for string operations.
DI/EDI/RDI: Destination index for string operations.
IP/EIP/RIP: Instruction pointer. Holds the program counter, the address of the next instruction.
Segment registers:
CS: Code
DS: Data
SS: Stack
ES: Extra data
FS: Extra data #2
GS: Extra data #3
No particular purposes were envisioned for the other 8 registers available only in 64-bit mode.
Some instructions compile and execute more efficiently when using these registers for their designed purpose. For example, using AL as an accumulator and adding an immediate byte value to it produces the efficient add-to-AL opcode of 04h, whilst using the BL register produces the generic and longer add-to-register opcode of 80C3h. Another example is double-precision division and multiplication, which work specifically with the AX and DX registers.
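A minimal C sketch of the two machine-code encodings named above (the byte values follow the text; the array names are invented for illustration):

#include <stdint.h>

/* ADD AL, imm8 uses the short accumulator form 04 ib, while
   ADD BL, imm8 needs the generic form 80 /0 ib, where the ModRM
   byte C3 selects register BL and the ADD operation. */
static const uint8_t add_al_42[] = { 0x04, 0x2A };        /* add al, 42 */
static const uint8_t add_bl_42[] = { 0x80, 0xC3, 0x2A };  /* add bl, 42 */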
Modern compilers benefited from the introduction of the SIB byte (scale-index-base byte) that allows registers to be treated uniformly (minicomputer-like). However, using the SIB byte universally is non-optimal, as it produces longer encodings than only using it selectively when necessary. (The main benefit of the SIB byte is the orthogonality and more powerful addressing modes it provides, which make it possible to save instructions and the use of registers for address calculations such as scaling an index.) Some special instructions lost priority in the hardware design and became slower than equivalent small code sequences. A notable example is the LODSW instruction.
Structure
Note: The ?PL registers (SPL and BPL) and the ?IL registers (SIL and DIL) are only available in 64-bit mode.
Operating modes
Real mode
Real Address mode, commonly called Real mode, is an operating mode of 8086 and later x86-compatible CPUs. Real mode is characterized by a 20-bit segmented memory address space (meaning that only slightly more than 1 MiB of memory can be addressed), direct software access to peripheral hardware, and no concept of memory protection or multitasking at the hardware level. All x86 CPUs in the 80286 series and later start up in real mode at power-on; 80186 CPUs and earlier had only one operational mode, which is equivalent to real mode in later chips. (On the IBM PC platform, direct software access to the IBM BIOS routines is available only in real mode, since BIOS is written for real mode. However, this is not a property of the x86 CPU but of the IBM BIOS design.)
In order to use more than 64 KB of memory, the segment registers must be used. This created great complications for compiler implementors, who introduced odd pointer modes such as "near", "far" and "huge" to leverage the implicit nature of segmented architecture to different degrees, with some pointers containing 16-bit offsets within implied segments and other pointers containing segment addresses and offsets within segments. It is technically possible to use up to 256 KB of memory for code and data, with up to 64 KB for code, by setting all four segment registers once and then only using 16-bit offsets (optionally with default-segment override prefixes) to address memory. However, this puts substantial restrictions on the way data can be addressed and memory operands can be combined, and it violates the architectural intent of the Intel designers, which is for separate data items (e.g. arrays, structures, code units) to be contained in separate segments and addressed by their own segment addresses, in new programs that are not ported from earlier 8-bit processors with 16-bit address spaces.
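A minimal C sketch of the real-mode address calculation implied by the 20-bit segmented address space (the function name is invented for illustration):

#include <stdint.h>

/* Real-mode physical address: the 16-bit segment value shifted left
   four bits plus the 16-bit offset, yielding a 20-bit result (so the
   highest reachable address is 0xFFFF0 + 0xFFFF = 0x10FFEF). */
static uint32_t real_mode_address(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}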
Unreal mode
Unreal mode, a variant of real mode in which the CPU retains the larger segment limits loaded during a brief excursion into protected mode, is used by some 16-bit operating systems and some 32-bit boot loaders.
System Management Mode
System Management Mode (SMM) is only used by the system firmware (BIOS/UEFI), not by operating systems and applications software. SMM code runs in a dedicated memory area called SMRAM.
Protected mode
In addition to real mode, the Intel 80286 supports protected mode, expanding addressable physical memory to 16 MB and addressable virtual memory to 1 GB, and providing protected memory, which prevents programs from corrupting one another. This is done by using the segment registers only for storing an index into a descriptor table that is stored in memory. There are two such tables, the Global Descriptor Table (GDT) and the Local Descriptor Table (LDT), each holding up to 8192 segment descriptors, each segment giving access to 64 KB of memory. In the 80286, a segment descriptor provides a 24-bit base address, and this base address is added to a 16-bit offset to create an absolute address. The base address from the table fulfills the same role that the literal value of the segment register fulfills in real mode; the segment registers have been converted from direct registers to indirect registers. Each segment can be assigned one of four ring levels used for hardware-based computer security. Each segment descriptor also contains a segment limit field which specifies the maximum offset that may be used with the segment. Because offsets are 16 bits, segments are still limited to 64 KB each in 80286 protected mode.
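A C sketch of the 80286 segment descriptor layout described above (a rough illustration; the field packing is assumed, and the final word is reserved on the 80286):

#include <stdint.h>

/* One 8-byte entry of the GDT or LDT as seen by the 80286. */
struct descriptor_286 {
    uint16_t limit;      /* maximum offset usable within the segment */
    uint16_t base_low;   /* bits 0-15 of the 24-bit base address */
    uint8_t  base_high;  /* bits 16-23 of the 24-bit base address */
    uint8_t  access;     /* present bit, ring level, segment type */
    uint16_t reserved;   /* must be zero on the 80286 */
};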
Each time a segment register is loaded in protected mode, the 80286 must read a 6-byte segment descriptor from memory into a set of hidden internal registers. Thus, loading segment registers is much slower in protected mode than in real mode, and changing segments very frequently is to be avoided. Actual memory operations using protected mode segments are not slowed much because the 80286 and later have hardware to check the offset against the segment limit in parallel with instruction execution.
The Intel 80386 extended offsets and also the segment limit field in each segment descriptor to 32 bits, enabling a segment to span the entire memory space. It also introduced support in protected mode for paging, a mechanism making it possible to use paged virtual memory (with 4 KB page size). Paging allows the CPU to map any page of the virtual memory space to any page of the physical memory space. To do this, it uses additional mapping tables in memory called page tables. Protected mode on the 80386 can operate with paging either enabled or disabled; the segmentation mechanism is always active and generates virtual addresses that are then mapped by the paging mechanism if it is enabled. The segmentation mechanism can also be effectively disabled by setting all segments to have a base address of 0 and size limit equal to the whole address space; this also requires a minimally-sized segment descriptor table of only four descriptors (since the FS and GS segments need not be used).
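A minimal C sketch of how a 32-bit linear address is split under the 80386's two-level, 4 KB paging (the standard 10/10/12 index layout; names are invented for illustration):

#include <stdint.h>

/* 10-bit page-directory index, 10-bit page-table index, 12-bit offset. */
static void split_linear(uint32_t linear, uint32_t *dir,
                         uint32_t *table, uint32_t *offset)
{
    *dir    = linear >> 22;            /* selects a page-directory entry */
    *table  = (linear >> 12) & 0x3FF;  /* selects a page-table entry */
    *offset = linear & 0xFFF;          /* byte within the 4 KB page */
}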
Paging is used extensively by modern multitasking operating systems. Linux, 386BSD and Windows NT were developed for the 386 because it was the first Intel architecture CPU to support paging and 32-bit segment offsets. The 386 architecture became the basis of all further development in the x86 series.
x86 processors that support protected mode boot into real mode for backward compatibility with the older 8086 class of processors. Upon power-on (a.k.a. booting), the processor initializes in real mode, and then begins executing instructions. Operating system boot code, which might be stored in ROM, may place the processor into protected mode to enable paging and other features. The instruction set in protected mode is similar to that used in real mode. However, certain constraints that apply to real mode (such as not being able to use AX, CX, or DX in addressing) do not apply in protected mode. Conversely, segment arithmetic, a common practice in real mode code, is not allowed in protected mode.
Virtual 8086 mode
There is also a sub-mode of operation in 32-bit protected mode (a.k.a. 80386 protected mode) called virtual 8086 mode, also known as V86 mode. This is basically a special hybrid operating mode that allows real mode programs and operating systems to run while under the control of a protected mode supervisor operating system. This allows for a great deal of flexibility in running both protected mode programs and real mode programs simultaneously. This mode is exclusively available for the 32-bit version of protected mode; it does not exist in the 16-bit version of protected mode, or in long mode.
Long mode
In the mid-1990s, it was obvious that the 32-bit address space of the x86 architecture was limiting its performance in applications requiring large data sets. A 32-bit address space would allow the processor to directly address only 4 GB of data, a size surpassed by applications such as video processing and database engines. Using 64-bit addresses, it is possible to directly address 16 EiB of data, although most 64-bit architectures do not support access to the full 64-bit address space; for example, AMD64 supports only 48 bits from a 64-bit address, split into four paging levels.
In 1999, AMD published a (nearly) complete specification for a 64-bit extension of the x86 architecture, which it called x86-64, with the stated intention of producing processors that implemented it. That design is currently used in almost all x86 processors, with some exceptions intended for embedded systems.
Mass-produced x86-64 chips for the general market became available four years later, in 2003, after time had been spent testing and refining working prototypes; around the same time, the initial name x86-64 was changed to AMD64. The success of the AMD64 line of processors, coupled with the lukewarm reception of the IA-64 architecture, forced Intel to release its own implementation of the AMD64 instruction set. Intel had previously implemented support for AMD64 but opted not to enable it, in hopes that AMD would not bring AMD64 to market before Itanium's new IA-64 instruction set was widely adopted. It branded its implementation of AMD64 as EM64T, and later rebranded it Intel 64.
In their literature and product version names, Microsoft and Sun refer to AMD64/Intel 64 collectively as x64 in the Windows and Solaris operating systems. Linux distributions refer to it either as "x86-64", its variant "x86_64", or "amd64". BSD systems use "amd64" while macOS uses "x86_64".
Long mode is mostly an extension of the 32-bit instruction set, but unlike the 16–to–32-bit transition, many instructions were dropped in the 64-bit mode. This does not affect actual binary backward compatibility (which would execute legacy code in other modes that retain support for those instructions), but it changes the way assembler and compilers for new code have to work.
This was the first time that a major extension of the x86 architecture was initiated and originated by a manufacturer other than Intel. It was also the first time that Intel accepted technology of this nature from an outside source.
Extensions
Floating-point unit
Early x86 processors could be extended with floating-point hardware in the form of a series of floating-point numerical co-processors with names like 8087, 80287 and 80387, abbreviated x87. This was also known as the NPX (Numeric Processor eXtension), an apt name since the coprocessors, while used mainly for floating-point calculations, also performed integer operations on both binary and decimal formats. With very few exceptions, the 80486 and subsequent x86 processors then integrated this x87 functionality on chip, which made the x87 instructions a de facto integral part of the x86 instruction set.
Each x87 register, known as ST(0) through ST(7), is 80 bits wide and stores numbers in the IEEE floating-point standard double extended precision format. These registers are organized as a stack with ST(0) as the top. This was done in order to conserve opcode space, and the registers are therefore randomly accessible only for either operand in a register-to-register instruction; ST(0) must always be one of the two operands, either the source or the destination, regardless of whether the other operand is ST(x) or a memory operand. However, random access to the stack registers can be obtained through an instruction which exchanges any specified ST(x) with ST(0).
The operations include arithmetic and transcendental functions, including trigonometric and exponential functions, and instructions that load common constants (such as 0; 1; e, the base of the natural logarithm; log2(10); and log10(2)) into one of the stack registers. While the integer ability is often overlooked, the x87 can operate on larger integers with a single instruction than the 8086, 80286, 80386, or any x86 CPU without 64-bit extensions can, and repeated integer calculations even on small values (e.g., 16-bit) can be accelerated by executing integer instructions on the x86 CPU and the x87 in parallel. (The x86 CPU keeps running while the x87 coprocessor calculates, and the x87 sets a signal to the x86 when it is finished or interrupts the x86 if it needs attention because of an error.)
MMX
MMX is a SIMD instruction set designed by Intel and introduced in 1997 for the Pentium MMX microprocessor. The MMX instruction set was developed from a similar concept first used on the Intel i860. It is supported on most subsequent IA-32 processors by Intel and other vendors. MMX is typically used for video processing (in multimedia applications, for instance).
MMX added 8 new registers to the architecture, known as MM0 through MM7 (henceforth referred to as MMn). In reality, these new registers were just aliases for the existing x87 FPU stack registers. Hence, anything that was done to the floating-point stack would also affect the MMX registers. Unlike the FP stack, these MMn registers were fixed, not relative, and therefore they were randomly accessible. The instruction set did not adopt the stack-like semantics so that existing operating systems could still correctly save and restore the register state when multitasking without modifications.
Each of the MMn registers holds a 64-bit integer. However, one of the main concepts of the MMX instruction set is the concept of packed data types, which means that instead of using the whole register for a single 64-bit integer (quadword), one may use it to contain two 32-bit integers (doubleword), four 16-bit integers (word) or eight 8-bit integers (byte). Given that the MMX's 64-bit MMn registers are aliased to the FPU stack and each of the floating-point registers is 80 bits wide, the upper 16 bits of the floating-point registers are unused in MMX. These bits are set to all ones by any MMX instruction, which corresponds to the floating-point representation of NaNs or infinities.
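A C sketch of the packed interpretations of a single 64-bit MMn register (the type name is invented for illustration):

#include <stdint.h>

/* One MMn register viewed as each of the MMX packed data types. */
typedef union {
    uint64_t quadword;        /* 1 x 64-bit */
    uint32_t doublewords[2];  /* 2 x 32-bit */
    uint16_t words[4];        /* 4 x 16-bit */
    uint8_t  bytes[8];        /* 8 x 8-bit  */
} mmx_packed;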
3DNow!
In 1997, AMD introduced 3DNow!. The introduction of this technology coincided with the rise of 3D entertainment applications and was designed to improve the CPU's vector processing performance of graphic-intensive applications. 3D video game developers and 3D graphics hardware vendors used 3DNow! to enhance their performance on AMD's K6 and Athlon series of processors.
3DNow! was designed to be the natural evolution of MMX from integers to floating point. As such, it uses exactly the same register naming convention as MMX, that is MM0 through MM7. The only difference is that instead of packing integers into these registers, two single-precision floating-point numbers are packed into each register. The advantage of aliasing the FPU registers is that the same instruction and data structures used to save the state of the FPU registers can also be used to save 3DNow! register states. Thus no special modifications are required to be made to operating systems which would otherwise not know about them.
SSE and AVX
In 1999, Intel introduced the Streaming SIMD Extensions (SSE) instruction set, following in 2000 with SSE2. The first addition allowed offloading of basic floating-point operations from the x87 stack and the second made MMX almost obsolete and allowed the instructions to be realistically targeted by conventional compilers. Introduced in 2004 along with the Prescott revision of the Pentium 4 processor, SSE3 added specific memory and thread-handling instructions to boost the performance of Intel's HyperThreading technology. AMD licensed the SSE3 instruction set and implemented most of the SSE3 instructions for its revision E and later Athlon 64 processors. The Athlon 64 does not support HyperThreading and lacks those SSE3 instructions used only for HyperThreading.
SSE discarded all legacy connections to the FPU stack. This also meant that this instruction set discarded all legacy connections to previous generations of SIMD instruction sets like MMX. But it freed the designers up, allowing them to use larger registers, not limited by the size of the FPU registers. The designers created eight 128-bit registers, named XMM0 through XMM7. (Note: in AMD64, the number of SSE XMM registers has been increased from 8 to 16.) However, the downside was that operating systems had to have an awareness of this new set of instructions in order to be able to save their register states. So Intel created a slightly modified version of Protected mode, called Enhanced mode which enables the usage of SSE instructions, whereas they stay disabled in regular Protected mode. An OS that is aware of SSE will activate Enhanced mode, whereas an unaware OS will only enter into traditional Protected mode.
SSE is a SIMD instruction set that works only on floating-point values, like 3DNow!. However, unlike 3DNow!, it severs all legacy connections to the FPU stack. Because it has larger registers than 3DNow!, SSE can pack twice the number of single-precision floats into its registers. The original SSE was limited to single-precision numbers, like 3DNow!. SSE2 introduced the capability to pack double-precision numbers too, which 3DNow! could not do, since a double-precision number is 64 bits in size, the full size of a single 3DNow! MMn register. At 128 bits, the SSE XMMn registers can pack two double-precision floats into one register. Thus SSE2 is much more suitable for scientific calculations than either SSE1 or 3DNow!, which were limited to only single precision. SSE3 does not introduce any additional registers.
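The corresponding C sketch for one 128-bit XMMn register (again an invented type name; floating-point lanes only):

/* One XMMn register as SSE single-precision or SSE2 double-precision lanes. */
typedef union {
    float  singles[4];  /* SSE:  4 x 32-bit floats  */
    double doubles[2];  /* SSE2: 2 x 64-bit doubles */
} xmm_packed;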
The Advanced Vector Extensions (AVX) doubled the size of SSE registers to 256-bit YMM registers. It also introduced the VEX coding scheme to accommodate the larger registers, plus a few instructions to permute elements. AVX2 did not introduce extra registers, but was notable for the addition of masking, gather, and shuffle instructions.
AVX-512 features yet another expansion to 32 512-bit ZMM registers and a new EVEX scheme. Unlike its predecessors featuring a monolithic extension, it is divided into many subsets that specific models of CPUs can choose to implement.
Physical Address Extension (PAE)
Physical Address Extension or PAE was first added in the Intel Pentium Pro, and later by AMD in the Athlon processors, to allow up to 64 GB of RAM to be addressed. Without PAE, physical RAM in 32-bit protected mode is usually limited to 4 GB. PAE defines a different page table structure with wider page table entries and a third level of page table, allowing additional bits of physical address. Although the initial implementations on 32-bit processors theoretically supported up to 64 GB of RAM, chipset and other platform limitations often restricted what could actually be used. x86-64 processors define page table structures that theoretically allow up to 52 bits of physical address, although again, chipset and other platform concerns (like the number of DIMM slots available, and the maximum RAM possible per DIMM) prevent such a large physical address space from being realized. On x86-64 processors PAE mode must be active before the switch to long mode, and must remain active while long mode is active, so while in long mode there is no "non-PAE" mode. PAE mode does not affect the width of linear or virtual addresses.
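A minimal C sketch of PAE's index split of a 32-bit linear address (the standard 2/9/9/12 layout, with 64-bit page-table entries; names are invented for illustration):

#include <stdint.h>

/* 2-bit page-directory-pointer index, then 9-bit directory and table
   indices (512 entries of 64 bits each), then a 12-bit page offset. */
static void split_pae(uint32_t linear, unsigned *pdpt, unsigned *dir,
                      unsigned *table, unsigned *offset)
{
    *pdpt   = linear >> 30;            /* one of 4 directory pointers */
    *dir    = (linear >> 21) & 0x1FF;  /* page-directory entry */
    *table  = (linear >> 12) & 0x1FF;  /* page-table entry */
    *offset = linear & 0xFFF;          /* byte within the 4 KB page */
}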
x86-64
By the 2000s, 32-bit x86 processors' limits in memory addressing were an obstacle to their use in high-performance computing clusters and powerful desktop workstations. The aged 32-bit x86 was competing with much more advanced 64-bit RISC architectures which could address much more memory. Intel and the whole x86 ecosystem needed 64-bit memory addressing if x86 was to survive the 64-bit computing era, as workstation and desktop software applications were soon to start hitting the limits of 32-bit memory addressing. However, Intel felt that it was the right time to make a bold step and use the transition to 64-bit desktop computers for a transition away from the x86 architecture in general, an experiment which ultimately failed.
In 2001, Intel attempted to introduce a non-x86 64-bit architecture named IA-64 in its Itanium processor, initially aiming for the high-performance computing market, hoping that it would eventually replace the 32-bit x86. While IA-64 was incompatible with x86, the Itanium processor did provide emulation abilities for translating x86 instructions into IA-64, but this affected the performance of x86 programs so badly that it was rarely, if ever, actually useful to the users: programs had to be rewritten for the IA-64 architecture, or their performance on Itanium would be orders of magnitude worse than on a true x86 processor. The market rejected the Itanium processor since it broke backward compatibility and preferred to continue using x86 chips, and very few programs were rewritten for IA-64.
AMD decided to take another path toward 64-bit memory addressing, making sure backward compatibility would not suffer. In April 2003, AMD released the first x86 processor with 64-bit general-purpose registers, the Opteron, capable of addressing much more than 4 GB of virtual memory using the new x86-64 extension (also known as AMD64 or x64). The 64-bit extensions to the x86 architecture were enabled only in the newly introduced long mode, therefore 32-bit and 16-bit applications and operating systems could simply continue using an AMD64 processor in protected or other modes, without even the slightest sacrifice of performance and with full compatibility back to the original instructions of the 16-bit Intel 8086. The market responded positively, adopting the 64-bit AMD processors for both high-performance applications and business or home computers.
Seeing the market rejecting the incompatible Itanium processor and Microsoft supporting AMD64, Intel had to respond and introduced its own x86-64 processor, the Prescott Pentium 4, in July 2004. As a result, the Itanium processor with its IA-64 instruction set is rarely used and x86, through its x86-64 incarnation, is still the dominant CPU architecture in non-embedded computers.
x86-64 also introduced the NX bit, which offers some protection against security bugs caused by buffer overruns.
As a result of AMD's 64-bit contribution to the x86 lineage and its subsequent acceptance by Intel, the 64-bit RISC architectures ceased to be a threat to the x86 ecosystem and almost disappeared from the workstation market. x86-64 began to be utilized in powerful supercomputers (in its AMD Opteron and Intel Xeon incarnations), a market which was previously the natural habitat for 64-bit RISC designs (such as the IBM Power microprocessors or SPARC processors). The great leap toward 64-bit computing and the maintenance of backward compatibility with 32-bit and 16-bit software enabled the x86 architecture to become an extremely flexible platform today, with x86 chips being utilized from small low-power systems (for example, Intel Quark and Intel Atom) to fast gaming desktop computers (for example, Intel Core i7 and AMD FX/Ryzen), and even dominate large supercomputing clusters, effectively leaving only the ARM 32-bit and 64-bit RISC architecture as a competitor in the smartphone and tablet market.
Virtualization
Prior to 2005, x86 architecture processors were unable to meet the Popek and Goldberg requirements, a specification for virtualization created in 1974 by Gerald J. Popek and Robert P. Goldberg. However, both proprietary and open-source x86 virtualization hypervisor products were developed using software-based virtualization. Proprietary systems include Hyper-V, Parallels Workstation, VMware ESX, VMware Workstation, VMware Workstation Player and Windows Virtual PC, while free and open-source systems include QEMU, Kernel-based Virtual Machine, VirtualBox, and Xen.
The introduction of the AMD-V and Intel VT-x instruction sets in 2005 allowed x86 processors to meet the Popek and Goldberg virtualization requirements.
AES
The AES instruction set (AES-NI), introduced by Intel in 2010 with the Westmere microarchitecture and later adopted by AMD, adds processor instructions that accelerate the Advanced Encryption Standard.
See also
x86 assembly language
x86 instruction listings
CPUID
Itanium
x86-64
680x0, a competing architecture in the 16-bit and early 32-bit eras
PowerPC, a competing architecture in the later 32-bit and 64-bit eras
Microarchitecture
List of AMD microprocessors
List of Intel microprocessors
List of Intel CPU microarchitectures
List of VIA microprocessors
List of x86 manufacturers
Input/Output Base Address
Interrupt request
iAPX
Tick–tock model
Notes
References
Further reading
External links
Why Intel can't seem to retire the x86
32/64-bit x86 Instruction Reference
Intel Intrinsics Guide, an interactive reference tool for Intel intrinsic instructions
Intel® 64 and IA-32 Architectures Software Developer’s Manuals
AMD Developer Guides, Manuals & ISA Documents, AMD64 Architecture
Computer-related introductions in 1978
Intel products
Instruction set architectures
IBM PC compatibles
SCSI
Small Computer System Interface (SCSI, pronounced "scuzzy") is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, electrical, optical and logical interfaces. The SCSI standard defines command sets for specific peripheral device types; the presence of "unknown" as one of these types means that in theory it can be used as an interface to almost any device, but the standard is highly pragmatic and addressed toward commercial requirements. The initial Parallel SCSI was most commonly used for hard disk drives and tape drives, but it can connect a wide range of other devices, including scanners and CD drives, although not all controllers can handle all devices.
The ancestral SCSI standard, X3.131-1986, generally referred to as SCSI-1, was published by the X3T9 technical committee of the American National Standards Institute (ANSI) in 1986. SCSI-2 was published in August 1990 as X3.T9.2/86-109, with further revisions in 1994 and subsequent adoption of a multitude of interfaces. Further refinements have resulted in improvements in performance and support for ever-increasing storage data capacity.
History
Parallel interface
SCSI is derived from "SASI", the "Shugart Associates System Interface", developed beginning in 1979 and publicly disclosed in 1981. Larry Boucher is considered to be the "father" of SASI and ultimately SCSI due to his pioneering work first at Shugart Associates and then at Adaptec.
A SASI controller provided a bridge between a hard disk drive's low-level interface and a host computer, which needed to read blocks of data. SASI controller boards were typically the size of a hard disk drive and were usually physically mounted to the drive's chassis. SASI, which was used in mini- and early microcomputers, defined the interface as using a 50-pin flat ribbon connector which was adopted as the SCSI-1 connector. SASI is a fully compliant subset of SCSI-1 so that many, if not all, of the then-existing SASI controllers were SCSI-1 compatible.
Until at least February 1982, ANSI developed the specification as "SASI" and "Shugart Associates System Interface"; however, the committee documenting the standard would not allow it to be named after a company. Almost a full day was devoted to agreeing to name the standard "Small Computer System Interface", which Boucher intended to be pronounced "sexy", but ENDL's Dal Allan pronounced the new acronym as "scuzzy" and that stuck.
A number of companies such as NCR Corporation, Adaptec and Optimem were early supporters of SCSI. The NCR facility in Wichita, Kansas is widely thought to have developed the industry's first SCSI controller chip; it worked the first time.
The "small" reference in "small computer system interface" is historical; since the mid-1990s, SCSI has been available on even the largest of computer systems.
Since its standardization in 1986, SCSI has been commonly used in the Amiga, Atari, Apple Macintosh and Sun Microsystems computer lines and PC server systems. Apple started using the less-expensive parallel ATA (PATA, also known as IDE) for its low-end machines with the Macintosh Quadra 630 in 1994, and added it to its high-end desktops starting with the Power Macintosh G3 in 1997. Apple dropped on-board SCSI completely in favor of IDE and FireWire with the (Blue & White) Power Mac G3 in 1999, while still offering a PCI SCSI host adapter as an option on up to the Power Macintosh G4 (AGP Graphics) models. Sun switched its lower-end range to Serial ATA (SATA). Commodore included SCSI on the Amiga 3000/3000T systems and it was an add-on to previous Amiga 500/2000 models. Starting with the Amiga 600/1200/4000 systems Commodore switched to the IDE interface. Atari included SCSI as standard in its Atari MEGA STE, Atari TT and Atari Falcon computer models. SCSI has never been popular in the low-priced IBM PC world, owing to the lower cost and adequate performance of ATA hard disk standard. However, SCSI drives and even SCSI RAIDs became common in PC workstations for video or audio production.
Modern SCSI
Recent physical versions of SCSI, namely Serial Attached SCSI (SAS), SCSI-over-Fibre Channel Protocol (FCP), and USB Attached SCSI (UAS), break from the traditional parallel SCSI bus and perform data transfer via serial communications using point-to-point links. Although much of the SCSI documentation talks about the parallel interface, all modern development efforts use serial interfaces. Serial interfaces have a number of advantages over parallel SCSI, including higher data rates, simplified cabling, longer reach, improved fault isolation and full-duplex capability. The primary reason for the shift to serial interfaces is the clock skew issue of high speed parallel interfaces, which makes the faster variants of parallel SCSI susceptible to problems caused by cabling and termination.
The non-physical iSCSI preserves the basic SCSI paradigm, especially the command set, almost unchanged, through embedding of SCSI-3 over TCP/IP. Therefore, iSCSI uses logical connections instead of physical links and can run on top of any network supporting IP. The actual physical links are realized on lower network layers, independently from iSCSI. Predominantly, Ethernet is used, which is also serial in nature.
SCSI is popular on high-performance workstations, servers, and storage appliances. Almost all RAID subsystems on servers have used some kind of SCSI hard disk drives for decades (initially Parallel SCSI, interim Fibre Channel, recently SAS), though a number of manufacturers offer SATA-based RAID subsystems as a cheaper option. Moreover, SAS offers compatibility with SATA devices, creating a much broader range of options for RAID subsystems together with the existence of nearline SAS (NL-SAS) drives. Instead of SCSI, modern desktop computers and notebooks typically use SATA interfaces for internal hard disk drives, with NVMe over PCIe gaining popularity as SATA can bottleneck modern solid-state drives.
Interfaces
SCSI is available in a variety of interfaces. The first was parallel SCSI (also called SCSI Parallel Interface or SPI), which uses a parallel bus design. Since 2005, SPI was gradually replaced by Serial Attached SCSI (SAS), which uses a serial design but retains other aspects of the technology. Many other interfaces which do not rely on complete SCSI standards still implement the SCSI command protocol; others drop physical implementation entirely while retaining the SCSI architectural model. iSCSI, for example, uses TCP/IP as a transport mechanism, which is most often transported over Gigabit Ethernet or faster network links.
SCSI interfaces have often been included on computers from various manufacturers for use under Microsoft Windows, classic Mac OS, Unix, Commodore Amiga and Linux operating systems, either implemented on the motherboard or by the means of plug-in adaptors. With the advent of SAS and SATA drives, provision for parallel SCSI on motherboards was discontinued.
Parallel SCSI
Initially, the SCSI Parallel Interface (SPI) was the only interface using the SCSI protocol. Its standardization started as a single-ended 8-bit bus in 1986, transferring up to 5 MB/s, and evolved into a low-voltage differential 16-bit bus capable of up to 320 MB/s. The last SPI-5 standard from 2003 also defined a 640 MB/s speed which failed to be realized.
Parallel SCSI specifications include several synchronous transfer modes for the parallel cable, and an asynchronous mode. The asynchronous mode is a classic request/acknowledge protocol, which allows systems with a slow bus or simple systems to also use SCSI devices. Faster synchronous modes are used more frequently.
SCSI interfaces
Cabling
SCSI Parallel Interface
Internal parallel SCSI cables are usually ribbons, with two or more 50-, 68-, or 80-pin connectors attached. External cables are typically shielded (but may not be), with 50- or 68-pin connectors at each end, depending upon the specific SCSI bus width supported. The 80-pin Single Connector Attachment (SCA) is typically used for hot-pluggable devices.
Fibre Channel
Fibre Channel can be used to transport SCSI information units, as defined by the Fibre Channel Protocol for SCSI (FCP). These connections are hot-pluggable and are usually implemented with optical fiber.
Serial attached SCSI
Serial attached SCSI (SAS) uses a modified Serial ATA data and power cable.
iSCSI
iSCSI (Internet Small Computer System Interface) usually uses Ethernet connectors and cables as its physical transport, but can run over any physical transport capable of transporting IP.
SRP
The SCSI RDMA Protocol (SRP) is a protocol that specifies how to transport SCSI commands over a reliable RDMA connection. This protocol can run over any RDMA-capable physical transport, e.g. InfiniBand or Ethernet when using RoCE or iWARP.
USB Attached SCSI
USB Attached SCSI allows SCSI devices to use the Universal Serial Bus.
Automation/Drive Interface
The Automation/Drive Interface − Transport Protocol (ADT) is used to connect removable media devices, such as tape drives, with the controllers of the libraries (automation devices) in which they are installed. The ADI standard specifies the use of RS-422 for the physical connections. The second-generation ADT-2 standard defines iADT, use of the ADT protocol over IP (Internet Protocol) connections, such as over Ethernet. The Automation/Drive Interface − Commands standards (ADC, ADC-2, and ADC-3) define SCSI commands for these installations.
SCSI command protocol
In addition to many different hardware implementations, the SCSI standards also include an extensive set of command definitions. The SCSI command architecture was originally defined for parallel SCSI buses but has been carried forward with minimal change for use with iSCSI and serial SCSI. Other technologies which use the SCSI command set include the ATA Packet Interface, USB Mass Storage class and FireWire SBP-2.
In SCSI terminology, communication takes place between an initiator and a target. The initiator sends a command to the target, which then responds. SCSI commands are sent in a Command Descriptor Block (CDB). The CDB consists of a one byte operation code followed by five or more bytes containing command-specific parameters.
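A C sketch of the 6-byte CDB layout described above, using TEST UNIT READY (operation code 00h) as the example command (struct and variable names are invented for illustration):

#include <stdint.h>

/* A 6-byte Command Descriptor Block: a one-byte operation code,
   command-specific parameter bytes, and a trailing control byte. */
typedef struct {
    uint8_t opcode;     /* e.g. 0x00 = TEST UNIT READY */
    uint8_t params[4];  /* command-specific parameters */
    uint8_t control;    /* control byte */
} cdb6;

static const cdb6 test_unit_ready = { 0x00, { 0, 0, 0, 0 }, 0x00 };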
At the end of the command sequence, the target returns a status code byte, such as 00h for success, 02h for an error (called a Check Condition), or 08h for busy. When the target returns a Check Condition in response to a command, the initiator usually then issues a SCSI Request Sense command in order to obtain a key code qualifier (KCQ) from the target. The Check Condition and Request Sense sequence involves a special SCSI protocol called a Contingent Allegiance Condition.
There are four categories of SCSI commands: N (non-data), W (writing data from initiator to target), R (reading data), and B (bidirectional). There are about 60 different SCSI commands in total, with the most commonly used being:
Test unit ready: Queries device to see if it is ready for data transfers (disk spun up, media loaded, etc.).
Inquiry: Returns basic device information.
Request sense: Returns any error codes from the previous command that returned an error status.
Send diagnostic and Receive diagnostic results: Runs a simple self-test, or a specialised test defined in a diagnostic page.
Start/Stop unit: Spins disks up and down, or loads/unloads media (CD, tape, etc.).
Read capacity: Returns storage capacity.
Format unit: Prepares a storage medium for use. In a disk, a low level format will occur. Some tape drives will erase the tape in response to this command.
Read: (four variants): Reads data from a device.
Write: (four variants): Writes data to a device.
Log sense: Returns current information from log pages.
Mode sense: Returns current device parameters from mode pages.
Mode select: Sets device parameters in a mode page.
Each device on the SCSI bus is assigned a unique SCSI identification number or ID. Devices may encompass multiple logical units, which are addressed by logical unit number (LUN). Simple devices have just one LUN; more complex devices may have multiple LUNs.
A "direct access" (i.e. disk type) storage device consists of a number of logical blocks, addressed by Logical Block Address (LBA). A typical LBA equates to 512 bytes of storage. The usage of LBAs has evolved over time and so four different command variants are provided for reading and writing data. The Read(6) and Write(6) commands contain a 21-bit LBA address. The Read(10), Read(12), Read Long, Write(10), Write(12), and Write Long commands all contain a 32-bit LBA address plus various other parameter options.
The capacity of a "sequential access" (i.e. tape-type) device is not specified because it depends, amongst other things, on the length of the tape, which is not identified in a machine-readable way. Read and write operations on a sequential access device begin at the current tape position, not at a specific LBA. The block size on sequential access devices can either be fixed or variable, depending on the specific device. Tape devices such as half-inch 9-track tape, DDS (4 mm tapes physically similar to DAT), Exabyte, etc., support variable block sizes.
Device identification
Parallel interface
On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a "SCSI ID", which is a number in the range 0–7 on a narrow bus and in the range 0–15 on a wide bus. On earlier models a physical jumper or switch controls the SCSI ID of the initiator (host adapter). On modern host adapters (since about 1997), doing I/O to the adapter sets the SCSI ID; for example, the adapter often contains an Option ROM (SCSI BIOS) program that runs when the computer boots up, and that program has menus that let the operator choose the SCSI ID of the host adapter. Alternatively, the host adapter may come with software that must be installed on the host computer to configure the SCSI ID. The traditional SCSI ID for a host adapter is 7, as that ID has the highest priority during bus arbitration (even on a 16 bit bus).
The SCSI ID of a device in a drive enclosure that has a back plane is set either by jumpers or by the slot in the enclosure the device is installed into, depending on the model of the enclosure. In the latter case, each slot on the enclosure's back plane delivers control signals to the drive to select a unique SCSI ID. A SCSI enclosure without a back plane often has a switch for each drive to choose the drive's SCSI ID. The enclosure is packaged with connectors that must be plugged into the drive where the jumpers are typically located; the switch emulates the necessary jumpers. While there is no standard that makes this work, drive designers typically set up their jumper headers in a consistent format that matches the way these switches are implemented.
Setting the bootable (or first) hard disk to SCSI ID 0 is an accepted IT community recommendation. SCSI ID 2 is usually set aside for the floppy disk drive while SCSI ID 3 is typically for a CD-ROM drive.
General
Note that a SCSI target device (which can be called a "physical unit") is sometimes divided into smaller "logical units". For example, a high-end disk subsystem may be a single SCSI device but contain dozens of individual disk drives, each of which is a logical unit. Further, a RAID array may be a single SCSI device, but may contain many logical units, each of which is a "virtual" disk—a stripe set or mirror set constructed from portions of real disk drives. The SCSI ID, WWN, etc. in this case identifies the whole subsystem, and a second number, the logical unit number (LUN) identifies a disk device (real or virtual) within the subsystem.
It is quite common, though incorrect, to refer to the logical unit itself as a "LUN". Accordingly, the actual LUN may be called a "LUN number" or "LUN id".
In modern SCSI transport protocols, there is an automated process for the "discovery" of the IDs. The SSA initiator (normally the host computer through the 'host adaptor') "walks the loop" to determine what devices are connected and then assigns each one a 7-bit "hop-count" value. Fibre Channel – Arbitrated Loop (FC-AL) initiators use the LIP (Loop Initialization Protocol) to interrogate each device port for its WWN (World Wide Name). For iSCSI, because of the unlimited scope of the (IP) network, the process is quite complicated. These discovery processes occur at power-on/initialization time and also if the bus topology changes later, for example if an extra device is added.
SCSI has the CTL (Channel, Target or Physical Unit Number, Logical Unit Number) identification mechanism per host bus adapter, or the HCTL (HBA, Channel, PUN, LUN) identification mechanism; one host adapter may have more than one channel.
Device Type
While all SCSI controllers can work with read/write storage devices, i.e. disk and tape, some will not work with some other device types; older controllers are likely to be more limited, sometimes by their driver software, and more Device Types were added as SCSI evolved. Even CD-ROMs are not handled by all controllers. Device Type is a 5-bit field reported by a SCSI Inquiry Command; defined SCSI Peripheral Device Types include, in addition to many varieties of storage device, printer, scanner, communications device, and a catch-all "processor" type for devices not otherwise listed.
SCSI enclosure services
In larger SCSI servers, the disk-drive devices are housed in an intelligent enclosure that supports SCSI Enclosure Services (SES). The initiator can communicate with the enclosure using a specialized set of SCSI commands to access power, cooling, and other non-data characteristics.
See also
Fibre Channel
List of device bandwidths
Parallel SCSI
Serial Attached SCSI
Notes
References
Bibliography
External links
InterNational Committee for Information Technology Standards: T10 Technical Committee on SCSI Storage Interfaces (SCSI standards committee)
Macintosh internals
Logical communication interfaces
Electrical communication interfaces
Computer storage buses
UCSD Pascal
UCSD Pascal is a Pascal programming language system that runs on the UCSD p-System, a portable, highly machine-independent operating system. UCSD Pascal was first released in 1977. It was developed at the University of California, San Diego (UCSD).
UCSD Pascal and the p-System
In 1977, the University of California, San Diego (UCSD) Institute for Information Systems developed UCSD Pascal to provide students with a common environment that could run on any of the then available microcomputers as well as campus DEC PDP-11 minicomputers. The operating system became known as UCSD p-System.
UCSD p-System was one of the three operating systems IBM offered for its original IBM PC, alongside PC DOS and CP/M-86. Vendor SofTech Microsystems emphasized p-System's application portability, with virtual machines for 20 CPUs as of the IBM PC's release. It predicted that users would be able to use applications they purchased on future computers running p-System; advertisements called it "the Universal Operating System".
PC Magazine denounced UCSD p-System on the IBM PC, stating in a review of Context MBA, written in the language, that it "simply does not produce good code". The p-System did not sell very well for the IBM PC, because of a lack of applications and because it was more expensive than the other choices. Previously, IBM had offered the UCSD p-System as an option for Displaywriter, an 8086-based dedicated word processing machine (not to be confused with IBM's DisplayWrite word processing software). (The Displaywriter's native operating system had been developed completely internally and was not opened for end-user programming.)
Notable extensions to standard Pascal include separately compilable Units and a String type. Some intrinsics were provided to accelerate string processing (e.g. scanning in an array for a particular search pattern); other language extensions were provided to allow the UCSD p-System to be self-compiling and self-hosted.
UCSD Pascal was based on a p-code machine architecture. Its contribution to these early virtual machines was to extend p-code away from its roots as a compiler intermediate language into a full execution environment. The UCSD Pascal p-Machine was optimized for the new small microcomputers with addressing restricted to 16-bit (only 64 KB of memory). James Gosling cites UCSD Pascal as a key influence (along with the Smalltalk virtual machine) on the design of the Java virtual machine.
UCSD p-System achieved machine independence by defining a virtual machine, called the p-Machine (or pseudo-machine, which many users began to call the "Pascal-machine" like the OS, although UCSD documentation always used "pseudo-machine"), with its own instruction set called p-code (or pseudo-code). Urs Ammann, a student of Niklaus Wirth, originally presented a p-code in his PhD thesis, resulting in the Zurich Pascal-P implementation, from which the UCSD implementation was derived. The UCSD implementation changed the Zurich implementation to be "byte oriented". The UCSD p-code was optimized for execution of the Pascal programming language. Each hardware platform then only needed a p-code interpreter program written for it to port the entire p-System and all the tools to run on it. Later versions also included additional languages that compiled to the p-code base. For example, Apple Computer offered a Fortran Compiler (written by Silicon Valley Software, Sunnyvale, California) producing p-code that ran on the Apple version of the p-System. Later, TeleSoft (also located in San Diego) offered an early Ada development environment that used p-code and was therefore able to run on a number of hardware platforms including the Motorola 68000, the System/370, and the Pascal MicroEngine.
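To make concrete why only a small interpreter had to be rewritten per platform, here is a toy C sketch of a bytecode interpreter loop; the opcodes (PUSH, ADD, PRINT, HALT) are invented for the illustration and are not actual UCSD p-codes:

#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

/* Fetch-decode-execute loop over a tiny stack-machine bytecode. */
static void interpret(const uint8_t *code)
{
    int stack[64];
    int sp = 0;
    for (const uint8_t *pc = code;;) {
        switch (*pc++) {
        case OP_PUSH:  stack[sp++] = *pc++;              break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]);      break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* Computes and prints 2 + 3. */
    const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3,
                                OP_ADD, OP_PRINT, OP_HALT };
    interpret(program);
    return 0;
}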
UCSD p-System shares some concepts with the later Java platform. Both use a virtual machine to hide operating system and hardware differences, and both use programs written to that virtual machine to provide cross-platform support. Likewise both systems allow the virtual machine to be used either as the complete operating system of the target computer or to run in a "box" under another operating system.
The UCSD Pascal compiler was distributed as part of a portable operating system, the p-System.
History
UCSD p-System began around 1974 as the idea of UCSD's Kenneth Bowles, who believed that the number of new computing platforms coming out at the time would make it difficult for new programming languages to gain acceptance. He based UCSD Pascal on the Pascal-P2 release of the portable compiler from Zurich. He was particularly interested in Pascal as a language to teach programming. UCSD introduced two features that were important improvements on the original Pascal: variable length strings, and "units" of independently compiled code (an idea included into the then-evolving Ada programming language). Niklaus Wirth credits the p-System, and UCSD Pascal in particular, with popularizing Pascal. It was not until the release of Turbo Pascal that UCSD's version started to slip from first place among Pascal users.
The Pascal dialect of UCSD Pascal came from the subset of Pascal implemented in Pascal-P2, which was not designed to be a full implementation of the language, but rather "the minimum subset that would self-compile", to fit its function as a bootstrap kit for Pascal compilers. UCSD added strings from BASIC, and several other implementation dependent features. Although UCSD Pascal later obtained many of the other features of the full Pascal language, the Pascal-P2 subset persisted in other dialects, notably Borland Pascal, which copied much of the UCSD dialect.
Versions
There were four versions of the UCSD p-code engine, each with several revisions of the p-System and UCSD Pascal. A revision of the p-code engine (i.e., the p-Machine) meant a change to the p-code language, and therefore compiled code is not portable between different p-Machine versions. Each revision was represented with a leading Roman numeral, while operating system revisions were enumerated as the "dot" number following the p-code Roman numeral. For example, II.3 represented the third revision of the p-System running on the second revision of the p-Machine.
Version I
Original version, never officially distributed outside of the University of California, San Diego. However, the Pascal sources for both Versions I.3 and I.5 were freely exchanged between interested users. Specifically, the patch revision I.5a was known to be one of the most stable.
Version II
Widely distributed, available on many early microcomputers. Numerous versions included Apple II, DEC PDP-11, Zilog Z80 and MOS 6502 based machines, Motorola 68000 and the IBM PC (Version II on the PC was restricted to one 64K code segment and one 64K stack/heap data segment; Version IV removed the code segment limit but cost a lot more).
Project members from this era include Dr. Kenneth L. Bowles, Mark Allen, Richard Gleaves, Richard Kaufmann, Pete Lawrence, Joel McCormack, Mark Overgaard, Keith Shillington, Roger Sumner, and John Van Zandt.
Version III
Custom version written for Western Digital to run on their Pascal MicroEngine microcomputer. Included support for parallel processes for the first time.
Version IV
Commercial version, developed and sold by SofTech. Based on Version II; did not include changes from Version III. Did not sell well due to combination of their pricing structure, performance problems due to p-code interpreter, and competition with native operating systems (on top of which it often ran). After SofTech dropped the product, it was picked up by Pecan Systems, a relatively small company formed of p-System users and fans. Sales revived somewhat, due mostly to Pecan's reasonable pricing structure, but the p-System and UCSD Pascal gradually lost the market to native operating systems and compilers. Available for the TI-99/4A equipped with p-code card, Commodore CBM 8096, and Sage IV.
Further use
The Corvus Systems computer used UCSD Pascal for all its user software. The "innovative concept" of the Constellation OS was to run Pascal (interpretively or compiled) and include all common software in the manual, so users could modify as needed.
See also
P-code machine
Notes
Further reading
External links
UCSD has released portions of the p-System written before June 1, 1979, for non-commercial use. (Note: Webpage resizes browser window.)
UCSD Pascal Reunion, Presentations and Videos from a UCSD Pascal Reunion held at UCSD on October 22, 2004
PowerPoint and Video of "What the Heck was UCSD Pascal?," presented at the 2004 reunion PPT and Video
ucsd-psystem-os, cross-compilable source code for the UCSD p-System version II.0
ucsd-psystem-vm, a portable virtual machine for UCSD p-System p-code
A reconstruction of the UCSD Pascal System II.0 User Manual
Softech P-System disassembler
UCSD P-System Museum within the Jefferson Computer Museum
UCSD P-System at Pascal for Small Machines
UCSD Pascal Yahoo Group
Pascal (programming language) compilers
Discontinued operating systems
Virtual machines
HAL/S
HAL/S (High-order Assembly Language/Shuttle) is a real-time aerospace programming language compiler and cross-compiler for avionics applications used by NASA and associated agencies (JPL, etc.). It has been used in many U.S. space projects since 1973 and its most significant use was in the Space Shuttle program (approximately 85% of the Shuttle software is coded in HAL/S). It was designed by Intermetrics in 1972 for NASA and delivered in 1973. HAL/S is written in XPL, a dialect of PL/I. Although HAL/S is designed primarily for programming on-board computers, it is general enough to meet nearly all the needs in the production, verification, and support of aerospace and other real-time applications. According to documentation from 2005, it was being maintained by the HAL/S project of United Space Alliance.
Goals and principles
The three key principles in designing the language were reliability, efficiency, and machine-independence. The language is designed to allow aerospace-related tasks (such as vector/matrix arithmetic) to be accomplished in a way that is easily understandable by people who have spaceflight knowledge, but may not necessarily have proficiency with computer programming.
HAL/S was designed not to include some constructs that were thought to be the cause of errors. For instance, there is no support for dynamic memory allocation. The language provides special support for real-time execution environments.
Some features, such as "GOTO", were provided chiefly to ease mechanical translations from other languages. (page 82)
"HAL" was suggested as the name of the new language by Ed Copps, a founding director of Intermetrics, to honor Hal Laning, a colleague at MIT. On the Preface page of the HAL/S Language Specification, it says,
fundamental contributions to the concept and implementation of MAC were made by Dr. J. Halcombe Laning of the Draper Laboratory.
A NASA standard ground-based version of HAL, named HAL/G for "ground", was proposed, but the coming emergence of the soon-to-be-named Ada programming language contributed to Intermetrics' lack of interest in continuing this work. Instead, Intermetrics placed its emphasis on what would become the "Red" finalist, which was not selected.
Host compiler systems have been implemented on an IBM 360/370, Data General Eclipse, and the Modcomp IV/Classic computers. Target computer systems have included IBM 360/370, IBM AP-101 (space shuttle avionics computer), Sperry 1819A/1819B, Data General Nova and Eclipse, CII Mitra 125, Modcomp II and IV, NASA Std. Spacecraft Computer-l and Computer-2, ITEK ATAC 16M (Galileo Project), and since 1978 the RCA CDP1802 COSMAC microprocessor (Galileo Project and others).
Syntax
HAL/S is a mostly free-form language: statements may begin anywhere on a line and may spill over the next lines, and multiple statements may be fitted onto the same line if required. However, non-space characters in the first column of a program line may have special significance. For instance, the letter 'C' in the first column indicates that the whole line is a comment and should be ignored by the compiler.
One particularly interesting feature of HAL/S is that it supports, in addition to a normal single line text format, an optional three-line input format in which three source code lines are used for each statement. In this format, the first and third lines are usable for superscripts (exponents) and subscripts (indices). The multi-line format was designed to permit writing of HAL/S code that is similar to mathematical notation.
As an example, the statement X = A² + Bᵢ² could be written in single-line format as:
X = A ** 2 + B$(I) ** 2
Exponentiation is denoted by two asterisks, as in PL/I and Fortran. The subscript is denoted by a dollar sign,with the subscript expression enclosed in parentheses. The same code fragment could be written in multiple-line format as:
E         2      2
M    X = A    + B
S                I
In the example, the base line of the statement is indicated by an 'M' in the first column, the exponent line is indicated by an 'E', and the subscript line is indicated by an 'S'.
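The mechanics of this layout can be sketched in ordinary code. The following Python fragment is illustrative only (HAL/S compilers handled this as part of source input and listing; the alignment rules here are simplified, and the token representation is invented for the example):

# Sketch: render a statement in the HAL/S three-line (E/M/S) format.
# tokens: list of (text, level) pairs; level is +1 for an exponent,
# -1 for a subscript, and 0 for the main (base) line.
def three_line(tokens):
    e_line, m_line, s_line = [], [], []
    for text, level in tokens:
        blank = " " * len(text)
        e_line.append(text if level == +1 else blank)
        m_line.append(text if level == 0 else blank)
        s_line.append(text if level == -1 else blank)
    return ("E " + "".join(e_line) + "\n" +
            "M " + "".join(m_line) + "\n" +
            "S " + "".join(s_line))

# X = A**2 + B$(I)**2, with the exponents and subscript lifted out:
print(three_line([("X = A", 0), ("2", +1), (" + B", 0), ("I", -1), ("2", +1)]))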
Example
The following is a simple HAL/S program. Every program begins with a labeled PROGRAM statement; the label consists of an identifier followed by a colon. All variables must be declared in the DECLARE group, which precedes any executable statements. Every program ends with a CLOSE delimiting statement. Note that multiplication is written as juxtaposition (a blank), so PI R**2 in the WRITE statement below computes π·R².
SIMPLE: PROGRAM;
C CODE IN THIS TYPEFACE IS
C HAL/S SOURCE
DECLARE PI CONSTANT (3.14159265);
DECLARE R SCALAR;
READ(5) R;
WRITE(6) PI R**2;
CLOSE SIMPLE;
Data types
HAL/S has native support for integers, floating-point scalars, vectors, matrices, booleans, and strings of 8-bit characters limited to a maximum length of 255. Structured types may be composed using a DECLARE STRUCT statement.
See also
IBM AP-101, the space shuttle avionics computer
Fortress, a programming language with advanced syntactic support for mathematical expressions
COLASL, a programming language for the IBM 7030 Stretch with a similar "natural" format
References
External links
NASA Office of Logic Design: Space Shuttle Computers and Avionics
Includes language and compiler specifications, programmer's guide, and user manual.
Computers in Spaceflight: The NASA Experience – By George Tomayko (Appendix II: "HAL/S, A Real-Time Language for Spaceflight")
Spacecraft components
Avionics programming languages
High Integrity Programming Language
Apple IIGS
The Apple IIGS (with "GS" styled as a superscript), the fifth and most powerful of the Apple II family, is a 16-bit personal computer produced by Apple Computer. While featuring the Macintosh look and feel, and resolution and color similar to the Amiga and Atari ST, it remains compatible with earlier Apple II models. The "GS" in the name stands for "Graphics and Sound," referring to its enhanced multimedia hardware, especially its state-of-the-art audio.
The microcomputer is a radical departure from any previous Apple II, with a 16-bit 65C816 microprocessor, direct access to megabytes of random-access memory (RAM), and a mouse. It is the first computer from Apple with a color graphical user interface (color was introduced on the Macintosh II six months later) and an Apple Desktop Bus interface for keyboards, mice, and other input devices. It is the first personal computer with a wavetable synthesis chip, using technology from Ensoniq.
The IIGS set forth a promising future and evolutionary advancement of the Apple II line, but Apple increasingly focused on the Macintosh platform. Apple ceased IIGS production in December 1992.
Hardware features
The Apple IIGS made significant improvements over the Apple IIe and Apple IIc. It emulates its predecessors via a custom chip called the Mega II and uses the then-new WDC 65C816 16-bit microprocessor. The processor runs at 2.8 MHz, which is faster than the roughly 1 MHz 8-bit processors used in the earlier Apple II models. The 65C816 allows the IIGS to address considerably more RAM.
The 2.8 MHz clock speed was a deliberate decision to limit the IIGS's performance to less than that of the Macintosh. This decision had a critical effect on the IIGS's success; the original 65C816 processor used in the IIGS was certified to run at up to 4 MHz. Faster versions of the 65C816 processor were readily available, with speeds of between 5 and 14 MHz, but Apple kept the machine at 2.8 MHz throughout its production run.
Its graphical capabilities are the best of the Apple II series, with higher-resolution video modes and more color. These include a 640×200-pixel mode with 2-bit color and a 320×200 mode with 4-bit color, both of which can select 4 or 16 colors (respectively) at a time from a palette of 4,096 colors. By changing the palette on each scanline, it is possible to display 256 or more colors per screen; with clever programming, the IIGS can display as many as 3,200 colors at once.
Audio is generated by a built-in Ensoniq 5503 digital synthesizer chip, which has its own dedicated RAM and 32 channels of sound. These channels can be paired to produce 15 voices in stereo.
The IIGS supports both 5.25-inch and 3.5-inch floppy disks and has seven general-purpose expansion slots compatible with those on the Apple II, II+, and IIe. It also has a memory expansion slot for up to 8 MB of RAM. The IIGS has ports for external floppy disk drives, two serial ports for devices such as printers and modems (which can also be used to connect to a LocalTalk network), an Apple Desktop Bus port to connect the keyboard and mouse, and composite and RGB video ports.
A real-time clock is maintained by a built-in battery (a non-replaceable 3.6-volt lithium battery; removable in a later-revision motherboard).
The IIGS also supports booting from an AppleShare server, via the AppleTalk protocol, over LocalTalk cabling. This was over a decade before NetBoot offered the same capability to computers running Mac OS 8 and beyond.
Graphics
In addition to supporting all graphics modes of previous Apple II models (40 and 80 columns text, Low and Double-Low, High and Double-High resolution) the Apple IIGS's Video Graphics Chip (VGC) introduced a new graphic mode called "Super-High Resolution". This new mode offers an increased screen resolution and a vastly wider color palette, with none of the limitations of earlier Apple II graphic modes (such as color bleeding and fringing). Super-High-Resolution supports 200 lines, in either 320 or 640 pixels horizontally. Both modes use a 12-bit palette for a total of 4,096 possible colors, with up to 256 colors (or more) on screen, though not all colors can appear onscreen at the same time.
Usage of Super-High-Resolution mode may include:
320×200 pixels with a single palette of 16 colors.
320×200 pixels with up to 16 palettes of 16 colors. In this mode, the VGC holds 16 separate palettes of 16 colors in its own memory. Each of the 200 scan lines can be assigned any one of these palettes, allowing for up to 256 colors on the screen at once.
320×200 pixels with up to 200 palettes of 16 colors. In this mode, the CPU assists the VGC in swapping palettes into and out of the video memory so that each scan line can have its own palette of 16 colors allowing for up to 3,200 colors on the screen at once.
320×200 pixels with 15 colors per palette, plus a fill-mode color. In this mode, color 0 in the palette is replaced by the last non-zero color pixel displayed on the scan line (to the left), allowing fast solid-fill graphics (drawn with only the outlines).
640×200 pixels with 4 pure colors.
640×200 pixels with up to 16 palettes of 4 pure colors. In this mode, the VGC holds 16 separate palettes of 4 pure colors in its own memory. Each of the 200 scan lines can be assigned any one of these palettes allowing for up to 64 colors on the screen at once.
640x200 pixels with up to 200 palettes of 4 pure colors. In this mode, the CPU assists the VGC in swapping palettes into and out of the video memory so that each scan line can have its own palette of 4 colors allowing for up to 800 colors on the screen at once.
640×200 pixels with 16 dithered colors. In this mode, two palettes of four pure colors each are used in alternating columns. The hardware then dithers the colors of adjacent pixels to create 16 total colors on the screen.
Each scan line on the screen can independently select either 320- or 640-line mode, fill mode (320-mode only), and any of the 16 palettes, allowing graphics modes to be mixed on the screen. This is most often seen in graphics programs where the menu bar is constantly in 640-pixel resolution and the working area's mode can be changed depending on the user's needs.
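The palette arithmetic behind these modes can be illustrated with a short sketch. The following Python fragment is not IIGS code; the palette layout is a simplification of the real VGC registers, used only to show where the 256- and 3,200-color figures come from:

# The 12-bit master palette: 4 bits each of red, green, and blue.
MASTER_PALETTE = [(r, g, b) for r in range(16)
                  for g in range(16)
                  for b in range(16)]            # 4,096 possible colors

SCANLINES = 200

# The VGC alone holds 16 palettes of 16 colors: at most 256 on screen.
hw_palettes = [MASTER_PALETTE[i * 16:(i + 1) * 16] for i in range(16)]
print(sum(len(p) for p in hw_palettes))          # 256

# With CPU-assisted palette swapping, each of the 200 scan lines can be
# given its own 16-color palette: 200 * 16 = 3,200 colors at once.
per_line = [MASTER_PALETTE[i * 16:(i + 1) * 16] for i in range(SCANLINES)]
print(len({c for palette in per_line for c in palette}))   # 3200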
Audio
The Apple IIGS's sound is provided by an Ensoniq 5503 Digital Oscillator Chip (DOC) wavetable synthesis chip designed by Bob Yannes, creator of the SID synthesizer chip used in the Commodore 64. The ES5503 DOC is the same chip used in Ensoniq Mirage and Ensoniq ESQ-1 professional-grade synthesizers. The chip has 32 oscillators, which allows for a maximum of 32 voices (with limited capabilities when all used independently), though Apple's firmware pairs them for 16 voices, to produce fuller and more flexible sound, as do most of the standard tools of the operating system (the Apple MIDISynth toolset goes even a step further for richer sound, grouping four oscillators per voice, for a limit of seven-voice audio). The IIGS is often referred to as a 15-voice system, because one voice, or "sound generator" consisting of two oscillators, is always reserved as a dedicated clock for the sound chip's timing interrupt generator. Software that does not use the system firmware, or uses custom-programmed tools (certain games, demos and music software), can access the chip directly and take advantage of all 32 voices.
The computer's audio capabilities were given as the primary reason for record label Apple Corps's 1989 resumption of legal action against Apple that had been previously suspended. Apple Corps claimed that the IIGS's audio chip violated terms of the 1981 settlement with the company that prohibited Apple, Inc. from getting involved in the music business.
A standard ⅛-inch (3.5 mm) headphone jack is on the back of the case, and standard stereo computer speakers can be attached there. This jack provides only monaural sound, and a third-party adapter card is required for stereo, even though the Ensoniq and virtually all native software produce stereo audio. The Ensoniq can drive 16 speaker output channels, but the Molex expansion connector Apple provided only allows 8. There is 64 KB of dedicated memory (DOC-RAM) on the IIGS motherboard, separate from system memory, for the Ensoniq chip to store its sampled wavetable instruments.
To exploit the IIGS's audio capabilities, during its introduction, Apple sold Bose Roommate amplified speakers for the computer (matching its platinum color and with custom Bose/Apple logo grille covers).
Expansion
Like other Apple II machines before it, the IIGS is highly expandable. The expansion slots can be used for a variety of purposes, greatly increasing the computer's capabilities. SCSI host adapters can be used to connect external SCSI devices such as hard drives and a CD-ROM drive. Other mass-storage devices such as adapters supporting more recent internal 2.5-inch IDE hard drives can also be used. Another common class of Apple IIGS expansion cards is accelerator cards, such as Applied Engineering's TransWarp GS, replacing the computer's original processor with a faster one. Applied Engineering developed the PC Transporter, which is essentially an IBM-PC/XT on a card. A variety of other cards were also produced, including ones allowing new technologies such as 10BASE-T Ethernet and CompactFlash cards to be used on the IIGS.
Development
Steve Wozniak said in January 1985 that Apple was investigating the 65816, and that an 8 MHz version would "beat the pants off a 68000 in most applications", but any product using it would have to be compatible with the Apple II. Rumors spread about his work on an "Apple IIx". The IIx was said to have a 16-bit CPU, one megabyte of RAM, and better graphics and sound. "IIx" was the code name for Apple's first internal project to develop a next-generation Apple II based on the 65816. The IIx project, though, became bogged down when it attempted to include various coprocessors allowing it to emulate other computer systems. Early samples of the 65816 were also problematic. These problems led to the cancellation of the IIx project, but later, a new project was formed to produce an updated Apple II. This project, which led to the released IIGS, was known by various codenames while the new system was being developed, including "Phoenix", "Rambo", "Gumby", and "Cortland". There were rumors of several vastly enhanced prototypes built over the years at Apple but none were ever released. Only one, the "Mark Twain", has been revealed so far. The Mark Twain prototype (named for Twain's famous quote "The reports of my death are greatly exaggerated") was expected to have the "ROM 04" revision (although prototypes that have been discovered do not contain any new ROM code) and featured an 8 MHz 65C816, built-in SuperDrive, 2 MB of RAM, and a hard drive.
Some design features from the unsuccessful Apple III lived on in the Apple IIGS, such as GS/OS borrowing elements from SOS (including, by way of ProDOS, the SOS file system), a unique keyboard feature for dual-speed arrow keys, and colorized ASCII text.
Release
Limited Edition ("Woz"-signed case)
As part of a commemorative celebration marking the 10th anniversary of the Apple II series' development, as well as Apple Computer itself celebrating the same anniversary, a special limited edition was introduced at product launch. The first 50,000 Apple IIGSs manufactured had a reproduced copy of Wozniak's signature ("Woz") at the front right corner of the case, with a dotted line and the phrase "Limited Edition" printed just below it. Owners of the Limited Edition, after mailing in their Apple registration card, were mailed back a certificate of authenticity signed by Wozniak and 12 key Apple engineers, as well as a personal letter from Wozniak himself (both machine-reproduced). Because the differences between standard and Limited Edition machines were purely cosmetic, many owners of new machines were able to "convert" to the Limited Edition by merely swapping the case lid from an older (and likely nonfunctional) machine. While of nostalgic value to Apple II users and collectors, presently these stamped-lid cases are not considered rare, nor do they have any particular monetary worth.
Upgrading an Apple IIe
Upon its release in September 1986, Apple announced it would be making a kit that would upgrade an Apple IIe to a IIGS available for purchase. This followed an Apple practice of making logic board upgrades available that dated from the earliest days of the Apple II until Steve Jobs' return to Apple in 1997. The IIe-to-IIGS upgrade replaced the IIe motherboard with a 16-bit IIGS motherboard. Users would take their Apple IIe machines into an authorized Apple dealership, where the IIe motherboard and lower baseboard of the case were swapped for an Apple IIGS motherboard with a new baseboard (with matching cut-outs for the new built-in ports). New metal sticker ID badges replaced those on the front of the IIe, rebranding the machine. Retained were the upper half of the IIe case, the keyboard, speaker and power supply. Original IIGS motherboards (those produced between 1986 and mid-1989) have electrical connections for the IIe power supply and keyboard present, although only about half of those produced have the physical plug connectors factory-presoldered in, which were mostly reserved for the upgrade kits.
The upgrade cost US$500, plus the trade-in of the user's existing Apple IIe motherboard. It did not include a mouse, and the keyboard, although functional, lacked a numeric keypad and did not mimic all the features and functions of the Apple Desktop Bus keyboard. Some cards designed for the GS did not fit in the Apple IIe's slanted case. In the end, most users found that the upgrade did not save them much money once they purchased a 3.5-inch floppy drive, analog RGB monitor, and mouse.
Software features
Software that runs on the Apple IIGS can be divided into two major categories: 8-bit software compatible with earlier Apple II systems such as the IIe and IIc, and 16-bit IIGS software, which takes advantage of its advanced features, including a near-clone of the Macintosh graphical user interface.
8-bit Apple II compatibility
Apple claimed that the IIGS was 95% compatible with contemporary Apple II software. One reviewer, for example, successfully ran demo programs that came on cassette with his 1977 Apple II. The IIGS can run all of Apple's earlier Apple II operating systems: Apple DOS, ProDOS 8, and Apple Pascal. It is also compatible with nearly all 8-bit software running on those systems. Like the Apple II+, IIe, and IIc, the IIGS also includes Applesoft BASIC and a machine-language monitor (which can be used for very simple assembly language programming) in ROM, so they can be used even with no operating system loaded from disk. The 8-bit software runs at twice the speed of earlier Apple II models unless the user turns down the processor speed in the IIGS control panel.
System Software
The Apple IIGS System Software utilizes a graphical user interface (GUI) very similar to that of the Macintosh and somewhat like GEM for PCs and the operating systems of contemporary Atari and Amiga computers. Early versions of the System Software are based on the ProDOS 16 operating system, which is based on the original ProDOS operating system for 8-bit Apple II computers. Although it was modified so that 16-bit Apple IIGS software can run on it, ProDOS 16 was written largely in 8-bit code and does not take full advantage of the IIGS's capabilities. Later System Software versions (starting with version 4.0) replaced ProDOS 16 with a new 16-bit operating system known as GS/OS. It better utilizes the unique capabilities of the IIGS and includes many valuable new features. The IIGS System Software was substantially enhanced and expanded over the years during which it was developed, culminating in its final official version, System 6.0.1, which was released in 1993. In July 2015, members of a computer group from France released a new, though unofficial, version of that System Software, dubbed "System 6.0.2" (and later followed by System 6.0.3 and 6.0.4), that primarily fixed some bugs.
Graphical user interface
Similar to that of the Macintosh, the IIGS System Software provides a mouse-driven graphical user interface using concepts such as windows, menus, and icons. This was implemented by a "toolbox" of code, some of which resides in the computer's ROM and some of which is loaded from disk. Only one major application can run at a time, although other, smaller programs, known as Desk Accessories, can be used simultaneously. The IIGS has a Finder application very similar to the Macintosh's, which allows the user to manipulate files and launch applications. By default, the Finder is displayed when the computer starts up and whenever the user quits an application that is started from it, although the startup application can be changed by the user.
Software companies complained that Apple did not provide technical information and development tools to create IIGS-specific software. In 1988 Compute! reported that both Cinemaware and Intergalactic Development had to write their own tools to maximize their use of IIGS audio, with the latter stating that "these sorts of problems … are becoming well-known throughout the industry".
Extensibility
The IIGS System Software can be extended through various mechanisms. New Desk Accessories are small programs ranging from a calculator to simple word processors that can be used while running any standard desktop application. Classic Desk Accessories also serve as small programs available while running other applications, but they use the text screen and can be accessed even from non-desktop applications. Control Panels and initialization files are other mechanisms that allow various functions to be added to the system. Finder Extras permits new capabilities to be added to the Finder, drivers can be used to support new hardware devices, and users can also add "tools" that provide various functions that other programs can utilize easily. These features can be used to provide features that were never planned for by the system's designers, such as a TCP/IP stack known as "Marinetti".
Multitasking capability
A third party UNIX-like multitasking kernel was produced, called GNO/ME, which runs under the GUI and provides preemptive multitasking. In addition, a system called The Manager can be used to make the Finder more like the one on the Macintosh, allowing major software (other than just the "accessory" programs) to run simultaneously through cooperative multitasking.
Reception
After previewing the computer, BYTE stated in October 1986 that "The Apple IIGS designers' achievements are remarkable, but the burden of the classic Apple II architecture, now as venerable (and outdated) as COBOL and batch processing, may have weighed them down and denied them any technological leaps beyond an exercise in miniaturization." The magazine added that "hog-tied by [classic] Apple II compatibility, [the IIGS] approaches but does not match or exceed current computer capabilities" of the Macintosh, Commodore Amiga, or Atari ST, and predicted that many vendors would "enhance existing products for the [classic] Apple II instead of writing new software" that fully exploited the IIGS's power.
inCider, which in September had warned that the next Apple II "needs (at least)... a megabyte of RAM... That's what the market wants", indeed reported in November that "Rather than risk investing time and money in programs that work only on the Apple IIGS, a number of software developers have simply upgraded old Apple II programs", and that the "most interesting program available specifically for the IIGS at this time is LearningWays' Explore-a-Story, which was released simultaneously for the good old 128K Apple IIe and IIc". The magazine concluded, "The moral is simple: Good hardware, even innovative hardware, won't give birth to good, new software overnight."
Nibble was more positive, calling the price "fantastic" for "Steve Wozniak's dream machine". It praised the IIGS's "incredible" legacy Apple II compatibility, graphics, and sound, stated that only its slower speed made the computer significantly inferior to the Macintosh, and expected that Apple would soon introduce new products to better distinguish the two product lines. The magazine concluded that "The IIGS is an incredibly fine computer, arguably the finest assemblage of chips and resistors ever soldered together... Ladies and gentlemen of Apple, on behalf of the Apple II user community, you have earned our gratitude and admiration."
Compute! described the IIGS in November 1986 as "two machines in one—a product that bridges the gap between the Macintosh and Apple IIe, and in so doing poses what may be serious competition for the Commodore Amiga and the Atari ST series". It described the IIGS's graphics "as different as night and day" from the earlier Apple IIs and the audio as "in a class by themselves... [it] justifies the price of the IIGS to many music fans and fanatics". The magazine reported that "well over one hundred outside developers were actively engaged in creating software for the IIGS", and predicted that "as new products are developed to take advantage of the IIGS, people will move away from the pure Apple II and toward the newer titles with their improved performance".
Compute!'s Apple Applications in December 1987 reported, however, that "Many publishers have canceled or postponed their plans for Apple IIGS software and instead are cautiously introducing programs for the Apple IIc and IIe", while "many of the products for the Apple IIGS are simply versions of" older Apple II software "that incorporate color and use the mouse interface". So little IIGS software was available, it said, that "the hottest product... is AppleWorks. No mouse interface, no color, no graphics. Just AppleWorks from the IIe and IIc world". The magazine stated that many customers either chose the slightly more expensive Macintosh Plus or kept their inexpensive IIc or IIe which ran AppleWorks well, with the IIGS "in a strange position" in between.
BYTE's Bruce Webster in January 1987 praised Apple for permitting Wozniak to finish the IIx project, but said that the company should have done so "a few years ago": "The IIGS is an excellent replacement for the [earlier models from the] Apple II line, but it's awfully late in coming. The technology is more trailing-edge than leading-edge in many areas", with speed and graphics inferior to those of the Amiga and Atari ST. The other computers, he wrote, have both larger software libraries that use their power and lower prices; Webster found that a IIGS package costing $2500 was comparable to a $1500 Atari ST configuration. He concluded with a "qualified approval" of the computer: "It was necessary to prevent the Apple II line from dying off during the next year or so. However, Apple didn't go far enough." A BYTE review in April 1987 concluded that the IIGS "has the potential to be a powerful computer" but needed a faster CPU and more addressable memory. The magazine advised potential customers to compare the Macintosh, Amiga, and Atari ST's more powerful 68000 CPU with the IIGS's greater expandability and large Apple II software library.
Compute! in 1988 urged Apple to make the computer faster, stating that "no matter which way you cut it, the IIGS is slow" and that IIGS-specific programs could not keep up with user actions. In 1989 the magazine stated "One of the biggest complaints of IIGS-specific software is the way it imitates the pace of a zombie. You'd think 16-bit software had died and voodoo-transformed into a shuffling, stumbling imitation of real computer applications." It reported that year that after increases in September, a IIGS with color monitor, two disk drives, and ImageWriter II cost more than $3,000, a price the magazine called "staggering". inCider also criticized the price increase, warning that it "opens the door further to low-cost MS-DOS computers".
Technical specifications
Microprocessor
WDC 65C816 running at 2.8 MHz
8-bit data bus, with selectable 8- or 16-bit registers
24-bit addressing, using a 16-bit address bus and a multiplexed bank address
Memory
1 MB of RAM built-in (256 kB in original) (expandable to 8 MB)
256 kB of ROM built-in (128 kB in original)
Video modes
Emulation video
40- and 80-column text, with 24 lines (16 selectable foreground, background, and border colors)
Low resolution: 40×48 (16 colors)
High resolution: 280×192 (6 colors)
Double low resolution: 80×48 (16 colors)
Double high resolution: 560×192 (16 colors)
Native video
Super-high resolution (320 mode)
320×200 (16 colors, selectable from 4,096 color palette)
320×200 (256 colors, selectable from 4,096 color palette)
320×200 (3200 colors, selectable from 4,096 color palette)
Super-high resolution (640 mode)
640×200 (4 colors, selectable from 4,096 color palette)
640×200 (16 dithered colors, selectable from 4,096 color palette)
640×200 (64 colors, selectable from 4,096 color palette)
640×200 (800 colors, selectable from 4,096 color palette)
Fill mode
320×200, sections of screens filled in on-the-fly for up to 60 frame/s full-screen animation
Mixed mode
320/640×200, horizontal resolution selectable on a line-by-line basis
Audio
Ensoniq 5503 digital oscillator chip
8-bit audio resolution
64 kB of dedicated sound RAM
32 oscillator channels (15 voices when paired)
Support for eight independent stereo speaker channels
Expansion
Seven Apple II Bus slots (50-pin card-edge)
IIGS Memory Expansion slot (44-pin card-edge)
Internal connectors
Game I/O socket (16-pin DIP)
Ensoniq I/O expansion connector (7-pin molex)
Specialized chip controllers
IWM (Integrated Woz Machine) for floppy drives
VGC (video graphics controller) for video
Mega II (Apple IIe computer on chip)
Ensoniq 5503 DOC (sample-based synthesis)
Zilog Z8530 SCC (serial port controller)
Apple Desktop Bus microcontroller
FPI (Fast Processor Interface) or CYA (Control Your Apple)
External connectors
NTSC composite video output (RCA connector)
Joystick (DE-9)
Audio-out (⅛-inch mono phono jack)
Printer-serial 1 (mini-DIN8)
Modem-serial 2 (mini-DIN8)
Floppy drive (D-19)
Analog RGB video (DA-15)
Apple Desktop Bus (mini-DIN4)
Revision history
While in production between September 1986 and December 1992, the Apple IIGS remained relatively unchanged from its inception. During those years, however, Apple did produce some maintenance updates to the system which mainly comprised two new ROM-based updates and a revamped motherboard. It is rumored that several prototypes that greatly enhanced the machine's features and capabilities were designed and even built, though only one has ever been publicly exposed (i.e. the "Mark Twain"). Outlined below are only those revisions and updates officially released by Apple.
Original firmware release ("ROM version 00")
During the entire first year of the machine's production, an early, almost beta-like, firmware version shipped with the machine and was notably bug-ridden. Limitations of this firmware include the fact that the built-in RAM Disk can't be set larger than 4 MB (even if more RAM is present), and the firmware contains the very early System 1.x toolsets. It became incompatible with most native Apple IIGS software written from late-1987 onward, and OS support only lasted up to System 3. The startup splash screen of the original ROM only displays the words "Apple IIgs" at the top center of the screen, in the same fashion that previous Apple II models identify themselves.
Video Graphics Controller (VGC) replacement
Very early production runs of the machine had a faulty video graphics controller (VGC) chip that produced strange cosmetic glitches in emulated (IIe/IIc) video modes. Specifically, the 80-column text display and monochrome double-high-resolution graphics had a symptom where small flickering or static pink bits would appear between the gaps of characters and pixels. Most users noticed this when using AppleWorks classic or the Mousedesk application that was a part of System 1 and 2. Apple resolved the issue by offering a free chip-swap upgrade to affected owners.
Second firmware release ("ROM version 01")
In August 1987, Apple released an updated ROM that was included in all new machines and was made available as a free upgrade to all existing owners. The main feature of the new ROM was the presence of the System 2.x toolsets and several bug fixes. The upgrade was vital, as software developers, including Apple, ceased support of the original ROM upon its release (most native Apple IIGS software written from late-1987 onwards would not run unless a ROM 01 or higher was present, and this included the GS/OS operating system). This update also allows up to 8 MB for the RAM Disk, added some new features for programmers, and reported the ROM version and copyright information on the startup splash screen.
Standard RAM increased to 512 KB
In March 1988, Apple began shipping IIGS units with 512 KB of RAM as standard. This was done by preinstalling the Apple IIGS Memory Expansion Card (that was once sold separately) in the memory expansion slot—the card had 256 KB of RAM on board with empty sockets for further expansion. The built-in memory on the motherboard remained at 256 KB and existing users were not offered this upgrade.
Third firmware release ("ROM version 3"); 1 MB of RAM
In August 1989, Apple increased the standard amount of RAM shipped in the IIGS to 1.125 MB. This time the additional memory was built into the motherboard, which required a layout change and allowed for other minor improvements as well. This update introduced both a new motherboard and a new ROM firmware update; however, neither was offered to existing owners, even as an upgrade option (the new ROM, now two chips, is incompatible with the original single-socket motherboard). Apple stated that an upgrade was not offered because most of the features of the new machine could be obtained on existing machines by installing System 5 and a fully populated Apple IIGS Memory Expansion Card.
The new ROM firmware was expanded to 256 KB and contained the System 5.x toolsets. The newer toolsets increased the performance of the machine by up to 10%, because less had to be loaded from disk, tool ROM read access is faster than RAM, and their routines are highly optimized compared to the older (pre-GS/OS-based) toolsets. In addition to several bug fixes, the update added more programmer assistance commands and features, a cleaned-up control panel with improved mouse control and RAM Disk functionality, and more flexible AppleTalk support and slot-mapping.
In terms of hardware, the new motherboard is a cleaner design that drew less power and resolved audio noise issues that interfered with the Ensoniq synthesizer in the original motherboard. Over four times more RAM is built-in, with double the ROM size, and an enhanced Apple Desktop Bus microcontroller provides native support for sticky keys, mouse emulation, and keyboard LED support (available on extended keyboards). Hardware shadowing of Text Page 2 was introduced, improving compatibility and performance with the classic Apple II video mode. The clock battery is now user-serviceable, being placed in a removable socket, and a jumper location was added to lock out the text-based control panel (mainly useful in school environments). Support for the Apple-IIe-to-IIGS upgrade was removed, and some cost-cutting measures had some chips soldered in place rather than being socketed. As the firmware only worked in this motherboard and no new firmware updates were ever issued, users commonly referred to this version of the IIGS as the "ROM 3".
International versions
Like the Apple IIe and Apple IIc built-in keyboards before it, the detached IIGS keyboard differs depending on what region of the world it was sold in, with extra local language characters and symbols printed on certain keycaps (e.g. French accented characters on the Canadian IIGS keyboard such as "à", "é", "ç", etc., or the British Pound "£" symbol on the UK IIGS keyboard). Unlike previous Apple II models, however, the layout and shape of keys were the same standard for all countries, and the ROMs inside the computer were also the same for all countries, including support for all the different international keyboards. In order to access the local character set layout and display, users would change settings in the built-in software-based control panel, which also provides a method of toggling between 50/60 Hz video screen refresh. The composite video output is NTSC-only on all IIGS systems; users in PAL countries are expected to use an RGB monitor or TVs which featured RGB SCART. This selectable internationalization makes it quick and simple to localize any given machine. Also present in the settings is a QWERTY/DVORAK keyboard toggle for all countries, much like that of the Apple IIc. Outside North America, the Apple IIGS shipped with a different 220 V clip-in power supply, making this and the plastic keycaps the only physical differences (and also very modular, in the sense of converting a non-localized machine to a local one).
Gus
Apple designed the Apple IIe Card to transition Apple IIe customers to the Macintosh LC, particularly schools that had a large investment in Apple II software. While Apple discussed creating an LC plug-in IIGS card, they felt that the cost of selling it would be as much as an entire LC and abandoned it. However, the educational community had a substantial investment in IIGS software as well, which made upgrading to a Macintosh a less attractive proposition than it had been for the Apple IIe. As a result, Apple software designers Dave Lyons and Andy Nicholas spearheaded a program in their spare time to develop a IIGS software emulator they called Gus, which would run on the Power Macintosh only. Apple did not officially support the project. Nevertheless, seeing the need to help switch their educational customers to the Macintosh (as well as sell Power Macs), Apple unofficially distributed the software for free to schools and other institutions that signed a non-disclosure agreement. It was never offered for public sale, but is now readily available on the internet, along with many third-party classic Apple II emulators. Gus is one of the few software emulators developed within Apple (officially or otherwise), alongside MacWorks and the Mac OS X Classic environment. The app was publicly demonstrated in Rhapsody's Blue Box at WWDC 1997.
Legacy
The Apple Desktop Bus, which for a long time was the standard for most input peripherals for the Macintosh, first appeared on the IIGS. In addition, the other standardized ports and addition of SCSI set a benchmark which allowed, for the first time, Apple to consolidate their peripheral offerings across both the Apple II and Macintosh product lines, permitting one device to be compatible with multiple, disparate computers.
The IIGS is also the first Apple product to bear the new brand-unifying color scheme, a warm gray color Apple dubbed "Platinum". This color would remain the Apple standard used on the vast majority of products for the next decade. The IIGS is also the second major computer design, after the Apple IIc, where Apple worked with Hartmut Esslinger's team at Frog Design. The consistent use of the new corporate color and matching peripherals ushered in the Snow White design language, which was used exclusively for the next five years and made the Apple product line instantly recognizable around the world.
The inclusion of a professional-grade sound chip in the Apple IIGS was hailed by both developers and users, and hopes were high that it would be added to the Macintosh; however, it drew another lawsuit from Apple Corps. As part of an earlier trademark dispute with the business arm of The Beatles, Apple Computer had agreed not to release music-related products. Apple Corps considered the inclusion of the Ensoniq chip in the IIGS a violation of that agreement.
Developers
John Carmack, co-founder of id Software, started his career by writing commercial software for the Apple IIGS, working with John Romero and Tom Hall. Wolfenstein 3D, based on the 1981 Apple II game Castle Wolfenstein, came full-circle when it was released for the Apple IIGS in 1998.
Two mainstream video games, Zany Golf and The Immortal, both designed by Will Harvey, originated as Apple IIGS games that were ported to other platforms, including the Sega Genesis.
Pangea Software started as an Apple IIGS game developer. Naughty Dog started with the classic Apple II machines, but later developed for the IIGS.
Rumors and canceled developments
In August 1988, inCider magazine reported Apple was working on a new Apple IIGS, said to have a faster CPU, improved graphics (double the vertical resolution, 256 colors per scanline, and 4,096 colors per screen), 768 kB of RAM, 256 kB of ROM, 128 kB of sound DOC-RAM, and a built-in SCSI port. No new machine appeared that year.
In 1989 Compute! reported on speculation that Apple would announce at the May AppleFest a "IIGS Plus" with a processor two to three times faster, 768 kB to 1 MB RAM, and a SCSI port. The speculation was partially based on Apple CEO John Sculley stating that the IIGS would receive a new CPU in 1989. No new computer appeared, but in August the IIGS started shipping with 1 MB RAM in the base configuration.
VTech, makers of the 8-bit Apple II-compatible Laser 128, announced plans for a IIGS-compatible computer in 1988 for under $600. They demonstrated a prototype in 1989, but the computer was never released.
Cirtech produced a working prototype of a black-and-white Macintosh hardware emulation plug-in card for the IIGS dubbed "Duet". Using a 68020 processor, custom ROM and up to 8 MB RAM, Cirtech claimed it outperformed the Macintosh IIcx. The project was ultimately canceled due to the projected high cost of the board.
See also
Apple IIc Plus
Juiced.GS, the last Apple II publication
KansasFest, an annual convention for Apple II users
List of Apple IIGS games
References
External links
"The New Apple IIGS" from Compute! magazine (November 1986)
Apple II History from Steven Weyhrich
What is the Apple IIGS, reviews of many Apple IIGS applications
GS
Snow White design language
Computer-related introductions in 1986
16-bit computers
ISO/IEC 646
ISO/IEC 646 is the name of a set of ISO standards, described as Information technology — ISO 7-bit coded character set for information interchange and developed in cooperation with ASCII since at least 1964. Since its first edition in 1967 it has specified a 7-bit character code from which several national standards are derived.
ISO/IEC 646 was also ratified by ECMA as ECMA-6. The first version of ECMA-6 had been published in 1965, based on work the ECMA's Technical Committee TC1 had carried out since December 1960.
Characters in the ISO/IEC 646 Basic Character Set are invariant characters. Since this invariant portion of ISO/IEC 646, the character set shared by all countries, specified only the letters used in the ISO basic Latin alphabet, countries using additional letters needed to create national variants of ISO 646 to be able to use their native scripts. Since transmission and storage of 8-bit codes was not standard at the time, the national characters had to fit within the constraints of 7 bits, meaning that some characters that appear in ASCII do not appear in other national variants of ISO 646.
History
ISO/IEC 646 and its predecessor ASCII (ASA X3.4) largely endorsed existing practice regarding character encodings in the telecommunications industry.
As ASCII did not provide a number of characters needed for languages other than English, a number of national variants were made that substituted some less-used characters with needed ones. Due to the incompatibility of the various national variants, an International Reference Version (IRV) of ISO/IEC 646 was introduced, in an attempt to at least restrict the replaced set to the same characters in all variants. The original version (ISO 646 IRV) differed from ASCII only in that code point 0x24, ASCII's dollar sign ($), was replaced by the international currency symbol (¤). The final 1991 version of the code, ISO 646:1991, is also known as ITU T.50, International Reference Alphabet or IRA, formerly International Alphabet No. 5 (IA5). This standard allows users to vary the 12 variable characters (i.e., two alternative graphic characters and 10 nationally defined characters). Among these variants, the ISO 646:1991 IRV (International Reference Version) is explicitly defined and is identical to ASCII.
The ISO 8859 series of standards governing 8-bit character encodings supersede the ISO 646 international standard and its national variants, by providing 96 additional characters with the additional bit and thus avoiding any substitution of ASCII codes. The ISO 10646 standard, directly related to Unicode, supersedes all of the ISO 646 and ISO 8859 sets with one unified set of character encodings using a larger 21-bit value.
A legacy of ISO/IEC 646 is visible on Windows, where in many East Asian locales the backslash character used in filenames is rendered as ¥ or other characters such as ₩. Despite the fact that a different code for ¥ was available even on the original IBM PC's code page 437, and a separate double-byte code for ¥ is available in Shift JIS (although this is often mapped differently), so much text was created with the backslash code used for ¥ (due to Shift_JIS being officially based on ISO 646:JP, although Microsoft maps it as ASCII) that even modern Windows fonts have found it necessary to render that code point as ¥. A similar situation exists with ₩ and EUC-KR. Another legacy is the existence of trigraphs in the C programming language.
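The 0x5C ambiguity can be demonstrated directly. Below is a minimal Python sketch (the path is a placeholder; Python's shift_jis codec follows the Microsoft-style mapping of 0x5C to an ASCII backslash):

# Byte 0x5C in a DOS-style path decodes as a backslash under Shift JIS...
data = b"C:\x5cUsers"
print(data.decode("shift_jis"))        # C:\Users
# ...but a font following ISO 646:JP conventions draws that same code
# point as the yen sign, so the path displays as C:(yen)Users.

# Shift JIS also has a separate double-byte code for the full-width yen:
print("\uffe5".encode("shift_jis"))    # b'\x81\x8f'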
Published standards
ISO/R646-1967
ISO 646:1972
ISO 646:1983
ISO/IEC 646:1991
ECMA-6 (1965-04-30), first edition
ECMA-6 (1967-06), second edition
ECMA-6 (1970-07), third edition
ECMA-6 (1973-08), fourth edition
ECMA-6 (1984-12, 1985-03), fifth edition
ECMA-6 (1991-12, 1997-08), sixth edition
Code page layout
The following table shows the ISO/IEC 646 Invariant character set. Each character is shown with its Unicode equivalent. National code points are gray with the ASCII character that is replaced. Yellow indicates a character that, in some regions, could be combined with a previous character as a diacritic using the backspace character, which may affect glyph choice.
In addition to the invariant set restrictions, 0x23 is restricted to be either # or £ and 0x24 is restricted to be either $ or ¤ in ECMA-6:1991, equivalent to ISO 646:1991. However, these restrictions are not followed by all national variants.
Related encoding families
National Replacement Character Set
The National Replacement Character Set (NRCS) is a family of 7-bit encodings introduced in 1983 by DEC with the VT200 series of computer terminals. It is closely related to ISO 646, being based on a similar invariant subset of ASCII, differing in retaining $ as invariant but not _ (although most NRCS variants retain the _, and hence comply with the ISO 646 invariant set). Most NRCS variants are closely related to corresponding national ISO 646 variants where they exist, with the exception of the Dutch variant.
World System Teletext
The European telecommunications standard ETS 300 706, "Enhanced Teletext specification", defines Latin, Greek, Cyrillic, Arabic and Hebrew code sets with several national variants for both Latin and Cyrillic. Within the Latin variants, the family of encodings known as the G0 set is, like NRCS and ISO 646, based on a similar invariant subset of ASCII, but retains neither $ nor _ as invariant. Unlike NRCS, variants often differ considerably from corresponding national ISO 646 variants.
Variant codes and descriptions
ISO 646 national variants
Some national variants of ISO 646 are as follows:
National derivatives
Some national character sets also exist which are based on ISO 646 but do not strictly follow its invariant set (see also § Derivatives for other alphabets):
Control characters
All the variants listed above are solely graphical character sets, and are to be used with a C0 control character set such as listed in the following table:
Associated supplementary character sets
The following table lists supplementary graphical character sets defined by the same standard as specific ISO 646 variants. These would be selected by using a mechanism such as shift out or the NATS super shift (single shift), or by setting the eighth bit in environments where one was available:
Variant comparison chart
The specifics of the changes for some of these variants are given in the following table. Character assignments unchanged across all listed variants (i.e. which remain the same as ASCII) are not shown.
For ease of comparison, variants detailed include national variants of ISO 646, DEC's closely related National Replacement Character Set (NRCS) series used on VT200 terminals, the related European World System Teletext encoding series defined in ETS 300 706, and a few other closely related encodings based on ISO 646. Individual code charts are linked from the second column. The cells with non-white background emphasize the differences from US-ASCII (also the Basic Latin subset of ISO/IEC 10646 and Unicode).
Several characters could be used as combining characters when preceded or followed by a backspace C0 control. This is attested in the code charts for IRV, GB, FR1, CA and CA2, which note that the quotation mark ("), apostrophe ('), comma (,) and circumflex (^) would behave as the diaeresis, acute accent, cedilla and circumflex (rather than as quotation marks, a comma and an upward arrowhead) when preceded or followed by a backspace. The tilde character (~) was similarly introduced as a diacritic (˜). This encoding method originated in the typewriter/teletype era, when use of backspace would overstamp a glyph, and may be considered deprecated.
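As a hedged illustration of this overstrike convention, the following Python sketch maps "character, backspace, mark" byte sequences onto modern Unicode combining marks. The mapping table is an interpretation invented for this example, not part of any ISO 646 standard:

BS = 0x08  # the backspace control character

OVERSTRIKE_TO_COMBINING = {      # hypothetical modern interpretation
    ord('"'): "\u0308",          # diaeresis
    ord("'"): "\u0301",          # acute accent
    ord(","): "\u0327",          # cedilla
    ord("^"): "\u0302",          # circumflex
}

def decode_overstrike(data: bytes) -> str:
    """Turn 'X BS mark' sequences into base character + combining mark."""
    out, i = [], 0
    while i < len(data):
        if (i + 2 < len(data) and data[i + 1] == BS
                and data[i + 2] in OVERSTRIKE_TO_COMBINING):
            out.append(chr(data[i]) + OVERSTRIKE_TO_COMBINING[data[i + 2]])
            i += 3
        else:
            out.append(chr(data[i]))
            i += 1
    return "".join(out)

print(decode_overstrike(b"e\x08'"))   # e + combining acute  -> é
print(decode_overstrike(b"c\x08,"))   # c + combining cedilla -> ç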
Later, when wider character sets gained more acceptance, ISO 8859, vendor-specific character sets and eventually Unicode became the preferred methods of coding most of these variants.
Derivatives for other alphabets
Some 7-bit character sets for non-Latin alphabets are derived from the ISO 646 standard: these do not themselves constitute ISO 646 due to not following its invariant code points (often replacing the letters of at least one case), due to supporting differing alphabets which the set of national code points provide insufficient encoding space for. Examples include:
7-bit Turkmen (ISO-IR-230).
7-bit Greek.
In ELOT 927 (ISO-IR-088), the Greek alphabet is mapped in alphabetical order (except for the final-sigma) to positions 0x61–0x71 and 0x73–0x79, on top of the Latin lowercase letters.
ISO-IR-018 maps the Greek alphabet over both letter cases using a different scheme (not in alphabetical order, but trying where possible to match Greek letters over Roman letters which correspond in some sense), and ISO-IR-019 maps the Greek uppercase alphabet over the Latin lowercase letters using the same scheme as ISO-IR-018.
The lower half of the Symbol font character encoding uses its own scheme for mapping Greek letters of both cases over the ASCII Roman letters, also trying to map Greek letters over Roman letters which correspond in some sense, but making different decisions in this regard (see chart below). It also replaces invariant code points 0x22 and 0x27 and five national code points with mathematical symbols. Although not intended for use in typesetting Greek prose, it is sometimes used for that purpose.
ISO-IR-027 (detailed in the chart above rather than below) includes the Latin alphabet unchanged, but adds some Greek capital letters which cannot be represented with Latin-script homoglyphs; while it is explicitly based on ISO 646, some of these are mapped to code points which are invariant in ISO 646 (0x21, 0x3A and 0x3F), and it is therefore not a true ISO 646 variant.
The World System Teletext encoding for Greek uses yet another scheme of mapping Greek letters in alphabetical order over the ASCII letters of both cases, notably including several letters with diacritics.
7-bit Cyrillic
KOI-7 or Short KOI, used for Russian. The Cyrillic characters are mapped to positions 0x60–0x7E, on top of the Latin lowercase letters, matching homologous letters where possible (where в is mapped to w, not v). Superseded by the KOI-8 variants.
SRPSCII and MAKSCII, Cyrillic variants of YUSCII (the Latin variant is YU/ISO-IR-141 in the chart above), used for Serbian and Macedonian respectively. Largely homologous to the Latin variant of YUSCII (following Serbian digraphia rules), except for Љ (lj), Њ (nj), Џ (dž) and ѕ (dz), which correspond to digraphs in Latin-script orthography, and are mapped over letters which are not used in Serbian or Macedonian (q, w, x, y).
The G0 sets for the World System Teletext encodings for Russian/Bulgarian and Ukrainian use G0 sets similar to KOI-7 with some modifications. The corresponding G0 set for Serbian Cyrillic uses a scheme based on the Teletext encoding for Latin-script Serbo-Croatian and Slovene, as opposed to the significantly different YUSCII.
7-bit Hebrew, SI 960. The Hebrew alphabet is mapped to positions 0x60–0x7A, on top of the lowercase Latin letters (and grave accent for aleph). 7-bit Hebrew was always stored in visual order. This mapping with the high bit set, i.e. with the Hebrew letters in 0xE0–0xFA, is ISO 8859-8. The World System Teletext encoding for Hebrew uses the same letter mappings, but uses BS_Viewdata as its base encoding (whereas SI 960 uses US-ASCII) and includes a shekel sign at 0x7B.
7-bit Arabic, ASMO 449 (ISO-IR-089). The Arabic alphabet is mapped to positions 0x41–0x5A and 0x60–0x6A, on top of both uppercase and lowercase Latin letters.
A comparison of some of these encodings is below. Only one case is shown, except in instances where the cases are mapped to different letters. In such instances, the mapping with the smallest code is shown first. Possible transcriptions are given for some letters; where this is omitted, the letter can be considered to correspond to the Roman one which it is mapped over.
See also
ISO/IEC 2022 Information technology: Character code structure and extension techniques
ISO/IEC 6937 (ANSI)
ISO/IEC JTC 1/SC 2
Footnotes
References
Further reading
Source documents on the history of character codes, 1972–1975, compiled by Eric Fischer (79-page collection at the Internet Archive), including many related documents and correspondence.
External links
Zeichensatz nach ISO 646 (ASCII) (in German)
History at GNU Aspell website
ISO646 Character Tables by Koichi Yasuoka (安岡孝) (see Domestic ISO646 Character Tables and Quasi-ISO646 Character Tables)
Turkish Text Deasciifier, a tool (based on statistical pentagram analysis of the Turkish language) which restores to ASCIIfied Turkish text the appropriate (but ambiguous) diacritics normally needed in Turkish but missing from the US-ASCII set.
Character sets
Ecma standards
PR/SM
In mainframe computing, PR/SM (Processor Resource/System Manager) is a type-1 hypervisor (a virtual machine monitor) that allows multiple logical partitions (LPARs) to share physical resources such as CPUs, I/O channels and LAN interfaces; when channels are shared, the LPARs can share I/O devices such as direct access storage devices (DASD). PR/SM is integrated with all IBM System z machines. Similar facilities exist on the IBM Power Systems family and its predecessors.
IBM introduced PR/SM in 1988 with the IBM 3090 processors.
IBM developed the concept of hypervisors in their CP-40 and CP-67, and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction as part of 370-XA on the 3081, and VM/XA versions of VM to exploit it. PR/SM is a type-1 hypervisor based on the CP component of VM/XA that runs directly on the machine level and allocates system resources across LPARs to share physical resources. It is a standard feature on IBM Z and IBM LinuxONE machines.
IBM introduced a related, simplified, optional feature called Dynamic Partition Manager (DPM) on its IBM z13 and first generation IBM LinuxONE machines. DPM provides Web-based user interfaces for many LPAR-related configuration and monitoring tasks.
External links
References
Virtualization software
IBM mainframe technology
User agent
In computing, a user agent is any software, acting on behalf of a user, which "retrieves, renders and facilitates end-user interaction with Web content". A user agent is therefore a special kind of software agent.
Some prominent examples of user agents are web browsers and email readers. Often, a user agent acts as the client in a client–server system. In some contexts, such as within the Session Initiation Protocol (SIP), the term user agent refers to both end points of a communications session.
User agent identification
When a software agent operates in a network protocol, it often identifies itself, its application type, operating system, device model, software vendor, or software revision, by submitting a characteristic identification string to its operating peer. In HTTP, SIP, and NNTP protocols, this identification is transmitted in a header field User-Agent. Bots, such as Web crawlers, often also include a URL and/or e-mail address so that the Webmaster can contact the operator of the bot.
Use in HTTP
In HTTP, the User-Agent string is often used for content negotiation, where the origin server selects suitable content or operating parameters for the response. For example, the User-Agent string might be used by a web server to choose variants based on the known capabilities of a particular version of client software. The concept of content tailoring is built into the HTTP standard in RFC 1945 "for the sake of tailoring responses to avoid particular user agent limitations".
The User-Agent string is one of the criteria by which Web crawlers may be excluded from accessing certain parts of a website using the Robots Exclusion Standard (robots.txt file).
As with many other HTTP request headers, the information in the "User-Agent" string contributes to the information that the client sends to the server, since the string can vary considerably from user to user.
Format for human-operated web browsers
The User-Agent string format is currently specified by section 5.5.3 of HTTP/1.1 Semantics and Content. The format of the User-Agent string in HTTP is a list of product tokens (keywords) with optional comments. For example, if a user's product were called WikiBrowser, their user agent string might be WikiBrowser/1.0 Gecko/1.0. The "most important" product component is listed first.
The parts of this string are as follows:
product name and version (WikiBrowser/1.0)
layout engine and version (Gecko/1.0)
During the first browser war, many web servers were configured to send web pages that required advanced features, including frames, only to clients that were identified as some version of Mozilla. Other browsers were considered to be older products such as Mosaic, Cello, or Samba, and would be sent a bare-bones HTML document.
For this reason, most Web browsers use a User-Agent string value of the following form:
Mozilla/[version] ([system and browser information]) [platform] ([platform details]) [extensions]
For example, Safari on the iPad has used the following:
Mozilla/5.0 (iPad; U; CPU OS 3_2_1 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Mobile/7B405
The components of this string are as follows:
Mozilla/5.0: Previously used to indicate compatibility with the Mozilla rendering engine.
(iPad; U; CPU OS 3_2_1 like Mac OS X; en-us): Details of the system in which the browser is running.
AppleWebKit/531.21.10: The platform the browser uses.
(KHTML, like Gecko): Browser platform details.
Mobile/7B405: This is used by the browser to indicate specific enhancements that are available directly in the browser or through third parties. An example of this is Microsoft Live Meeting which registers an extension so that the Live Meeting service knows if the software is already installed, which means it can provide a streamlined experience to joining meetings.
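The token-plus-comment structure can be split apart mechanically. The short Python sketch below (illustrative only; real-world strings are notoriously irregular and this is not a robust parser) separates the iPad string above into product tokens and parenthesised comments:

import re

UA = ("Mozilla/5.0 (iPad; U; CPU OS 3_2_1 like Mac OS X; en-us) "
      "AppleWebKit/531.21.10 (KHTML, like Gecko) Mobile/7B405")

# Match either a (comment) or a whitespace-delimited product token.
TOKEN = re.compile(r"\(([^)]*)\)|(\S+)")

for comment, product in TOKEN.findall(UA):
    if comment:
        print("comment:", comment)
    else:
        print("product:", product)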
Before migrating to the Chromium code base, Opera was the most widely used web browser that did not have the User-Agent string with "Mozilla" (instead beginning it with "Opera"). Since July 15, 2013, Opera's User-Agent string begins with "Mozilla/5.0" and, to avoid encountering legacy server rules, no longer includes the word "Opera" (instead using the string "OPR" to denote the Opera version).
Format for automated agents (bots)
Automated web crawling tools can use a simplified form, where an important field is contact information in case of problems. By convention the word "bot" is included in the name of the agent. For example:
Googlebot/2.1 (+http://www.google.com/bot.html)
Automated agents are expected to follow rules in a special file called "robots.txt".
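Python's standard library includes a parser for these rules; the sketch below shows the conventional check a well-behaved bot performs before fetching a URL (the bot name and URLs are placeholders):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # download and parse the robots.txt file

# can_fetch() matches the bot's user agent against the recorded rules.
if rp.can_fetch("ExampleBot", "https://www.example.com/private/page.html"):
    print("allowed to fetch")
else:
    print("disallowed by robots.txt")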
User agent spoofing
The popularity of various Web browser products has varied throughout the Web's history, and this has influenced the design of websites in such a way that websites are sometimes designed to work well only with particular browsers, rather than according to uniform standards by the World Wide Web Consortium (W3C) or the Internet Engineering Task Force (IETF). Websites often include code to detect browser version to adjust the page design sent according to the user agent string received. This may mean that less-popular browsers are not sent complex content (even though they might be able to deal with it correctly) or, in extreme cases, refused all content. Thus, various browsers have a feature to cloak or spoof their identification to force certain server-side content. For example, the Android browser identifies itself as Safari (among other things) in order to aid compatibility.
Other HTTP client programs, like download managers and offline browsers, often have the ability to change the user agent string.
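The mechanics are trivial, which is one reason the header cannot be trusted. A minimal Python sketch of a client substituting its own identification string (the User-Agent value here is invented):

import urllib.request

req = urllib.request.Request(
    "https://www.example.com/",
    headers={"User-Agent": "Mozilla/5.0 (compatible; ExampleFetcher/1.0)"},
)
with urllib.request.urlopen(req) as response:
    print(response.status, response.headers.get("Content-Type"))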
Spam bots and Web scrapers often use fake user agents.
A result of user agent spoofing may be that collected statistics of Web browser usage are inaccurate.
User agent sniffing
User agent sniffing is the practice of websites showing different or adjusted content when viewed with certain user agents. An example of this is Microsoft Exchange Server 2003's Outlook Web Access feature: when viewed with Internet Explorer 6 or newer, more functionality is displayed than for the same page in any other browser. User agent sniffing is considered poor practice, since it encourages browser-specific design and penalizes new browsers with unrecognized user agent identifications. Instead, the W3C recommends creating standard HTML markup that renders correctly in as many browsers as possible, and testing for specific browser features rather than particular browser versions or brands.
Websites intended for display by mobile phones often rely on user agent sniffing, since mobile browsers often differ greatly from each other.
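Part of what makes sniffing fragile is that, for the historical reasons described above, identification strings deliberately impersonate one another; Chrome's string, for instance, contains both "Mozilla" and "Safari". A small Python sketch of the pitfall (the version numbers are arbitrary):

# A Chrome UA string also advertises "Mozilla" and "Safari" for
# compatibility, so naive substring sniffing misclassifies it.
CHROME_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
             "(KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36")

def naive_is_safari(ua):
    return "Safari" in ua  # wrong: also matches Chrome

def stricter_is_safari(ua):
    return "Safari" in ua and "Chrome" not in ua and "Chromium" not in ua

print(naive_is_safari(CHROME_UA))     # True  -- misclassified
print(stricter_is_safari(CHROME_UA))  # False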
Encryption strength notations
Web browsers created in the United States, such as Netscape Navigator and Internet Explorer, previously used the letters U, I, and N to specify the encryption strength in the user agent string. Until 1996, when the United States government allowed encryption with keys longer than 40 bits to be exported, vendors shipped various browser versions with different encryption strengths. "U" stands for "USA" (the version with 128-bit encryption), "I" stands for "International" (the browser has 40-bit encryption and can be used anywhere in the world), and "N" stands (de facto) for "None" (no encryption). Following the lifting of export restrictions, most vendors supported 256-bit encryption.
Deprecation of User-Agent header
In 2020, Google announced that they would be phasing out support for the User-Agent header in their Google Chrome browser. They stated that other major web browser vendors were supportive of the move, but that they did not know when other vendors would follow suit. Google stated that a new feature called Client Hints would replace the functionality of the User-Agent string.
See also
Robots exclusion standard
Web crawler
Wireless Universal Resource File (WURFL)
User Agent Profile (UAProf)
Browser sniffing
Web browser engine
References
Clients (computing)
Hypertext Transfer Protocol headers
LinuxTag
LinuxTag (the name is a compound with the German Tag meaning assembly, conference or meeting) is a free software exposition with an emphasis on Linux (but also BSD), held annually in Germany. LinuxTag claims to be Europe's largest exhibition for "open source software" and aims to provide a comprehensive overview of the Linux and free software market, and to promote contacts between users and developers. LinuxTag is one of the world's most important events of this kind.
LinuxTag's slogan, "Where .COM meets .ORG", refers to its stated aim of bringing together commercial and non-commercial groups in the IT sector. Each year's event also has its own subtitle.
Promotion of free software
LinuxTag sees itself as part of the free software movement, and promotes this community to an extraordinary degree by supporting numerous open source projects. LinuxTag offers these projects a way to promote their software and their concepts, and thus to present themselves to the public in an appropriate manner, with their own booths, forums and lectures. The goal is to encourage projects to share concepts and content to the benefit of other groups and companies, and to provide forums for in-depth discussions of new technologies and opportunities.
LinuxTag e.V.
The non-profit association "LinuxTag e.V." was founded in preparation for LinuxTag's move from Kaiserslautern to the University of Stuttgart in 2000. The association plans and organizes the LinuxTag event through volunteer work and guides its ideological development. The association LinuxTag e.V. is registered in Association Register VR 2239 of the Kaiserslautern District Court. The association manages the LinuxTag name and word mark. The purpose of the association, according to its bylaws, is "the promotion of Free Software", and is pursued through the organization of the LinuxTag events.
The association is represented by a three-person executive board, supplemented by several representatives with delegated authority. All members of the LinuxTag association are volunteers and receive no remuneration for their service. (In 2005, the First Chairman and CFO were employed and remunerated by the association from 1 April until 31 July.) Surpluses resulting from the LinuxTag events or from sponsorship are reinvested in the association's non-profit activities.
History
LinuxTag was launched in 1996 by a handful of active members of the Unix Working Group (Unix-AG) at the University of Kaiserslautern. They wanted to inform the public about the young technology of Linux and Open Source software. The first LinuxTag event only drew a small number of participants. Since then, however, the event has changed venues several times to keep pace with rapidly growing numbers of exhibitors and visitors.
Kaiserslautern
The first LinuxTag conference and exhibition was held at the University of Technology in Kaiserslautern.
LinuxTag 1996 - 1999
The first LinuxTag was a theme night on Linux. In 1998, LinuxTag drew 3,000 visitors. In 1999 the event was nationally announced, and drew some 7,000 visitors. It was the first time LinuxTag filled a whole building, and the last time it was held in Kaiserslautern. In the aftermath of the event, the LinuxTag association was founded.
Stuttgart
In 2000 and 2001 LinuxTag was held in Stuttgart.
LinuxTag 2000
LinuxTag 2000 was held at the Stuttgart exhibition center from 29 June to 2 July, and received up to 17,000 visitors. The conference included a business track for the first time, devoted to such topics as IT security, legal aspects of free software, and potential uses of Linux and the open-source concept in commercial applications. IT decision-makers were shown case studies in applications of free software.
LinuxTag 2001
LinuxTag 2001 took place at Stuttgart exhibition center from 5 to 8 July, with 14,870 visitors. The event was held under the patronage of the German Ministry of the Economy. Keynote speakers were Eric S. Raymond, Rob "Cmdr Taco" Malda from Slashdot and John "Maddog" Hall of Linux International.
Karlsruhe
From 2002 to 2005, the LinuxTag conference and exhibition was held in Karlsruhe.
LinuxTag 2002
LinuxTag was held for the first time in the Karlsruhe Convention Centre from 6 to 9 June 2002. There were about 13,000 visitors. The motto of the conference was "Open your mind, open your heart, open your source!". About 100 exhibitors were at the exhibition.
LinuxTag 2003
LinuxTag 2003 was titled "Open Horizons" and was held from 10 to 13 July, for the second time in Karlsruhe.
With the 10-euro admission ticket, visitors received a Knoppix DVD (available at LinuxTag for the first time) and a Tux pin. With 19,500 visitors, attendance increased by 40 percent over the previous year.
Both businesses and non-profit groups were represented as exhibitors; around 150 exhibitors were there in 2003. Apple showed Mac OS X in conjunction with open source. Other highlights included the release of OpenGroupware.org as open source, on the model of OpenOffice.org, and the free conversion of several dozen Xboxes to Linux, some via a hardware modification involving two solder points and some by installing the so-called MechInstaller.
There was also a programming competition, and on Sunday from 13:00 to 14:00 a world record attempt took place: 100 Linux desktop sessions with GNOME and KDE were to run simultaneously on a single server, and anyone could join in over the Internet. The result was not disclosed.
Besides the exhibition, congresses were held at which renowned experts spoke on particular subject areas. There was, for example, a Debian conference, and on Sunday a lecture on TCPA followed by a discussion. The Business and Administration Congress grew by about 60%, to 400 participants. The free lecture program was opened on Friday by Rezzo Schlauch, Parliamentary State Secretary at the Federal Ministry of Economics and Labour (BMWA). Keynote speakers were Jon "Maddog" Hall of Linux International, Georg Greve of the Free Software Foundation Europe and Matthias Kalle Dalheimer of Klarälvdalens Datakonsult AB.
Webcams allowed a virtual visit to the fair. The Pingu-Cam (Tux the penguin is the Linux mascot) showed pictures from the Karlsruhe zoo, which is located right next to the fairgrounds.
LinuxTag 2004
LinuxTag 2004 took place from 23 to 26 June, for the third time in the congress center in Karlsruhe. Those who registered on the homepage got in free; for a 10-euro entry fee, visitors received a Tux pin, a Knoppix DVD and a DVD with FreeBSD, NetBSD and OpenBSD.
LinuxTag 2004 had the slogan "Free source - free world". 16,175 visitors were counted.
A record number of about 170 exhibitors included many community projects as well as numerous large and medium-sized enterprises. Hewlett-Packard was Official Cornerstone Partner for the third time. Other major companies present were the C & L Verlag, Intel, Novell, Oracle, SAP and Sun Microsystems. For the first time, Microsoft was represented with a booth.
The one-day Business and Administration Congress on 24 June presented case studies and success stories about the use of open source software in business and government. Among other topics, the problem of viruses and worms was discussed.
The free conference drew a record turnout of about 350 proposals from 20 countries, of which 130 could be accommodated in the program. Software patents were an important topic. For the first time, LPI 101 certification was offered at LinuxTag. Contests at this LinuxTag included a coding marathon and a hacking contest.
LinuxTag 2005
LinuxTag 2005 took place from 22 to 25 June in the Congress Center Karlsruhe. It was the 11th LinuxTag and was titled "Linux everywhere". In addition to the exhibition of companies connected in one way or another with Linux, there was once again a presentation program, and the Business and Administration Congress took place again on 22 June 2005. During LinuxTag, visitors could take part in various tutorials.
In his opening speech, Jimmy Wales announced a collaboration between Wikipedia and KDE, through which programs can directly access Wikipedia via a web interface. Starting with version 1.3, the KDE media player Amarok can display the Wikipedia article on the artist being played.
The organizers reported 12,000 visitors; the drop was attributed mainly to the newly designed entrance fees and to the event falling in the hottest week of the year.
Wiesbaden
In 2006, LinuxTag was held in Wiesbaden.
LinuxTag 2006
LinuxTag 2006 took place from 3 to 6 May 2006 in the Rhein-Main-Hallen in Wiesbaden under the theme "See, what's ahead". According to the organizers, over 9,000 people from over 30 nations attended LinuxTag 2006. There were many, often international, lectures and various information booths; present among others were IBM, Avira and Sun Microsystems, but some others such as Hewlett-Packard or Red Hat were missing. Three teams participated in the hacking contest.
The most visited lecture of the year was the keynote by Ubuntu founder Mark Shuttleworth. The self-described "chief dreamer of Ubuntu" spoke about the good cooperation between users and developers, and stressed that Kubuntu and Ubuntu should be treated equally and that their developers cooperate well. Some of the lectures could also be followed on a video stream, which an estimated 1,800 people used.
Berlin
Since 2007, LinuxTag has been held in Berlin, in the exhibition halls under the Berlin Radio Tower.
LinuxTag 2007
LinuxTag 2007 took place from 30 May to 2 June 2007 with the slogan "Come in: We're open". It was attended by about 9,600 people.
The event was held under the auspices of Interior Minister Wolfgang Schäuble.
Because of the Interior Minister's political positions, this sparked a lively discussion that grew into calls to boycott LinuxTag. The uproar in the Linux community was so great that even foreign websites reported on it.
LinuxTag 2008
LinuxTag 2008 took place from 28 to 31 May at the Berlin Exhibition Grounds with 11,612 visitors. The second LinuxTag in the German capital was under the patronage of German Foreign Minister and Vice Chancellor Frank-Walter Steinmeier and was part of a six-day "IT Week in the Capital Region", which also included the IT business trade fair IT Profits, held in Berlin for the fourth time under the patronage of the Federal Minister of Transport. Held at the same time were the second German Asterisk Day, a user and developer conference on Voice over IP, and the 8th @Kit Congress, which discussed legal issues of professional IT use. Important topics were "Highlights of digital lifestyle" and the "Mobile + Embedded Area".
LinuxTag 2009
LinuxTag 2009 took place from 24 to 27 June on the Berlin Exhibition Grounds and had more than 10,000 visitors. It was under the patronage of German Foreign Minister and Vice Chancellor Frank-Walter Steinmeier. The new president of the Free Software Foundation Europe, Karsten Gerloff, attended LinuxTag. Focal points were the "mapping of business processes using Linux" and "Open Source in the colors of the tricolor", for which 14 open source suppliers from France showed their products and services.
LinuxTag 2010
LinuxTag 2010 was the 16th LinuxTag and took place from 9 to 12 June 2010 at the Berlin Exhibition Grounds. It was attended by about 11,600 people and was under the patronage of Cornelia Rogall-Grothe, Federal Government Commissioner for Information Technology. Keynote speakers included Microsoft general manager James Utzschneider, who stunned the audience with his open approach to open source; SugarCRM CEO Larry Augustin, who underlined the economic impact of OSS and its connection with the upcoming cloud computing trend; Google's open source chief Chris DiBona, who underlined the high professional level of the congress; kernel developer Jonathan Corbet, who gave an outlook on the next Linux kernel, 2.6.35; and Ubuntu founder Mark Shuttleworth, who staked out the milestones for Ubuntu desktops.
LinuxTag 2011
The 17th LinuxTag was held from 11 to 14 May 2011 on the Berlin Exhibition Grounds with the slogan "Where .com meets .org". It was attended by 11,578 visitors and was under the patronage of Cornelia Rogall-Grothe, the Federal Government Commissioner for Information Technology. Keynotes were given by Wim Coekaerts (Oracle), Bradley Kuhn (Software Freedom Conservancy) and Daniel Walsh (Red Hat).
LinuxTag 2012
The 18th LinuxTag was held from 23 to 26 May 2012 on the Berlin Exhibition Grounds with the motto "Open minds create effective solutions". It was under the patronage of Cornelia Rogall-Grothe, the Federal Government Commissioner for Information Technology. At LinuxTag, the "Open MindManager Morning" premiered, at which industry experts and educators discussed, and philosophized about, IT and changes in society.
Also premiering was the new lecture series "Open Minds Economy", organized by the Open Source Business Alliance and Messe Berlin, which presented successful models of open source software in the economy and society.
Keynotes were given by Jimmy Schulz, chairman of the project group "Interoperability, standards and open source" of the German Bundestag's Commission of Inquiry on the Internet and digital society; Ulrich Drepper, maintainer of the GNU C standard library glibc; and Lars Knoll, Nokia employee and chief maintainer of the Qt library.
LinuxTag 2013
The 19th LinuxTag was held from 22 to 25 May 2013 on the Berlin Exhibition Grounds with the motto "In the cloud - the triumph of free software goes on".
It was under the patronage of Cornelia Rogall-Grothe, the Federal Government Commissioner for Information Technology.
The Open IT Summit premiered, organized by the Open Source Business Alliance (OSBA) and Messe Berlin as a conference parallel to LinuxTag, with the goal of discussing open source in the business environment. The OpenStack Day also took place, in cooperation with the OpenStack Foundation, as the first major European subconference on OpenStack. The foundation is headquartered in the U.S. and sees itself as the global home of the scalable cloud management platform of the same name.
Keynotes were given by kernel developer Matthew Garrett, on the Unified Extensible Firmware Interface (UEFI) and Secure Boot, and by Benjamin Mako Hill, a researcher at the Massachusetts Institute of Technology, who called so-called anti-features (restrictions that manufacturers deliberately build into devices) unacceptable.
LinuxTag 2014
In the preceding years, LinuxTag's visitor numbers had stagnated even as open-source programs gained more and more users. This was interpreted as a side effect of the high market penetration of free software and Linux. In addition, many similar regional events had appeared over the years, drawing on the LinuxTag concept.
To adapt to these changes, LinuxTag 2014 focused on the core issue of the professional use of open source software, and started a strategic partnership with droidcon.
The 20th LinuxTag took place between 8 and 10 May 2014 in the STATION Berlin.
Held nearby and around the same time were the Media Convention Berlin (6 to 7 May), re:publica (6 to 8 May) and droidcon (8 to 10 May 2014). The events aimed at a close relationship in order to benefit from their combined draw.
External links
Official LinuxTag website
German Official web page (with information in English as well)
The Knoppix project
OpenMusic, another LinuxTag-sponsored project.
References
Free-software events
Linux conferences
Recurring events established in 1996
Crippleware
Crippleware has been defined in the realms of both computer software and hardware. In software, crippleware means that "vital features of the program such as printing or the ability to save files are disabled until the user purchases a registration key". While crippleware allows consumers to see the software before they buy, they are unable to test its complete functionality because of the disabled functions. Hardware crippleware is "a hardware device that has not been designed to its full capability". The functionality of the hardware device is limited to encourage consumers to pay for a more expensive upgraded version. Usually, a hardware device considered to be crippleware can be upgraded to its full potential by way of a trivial change, such as removing a jumper wire. The manufacturer would most likely release the crippleware as a low-end or economy version of its product.
Computer software
Deliberately limited programs are usually freeware versions of computer programs that lack the most advanced (or even crucial) features of the original program. Limited versions are made available in order to increase the popularity of the full program (by making it more desirable) without giving it away free. Examples include a word processor that cannot save or print, or screencasting and video editing programs that apply a watermark (often a logo) to the output video. Crippleware programs can also differentiate between tiers of paying software customers.
The term "crippleware" is sometimes used to describe software products whose functions have been limited (or "crippled") with the sole purpose of encouraging or requiring the user to pay for those functions (either by paying a one-time fee or an ongoing subscription fee).
The less derogatory term, from a shareware producer's perspective, is feature-limited. Feature-limited is merely one mechanism for marketing shareware as a damaged good; others are time-limited, usage-limited, capacity-limited, nagware and output-limited. From the producer's standpoint, feature-limited versions allow customers to try software with no commitment instead of relying on questionable or possibly staged reviews. Try-before-you-buy applications are very prevalent on mobile devices, where ad display serves as an additional damaged-good mechanism alongside all of the other forms.
From an open source software provider's perspective, there is the open-core model, which pairs a feature-limited open source version of the product with a commercial full-featured version. The feature-limited version can be used widely; this approach is used by products like MySQL and Eucalyptus.
Computer hardware
This product differentiation strategy has also been used in hardware products:
The Intel 486SX, which was a 486DX with the FPU removed or, in early versions, present but disabled.
AMD disabled defective cores on their quad-core Phenom and Phenom II X4 processor dies to make cheaper triple-core Phenom and Phenom II X3 and dual-core X2 models without the expense of designing new chips. Quad-core dies with one or two faulty cores can be used as triple- or dual-core processors rather than being discarded, increasing yield. Some users have managed to "unlock" these crippled cores, when not faulty.
Casio's fx-82ES scientific calculator uses the same ROM as the fx-991ES (a model with enhanced functionality), and can be made to act as the latter by strategically cutting through the epoxy on the board, and tracing the exposed solder joints using a pencil. This is also the case with the fx-83ES and the fx-85ES.
Apple announced in 2007 that it would charge $4.99 to enable Wi-Fi on some devices (a fee later reduced to $1.99), blaming the charge on GAAP compliance, even though its interpretation of the accounting rules as mandating a fee was contradicted by a former chief accountant of the SEC and by a member of the Financial Accounting Standards Board.
Intel Upgrade Service (2010-2011), which allowed select types of processors to be upgraded via a software activation code, has also been criticized in such terms.
Automobiles
Tesla limits the range of lower-end versions of the Model S in software, and disables Autopilot functions if they were not purchased.
Digital rights management
Digital rights management is another example of this product differentiation strategy. Digital files are inherently capable of being copied perfectly in unlimited quantities; digital rights management aims to deter copyright infringement by using hardware or cryptographic techniques to limit copying or playback.
See also
Defective by Design
Dongle
Hardware restrictions
Walled garden (technology)
Planned obsolescence
Regional lockout
References
External links
"Antifeatures". Blog entry, wikified list, talk and video by FSF-Board member Benjamin Mako Hill.
Open source means freedom from 'anti-features', Norwegian magazine "Computerworld" reports on Benjamin Mako Hill's talk. (2010-02-08)
"Court order denying motion to dismiss of Melanie Tucker v. Apple Computer Inc. in the United States District Court for the Northern District of California, San Jose Division" (2006-12-20)
Want an iPhone? Beware the iHandcuffs New York Times editorial labeling iPhone OS as "crippleware". (2007-01-14)
"Stealth plan puts copy protection into every hard drive" The Register. (2000-12-20)
"Western Digital drive is DRM-crippled for your safety" The Register. (2007-12-07)
"Western Digital's 'crippleware': Some lessons from history" The Register. Follow-up to original article. (2007-12-12)
Dysphemisms
Product design
GendBuntu
GendBuntu is a version of Ubuntu adapted for use by France's National Gendarmerie. The Gendarmerie has pioneered the use of open source software on servers and personal computers since 2005, when it adopted the OpenOffice.org office suite, making the OpenDocument .odf format its nationwide standard.
Project
The GendBuntu project derives from Microsoft's decision to end the development of Windows XP, and its inevitable replacement with Windows Vista or a later edition of Windows on government computers. This meant that the Gendarmerie would have incurred large expenses for staff retraining even if it had continued to use proprietary software.
One of the main aims of the GendBuntu project was for the organisation to become independent from proprietary software distributors and publishers, and to achieve significant savings in software costs (estimated to be around two million euros per year).
Around 90% of the 10,000 computers purchased by the Gendarmerie per year are bought without an operating system, and have GendBuntu installed by the Gendarmerie's technical department. This has become one of the major incentives of the scheme for staff; transferring to GendBuntu from a proprietary system means the staff member receives a new computer with a widescreen monitor.
The main goal is to migrate 80,000 computers by the end of 2014, a date which coincides with the end of support for Microsoft Windows XP. 35,000 GendBuntu desktops and laptops have been deployed as of November 2011.
A major technical problem encountered during the development of the project was keeping the existing computer system online while the update took place, not only in metropolitan France but also in overseas Departments and Regions. It was solved partly by redistributing dedicated servers or workstations on Local Area Networks (depending on the number of employees working on each LAN) and with the use of an ITIL-compliant qualifying process.
An extensive IT support team helped to implement the changes. This included the "core team" at Gendarmerie headquarters at Issy-les-Moulineaux, the "running team" of four located at the Gendarmerie data center at Rosny-sous-Bois, and about 1,200 local support staff.
Timeline
2004 - OpenOffice.org software replaces 20,000 copies of the Microsoft Office suite on Gendarmerie computers, with the transfer of all 90,000 office suites being completed in 2005.
2006 - Migration begins to the Mozilla Firefox web browser, on 70,000 workstations, and to the Mozilla Thunderbird email client. The Gendarmerie follows the example of the Ministry of Culture in this decision. Other software follows, such as GIMP.
2008 - The decision is made to migrate to Ubuntu on 90% of the Gendarmerie's computers by 2016. Ubuntu is installed on 5,000 workstations installed all over the country (one on each police station's LAN), primarily for training purposes.
2009 - Nagios monitoring begins
2010 - 20,000 computers ordered without a pre-installed operating system
January 2011 - Beginning of the large scale phasing in of GendBuntu 10.04 LTS
December 2011 - 25,000 computers deployed with GendBuntu 10.04 LTS
February 2013 - Upgrade from GendBuntu 10.04 LTS to GendBuntu 12.04 LTS. The local management and IT support teams phase in the upgrade so as not to disrupt the running of the police stations.
May 2013 - Target for end of the migration to GendBuntu 12.04 LTS - 35,000 computers upgraded.
December 2013 - 43,000 computers deployed with GendBuntu 12.04 LTS. TCO lowered by 40%.
February 2014 - Beginning of final stage of the migration of existing Windows XP computers to GendBuntu 12.04 LTS
June 2014 - Migration completed. 65,000 computers deployed with GendBuntu 12.04 LTS (total number of computers : 77,000)
March 2017 - Migration completed. 70,000 computers deployed with GendBuntu 14.04 LTS (total number of computers: 82,000)
May 2017 - Introduction of GendBuntu 16.04 LTS
June 2018 - 82% of PC workstations running GendBuntu 16.04 LTS
Early June 2019 - 90% of workstations running GendBuntu (approx. 77,000)
Spring 2019 - Migration to GendBuntu 18.04
See also
Canaima (operating system)
Inspur
LiMux
Nova (operating system)
Ubuntu Kylin
VIT, C.A.
References
External links
Presentation of Major Stéphane Dumond, French Gendarmerie Nationale, published December 26th, 2014
French police: we saved millions of euros by adopting Ubuntu, Ars Technica, March 12, 2009
Linux software projects
State-sponsored Linux distributions
Ubuntu derivatives
Law enforcement in France
Linux distributions
Internet Explorer
Internet Explorer (formerly Microsoft Internet Explorer and Windows Internet Explorer (from August 16, 1995 to March 30, 2021); commonly abbreviated IE or MSIE) is a discontinued series of graphical web browsers developed by Microsoft and included in the Microsoft Windows line of operating systems, starting in 1995. It was first released as part of the add-on package Plus! for Windows 95 that year. Later versions were available as free downloads or in service packs, and were included in the original equipment manufacturer (OEM) service releases of Windows 95 and later versions of Windows. New feature development for the browser was discontinued in 2016 in favor of the new browser Microsoft Edge. Since Internet Explorer is a Windows component and is included in long-term lifecycle versions of Windows such as Windows Server 2019, it will continue to receive security updates until at least 2029. Microsoft 365 ended support for Internet Explorer on August 17, 2021, and Microsoft Teams ended support for IE on November 30, 2020. Internet Explorer is set for discontinuation on June 15, 2022, after which the alternative will be Microsoft Edge with IE mode for legacy sites.
Internet Explorer was once the most widely used web browser, attaining a peak of about 95% usage share by 2003. This came after Microsoft used bundling to win the first browser war against Netscape, which was the dominant browser in the 1990s. Its usage share has since declined with the launch of Firefox (2004) and Google Chrome (2008), and with the growing popularity of mobile operating systems such as Android and iOS that do not support Internet Explorer.
Estimates for Internet Explorer's market share in 2022 are about 0.45% across all platforms, ranking 9th by StatCounter's numbers. On traditional PCs, the only platform on which it has ever had significant share, it is ranked 6th at 1.06%, after Opera. Microsoft Edge, IE's successor, first overtook Internet Explorer in terms of market share in November 2019.
Microsoft spent over US$100 million per year on Internet Explorer in the late 1990s, with over 1,000 people involved in the project by 1999.
Versions of Internet Explorer for other operating systems have also been produced, including an Xbox 360 version called Internet Explorer for Xbox and for platforms Microsoft no longer supports: Internet Explorer for Mac and Internet Explorer for UNIX (Solaris and HP-UX), and an embedded OEM version called Pocket Internet Explorer, later rebranded Internet Explorer Mobile made for Windows CE, Windows Phone, and, previously, based on Internet Explorer 7, for Windows Phone 7.
On March 17, 2015, Microsoft announced that Microsoft Edge would replace Internet Explorer as the default browser on certain versions of Windows 10. This makes Internet Explorer 11 the last release. Internet Explorer, however, remains on Windows 10 LTSC and Windows Server 2019, primarily for enterprise purposes. Since January 12, 2016, only Internet Explorer 11 has official support for consumers; extended support for Internet Explorer 10 ended on January 31, 2020. Support varies based on the operating system's technical capabilities and its support life cycle. On May 20, 2021, it was announced that full support for Internet Explorer would be discontinued on June 15, 2022, after which the alternative will be Microsoft Edge with IE mode for legacy sites. Microsoft is committed to supporting Internet Explorer that way until at least 2029, with a one-year notice before it is discontinued. The IE mode "uses the Trident MSHTML engine", i.e. the rendering code of Internet Explorer.
The browser has been scrutinized throughout its development for use of third-party technology (such as the source code of Spyglass Mosaic, used without royalty in early versions) and security and privacy vulnerabilities, and the United States and the European Union have alleged that integration of Internet Explorer with Windows has been to the detriment of fair browser competition.
History
Internet Explorer 1
The Internet Explorer project was started in the summer of 1994 by Thomas Reardon, who, according to a 2003 article in the Massachusetts Institute of Technology's Technology Review, used source code from Spyglass, Inc. Mosaic, an early commercial web browser with formal ties to the pioneering National Center for Supercomputing Applications (NCSA) Mosaic browser. In late 1994, Microsoft licensed Spyglass Mosaic for a quarterly fee plus a percentage of Microsoft's non-Windows revenues for the software. Although bearing a name similar to NCSA Mosaic, Spyglass Mosaic had used the NCSA Mosaic source code sparingly.
The first version, dubbed Microsoft Internet Explorer, was installed as part of the Internet Jumpstart Kit in the Microsoft Plus! pack for Windows 95. The Internet Explorer team began with about six people in early development. Internet Explorer 1.5 was released several months later for Windows NT and added support for basic table rendering. By including it free of charge with their operating system, they did not have to pay royalties to Spyglass Inc, resulting in a lawsuit and a US$8 million settlement on January 22, 1997.
Microsoft was sued by SyNet Inc. in 1996, for trademark infringement, claiming it owned the rights to the name "Internet Explorer". It ended with Microsoft paying $5 Million to settle the lawsuit.
Internet Explorer 2
Internet Explorer 2 is the second major version of Internet Explorer, released on November 22, 1995, for Windows 95 and Windows NT, and on April 23, 1996, for Apple Macintosh and Windows 3.1.
Internet Explorer 3
Internet Explorer 3 is the third major version of Internet Explorer, released on August 13, 1996 for Microsoft Windows and on January 8, 1997 for Apple Mac OS.
Internet Explorer 4
Internet Explorer 4 is the fourth major version of Internet Explorer, released on September 1997 for Microsoft Windows, Mac OS, Solaris, and HP-UX.
Internet Explorer 5
Internet Explorer 5 is the fifth major version of Internet Explorer, released on March 18, 1999 for Windows 3.1, Windows NT 3, Windows 95, Windows NT 4.0 SP3, Windows 98, Mac OS X (up to v5.2.3), Classic Mac OS (up to v5.1.7), Solaris and HP-UX (up to 5.01 SP1).
Internet Explorer 6
Internet Explorer 6 is the sixth major version of Internet Explorer, released on August 24, 2001 for Windows NT 4.0 SP6a, Windows 98, Windows 2000, Windows ME and as the default web browser for Windows XP and Windows Server 2003.
Internet Explorer 7
Internet Explorer 7 is the seventh major version of Internet Explorer, released on October 18, 2006 for Windows XP SP2, Windows Server 2003 SP1 and as the default web browser for Windows Vista, Windows Server 2008 and Windows Embedded POSReady 2009.
Internet Explorer 8
Internet Explorer 8 is the eighth major version of Internet Explorer, released on March 19, 2009 for Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008 and as the default web browser for Windows 7 (where the later default was Internet Explorer 11) and Windows Server 2008 R2.
Internet Explorer 9
Internet Explorer 9 is the ninth major version of Internet Explorer, released on March 14, 2011 for Windows 7, Windows Server 2008 R2, Windows Vista Service Pack 2 and Windows Server 2008 SP2 with the Platform Update.
Internet Explorer 10
Internet Explorer 10 is the tenth major version of Internet Explorer, released on October 26, 2012 for Windows 7, Windows Server 2008 R2 and as the default web browser for Windows 8 and Windows Server 2012.
Internet Explorer 11
Internet Explorer 11 is featured in Windows 8.1, which was released on October 17, 2013. It includes an incomplete mechanism for syncing tabs, a major update to its developer tools, enhanced scaling for high-DPI screens, HTML5 prerender and prefetch, hardware-accelerated JPEG decoding, closed captioning and HTML5 full screen, and it is the first version of Internet Explorer to support WebGL and Google's SPDY protocol (starting at v3). This version of IE has features dedicated to Windows 8.1, including cryptography (WebCrypto), adaptive bitrate streaming (Media Source Extensions) and Encrypted Media Extensions.
Internet Explorer 11 was made available for Windows 7 users to download on November 7, 2013, with Automatic Updates in the following weeks.
Internet Explorer 11's user agent string now identifies the agent as "Trident" (the underlying browser engine) instead of "MSIE". It also announces compatibility with Gecko (the browser engine of Firefox).
Microsoft claimed that Internet Explorer 11, running the WebKit SunSpider JavaScript Benchmark, was the fastest browser as of October 15, 2013.
Internet Explorer 11 was made available for Windows Server 2012 and Windows Embedded 8 Standard in the spring of 2019.
End of life
Microsoft Edge, officially unveiled on January 21, 2015, has replaced Internet Explorer as the default browser on Windows 10. Internet Explorer is still installed in Windows 10 to maintain compatibility with older websites and intranet sites that require ActiveX and other Microsoft legacy web technologies.
According to Microsoft, the development of new features for Internet Explorer has ceased. However, it will continue to be maintained as part of the support policy for the versions of Windows with which it is included.
On June 1, 2020, the Internet Archive removed the latest version of Internet Explorer from its list of supported browsers, citing its dated infrastructure that makes it hard to work with, following the suggestion of Microsoft Chief of Security Chris Jackson that users not use it as their default browser, but to use it only for websites that require it.
Since November 30, 2020, the web version of Microsoft Teams can no longer be accessed using Internet Explorer 11, followed by the remaining Microsoft 365 applications since August 17, 2021. The browser itself will continue to be supported for the lifecycle of the Windows version on which it is installed until June 15, 2022.
Microsoft recommends Internet Explorer users migrate to Edge and use the built-in "Internet Explorer mode" which enables support for legacy internet applications.
Features
Internet Explorer has been designed to view a broad range of web pages and provide certain features within the operating system, including Microsoft Update. During the height of the browser wars, Internet Explorer superseded Netscape only when it caught up technologically to support the progressive features of the time.
Standards support
Internet Explorer, using the MSHTML (Trident) browser engine:
Supports HTML 4.01, parts of HTML5, CSS Level 1, Level 2, and Level 3, XML 1.0, and DOM Level 1, with minor implementation gaps.
Fully supports XSLT 1.0 as well as an obsolete Microsoft dialect of XSLT often referred to as WD-xsl, which was loosely based on the December 1998 W3C Working Draft of XSL. Support for XSLT 2.0 lies in the future: semi-official Microsoft bloggers have indicated that development is underway, but no dates have been announced.
Almost full conformance to CSS 2.1 was added in the Internet Explorer 8 release. The MSHTML browser engine in Internet Explorer 9, released in 2011, scored highest of all major browsers in the official W3C conformance test suite for CSS 2.1.
Supports XHTML in Internet Explorer 9 (MSHTML Trident version 5.0). Prior versions can render XHTML documents authored with HTML compatibility principles and served with a text/html MIME-type.
Supports a subset of SVG in Internet Explorer 9 (MSHTML Trident version 5.0), excluding SMIL, SVG fonts and filters.
Internet Explorer uses DOCTYPE sniffing to choose between standards mode and a "quirks mode" in which it deliberately mimics nonstandard behaviors of old versions of MSIE for HTML and CSS rendering on screen (Internet Explorer always uses standards mode for printing). It also provides its own dialect of ECMAScript called JScript.
Internet Explorer was criticized by Tim Berners-Lee for its limited support for SVG, which is promoted by W3C.
Non-standard extensions
Internet Explorer has introduced an array of proprietary extensions to many of the standards, including HTML, CSS, and the DOM. This has resulted in several web pages that appear broken in standards-compliant web browsers and has introduced the need for a "quirks mode" to allow for rendering improper elements meant for Internet Explorer in these other browsers.
Internet Explorer has introduced several extensions to the DOM that have been adopted by other browsers.
These include the innerHTML property, which provides access to the HTML string within an element (part of IE 5 and standardized as part of HTML5 roughly 15 years later, after all other browsers had implemented it for compatibility); the XMLHttpRequest object, which allows the sending of HTTP requests and receiving of HTTP responses, and may be used to perform AJAX; and the designMode attribute of the contentDocument object, which enables rich text editing of HTML documents. Some of these functionalities were not possible until the introduction of the W3C DOM methods. Its Ruby character extension to HTML is also accepted as a module in W3C XHTML 1.1, though it is not found in all versions of W3C HTML.
Microsoft submitted several other features of IE for consideration by the W3C for standardization. These include the 'behavior' CSS property, which connects the HTML elements with JScript behaviors (known as HTML Components, HTC), HTML+TIME profile, which adds timing and media synchronization support to HTML documents (similar to the W3C XHTML+SMIL), and the VML vector graphics file format. However, all were rejected, at least in their original forms; VML was subsequently combined with PGML (proposed by Adobe and Sun), resulting in the W3C-approved SVG format, one of the few vector image formats being used on the web, which IE did not support until version 9.
Other non-standard behaviors include: support for vertical text, but in a syntax different from W3C CSS3 candidate recommendation, support for a variety of image effects and page transitions, which are not found in W3C CSS, support for obfuscated script code, in particular JScript.Encode, as well as support for embedding EOT fonts in web pages.
Favicon
Support for favicons was first added in Internet Explorer 5. Internet Explorer supports favicons in PNG, static GIF and native Windows icon formats. In Windows Vista and later, Internet Explorer can display native Windows icons that have embedded PNG files.
Usability and accessibility
Internet Explorer makes use of the accessibility framework provided in Windows. Internet Explorer is also a user interface for FTP, with operations similar to Windows Explorer. Internet Explorer 5 and 6 had a side bar for web searches, enabling jumps through pages from results listed in the side bar. Pop-up blocking and tabbed browsing were added respectively in Internet Explorer 6 and Internet Explorer 7. Tabbed browsing can also be added to older versions by installing MSN Search Toolbar or Yahoo Toolbar.
Cache
Internet Explorer caches visited content in the Temporary Internet Files folder to allow quicker access (or offline access) to previously visited pages. The content is indexed in a database file, known as Index.dat. Multiple Index.dat files exist which index different content—visited content, web feeds, visited URLs, cookies, etc.
Prior to IE7, clearing the cache used to clear the index but the files themselves were not reliably removed, posing a potential security and privacy risk. In IE7 and later, when the cache is cleared, the cache files are more reliably removed, and the index.dat file is overwritten with null bytes.
Caching has been improved in IE9.
Group Policy
Internet Explorer is fully configurable using Group Policy. Administrators of Windows Server domains (for domain-joined computers) or the local computer can apply and enforce a variety of settings on computers that affect the user interface (such as disabling menu items and individual configuration options), as well as underlying security features such as downloading of files, zone configuration, per-site settings, ActiveX control behavior and others. Policy settings can be configured for each user and for each machine. Internet Explorer also supports Integrated Windows Authentication.
Architecture
Internet Explorer uses a componentized architecture built on the Component Object Model (COM) technology. It consists of several major components, each of which is contained in a separate dynamic-link library (DLL) and exposes a set of COM programming interfaces hosted by the Internet Explorer main executable, iexplore.exe:
WinInet.dll is the protocol handler for HTTP, HTTPS, and FTP. It handles all network communication over these protocols.
URLMon.dll is responsible for MIME-type handling and download of web content, and provides a thread-safe wrapper around WinInet.dll and other protocol implementations.
MSHTML.dll houses the MSHTML (Trident) browser engine introduced in Internet Explorer 4, which is responsible for displaying the pages on-screen and handling the Document Object Model (DOM) of the web pages. MSHTML.dll parses the HTML/CSS file and creates the internal DOM tree representation of it. It also exposes a set of APIs for runtime inspection and modification of the DOM tree. The DOM tree is further processed by a browser engine which then renders the internal representation on screen.
IEFrame.dll contains the user interface and window of IE in Internet Explorer 7 and above.
ShDocVw.dll provides the navigation, local caching and history functionalities for the browser.
BrowseUI.dll is responsible for rendering the browser user interface such as menus and toolbars.
Internet Explorer does not include any native scripting functionality. Rather, MSHTML.dll exposes an API that permits a programmer to develop a scripting environment to be plugged in and to access the DOM tree. Internet Explorer 8 includes the bindings for the Active Scripting engine, which is a part of Microsoft Windows and allows any language implemented as an Active Scripting module to be used for client-side scripting. By default, only the JScript and VBScript modules are provided; third-party implementations like ScreamingMonkey (for ECMAScript 4 support) can also be used. Microsoft also makes available the Microsoft Silverlight runtime that allows CLI languages, including DLR-based dynamic languages like IronPython and IronRuby, to be used for client-side scripting.
Internet Explorer 8 introduced some major architectural changes, called loosely coupled IE (LCIE). LCIE separates the main window process (frame process) from the processes hosting the different web applications in different tabs (tab processes). A frame process can create multiple tab processes, each of which can be of a different integrity level, each tab process can host multiple web sites. The processes use asynchronous inter-process communication to synchronize themselves. Generally, there will be a single frame process for all web sites. In Windows Vista with protected mode turned on, however, opening privileged content (such as local HTML pages) will create a new tab process as it will not be constrained by protected mode.
Extensibility
Internet Explorer exposes a set of Component Object Model (COM) interfaces that allows add-ons to extend the functionality of the browser. Extensibility is divided into two types: Browser extensibility and content extensibility. Browser extensibility involves adding context menu entries, toolbars, menu items or Browser Helper Objects (BHO). BHOs are used to extend the feature set of the browser, whereas the other extensibility options are used to expose that feature in the user interface. Content extensibility adds support for non-native content formats. It allows Internet Explorer to handle new file formats and new protocols, e.g. WebM or SPDY. In addition, web pages can integrate widgets known as ActiveX controls which run on Windows only but have vast potentials to extend the content capabilities; Adobe Flash Player and Microsoft Silverlight are examples. Add-ons can be installed either locally, or directly by a web site.
Since malicious add-ons can compromise the security of a system, Internet Explorer implements several safeguards. Internet Explorer 6 with Service Pack 2 and later feature an Add-on Manager for enabling or disabling individual add-ons, complemented by a "No Add-Ons" mode. Starting with Windows Vista, Internet Explorer and its BHOs run with restricted privileges and are isolated from the rest of the system. Internet Explorer 9 introduced a new component – Add-on Performance Advisor. Add-on Performance Advisor shows a notification when one or more of installed add-ons exceed a pre-set performance threshold. The notification appears in the Notification Bar when the user launches the browser. Windows 8 and Windows RT introduce a Metro-style version of Internet Explorer that is entirely sandboxed and does not run add-ons at all. In addition, Windows RT cannot download or install ActiveX controls at all; although existing ones bundled with Windows RT still run in the traditional version of Internet Explorer.
Internet Explorer itself can be hosted by other applications via a set of COM interfaces. This can be used to embed the browser functionality inside a computer program or create Internet Explorer shells.
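As a hedged sketch of the automation side of this hosting, the snippet below drives Internet Explorer through its long-standing IWebBrowser2 COM automation interface from Python; it assumes a Windows machine with IE present and the third-party pywin32 package installed.

import time
import win32com.client  # from the pywin32 package

ie = win32com.client.Dispatch("InternetExplorer.Application")
ie.Visible = True                # show the browser window
ie.Navigate("https://www.example.com/")

while ie.Busy:                   # poll until the page has finished loading
    time.sleep(0.2)

print(ie.LocationURL)            # URL of the loaded document
ie.Quit()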
Security
Internet Explorer uses a zone-based security framework that groups sites based on certain conditions, including whether it is an Internet- or intranet-based site as well as a user-editable whitelist. Security restrictions are applied per zone; all the sites in a zone are subject to the restrictions.
Internet Explorer 6 SP2 onwards uses the Attachment Execution Service of Microsoft Windows to mark executable files downloaded from the Internet as being potentially unsafe. Accessing files marked as such will prompt the user to make an explicit trust decision to execute the file, as executables originating from the Internet can be potentially unsafe. This helps in preventing the accidental installation of malware.
Internet Explorer 7 introduced the phishing filter, which restricts access to phishing sites unless the user overrides the decision. With version 8, it also blocks access to sites known to host malware. Downloads are also checked to see if they are known to be malware-infected.
In Windows Vista, Internet Explorer by default runs in what is called Protected Mode, where the privileges of the browser itself are severely restricted—it cannot make any system-wide changes. One can optionally turn this mode off, but this is not recommended. This also effectively restricts the privileges of any add-ons. As a result, even if the browser or any add-on is compromised, the damage the security breach can cause is limited.
Patches and updates to the browser are released periodically and made available through the Windows Update service, as well as through Automatic Updates. Although security patches continue to be released for a range of platforms, most feature additions and security infrastructure improvements are only made available on operating systems that are in Microsoft's mainstream support phase.
On December 16, 2008, Trend Micro recommended users switch to rival browsers until an emergency patch was released to fix a potential security risk which "could allow outside users to take control of a person's computer and steal their passwords". Microsoft representatives countered this recommendation, claiming that "0.02% of internet sites" were affected by the flaw. A fix for the issue was released the following day with the Security Update for Internet Explorer KB960714, on Microsoft Windows Update.
In 2010, Germany's Federal Office for Information Security, known by its German initials, BSI, advised "temporary use of alternative browsers" because of a "critical security hole" in Microsoft's software that could allow hackers to remotely plant and run malicious code on Windows PCs.
In 2011, a report by Accuvant, funded by Google, rated the security (based on sandboxing) of Internet Explorer worse than Google Chrome but better than Mozilla Firefox.
A 2017 browser security white paper comparing Google Chrome, Microsoft Edge, and Internet Explorer 11 by X41 D-Sec in 2017 came to similar conclusions, also based on sandboxing and support of legacy web technologies.
Security vulnerabilities
Internet Explorer has been subjected to many security vulnerabilities and concerns such that the volume of criticism for IE is unusually high. Much of the spyware, adware, and computer viruses across the Internet are made possible by exploitable bugs and flaws in the security architecture of Internet Explorer, sometimes requiring nothing more than viewing of a malicious web page to install themselves. This is known as a "drive-by install". There are also attempts to trick the user into installing malicious software by misrepresenting the software's true purpose in the description section of an ActiveX security alert.
A number of security flaws affecting IE originated not in the browser itself, but in ActiveX-based add-ons used by it. Because the add-ons have the same privilege as IE, the flaws can be as critical as browser flaws. This has led to the ActiveX-based architecture being criticized for being fault-prone. By 2005, some experts maintained that the dangers of ActiveX had been overstated and there were safeguards in place. In 2006, new techniques using automated testing found more than a hundred vulnerabilities in standard Microsoft ActiveX components. Security features introduced in Internet Explorer 7 mitigated some of these vulnerabilities.
In 2008, Internet Explorer had a number of published security vulnerabilities. According to research done by security research firm Secunia, Microsoft did not respond as quickly as its competitors in fixing security holes and making patches available. The firm also reported 366 vulnerabilities in ActiveX controls, an increase from the previous year.
According to an October 2010 report in The Register, researcher Chris Evans had detected a known security vulnerability, dating back to 2008, which had not been fixed for at least six hundred days. Microsoft said that it had known about the vulnerability but considered it of exceptionally low severity, as the victim web site must be configured in a peculiar way for the attack to be feasible at all.
In December 2010, researchers were able to bypass the "Protected Mode" feature in Internet Explorer.
Vulnerability exploited in attacks on U.S. firms
In an advisory on January 14, 2010, Microsoft said that attackers targeting Google and other U.S. companies used software that exploits a security hole, which had already been patched, in Internet Explorer. The vulnerability affected Internet Explorer 6 on Windows XP and Server 2003, IE6 SP1 on Windows 2000 SP4, IE7 on Windows Vista, XP, Server 2008, and Server 2003, and IE8 on Windows 7, Vista, XP, Server 2003, and Server 2008 (R2).
The German government warned users against using Internet Explorer and recommended switching to an alternative web browser, due to the major security hole described above that was exploited in Internet Explorer. The Australian and French governments issued similar warnings a few days later.
Major vulnerability across versions
On April 26, 2014, Microsoft issued a security advisory relating to a use-after-free vulnerability in Microsoft Internet Explorer 6 through 11 that could allow remote code execution. On April 28, 2014, the United States Department of Homeland Security's United States Computer Emergency Readiness Team (US-CERT) released an advisory stating that the vulnerability could result in "the complete compromise" of an affected system. US-CERT recommended reviewing Microsoft's suggestions to mitigate an attack or using an alternate browser until the bug was fixed. The UK National Computer Emergency Response Team (CERT-UK) published an advisory announcing similar concerns and advising users to take the additional step of ensuring their antivirus software was up to date. Symantec, a cyber security firm, confirmed that "the vulnerability crashes Internet Explorer on Windows XP". The vulnerability was resolved on May 1, 2014, with a security update.
Market adoption and usage share
The adoption rate of Internet Explorer is closely tied to that of Microsoft Windows, as IE is the default web browser that ships with Windows. Following the integration of Internet Explorer 2.0 with Windows 95 OSR 1 in 1996, and especially after version 4.0's release in 1997, adoption accelerated greatly: from below 20% in 1996 to about 40% in 1998 and over 80% in 2000, making Microsoft the winner of the 'first browser war' against Netscape. Netscape Navigator was the dominant browser from 1995 until 1997, but rapidly lost share to IE from 1998 and eventually fell behind in 1999. The integration of IE with Windows led to a lawsuit by AOL, Netscape's owner, accusing Microsoft of unfair competition. AOL eventually won the infamous case, but by then it was too late, as Internet Explorer had already become the dominant browser.
Internet Explorer peaked during 2002 and 2003, with about 95% share. Its first notable competitor after beating Netscape was Firefox from Mozilla, which itself was an offshoot from Netscape.
In early 2005, Firefox 1.0 surpassed Internet Explorer 5, reaching 8 percent market share.
Approximate usage over time, based on various usage-share counters, averaged for the year overall, for the fourth quarter, or for the last month in the year, depending on the availability of references.
According to StatCounter, Internet Explorer's market share fell below 50% in September 2010. In May 2012, Google Chrome overtook Internet Explorer as the most used browser worldwide, also according to StatCounter. By September 2021, usage share was low globally, though somewhat higher in Africa, at 2.61%.
Industry adoption
Browser Helper Objects are also used by many search engine companies and third parties for creating add-ons that access their services, such as search engine toolbars. Because of the use of COM, it is possible to embed web-browsing functionality in third-party applications. Hence, there are several Internet Explorer shells, and several content-centric applications like RealPlayer also use Internet Explorer's web browsing module for viewing web pages within the applications.
Removal
While a major upgrade of Internet Explorer can be uninstalled in a traditional way if the user has saved the original application files for installation, the matter of uninstalling the version of the browser that has shipped with an operating system remains a controversial one.
The idea of removing a stock install of Internet Explorer from a Windows system was proposed during the United States v. Microsoft Corp. case. One of Microsoft's arguments during the trial was that removing Internet Explorer from Windows may result in system instability. Indeed, programs that depend on libraries installed by IE, including the Windows help and support system, fail to function without IE. Before Windows Vista, it was not possible to run Windows Update without IE because the service used ActiveX technology, which no other web browser supports.
Impersonation by malware
The popularity of Internet Explorer has led to the appearance of malware abusing its name. On January 28, 2011, a fake Internet Explorer browser calling itself "Internet Explorer – Emergency Mode" appeared. It closely resembles the real Internet Explorer but has fewer buttons and no search bar. If a user attempts to launch any other browser such as Google Chrome, Mozilla Firefox, Opera, Safari, or the real Internet Explorer, this browser will be loaded instead. It also displays a fake error message, claiming that the computer is infected with malware and Internet Explorer has entered "Emergency Mode". It blocks access to legitimate sites such as Google if the user tries to access them.
See also
Bing Bar
History of the web browser
List of web browsers
Month of bugs
Web 2.0
Windows Filtering Platform
Winsock
Notes
References
Further reading
External links
Internet Explorer Architecture
1995 software
FTP clients
History of the Internet
News aggregator software
Proprietary software
Windows components
Windows web browsers
Computer-related introductions in 1995
Products and services discontinued in 2015
Discontinued Microsoft software
Web browsers
Xbox One software
Xbox 360 software
Architecture Neutral Distribution Format
The Architecture Neutral Distribution Format (ANDF) in computing is a technology allowing common "shrink wrapped" binary application programs to be distributed for use on conformant Unix systems, translated to run on different underlying hardware platforms. ANDF was defined by the Open Software Foundation and was expected to be a "truly revolutionary technology that will significantly advance the cause of portability and open systems", but it was never widely adopted.
As with other OSF offerings, ANDF was specified through an open selection process. OSF issued a Request for Technology for architecture-neutral software distribution technologies in April 1989. Fifteen proposals were received, based on a variety of technical approaches, including obscured source code, compiler intermediate languages, and annotated executable code.
The technology of ANDF, chosen after an evaluation of competing approaches and implementations, was Ten15 Distribution Format, later renamed TenDRA Distribution Format, developed by the UK Defence Research Agency.
Adoption
ANDF was intended to benefit both software developers and users. Software developers could release a single binary for all platforms, and software users would have freedom to procure multiple vendors' hardware competitively. Programming language designers and implementors were also interested because standard installers would mean that only a single language front end would need to be developed.
OSF released several development 'snapshots' of ANDF, but it was never released commercially by OSF or any of its members. Various reasons have been proposed for this: for example, that having multiple installation systems would complicate software support.
After OSF stopped working on ANDF, development continued at other organizations.
See also
UNCOL
Java bytecode
Common Language Runtime
LLVM
Compilation
Software portability
WebAssembly
References
Bibliography
Stavros Macrakis, "The Structure of ANDF: Principles and Examples", Open Software Foundation, RI-ANDF-RP1-1, January, 1992.
Stavros Macrakis, "Protecting Source Code with ANDF", Open Software Foundation, November, 1992.
Open Systems Foundation. "OSF Architecture-Neutral Distribution Format Rationale", June 1991.
Open Systems Foundation. "A Brief Introduction to ANDF", January 1993. Available at Google Groups
Abstract machines
Cross-compilers
Executable file formats
Computer-related introductions in 1991
IMAGE (database)
IMAGE is a database management system (DBMS) developed by Hewlett Packard and included with the HP 3000 minicomputer. It was the primary reason for that platform's success in the market. It was also sometimes referred to as IMAGE/3000 in its initial release, and later versions were known as TurboIMAGE, and TurboIMAGE/XL after the PA-RISC migration.
IMAGE is based on the network database model, in contrast to most modern systems which are based on the relational database model. A SQL (Structured Query Language) front-end processor was later added, offering users the ability to run SQL queries on existing databases. This produced IMAGE/SQL, the current name.
Overview
IMAGE consists of several utilities along with an API (referred to as "intrinsics" by the HP documentation):
DBSCHEMA - Compiles a source schema layout. The source layout describes the tables (known as SETS) and columns (known as FIELDS).
DBUTIL - Creates and performs maintenance functions on the database.
QUERY - Generalized query tool for accessing any TurboIMAGE database.
The following is a sample list of the API calls used for application development. These calls are supported by HP's compilers: COBOL, FORTRAN, BASIC, SPL, PASCAL and C.
DBFIND - Locates a record.
DBGET - Retrieves a record.
DBPUT - Adds a record.
DBUPDATE - Updates a record.
DBINFO - Provides information on the structure of the database.
DBOPEN - Opens the database with a specified password to provide access rights to the application.
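As an illustration of how these intrinsics fit together, the sketch below opens a database, reads one record serially, and closes it again, roughly as an HP C program on MPE might. The database name, password, set name, mode numbers, and parameter order are illustrative assumptions modelled on HP's TurboIMAGE documentation, not copied from it; consult the reference manual for the authoritative signatures.

    /* Minimal sketch of calling TurboIMAGE intrinsics from HP C on MPE. */
    #pragma intrinsic DBOPEN, DBGET, DBCLOSE

    int main(void)
    {
        /* IMAGE reserves the first two characters of the base name. */
        char  base[]  = "  ORDERS;";
        char  pass[]  = "READER;";      /* password from the schema   */
        char  set[]   = "CUSTOMER;";    /* the SET (table) to read    */
        char  list[]  = "@;";           /* "@" selects all FIELDS     */
        short mode;
        short status[10];               /* status[0] == 0 on success  */
        char  buffer[256];              /* receives one record        */

        mode = 5;                       /* shared read access         */
        DBOPEN(base, pass, &mode, status);
        if (status[0] != 0) return 1;

        mode = 2;                       /* serial read, next entry    */
        DBGET(base, set, &mode, status, list, buffer, " ");

        mode = 1;                       /* close the entire base      */
        DBCLOSE(base, set, &mode, status);
        return status[0];
    }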
History
The significant highlights of IMAGE are:
Originally released as IMAGE/3000 around 1972 as a $10,000 option, but later included free as part of the MPE operating system.
Bundled with the HP Precision Architecture Computers as HP ALLBASE for both HP-UX and MPE/XL operating systems.
Several Fourth-generation programming language products (Powerhouse, Transact, Speedware, Protos) became available from third party vendors.
New capabilities were added including the increase of storage capacity and increase of several internal limitations such as the number of SETS allowed in a database. IMAGE/3000 was renamed TurboIMAGE due to these new capabilities.
HP provided a Third Party Interface (TPI) to DISC's OMNIDEX and Bradmark's SUPERDEX products.
HP announced the end of life for the HP 3000, which included TurboIMAGE.
Marxmeier released Eloquence which is schema and API compatible with TurboIMAGE and allows TurboIMAGE applications to run on Microsoft Windows and HP-UX.
Stromasys released an HP3000 emulator allowing TurboIMAGE applications to be run on commodity hardware.
External links
http://www.robelle.com/library/smugbook/image.html
HP Computer Museum 3000 Series II Documentation - 1976 Image manual PDF
http://www.hpl.hp.com/hpjournal/pdfs/IssuePDFs/1986-12.pdf - Hewlett-Packard Journal "Data Base Management for HP Precision Architecture Computers"
Proprietary database management systems
Portable Executable
The Portable Executable (PE) format is a file format for executables, object code, DLLs and others used in 32-bit and 64-bit versions of Windows operating systems. The PE format is a data structure that encapsulates the information necessary for the Windows OS loader to manage the wrapped executable code. This includes dynamic library references for linking, API export and import tables, resource management data and thread-local storage (TLS) data. On NT operating systems, the PE format is used for EXE, DLL, SYS (device driver), MUI and other file types. The Unified Extensible Firmware Interface (UEFI) specification states that PE is the standard executable format in EFI environments.
On Windows NT operating systems, PE currently supports the x86-32, x86-64 (AMD64/Intel 64), IA-64, ARM and ARM64 instruction set architectures (ISAs). Prior to Windows 2000, Windows NT (and thus PE) supported the MIPS, Alpha, and PowerPC ISAs. Because PE is used on Windows CE, it continues to support several variants of the MIPS, ARM (including Thumb), and SuperH ISAs.
Analogous formats to PE are ELF (used in Linux and most other versions of Unix) and Mach-O (used in macOS and iOS).
History
Microsoft migrated to the PE format from the 16-bit NE format with the introduction of the Windows NT 3.1 operating system. All later versions of Windows, including Windows 95/98/ME and the Win32s addition to Windows 3.1x, support the file structure. The format has retained limited legacy support to bridge the gap between DOS-based and NT systems. For example, PE/COFF headers still include a DOS executable program, which is by default a DOS stub that displays a message like "This program cannot be run in DOS mode" (or similar), though it can be a full-fledged DOS version of the program (a later notable case being the Windows 98 SE installer). This constitutes a form of fat binary. PE also continues to serve the changing Windows platform. Some extensions include the .NET PE format (see below), a 64-bit version called PE32+ (sometimes PE+), and a specification for Windows CE.
Technical details
Layout
A PE file consists of a number of headers and sections that tell the dynamic linker how to map the file into memory. An executable image consists of several different regions, each of which requires different memory protection, so the start of each section must be aligned to a page boundary. For instance, typically the .text section (which holds program code) is mapped as execute/read-only, and the .data section (holding global variables) is mapped as no-execute/read-write. However, to avoid wasting space, the different sections are not page-aligned on disk. Part of the job of the dynamic linker is to map each section to memory individually and assign the correct permissions to the resulting regions, according to the instructions found in the headers.
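The following C sketch walks that layout: the DOS header's e_lfanew field locates the "PE\0\0" signature, after which the file header, optional header, and section table follow in order. The structure and macro names come from the Windows SDK's winnt.h (via windows.h); the dump function itself is hypothetical.

    #include <windows.h>
    #include <stdio.h>

    /* Walk the section table of a PE image already read into memory.
       'base' points at the first byte of the file (the DOS header). */
    static void dump_sections(const BYTE *base)
    {
        const IMAGE_DOS_HEADER *dos = (const IMAGE_DOS_HEADER *)base;
        if (dos->e_magic != IMAGE_DOS_SIGNATURE)      /* "MZ" */
            return;

        /* e_lfanew holds the file offset of the "PE\0\0" signature. */
        const IMAGE_NT_HEADERS *nt =
            (const IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
        if (nt->Signature != IMAGE_NT_SIGNATURE)
            return;

        const IMAGE_SECTION_HEADER *sec = IMAGE_FIRST_SECTION(nt);
        for (WORD i = 0; i < nt->FileHeader.NumberOfSections; i++, sec++)
            printf("%-8.8s RVA=%08lx raw=%08lx flags=%08lx\n",
                   (const char *)sec->Name,
                   (unsigned long)sec->VirtualAddress,
                   (unsigned long)sec->PointerToRawData,
                   (unsigned long)sec->Characteristics);
    }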
Import table
One section of note is the import address table (IAT), which is used as a lookup table when the application is calling a function in a different module. Imports can be resolved both by ordinal and by name. Because a compiled program cannot know the memory location of the libraries it depends upon, an indirect jump is required whenever an API call is made. As the dynamic linker loads modules and joins them together, it writes actual addresses into the IAT slots, so that they point to the memory locations of the corresponding library functions. Though this adds an extra jump over the cost of an intra-module call, resulting in a performance penalty, it provides a key benefit: the number of memory pages that need to be copy-on-write changed by the loader is minimized, saving memory and disk I/O time. If the compiler knows ahead of time that a call will be inter-module (via a dllimport attribute), it can produce more optimized code that simply results in an indirect call opcode.
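In source code, the difference is a single attribute. The C sketch below uses hypothetical function names; the comments describe the commonly generated code shapes in simplified form.

    /* Hypothetical imports from some DLL. Without dllimport, the
       compiler emits a direct "call StubbedFn"; the linker resolves
       it to a stub that does "jmp [__imp_StubbedFn]" via the IAT.  */
    int __stdcall StubbedFn(int x);

    /* With dllimport, the compiler knows the call crosses a module
       boundary and can emit "call [__imp_DirectFn]" through the IAT
       slot directly, skipping the stub.                            */
    __declspec(dllimport) int __stdcall DirectFn(int x);

    int use(int v) { return StubbedFn(v) + DirectFn(v); }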
Relocations
PE files normally do not contain position-independent code. Instead they are compiled to a preferred base address, and all addresses emitted by the compiler/linker are fixed ahead of time. If a PE file cannot be loaded at its preferred address (because the address range is already taken by something else), the operating system will rebase it. This involves recalculating every absolute address and modifying the code to use the new values. The loader does this by comparing the preferred and actual load addresses and calculating a delta value, which is then added to each absolute address recorded in the base relocation list to produce its relocated value. The resulting code is now private to the process and no longer shareable, so many of the memory-saving benefits of DLLs are lost in this scenario. It also slows down loading of the module significantly. For this reason rebasing is to be avoided wherever possible, and the DLLs shipped by Microsoft have base addresses pre-computed so as not to overlap. In the no-rebase case PE therefore has the advantage of very efficient code, but in the presence of rebasing the memory usage hit can be expensive. This contrasts with ELF, which uses fully position-independent code and a global offset table, trading execution time for lower memory usage.
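The rebasing step itself is mechanical, as this hedged C sketch for a 32-bit image shows: the loader walks the relocation directory block by block and adds the load delta to every slot flagged IMAGE_REL_BASED_HIGHLOW. Structure and constant names are from winnt.h; the function is illustrative rather than actual loader code.

    #include <windows.h>

    /* Apply base relocations after an image was loaded at 'actual'
       instead of its 'preferred' base (32-bit fixups only). */
    static void relocate(BYTE *image, ULONG_PTR preferred, ULONG_PTR actual,
                         IMAGE_BASE_RELOCATION *reloc, DWORD dir_size)
    {
        ULONG_PTR delta = actual - preferred;
        BYTE *end = (BYTE *)reloc + dir_size;

        while ((BYTE *)reloc < end && reloc->SizeOfBlock) {
            DWORD count = (reloc->SizeOfBlock - sizeof(*reloc)) / sizeof(WORD);
            WORD *entry = (WORD *)(reloc + 1);
            for (DWORD i = 0; i < count; i++) {
                WORD type   = entry[i] >> 12;        /* high 4 bits  */
                WORD offset = entry[i] & 0x0FFF;     /* low 12 bits  */
                if (type == IMAGE_REL_BASED_HIGHLOW) /* 32-bit fixup */
                    *(DWORD *)(image + reloc->VirtualAddress + offset)
                        += (DWORD)delta;
            }
            /* Blocks are laid out back to back in the directory. */
            reloc = (IMAGE_BASE_RELOCATION *)((BYTE *)reloc + reloc->SizeOfBlock);
        }
    }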
.NET, metadata, and the PE format
In a .NET executable, the PE code section contains a stub that invokes the CLR virtual machine startup entry, _CorExeMain or _CorDllMain in mscoree.dll, much like it was in Visual Basic executables. The virtual machine then makes use of .NET metadata present, the root of which, IMAGE_COR20_HEADER (also called "CLR header") is pointed to by IMAGE_DIRECTORY_ENTRY_COMHEADER entry in the PE header's data directory. IMAGE_COR20_HEADER strongly resembles PE's optional header, essentially playing its role for the CLR loader.
The CLR-related data, including the root structure itself, is typically contained in the common code section, .text. It is composed of a few directories: metadata, embedded resources, strong names and a few for native-code interoperability. Metadata directory is a set of tables that list all the distinct .NET entities in the assembly, including types, methods, fields, constants, events, as well as references between them and to other assemblies.
Use on other operating systems
The PE format is also used by ReactOS, as ReactOS is intended to be binary-compatible with Windows. It has also historically been used by a number of other operating systems, including SkyOS and BeOS R3. However, both SkyOS and BeOS eventually moved to ELF.
As the Mono development platform intends to be binary compatible with the Microsoft .NET Framework, it uses the same PE format as the Microsoft implementation. The same goes for Microsoft's own cross-platform .NET Core.
On x86(-64) Unix-like operating systems, Windows binaries (in PE format) can be executed with Wine. The HX DOS Extender also uses the PE format for native DOS 32-bit binaries, plus it can, to some degree, execute existing Windows binaries in DOS, thus acting like an equivalent of Wine for DOS.
On IA-32 and x86-64 Linux one can also run Windows' DLLs under loadlibrary.
Mac OS X 10.5 has the ability to load and parse PE files, but is not binary compatible with Windows.
UEFI and EFI firmware use Portable Executable files as well as the Windows ABI x64 calling convention for applications.
See also
EXE
Executable and Linkable Format
Mach-O
a.out
Comparison of executable file formats
Executable compression
ar (Unix) since all COFF libraries use that same format
Application virtualization
References
External links
PE Format (latest online document)
Microsoft Portable Executable and Common Object File Format Specification (revision 8.1, OOXML format)
Microsoft Portable Executable and Common Object File Format Specification (revision 6.0, .doc format)
The original Portable Executable article by Matt Pietrek (MSDN Magazine, March 1994)
Part I. An In-Depth Look into the Win32 Portable Executable File Format by Matt Pietrek (MSDN Magazine, February 2002)
Part II. An In-Depth Look into the Win32 Portable Executable File Format by Matt Pietrek (MSDN Magazine, March 2002)
The .NET File Format by Daniel Pistelli
Ero Carrera's blog describing the PE header and how to walk through
PE Internals provides an easy way to learn the Portable Executable File Format
Executable file formats
Windows administration
SMSQ/E
SMSQ/E is a computer operating system originally developed in France by Tony Tebby, the designer of the original QDOS operating system for the Sinclair QL personal computer. It began life as SMSQ, a QDOS-compatible version of SMS2 intended for the Miracle Systems QXL emulator card for PCs. This was later developed into an extended version, SMSQ/E, for the Atari ST. It consists of a QDOS compatible SMS kernel, a rewritten SuperBASIC interpreter called SBasic, a complete set of SuperBASIC procedures and functions and a set of extended device drivers originally written for the QL emulator for the Atari ST.
It also integrates many extensions previously only available separately for the QL, like Toolkit II (quite essential SuperBASIC add-on), the Pointer Environment (the QL's mouse and windowing system) and the Hotkey System 2.
While SMSQ/E does not run on any unmodified QL, it runs on all of the more advanced QL compatible platforms, from the Miracle Systems (Super)GoldCard CPU plug-in cards to the Q60 motherboard.
In late 1995 a German author, Marcel Kilgus, acquired the SMSQ/E sources for adaptation to his QL emulator QPC, which from then on did not emulate any specific QL hardware anymore but employed specially adapted device drivers to achieve a tighter integration and faster emulation.
In 2000, version 2.94 was the first QL operating system that broke free of the bounds of the QL 8 colour screen, introducing GD2 (Graphic Device Interface Version 2), a QL compatible 16-bit high colour graphics sub-system.
Up to version 2.99 the system was developed exclusively by Tony Tebby and Marcel Kilgus. In 2002, Tebby released all of his source code (which does not include most QPC-specific parts), albeit under a license that does not qualify as open source under the Open Source Definition.
With this step Tony Tebby finally left the QL scene, but development by volunteers continues to this day.
In early 2013 the current source code was re-released under the BSD license.
Currently SMSQ/E consists of approximately 2000 68k assembler source files containing about 222,000 lines of code.
External links
A Brief History of SMSQ/E
The official SMSQ/E site Source Code, binaries and documentation
QPC: a software emulator for DOS/Windows that employs SMSQ/E
Q40/Q60: a 68040/68060 based motherboard for SMSQ/E
SMSQmulator - Java based virtual QL machine running SMSQ/E
QL/E The QL runtime Environment with SMSQ/E
The Distribution, 4.7 GB of QL related documents, software (incl. all SMSQ/E editions) and pictures
References
Discontinued operating systems
Atari operating systems
Atari ST software
Software using the BSD license
Sun Constellation System
Sun Constellation System is an open petascale computing environment introduced by Sun Microsystems in 2007.
Main hardware components
Sun Blade 6048 Modular System
Sun Blade X6275
Sun Blade X6270
Sun Blade 6000 System
Sun Datacenter Switch 3456
Sun Fire X4540
Sun Cooling Doors (5200,5600)
Software stack
OpenSolaris or Linux
Sun Grid Engine
Sun Studio Compiler Suite
Fortress (programming language)
Sun HPC ClusterTools (based on Open MPI)
Sun Ops Center
Services
Sun Datacenter Express Services
Production systems
Ranger at the Texas Advanced Computing Center (TACC) was the largest production Constellation system. It had 62,976 processor cores in 3,936 nodes and a peak performance of 580 TFLOPS, and was the seventh most powerful TOP500 supercomputer in the world at the time of its introduction.
After 5 years of service at TACC, it was dismantled and shipped to South Africa, Tanzania, and Botswana to help foster HPC development in Africa.
A number of smaller Constellation systems are deployed at other supercomputer centers, including the University of Oslo.
References
External links
Sun Constellation System at Sun.com
Sun Constellation System at SC07 (YouTube Video)
Sun Microsystems software
Sun Microsystems hardware
Petascale computers
Management features new to Windows Vista
Windows Vista contains a range of new technologies and features intended to help network administrators and power users better manage their systems. Notable changes include a complete replacement of both the Windows Setup and Windows startup processes, completely rewritten deployment mechanisms, new diagnostic and health-monitoring tools such as a memory diagnostic program, support for per-application Remote Desktop sessions, a completely new Task Scheduler, and a range of new Group Policy settings covering many of the features new to Windows Vista. The Subsystem for UNIX-based Applications, which provides a POSIX-compatible environment, is also introduced.
Setup
The setup process for Windows Vista has been completely rewritten and is now image-based instead of sector-based as in previous versions of Windows. The Windows Preinstallation Environment (WinPE) has been updated to host the entire setup process in a graphical environment (as opposed to the text-based environments of previous versions of Windows), which allows the use of input devices other than the keyboard throughout the entire setup process. The new interface resembles Windows Vista itself, with features such as ClearType fonts and Windows Aero visual effects. Prior to copying the setup image to disk, users can create, format, and graphically resize disk partitions. The new image-based setup also reduces the duration of the installation procedure compared with Windows XP; Microsoft estimates that Windows Vista can install in as little as 20 minutes despite being more than three times the size of its predecessor.
Windows XP only supported loading storage drivers from floppy diskettes during initialization of the setup process; Windows Vista supports loading drivers for SATA, SCSI, and RAID controllers from any external source in addition to floppy diskettes prior to its installation.
At the end of the setup process, Windows Vista can also automatically download and apply security and device-driver updates from Windows Update. Previous versions of Windows could only configure updates to be installed after the operating system installation.
System recovery
The new Windows Recovery Environment (WinRE) detects and repairs various operating system problems; it presents a set of options dedicated to diagnostics, including Startup Repair, System Restore, Backup and Restore, Windows Memory Diagnostic Tool, Command Prompt, and options specific to original equipment manufacturers. WinRE is accessible by pressing F8 during operating system boot or by booting from a Windows installation source such as optical media.
Startup Repair
Startup Repair (formerly System Recovery Troubleshooter Wizard) is a diagnostic feature designed to repair systems that cannot boot due to operating system corruption, incompatible drivers, or damaged hardware; it scans for corruption of operating system components such as Boot Configuration Data and the Windows Registry and also checks boot sectors, file system metadata, Master Boot Records, and partition tables for errors and whether the root cause for failure originated during an installation of Windows. Microsoft designed Startup Repair to repair over eighty percent of issues that users may experience. Windows Vista Service Pack 1 enhances Startup Repair to replace additional system files during the repair process that may be damaged or missing due to corruption.
Component Based Servicing
Package Manager, part of the Windows Vista servicing stack, replaces the previous Package Installer (Update.exe) and Update Installer (Hotfix.exe). Microsoft delivers updates for Windows Vista as files and resources only. Package Manager, Windows Update, and the Control Panel item to turn Windows features on and off, all use the Windows Vista servicing stack. Package Manager can also install updates to an offline Windows image, including updates, boot-critical device drivers, and language packs.
Windows Vista introduced Component-Based Servicing (CBS) as an architecture for installation and servicing.
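As a sketch of how Package Manager is driven from an elevated command prompt, using its /iu (install update/feature) and /uu (uninstall) switches with a standard optional feature name; treat the exact switch forms as version-specific:

    rem Turn the Telnet client feature on, then off again
    pkgmgr /iu:"TelnetClient"
    pkgmgr /uu:"TelnetClient"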
Deployment
The deployment of Windows Vista uses a hardware-independent image, the Windows Imaging Format (WIM). The image file contains the necessary bits of the operating system, and its contents are copied as is to the target system. Other system specific software, such as device drivers and other applications, are installed and configured afterwards. This reduces the time taken for installation of Windows Vista.
Corporations can author their own image files (using the WIM format) which might include all the applications that the organization wants to deploy. Also multiple images can be kept in a single image file, to target multiple scenarios. This ability is used by Microsoft to include all editions of Windows Vista on the same disc, and install the proper version based on the provided product key. In addition, initial configuration, such as locale settings, account names, etc. can be supplied in XML Answer Files to automate installation.
Microsoft provides a tool called ImageX to support creation of custom images, and edit images after they have been created. It can also be used to generate an image from a running installation, including all data and applications, for backup purposes. WIM images can also be controlled using the Windows System Image Manager, which can be used to edit images and to create XML Answer Files for unattended installations. Sysprep is also included as part of Windows Vista, and is HAL-independent.
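A typical ImageX round trip might look like the following sketch; the drive letters, paths, image name, and compression choice are illustrative:

    rem Capture a configured reference installation into a WIM image
    imagex /capture C: D:\images\custom.wim "Custom Vista" /compress fast

    rem Inspect the images stored in the file
    imagex /info D:\images\custom.wim

    rem Apply image number 1 to a freshly formatted target volume
    imagex /apply D:\images\custom.wim 1 C: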
Also included in Windows Vista is an improved version of the Files and Settings Transfer Wizard now known as Windows Easy Transfer which allows settings to be inherited from previous installations. User State Migration Tool allows migrating user accounts during large automated deployments.
ClickOnce is a deployment technology for "smart client" applications that enables self-updating Windows-based applications that can be installed and run with minimal user interaction, and in a fashion that does not require administrator access.
The ActiveX Installer Service is an optional component included with the Business, Enterprise and Ultimate editions that provides a method for network administrators in a domain to authorize the installation and upgrade of specific ActiveX controls while operating as a standard user. ActiveX components that have been listed in Group Policy can be installed without a User Account Control consent dialog being displayed.
Event logging and reporting
Windows Vista includes a number of self-diagnostic features which help identify various problems and, if possible, suggest corrective actions. The event logging subsystem in Windows Vista also has been completely overhauled and rewritten around XML to allow applications to more precisely log events. Event Viewer has also been rewritten to take advantage of these new features. There are a large number of different types of event logs that can be monitored including Administrative, Operational, Analytic, and Debug log types. For instance, selecting the Application Logs node in the Scope pane reveals numerous new subcategorized event logs, including many labeled as diagnostic logs. Event logs can now be configured to be automatically forwarded to other systems running Windows Vista or Windows Server 2008. Event logs can also be remotely viewed from other computers or multiple event logs can be centrally logged and managed from a single computer. Event logs can be filtered by one or more criteria, and custom views can be created for one or more events. Such categorizing and advanced filtering allows viewing logs related only to a certain subsystem or an issue with only a certain component. Events can also be directly associated with tasks, via the redesigned Event Viewer.
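Because events are logged as XML, filters are XPath expressions. As a small illustration (the query and count are arbitrary), the bundled wevtutil tool can retrieve the newest error-level events from the System log:

    rem Five newest Level=2 (Error) events from the System log, as text
    wevtutil qe System /q:"*[System[(Level=2)]]" /f:text /c:5 /rd:true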
Windows Error Reporting
Windows Error Reporting has been improved significantly in Windows Vista. Most importantly, a new set of public APIs has been created for reporting failures other than application crashes and hangs. Developers can create custom reports and customize the reporting user interface. The new APIs are documented in MSDN. The architecture of Windows Error Reporting has been revamped with a focus on reliability and user experience. WER can now report errors even when the process is in a very bad state, for example if the process has encountered stack exhaustion, PEB/TEB corruption, or heap corruption. In Windows XP, the process terminated silently, without generating an error report, in these conditions.
A new feature called Problem Reports and Solutions has also been added. It is a Control Panel applet that keeps a record of all system and application errors and issues, as well as presents probable solutions to problems.
Performance monitoring and diagnostics
The Performance Monitor includes several new performance counters and various tools for tuning and monitoring system performance and resources. It shows the activities of the CPU, disk I/O, network, memory and other resources in the "Resource View". It supports new graph types, the selection of multiple counters, the retrieval of counter values from a point on the graph, the saving of graphed counter values to a log file, and the option to have a line graph continuously scroll in the graph window instead of wrapping-around on itself.
Windows Task Manager presents more detailed system information and monitoring.
The perfmon /report command produces a comprehensive System Diagnostics Report.
A new feature called Resource Exhaustion Prevention can detect when memory is low and determine which applications are causing this. A memory leak diagnostic can provide information about applications that may have memory leaks.
The Reliability Monitor tracks applications and driver installations, along with the date of installation. It uses system reliability statistics from the Reliability Analysis Component (RAC) to present a graphical view of variation in system reliability and stability. (The RAC updates a computer's stability index daily.)
Windows Vista introduced a new help and support architecture and interface based on the Assistance Platform client and MAML; the new architecture is not backward-compatible with previous versions of Windows.
Remote management
Remote Desktop Protocol 6.0 incorporates support for application-level remoting, improved security (TLS 1.0), support for connections via an SSL gateway, improved remoting of devices, support for .NET remoting including support for remoting of Windows Presentation Foundation applications, WMI scripting, 32-bit color support, dual-monitor support, Network Level Authentication and more.
Remote Assistance, which helps in troubleshooting remotely, is now a full-fledged standalone application and does not use the Help and Support Center or Windows Messenger. It is now based on the Windows Desktop Sharing API. Two administrators can connect to a remote computer simultaneously. Also, a session automatically reconnects after restarting the computer. It also supports session pausing, built-in diagnostics, and XML-based logging. It has been reworked to use less bandwidth for low-speed connections. NAT traversals are also supported, so a session can be established even if the user is behind a NAT device. Remote Assistance is configurable using Group Policy and supports command-line switches so that custom shortcuts can be deployed.
Windows Vista also includes Windows Remote Management (WinRM), which is Microsoft's implementation of WS-Management standard which allows remote computers to be easily managed through a SOAP-based web service. WinRM allows obtaining data (including WMI and other management information) from local and remote computers running Windows XP and Windows Server 2003 (if WinRM is installed on those computers), Windows Server 2008 and all WS-Management protocol implementations on other operating systems. Using WinRM scripting objects along with compatible command-line tools (WinRM or WinRS), allows administrators to remotely run management scripts. A WinRM session is authenticated to minimize security risks.
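In practice this involves two steps, sketched below; the remote host name is a placeholder:

    rem On the machine to be managed: create a WinRM listener (elevated)
    winrm quickconfig

    rem From the administrator's console: run a command remotely
    winrs -r:server01 ipconfig /all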
System tools
New /B switch in CHKDSK for NTFS volumes which clears marked bad sectors on a volume and reevaluates them.
Windows System Assessment Tool, a built-in benchmarking tool, analyzes the different subsystems (graphics, memory, etc.), produces a Windows Experience Index (formerly Windows Performance Rating) and uses the results to allow for comparison to other Windows Vista systems, and for software optimizations. The optimizations can be made by both Windows and third-party software.
Windows Backup (code-named SafeDocs) allows automatic backup of files, recovery of specific files and folders, recovery of specific file types, or recovery of all files. With Windows Vista Business, Enterprise or Ultimate, the entire disk can be backed up to a Complete PC Backup and Restore image and restored when required. Complete PC Restore can be initiated from within Windows Vista, or from the Windows Vista installation disc in the event that Windows cannot start up normally from the hard disk. Backups are created in Virtual PC format and therefore can be mounted using Microsoft Virtual PC. The Backup and Restore Center gives users the ability to schedule periodic backups of files on their computer, as well as recovery from previous backups.
Windows Update has been revised, and now runs completely as a control panel application, not as a web application as in prior versions of Windows.
System Restore is now based on Shadow Copy technology instead of a file-based filter and is therefore more proactive at creating useful restore points. Restore points are now "volume-level", meaning that performing a restore will capture the state of an entire system at a point in time. These can also be restored using the Windows Recovery Environment when booting from the Windows Vista DVD, and an "undo" restore point can be created prior to a restore, in case a user wishes to return to the pre-restored state.
System File Checker is integrated with Windows Resource Protection which protects registry keys and folders too besides critical system files. Using sfc.exe, specific folder paths can be checked, including the Windows folder and the boot folder. Also, scans can be performed against an offline Windows installation folder to replace corrupt files, in case the Windows installation is not bootable. For performing offline scans, System File Checker must be run from another working installation of Windows Vista or a later operating system or from the Windows setup DVD which gives access to the Windows Recovery Environment.
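A few illustrative invocations (the file path and drive letters are placeholders):

    rem Scan and repair protected files on the running system
    sfc /scannow

    rem Check a single file without repairing it
    sfc /verifyfile=C:\Windows\System32\kernel32.dll

    rem Repair an offline installation whose boot and Windows
    rem folders are visible at D: (e.g. from another installation)
    sfc /scannow /offbootdir=D:\ /offwindir=D:\Windows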
System Configuration (MSConfig) allows configuring various switches for Windows Boot Manager and Boot Configuration Data. It can also launch a variety of tools, such as system information, network diagnostics etc. and enable or disable User Account Control.
Windows Installer 4.0 (MSI 4.0) includes support for features such as User Account Control, Restart Manager, and Multilingual User Interface.
Problem Reports and Solutions is a new control panel user interface for Windows Error Reporting which allows users to see previously sent problems and any solutions or additional information that is available.
Windows Task Manager has a new "Services" tab which gives access to the list of all Windows services, and offers the ability to start and stop any service as well as enable/disable the UAC file and registry virtualization of a process. Additionally, file properties, the full path and command line of started processes, and DEP status of processes can be viewed. It also allows creating a dump file which can be useful for debugging.
Disk Defragmenter can be configured to automatically defragment the hard drive on a regular basis. It features cancellable, low I/O priority, shadow copy-aware defragmentation. It can also defragment the NTFS Master File Table (MFT). The user interface has been simplified, with the color graph, progress indicator and other information such as file system, free space etc., being removed entirely. Chunks of data over 64MB in size will not be defragmented; Microsoft has stated that this is because there is no discernible performance benefit in doing so. The defragmenter is not based on an MMC snap-in. The command line utility defrag.exe offers more control over the defragmentation process. This utility can be used to defragment specific volumes and to just analyze volumes as the defragmenter would in Windows XP. Windows Vista Service Pack 1 adds back the ability to specify which volumes are to be defragmented to the GUI.
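Typical invocations of the command-line utility, sketched under the assumption that the Windows Vista switch set matches its built-in help:

    rem Analyze only: report fragmentation without defragmenting
    defrag C: -a -v

    rem Full defragmentation, including fragments larger than 64 MB
    defrag C: -w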
The Disk Management console has been improved to allow the creation and the resizing of disk volumes without any data loss. Partitions (volumes) can be resized before starting Windows Vista setup or after installation.
Group Policy settings let administrators set ACLs for the volume interface for disks, CD or DVD drives, tape and floppy disk drives, USB flash drives and other portable devices.
Management Console
Windows Vista includes Microsoft Management Console 3.0 (MMC), which introduced several enhancements, including support for writing .NET snap-ins using Windows Forms and running multiple tasks in parallel. In addition, snap-ins present their UI in a different thread than that in which the operation runs, thus keeping the snap-in responsive, even while doing a computationally intensive task.
The new MMC interface includes support for better graphics and as well as featuring a task pane that shows actions available for a snap-in, when it is selected. Task Scheduler and Windows Firewall are also thoroughly configurable through the management console.
Print Management enables centralized installation and management of all printers in an organization. It allows installation of network-attached printers to a group of clients simultaneously, and provides continually updated status information for the printers and print servers. It also supports finding printers needing operator attention by filtering the display of printers based on error conditions, such as out-of-paper, and can also send e-mail notifications or run scripts when a printer encounters the error condition.
Group Policy
Windows Vista introduces a new XML based file format, ADMX as a replacement for now legacy ADM files to manage Group Policy settings, as well as a new ADML file format for Administrative Templates. Windows Vista additionally introduces a Central Store for ADMX files; Group Policy tools use ADMX files in the Central Store, and these files are replicated to all domain controllers in a domain.
Windows Vista includes over 2,400 Group Policy settings, many of which relate to its new features and allow administrators to specify configuration for connected groups of computers, especially in a domain environment. Windows Vista supports Multiple Local Group Policy Objects, which allows setting different levels of Local Group Policy for individual users. ADMX files contain the configuration settings for individual Group Policy Objects (GPOs). For domain-based GPOs, the ADMX files can be stored centrally, and all computers on the domain retrieve them to configure themselves using the File Replication Service, which replicates files to configured systems from a remote location. The Group Policy service is no longer attached to the Winlogon service; it now runs as a service of its own. Group Policy event messages are now logged in the system event log, and Group Policy uses Network Location Awareness to refresh the policy configuration as soon as a network configuration change is detected.
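A minimal ADMX sketch is shown below: one machine policy backed by a registry value. The namespace URI and element names follow the published ADMX schema, but the policy, category, and registry names here are invented for illustration, and the display strings would live in a matching .adml file.

    <?xml version="1.0" encoding="utf-8"?>
    <policyDefinitions revision="1.0" schemaVersion="1.0"
        xmlns="http://schemas.microsoft.com/GroupPolicy/2006/07/PolicyDefinitions">
      <policyNamespaces>
        <target prefix="example" namespace="Example.Policies" />
        <using prefix="windows" namespace="Microsoft.Policies.Windows" />
      </policyNamespaces>
      <resources minRequiredRevision="1.0" />
      <policies>
        <policy name="SamplePolicy" class="Machine"
                displayName="$(string.SamplePolicy)"
                key="Software\Policies\Example" valueName="Sample">
          <parentCategory ref="windows:System" />
          <supportedOn ref="windows:SUPPORTED_WindowsVista" />
          <enabledValue><decimal value="1" /></enabledValue>
          <disabledValue><decimal value="0" /></disabledValue>
        </policy>
      </policies>
    </policyDefinitions>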
New categories for policy settings include power management, device installation, security settings, Internet Explorer settings, and printer settings, among others. Group Policy settings also need to be used to enable two-way communication filtering in the Windows Firewall, which by default enables only incoming data filtering. Printer settings can be used to install printers based on the network location: whenever the user connects to a different network, the available printers are updated for the new network, and Group Policy settings specify which printer is available on which network. Printer settings can also be used to allow standard users to install printers. Group Policy can also be used for specifying quality of service (QoS) settings. Device installation settings can be used to prevent users from connecting external storage devices, as a means to prevent data theft.
Windows Vista improves Folder Redirection by introducing the ability to independently redirect up to 10 user profile sub-folders to a network location. Up to Windows XP, only the Application Data, Desktop, My Documents, My Pictures, and Start Menu folders can be redirected to a file server. There is also a Management Console snap-in in Windows Vista to allow users to configure Folder Redirection for clients running Windows Vista, Windows XP, and Windows 2000.
Task Scheduler
The redesigned Task Scheduler is now based on Management Console and can be used to automate management and configuration tasks. It already has a number of preconfigured system-level tasks scheduled to run at various times. In addition to time-based triggers, Task Scheduler also supports calendar and event-based triggers, such as starting a task when a particular event is logged to the event log, or even only when multiple events have occurred. Also, several tasks that are triggered by the same event can be configured to run either simultaneously or in a pre-determined chained sequence of a series of actions, instead of having to create multiple scheduled tasks. Tasks can also be configured to run based on system status such as being idle for a pre-configured amount of time, on startup, logoff, or only during or for a specified time. Tasks can be triggered by an XPath expression for filtering events from the Windows Event Log. Tasks can also be delayed for a specified time after the triggering event has occurred, or repeat until some other event occurs. Actions that need to be done if a task fails can also be configured. There are several actions defined across various categories of applications and components. Task Scheduler keeps a history log of all execution details of all the tasks. Other features of Task Scheduler include:
Several new actions: A task can be scheduled to send an e-mail, show a message box, start an executable, or fire a COM handler when it is triggered.
Task Scheduler schema: Task Scheduler allows creating and managing tasks through XML-formatted documents (a minimal example appears after this list).
New security features, including using Credential Manager to store passwords for tasks on workgroup computers and using Active Directory for task credentials on domain-joined computers so that they cannot be retrieved easily. Also, scheduled tasks are executed in their own session, instead of the same session as system services or the current user.
Ability to wake up a machine remotely or using BIOS timer from sleep or hibernation to execute a scheduled task or run a previously scheduled task after a machine gets turned on.
Ability to attach tasks to events directly from the Event Viewer.
The Task Scheduler 2.0 API is now fully available to VBScript, JScript, PowerShell and other scripting languages.
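A minimal sketch of the XML task schema mentioned above; the trigger, settings, and action are illustrative, and such a file can be registered with schtasks /create /xml:

    <?xml version="1.0" encoding="UTF-16"?>
    <Task version="1.2"
          xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
      <Triggers>
        <CalendarTrigger>
          <StartBoundary>2007-01-01T03:00:00</StartBoundary>
          <ScheduleByDay><DaysInterval>1</DaysInterval></ScheduleByDay>
        </CalendarTrigger>
      </Triggers>
      <Settings>
        <StartWhenAvailable>true</StartWhenAvailable>
        <WakeToRun>true</WakeToRun>
      </Settings>
      <Actions>
        <Exec>
          <Command>%SystemRoot%\System32\defrag.exe</Command>
          <Arguments>C:</Arguments>
        </Exec>
      </Actions>
    </Task>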
Command-line tools
Several new command-line tools are included in Windows Vista. Several existing tools have also been updated and some of the tools from the Windows Resource Kit are now built-in into the operating system.
auditpol — Configure, create, back up and restore audit policies on any computer in the organization from the command line with verbose logging. Replaces auditusr.exe.
bcdedit — Create, delete, and reorder the bootloader (boot.ini is no longer used).
bitsadmin — BITS administration utility.
chglogon — Enable or disable session logins.
chgport — List or change COM port mappings for DOS application compatibility.
chgusr — Change install mode.
choice — Allows users to select one item from a list of choices and returns the index of the selected choice.
clip — Redirects output of command line tools to the Windows clipboard. This text output can then be pasted into other programs.
cmdkey — Creates, displays, and deletes stored user names and passwords from Credentials Manager.
diskpart — Expanded to support hard disks with the GUID Partition Table, USB media, and a new "shrink" command has been added which facilitates shrinking a pre-existing NTFS partition.
diskraid — Launches the Diskraid application.
dispdiag — Display diagnostics.
expand — Updated version of expand.exe that allows extracting .MSU files. MSU is a self-contained update format known as a 'Microsoft Update Standalone Installer'. MSU files use Intra-Package Delta (IPD) compression technology. IPD technology reduces the download size of an MSU file but still delivers a self-contained package that contains the updated files.
forfiles — Selects a file (or set of files) and executes a command on that file. This is helpful for batch jobs.
icacls — Updated version of cacls. Displays or modifies access control lists (ACLs) and DACLs of files and directories. It can also backup and restore them and set mandatory labels of an object for interaction with Mandatory Integrity Control.
iscsicli — Microsoft iSCSI Initiator.
mklink — create, modify and delete junctions, hard links, and symbolic links. Combined usage examples for this and several other tools appear after this list.
muiunattend — Multilingual User Interface unattend actions.
netcfg — WinPE network installer.
ocsetup — Windows optional component setup.
pkgmgr — Windows package manager.
pnpunattend — Audit system, unattended online driver install.
pnputil — Microsoft PnP Utility.
query — Query {Process|Session|TermServer|User}
quser — Display information about users logged on to the system.
robocopy — the next version of xcopy with additional features. Compared to the freely available TechNet Magazine version (XP026), the Windows Vista version additionally supports the /EFSRAW switch to copy encrypted files without decrypting them and the /SL switch to copy symbolic links instead of their targets.
rpcping — Pings a server using RPC.
setx — Creates or modifies environment variables in the user or system environment. Can set variables based on arguments, registry keys or file input.
sxstrace — WinSxS tracing utility.
takeown — Allows administrators to take ownership of a file for which access is denied.
timeout — Accepts a timeout parameter to wait for the specified time period (in seconds) or until any key is pressed. It also accepts a parameter to ignore the key press.
tracerpt — Microsoft TraceRpt.
waitfor — Sends, or waits for, a signal on a system. When /S is not specified, the signal will be broadcast to all the systems in a domain. If /S is specified, then the signal will be sent only to the specified system.
wbadmin — Backup command-line tool.
wecutil — Windows Event collector utility.
wevtutil — Windows Event command line utility.
where — Displays the location of files that match the search pattern. By default, the search is done along the current directory and in the paths specified by the PATH environment variable.
whoami — Can be used to get the user name and group information along with the respective security identifiers (SIDs), privileges, and logon identifier (logon ID) for the current user (access token) on the local system, i.e. the currently logged-on user. If no switch is specified, the tool displays the user name in NTLM format (domain\username).
winrm.cmd — Windows Remote Management command line utility.
winrs — Windows Remote Shell (WinRS) allows establishing secure Windows Remote Management sessions to multiple remote computers from a single console.
winsat — Windows System Assessment Tool command line.
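As a hedged illustration of several of the tools above in combination (the server, file, and directory names are placeholders):

    rem mklink: a directory symbolic link and a hard link
    mklink /D C:\Tools \\server01\tools
    mklink /H copy.txt original.txt

    rem robocopy: mirror a tree, copying symbolic links as links
    robocopy C:\src D:\dst /MIR /SL

    rem takeown + icacls: take ownership, then grant full control
    takeown /f locked.dll
    icacls locked.dll /grant Administrators:F

    rem whoami: show the current token's SIDs and privileges
    whoami /all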
Services for UNIX has been renamed Subsystem for UNIX-based Applications, and is included with the Enterprise and Ultimate editions of Windows Vista. Network File System (NFSv3) client support is also included. However, the utilities and SDK must be downloaded separately. The server components from the SFU product line (namely Server for NFS, User Name Mapping, Server for NIS, Password Synchronization, etc.) are not included.
Scripting
Windows Vista supports scripting and automation capabilities using Windows PowerShell, an object-oriented command-line shell, released by Microsoft, but not included with the operating system. Also, WMI classes expose all controllable features of the operating system, and can be accessed from scripting languages. 13 new WMI providers are included. In addition, DHTML coupled with scripting languages or even PowerShell can be used to create desktop gadgets; gadgets can also be created for configuration of various aspects of the system.
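For instance, a short PowerShell 1.0 pipeline against one of the standard WMI classes (the selected properties are arbitrary):

    # Query the operating system through WMI and pick a few properties
    Get-WmiObject Win32_OperatingSystem |
        Select-Object Caption, Version, LastBootUpTime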
References
Windows Vista
User interface
In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.
Generally, the goal of user interface design is to produce a user interface that makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result (i.e. maximum usability). This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the user.
User interfaces are composed of one or more layers, including a human-machine interface (HMI) that interfaces machines with physical input hardware such as keyboards, mice, or game pads, and output hardware such as computer monitors, speakers, and printers. A device that implements an HMI is called a human interface device (HID). Other terms for human–machine interfaces are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. Additional UI layers may interact with one or more human senses, including: tactile UI (touch), visual UI (sight), auditory UI (sound), olfactory UI (smell), equilibria UI (balance), and gustatory UI (taste).
Composite user interfaces (CUIs) are UIs that interact with two or more senses. The most common CUI is a graphical user interface (GUI), which is composed of a tactile UI and a visual UI capable of displaying graphics. When sound is added to a GUI, it becomes a multimedia user interface (MUI). There are three broad categories of CUI: standard, virtual and augmented. Standard CUI use standard human interface devices like keyboards, mice, and computer monitors. When the CUI blocks out the real world to create a virtual reality, the CUI is virtual and uses a virtual reality interface. When the CUI does not block out the real world and creates augmented reality, the CUI is augmented and uses an augmented reality interface. When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia. CUI may also be classified by how many senses they interact with as either an X-sense virtual reality interface or X-sense augmented reality interface, where X is the number of senses interfaced with. For example, a Smell-O-Vision is a 3-sense (3S) Standard CUI with visual display, sound and smells; when virtual reality interfaces interface with smells and touch it is said to be a 4-sense (4S) virtual reality interface; and when augmented reality interfaces interface with smells and touch it is said to be a 4-sense (4S) augmented reality interface.
Overview
The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical parts of the human–machine interface that we can see and touch.
In complex systems, the human–machine interface is typically computerized. The term human–computer interface refers to this kind of system. In the context of computing, the term typically extends as well to the software dedicated to controlling the physical elements used for human–computer interaction.
The engineering of human–machine interfaces is enhanced by considering ergonomics (human factors). The corresponding disciplines are human factors engineering (HFE) and usability engineering (UE), which is part of systems engineering.
Tools for incorporating human factors into interface design draw on knowledge of computer science, such as computer graphics, operating systems, and programming languages. Nowadays, the expression graphical user interface is used for the human–machine interface on computers, as nearly all of them now use graphics.
Multimodal interfaces allow users to interact using more than one modality of user input.
Terminology
There is a difference between a user interface and an operator interface or a human–machine interface (HMI).
The term "user interface" is often used in the context of (personal) computer systems and electronic devices.
Where a network of equipment or computers are interlinked through an MES (Manufacturing Execution System)-or Host to display information.
A human–machine interface (HMI) is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment/machine. An operator interface is the interface method by which multiple pieces of equipment that are linked by a host control system are accessed or controlled.
The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons (limited set of functions, optimized for ease of use) and the other for library personnel (wide set of functions, optimized for efficiency).
The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface (HMI). HMI is a modification of the original term MMI (man–machine interface). In practice, the abbreviation MMI is still frequently used, although some may claim that MMI stands for something different now. Another abbreviation is HCI, but it is more commonly used for human–computer interaction. Other terms used are operator interface console (OIC) and operator interface terminal (OIT). However it is abbreviated, the terms refer to the 'layer' that separates a human operating a machine from the machine itself. Without a clean and usable interface, humans would not be able to interact with information systems.
In science fiction, HMI is sometimes used to refer to what is better described as a direct neural interface. However, this latter usage is seeing increasing application in the real-life use of (medical) prostheses—the artificial extension that replaces a missing body part (e.g., cochlear implants).
In some circumstances, computers might observe the user and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces.
History
The history of user interfaces can be divided into the following phases according to the dominant type of user interface:
1945–1968: Batch interface
In the batch era, computing power was extremely scarce and expensive. User interfaces were rudimentary. Users had to accommodate computers rather than the other way around; user interfaces were considered overhead, and software was designed to keep the processor at maximum utilization with as little overhead as possible.
The input side of the user interfaces for batch machines was mainly punched cards or equivalent media like paper tape. The output side added line printers to these media. With the limited exception of the system operator's console, human beings did not interact with batch machines in real time at all.
Submitting a job to a batch machine involved, first, preparing a deck of punched cards describing a program and a dataset. Punching the program cards wasn't done on the computer itself, but on keypunches, specialized typewriter-like machines that were notoriously bulky, unforgiving, and prone to mechanical failure. The software interface was similarly unforgiving, with very strict syntaxes meant to be parsed by the smallest possible compilers and interpreters.
Once the cards were punched, one would drop them in a job queue and wait. Eventually, operators would feed the deck to the computer, perhaps mounting magnetic tapes to supply another dataset or helper software. The job would generate a printout, containing final results or an abort notice with an attached error log. Successful runs might also write a result on magnetic tape or generate some data cards to be used in a later computation.
The turnaround time for a single job often spanned entire days. If one was very lucky, it might be hours; there was no real-time response. But there were worse fates than the card queue; some computers required an even more tedious and error-prone process of toggling in programs in binary code using console switches. The very earliest machines had to be partly rewired to incorporate program logic into themselves, using devices known as plugboards.
Early batch systems gave the currently running job the entire computer; program decks and tapes had to include what we would now think of as operating system code to talk to I/O devices and do whatever other housekeeping was needed. Midway through the batch period, after 1957, various groups began to experiment with so-called "load-and-go" systems. These used a monitor program which was always resident on the computer. Programs could call the monitor for services. Another function of the monitor was to do better error checking on submitted jobs, catching errors earlier and more intelligently and generating more useful feedback to the users. Thus, monitors represented the first step towards both operating systems and explicitly designed user interfaces.
1969–present: Command-line user interface
Command-line interfaces (CLIs) evolved from batch monitors connected to the system console. Their interaction model was a series of request-response transactions, with requests expressed as textual commands in a specialized vocabulary. Latency was far lower than for batch systems, dropping from days or hours to seconds. Accordingly, command-line systems allowed the user to change his or her mind about later stages of the transaction in response to real-time or near-real-time feedback on earlier results. Software could be exploratory and interactive in ways not possible before. But these interfaces still placed a relatively heavy mnemonic load on the user, requiring a serious investment of effort and learning time to master.
The earliest command-line systems combined teleprinters with computers, adapting a mature technology that had proven effective for mediating the transfer of information over wires between human beings. Teleprinters had originally been invented as devices for automatic telegraph transmission and reception; they had a history going back to 1902 and had already become well-established in newsrooms and elsewhere by 1920. In reusing them, economy was certainly a consideration, but psychology and the Rule of Least Surprise mattered as well; teleprinters provided a point of interface with the system that was familiar to many engineers and users.
The widespread adoption of video-display terminals (VDTs) in the mid-1970s ushered in the second phase of command-line systems. These cut latency further, because characters could be thrown on the phosphor dots of a screen more quickly than a printer head or carriage could move. They helped quell conservative resistance to interactive programming by cutting ink and paper consumables out of the cost picture, and were to the first TV generation of the late 1950s and 60s even more iconic and comfortable than teleprinters had been to the computer pioneers of the 1940s.
Just as importantly, the existence of an accessible screen — a two-dimensional display of text that could be rapidly and reversibly modified — made it economical for software designers to deploy interfaces that could be described as visual rather than textual. The pioneering applications of this kind were computer games and text editors; close descendants of some of the earliest specimens, such as rogue(6), and vi(1), are still a live part of Unix tradition.
1985: SAA User Interface or Text-Based User Interface
In 1985, alongside the emergence of Microsoft Windows and other graphical user interfaces, IBM created the Systems Application Architecture (SAA) standard, which includes the Common User Access (CUA) derivative. CUA defined the conventions we know and use today in Windows, and most of the more recent DOS and Windows console applications follow that standard as well.
CUA specified that a pull-down menu system should be at the top of the screen and a status bar at the bottom, and that shortcut keys should stay the same for all common functionality (F2 to Open, for example, would work in all applications that followed the SAA standard). This greatly increased the speed at which users could learn an application, so it caught on quickly and became an industry standard.
1968–present: Graphical User Interface
1968 – Douglas Engelbart demonstrated NLS, a system which uses a mouse, pointers, hypertext, and multiple windows.
1970 – Researchers at Xerox Palo Alto Research Center (many from SRI) develop WIMP paradigm (Windows, Icons, Menus, Pointers)
1973 – Xerox Alto: commercial failure due to expense, poor user interface, and lack of programs
1979 – Steve Jobs and other Apple engineers visit Xerox PARC. Though Pirates of Silicon Valley dramatizes the events, Apple had already been working on developing a GUI, such as the Macintosh and Lisa projects, before the visit.
1981 – Xerox Star: focus on WYSIWYG. Commercial failure (25K sold) due to cost ($16K each), performance (minutes to save a file, a couple of hours to recover from a crash), and poor marketing
1982 – Rob Pike and others at Bell Labs designed Blit, which was released in 1984 by AT&T and Teletype as DMD 5620 terminal.
1984 – Apple Macintosh popularizes the GUI. Super Bowl commercial shown twice, was the most expensive commercial ever made at that time
1984 – MIT's X Window System: hardware-independent platform and networking protocol for developing GUIs on UNIX-like systems
1985 – Windows 1.0 – provided a GUI on top of MS-DOS. No overlapping windows (tiled instead).
1985 – Microsoft and IBM start work on OS/2 meant to eventually replace MS-DOS and Windows
1986 – Apple threatens to sue Digital Research because their GUI desktop looked too much like Apple's Mac.
1987 – Windows 2.0 – Overlapping and resizable windows, keyboard and mouse enhancements
1987 – Macintosh II: first full-color Mac
1988 – OS/2 1.10 Standard Edition (SE) has GUI written by Microsoft, looks a lot like Windows 2
Interface design
Primary methods used in the interface design include prototyping and simulation.
Typical human–machine interface design consists of the following stages: interaction specification, interface software specification and prototyping:
Common practices for interaction specification include user-centered design, persona, activity-oriented design, scenario-based design, and resiliency design.
Common practices for interface software specification include use cases and constraint enforcement by interaction protocols (intended to avoid use errors).
Common practices for prototyping are based on libraries of interface elements (controls, decoration, etc.).
Principles of quality
All great interfaces share eight qualities or characteristics:
Clarity: The interface avoids ambiguity by making everything clear through language, flow, hierarchy and metaphors for visual elements.
Concision: It's easy to make the interface clear by over-clarifying and labeling everything, but this leads to interface bloat, where there is just too much stuff on the screen at the same time. If too many things are on the screen, finding what you're looking for is difficult, and so the interface becomes tedious to use. The real challenge in making a great interface is to make it concise and clear at the same time.
Familiarity: Even if someone uses an interface for the first time, certain elements can still be familiar. Real-life metaphors can be used to communicate meaning.
Responsiveness: A good interface should not feel sluggish. This means that the interface should provide good feedback to the user about what's happening and whether the user's input is being successfully processed.
Consistency: Keeping your interface consistent across your application is important because it allows users to recognize usage patterns.
Aesthetics: While you don't need to make an interface attractive for it to do its job, making something look good will make the time your users spend using your application more enjoyable; and happier users can only be a good thing.
Efficiency: Time is money, and a great interface should make the user more productive through shortcuts and good design.
Forgiveness: A good interface should not punish users for their mistakes but should instead provide the means to remedy them.
Principle of least astonishment
The principle of least astonishment (POLA) is a general principle in the design of all kinds of interfaces. It is based on the idea that human beings can only pay full attention to one thing at one time, leading to the conclusion that novelty should be minimized.
Principle of habit formation
If an interface is used persistently, the user will unavoidably develop habits for using the interface. The designer's role can thus be characterized as ensuring the user forms good habits. If the designer is experienced with other interfaces, they will similarly develop habits, and often make unconscious assumptions regarding how the user will interact with the interface.
A model of design criteria: User Experience Honeycomb
Peter Morville designed the User Experience Honeycomb framework in 2004, while leading operations in user interface design. The framework was created to guide user interface design, and it acted as a guideline for many web development students for a decade.
Usable: Is the design of the system easy and simple to use? The application should feel familiar, and it should be easy to use.
Useful: Does the application fulfill a need? A business's product or service needs to be useful.
Desirable: Is the design of the application sleek and to the point? The aesthetics of the system should be attractive, and easy to translate.
Findable: Are users able to quickly find the information they're looking for? Information needs to be findable and simple to navigate. A user should never have to hunt for your product or information.
Accessible: Does the application support enlarged text without breaking the framework? An application should be accessible to those with disabilities.
Credible: Does the application exhibit trustworthy security and company details? An application should be transparent, secure, and honest.
Valuable: Does the end-user think it's valuable? If the other six criteria are met, the end-user will find value and trust in the application.
Types
Attentive user interfaces manage the user's attention, deciding when to interrupt the user, the kind of warnings to give, and the level of detail of the messages presented to the user.
Batch interfaces are non-interactive user interfaces, where the user specifies all the details of the batch job in advance to batch processing, and receives the output when all the processing is done. The computer does not prompt for further input after the processing has started.
Command line interfaces (CLIs) prompt the user to provide input by typing a command string with the computer keyboard and respond by outputting text to the computer monitor. Used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users.
Conversational interfaces enable users to command the computer with plain text English (e.g., via text messages, or chatbots) or voice commands, instead of graphic elements. These interfaces often emulate human-to-human conversations.
Conversational interface agents attempt to personify the computer interface in the form of an animated person, robot, or other character (such as Microsoft's Clippy the paperclip), and present interactions in a conversational form.
Crossing-based interfaces are graphical user interfaces in which the primary task consists in crossing boundaries instead of pointing.
Direct manipulation interface is the name of a general class of user interfaces that allow users to manipulate objects presented to them, using actions that correspond at least loosely to the physical world.
Gesture interfaces are graphical user interfaces which accept input in a form of hand gestures, or mouse gestures sketched with a computer mouse or a stylus.
Graphical user interfaces (GUI) accept input via devices such as a computer keyboard and mouse and provide articulated graphical output on the computer monitor. There are at least two different principles widely used in GUI design: Object-oriented user interfaces (OOUIs) and application-oriented interfaces.
Hardware interfaces are the physical, spatial interfaces found on products in the real world from toasters, to car dashboards, to airplane cockpits. They are generally a mixture of knobs, buttons, sliders, switches, and touchscreens.
Holographic user interfaces provide input to electronic or electro-mechanical devices by passing a finger through reproduced holographic images of what would otherwise be tactile controls of those devices, floating freely in the air, detected by a wave source and without tactile interaction.
Intelligent user interfaces are human–machine interfaces that aim to improve the efficiency, effectiveness, and naturalness of human–machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture).
Motion tracking interfaces monitor the user's body motions and translate them into commands; such interfaces are currently being developed by Apple.
Multi-screen interfaces employ multiple displays to provide a more flexible interaction. This is often employed in computer game interaction, both in commercial arcades and, more recently, in the handheld market.
Natural-language interfaces are used for search engines and on webpages. The user types in a question and waits for a response.
Non-command user interfaces, which observe the user to infer their needs and intentions, without requiring that they formulate explicit commands.
Object-oriented user interfaces (OOUI) are based on object-oriented programming metaphors, allowing users to manipulate simulated objects and their properties.
Permission-driven user interfaces show or conceal menu options or functions depending on the user's level of permissions. The system is intended to improve the user experience by removing items that are unavailable to the user. A user who sees functions that are unavailable for use may become frustrated. It also provides an enhancement to security by hiding functional items from unauthorized persons.
Reflexive user interfaces where the users control and redefine the entire system via the user interface alone, for instance to change its command verbs. Typically, this is only possible with very rich graphic user interfaces.
Search interface is how the search box of a site is displayed, as well as the visual representation of the search results.
Tangible user interfaces, which place a greater emphasis on touch and the physical environment or its elements.
Task-focused interfaces are user interfaces which address the information overload problem of the desktop metaphor by making tasks, not files, the primary unit of interaction.
Text-based user interfaces (TUIs) are user interfaces which interact via text. TUIs include command-line interfaces and text-based WIMP environments.
Touchscreens are displays that accept input by touch of fingers or a stylus. Used in a growing amount of mobile devices and many types of point of sale, industrial processes and machines, self-service machines, etc.
Touch user interfaces are graphical user interfaces using a touchpad or touchscreen display as a combined input and output device. They supplement or replace other forms of output with haptic feedback methods. Used in computerized simulators, etc.
Voice user interfaces, which accept input and provide output by generating voice prompts. The user input is made by pressing keys or buttons, or responding verbally to the interface.
Web-based user interfaces or web user interfaces (WUI) that accept input and provide output by generating web pages viewed by the user using a web browser program. Newer implementations utilize PHP, Java, JavaScript, AJAX, Apache Flex, .NET Framework, or similar technologies to provide real-time control in a separate program, eliminating the need to refresh a traditional HTML-based web browser. Administrative web interfaces for web-servers, servers and networked computers are often called control panels.
Zero-input interfaces get inputs from a set of sensors instead of querying the user with input dialogs.
Zooming user interfaces are graphical user interfaces in which information objects are represented at different levels of scale and detail, and where the user can change the scale of the viewed area in order to show more detail.
See also
Adaptive user interfaces
Brain–computer interface
Computer user satisfaction
Direct voice input
Distinguishable interfaces
Ergonomics and human factors – the study of designing objects to be better adapted to the shape of the human body
Flat design
Framebuffer
History of the GUI
Icon design
Information architecture – organizing, naming, and labelling information structures
Information visualization – the use of sensory representations of abstract data to reinforce cognition
Interaction design
Interaction technique
Interface (computer science)
Kinetic user interface
Knowledge visualization – the use of visual representations to transfer knowledge
Natural user interfaces
Ncurses, a semigraphical user interface.
Organic user interface
Post-WIMP
Tangible user interface
Unified Code for Units of Measure
Usability links
User assistance
User experience
User experience design
User interface design
Useware
Virtual artifact
Virtual user interface
References
External links
Conferences covering a wide area of user interface publications
Chapter 2. History: A brief History of user interfaces
User interface techniques
Virtual reality
Human communication
Human–machine interaction
Services menu
The Services menu (or simply Services) is a user interface element in a computer operating system. The services are programs that accept input from the user selection, process it, and optionally put the result back in the clipboard. The concept originated in the NeXTSTEP operating system, from which it was carried over into macOS and GNUstep. Similar features can be emulated on other operating systems.
macOS
Apple advertises the Services menu in connection with other features of its operating system. For example, it is possible to run a desktop search for a piece of text by selecting it with the mouse and using the Spotlight service. Other central services are Grab for taking screenshots, and the system spell checker. The concept resembles a GUI equivalent of a Unix pipe, allowing arbitrary data to be processed and passed between programs.
Services can be implemented as application services, which expose a portion of the functionality of an application to operate on selected data, usually without displaying an interface. In its developer documentation, Apple recommends that applications use services to provide features that are "generally useful", giving as an example a Usenet client providing ROT13 encryption as a service. Standalone services may also be created without a host application. Their simple, one-purpose nature and the fact that they don't require a GUI to be designed make writing standalone services a popular beginner's macOS programming project.
Since many applications install their entries without asking the user, the macOS Services menu tends to clog up with dozens of entries quickly. Most users will only ever use a small subset of the possible options, so cutting down and customizing the menu makes it both faster and more pleasant to use. Prior to Mac OS X Snow Leopard, third-party software was required to do this; since Snow Leopard, the Services menu can be customized from the Keyboard pane of System Preferences.
Emulation
From the point of view of software, the Services menu is a means of inter-process communication. To the user, it is an interface for executing actions on selected data. The emulation of the Services menu is based on the fact that there are several ways this can be achieved in an operating system. Even in macOS, there is an alternative system called the context menu handler, which is carried over from classic Mac OS.
In the X Window System, any data selected in an application is available to all other programs. Thus the Services menu can be an application which retrieves the current selection, and lets the user choose an action. Missing is the part about returning the processed data back to the originating application. Instead, the service can open a new window to show the results.
Alternatively, the service could replace the current cut buffer with the results of the operation, leaving the user only to perform a paste (since different toolkits implement copy/select and paste commands differently, and probably not under external program control).
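As an illustration, a minimal Python sketch of such an emulated service follows; it assumes the xclip utility is installed, and it upper-cases the current primary selection (a stand-in for any real processing) before placing the result in the clipboard for the user to paste:

    # Minimal sketch of a Services-menu-style action on X: read the primary
    # selection, transform it, and write the result to the clipboard.
    # Assumes the "xclip" command-line utility is installed.
    import subprocess

    # Fetch the current primary selection (the text the user last selected).
    selection = subprocess.run(
        ["xclip", "-o", "-selection", "primary"],
        capture_output=True, text=True, check=True,
    ).stdout

    # "Process" the selection; a real service might spell-check or search it.
    result = selection.upper()

    # Put the result in the clipboard so the user can paste it back.
    subprocess.run(
        ["xclip", "-i", "-selection", "clipboard"],
        input=result, text=True, check=True,
    )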
References
External links
Introduction to System Services at Apple Developer Connection
Services menu emulation for Linux/Unix with PyGTK
NeXT
MacOS user interface
GNUstep
Recommender system
A recommender system, or a recommendation system (sometimes replacing 'system' with a synonym such as platform or engine), is a subclass of information filtering system that seeks to predict the "rating" or "preference" a user would give to an item.
Recommender systems are used in a variety of areas, with commonly recognised examples taking the form of playlist generators for video and music services, product recommenders for online stores, or content recommenders for social media platforms and open web content recommenders. These systems can operate using a single input, like music, or multiple inputs within and across platforms like news, books, and search queries. There are also popular recommender systems for specific topics like restaurants and online dating. Recommender systems have also been developed to explore research articles and experts, collaborators, and financial services.
Overview
Recommender systems usually make use of either or both collaborative filtering and content-based filtering (also known as the personality-based approach), as well as other systems such as knowledge-based systems. Collaborative filtering approaches build a model from a user's past behavior (items previously purchased or selected and/or numerical ratings given to those items) as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in. Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item in order to recommend additional items with similar properties.
We can demonstrate the differences between collaborative and content-based filtering by comparing two early music recommender systems – Last.fm and Pandora Radio.
Last.fm creates a "station" of recommended songs by observing what bands and individual tracks the user has listened to on a regular basis and comparing those against the listening behavior of other users. Last.fm will play tracks that do not appear in the user's library, but are often played by other users with similar interests. As this approach leverages the behavior of users, it is an example of a collaborative filtering technique.
Pandora uses the properties of a song or artist (a subset of the 400 attributes provided by the Music Genome Project) to seed a "station" that plays music with similar properties. User feedback is used to refine the station's results, deemphasizing certain attributes when a user "dislikes" a particular song and emphasizing other attributes when a user "likes" a song. This is an example of a content-based approach.
Each type of system has its strengths and weaknesses. In the above example, Last.fm requires a large amount of information about a user to make accurate recommendations. This is an example of the cold start problem, and is common in collaborative filtering systems. Whereas Pandora needs very little information to start, it is far more limited in scope (for example, it can only make recommendations that are similar to the original seed).
Recommender systems are a useful alternative to search algorithms since they help users discover items they might not have found otherwise. Of note, recommender systems are often implemented using search engines indexing non-traditional data.
Recommender systems were first mentioned in a technical report as a "digital bookshelf" in 1990 by Jussi Karlgren at Columbia University, and implemented at scale and worked through in technical reports and publications from 1994 onwards by Jussi Karlgren, then at SICS, and by research groups led by Pattie Maes at MIT, Will Hill at Bellcore, and Paul Resnick, also at MIT, whose work with GroupLens was awarded the 2010 ACM Software Systems Award.
Montaner provided the first overview of recommender systems from an intelligent agent perspective. Adomavicius provided a new, alternate overview of recommender systems. Herlocker provides an additional overview of evaluation techniques for recommender systems, and Beel et al. discussed the problems of offline evaluations. Beel et al. have also provided literature surveys on available research paper recommender systems and existing challenges.
Recommender systems have been the focus of several granted patents.
Approaches
Collaborative filtering
One approach to the design of recommender systems that has wide use is collaborative filtering. Collaborative filtering is based on the assumption that people who agreed in the past will agree in the future, and that they will like similar kinds of items as they liked in the past. The system generates recommendations using only information about rating profiles for different users or items. By locating peer users/items with a rating history similar to the current user or item, they generate recommendations using this neighborhood. Collaborative filtering methods are classified as memory-based and model-based. A well-known example of memory-based approaches is the user-based algorithm, while that of model-based approaches is the Kernel-Mapping Recommender.
A key advantage of the collaborative filtering approach is that it does not rely on machine analyzable content and therefore it is capable of accurately recommending complex items such as movies without requiring an "understanding" of the item itself. Many algorithms have been used in measuring user similarity or item similarity in recommender systems, for example the k-nearest neighbor (k-NN) approach and the Pearson correlation, as first implemented by Allen.
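As a minimal sketch of the memory-based, user-based approach (the ratings below are hypothetical), the Pearson correlation between two users can be computed over their co-rated items:

    # Minimal sketch: user-user similarity via Pearson correlation
    # computed over co-rated items. The rating data are hypothetical.
    from math import sqrt

    def pearson(ratings_a, ratings_b):
        common = set(ratings_a) & set(ratings_b)  # items both users rated
        if len(common) < 2:
            return 0.0
        mean_a = sum(ratings_a[i] for i in common) / len(common)
        mean_b = sum(ratings_b[i] for i in common) / len(common)
        cov = sum((ratings_a[i] - mean_a) * (ratings_b[i] - mean_b)
                  for i in common)
        var_a = sum((ratings_a[i] - mean_a) ** 2 for i in common)
        var_b = sum((ratings_b[i] - mean_b) ** 2 for i in common)
        if var_a == 0 or var_b == 0:
            return 0.0
        return cov / sqrt(var_a * var_b)

    alice = {"film1": 5, "film2": 3, "film3": 4}
    bob = {"film1": 4, "film2": 2, "film4": 5}
    print(pearson(alice, bob))  # computed over film1 and film2

Neighbors with the highest similarity then contribute, weighted by that similarity, to the predicted rating for an unseen item.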
When building a model from a user's behavior, a distinction is often made between explicit and implicit forms of data collection.
Examples of explicit data collection include the following:
Asking a user to rate an item on a sliding scale.
Asking a user to search.
Asking a user to rank a collection of items from favorite to least favorite.
Presenting two items to a user and asking him/her to choose the better of the two.
Asking a user to create a list of items that he/she likes (see Rocchio classification or other similar techniques).
Examples of implicit data collection include the following:
Observing the items that a user views in an online store.
Analyzing item/user viewing times.
Keeping a record of the items that a user purchases online.
Obtaining a list of items that a user has listened to or watched on his/her computer.
Analyzing the user's social network and discovering similar likes and dislikes.
Collaborative filtering approaches often suffer from three problems: cold start, scalability, and sparsity.
Cold start: For a new user or item, there isn't enough data to make accurate recommendations. One commonly implemented solution to this problem is the multi-armed bandit algorithm (a minimal sketch follows this list).
Scalability: There are millions of users and products in many of the environments in which these systems make recommendations. Thus, a large amount of computation power is often necessary to calculate recommendations.
Sparsity: The number of items sold on major e-commerce sites is extremely large. The most active users will only have rated a small subset of the overall database. Thus, even the most popular items have very few ratings.
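The following epsilon-greedy bandit sketch (illustrative only, with hypothetical items and rewards) shows the idea behind the bandit approach to cold start: mostly recommend the item with the best observed reward so far, but keep exploring the rest:

    # Minimal epsilon-greedy multi-armed bandit sketch for cold start:
    # occasionally explore a random item, otherwise exploit the best so far.
    import random

    items = ["a", "b", "c"]
    counts = {i: 0 for i in items}     # times each item was recommended
    rewards = {i: 0.0 for i in items}  # cumulative reward (e.g. clicks)
    epsilon = 0.1                      # exploration rate

    def choose():
        if random.random() < epsilon:
            return random.choice(items)  # explore
        # Exploit: untried items get +inf so each is tried at least once.
        return max(items, key=lambda i: rewards[i] / counts[i]
                   if counts[i] else float("inf"))

    def update(item, reward):
        counts[item] += 1
        rewards[item] += reward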
One of the most famous examples of collaborative filtering is item-to-item collaborative filtering (people who buy x also buy y), an algorithm popularized by Amazon.com's recommender system.
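A minimal sketch of this idea (with hypothetical purchase data) counts how often pairs of items co-occur in users' baskets and recommends the most frequent co-purchases:

    # Minimal item-to-item sketch: "people who buy x also buy y",
    # based on co-occurrence counts over hypothetical baskets.
    from collections import Counter
    from itertools import combinations

    baskets = [{"x", "y", "z"}, {"x", "y"}, {"y", "z"}]

    co_counts = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(basket), 2):
            co_counts[(a, b)] += 1
            co_counts[(b, a)] += 1

    def also_bought(item, k=2):
        pairs = [(other, n) for (i, other), n in co_counts.items()
                 if i == item]
        return [other for other, _ in sorted(pairs, key=lambda p: -p[1])][:k]

    print(also_bought("x"))  # ['y', 'z']

Production systems precompute the item-item similarity table offline, which is what lets the approach scale to very large catalogs.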
Many social networks originally used collaborative filtering to recommend new friends, groups, and other social connections by examining the network of connections between a user and their friends. Collaborative filtering is still used as part of hybrid systems.
Content-based filtering
Another common approach when designing recommender systems is content-based filtering. Content-based filtering methods are based on a description of the item and a profile of the user's preferences. These methods are best suited to situations where there is known data on an item (name, location, description, etc.), but not on the user. Content-based recommenders treat recommendation as a user-specific classification problem and learn a classifier for the user's likes and dislikes based on an item's features.
In this system, keywords are used to describe the items, and a user profile is built to indicate the type of item this user likes. In other words, these algorithms try to recommend items similar to those that a user liked in the past or is examining in the present. It does not rely on a user sign-in mechanism to generate this often temporary profile. In particular, various candidate items are compared with items previously rated by the user, and the best-matching items are recommended. This approach has its roots in information retrieval and information filtering research.
To create a user profile, the system mostly focuses on two types of information:
1. A model of the user's preference.
2. A history of the user's interaction with the recommender system.
Basically, these methods use an item profile (i.e., a set of discrete attributes and features) characterizing the item within the system. To abstract the features of the items in the system, an item presentation algorithm is applied. A widely used algorithm is the tf–idf representation (also called vector space representation). The system creates a content-based profile of users based on a weighted vector of item features. The weights denote the importance of each feature to the user and can be computed from individually rated content vectors using a variety of techniques. Simple approaches use the average values of the rated item vector while other sophisticated methods use machine learning techniques such as Bayesian Classifiers, cluster analysis, decision trees, and artificial neural networks in order to estimate the probability that the user is going to like the item.
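A minimal content-based sketch using the scikit-learn library (the item names and descriptions are hypothetical): items become tf–idf vectors, the user profile is the average vector of liked items, and candidates are ranked by cosine similarity:

    # Minimal content-based filtering sketch with tf-idf item vectors.
    # Assumes scikit-learn is installed; item texts are hypothetical.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    items = {
        "item1": "space adventure science fiction",
        "item2": "romantic comedy in paris",
        "item3": "alien invasion science fiction thriller",
    }
    liked = ["item1"]  # items the user rated positively

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(list(items.values()))
    index = {name: row for row, name in enumerate(items)}

    # User profile: average tf-idf vector of the liked items.
    profile = np.asarray(matrix[[index[i] for i in liked]].mean(axis=0))

    # Rank unseen items by cosine similarity to the profile.
    scores = cosine_similarity(profile, matrix).ravel()
    ranked = sorted((n for n in items if n not in liked),
                    key=lambda n: -scores[index[n]])
    print(ranked)  # item3 (shared "science fiction" terms) ranks first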
A key issue with content-based filtering is whether the system can learn user preferences from users' actions regarding one content source and use them across other content types. When the system is limited to recommending content of the same type as the user is already using, the value from the recommendation system is significantly less than when other content types from other services can be recommended. For example, recommending news articles based on news browsing is useful. Still, it would be much more useful when music, videos, products, discussions, etc., from different services, can be recommended based on news browsing. To overcome this, most content-based recommender systems now use some form of the hybrid system.
Content-based recommender systems can also include opinion-based recommender systems. In some cases, users are allowed to leave text reviews or feedback on the items. These user-generated texts are implicit data for the recommender system because they are potentially rich resources of both features/aspects of the item and of users' evaluation of, or sentiment toward, the item. Features extracted from user-generated reviews are improved meta-data for items, because they reflect aspects of the item the way meta-data does while concentrating on the aspects that users actually care about. Sentiments extracted from the reviews can be seen as users' rating scores on the corresponding features. Popular approaches of opinion-based recommender systems utilize various techniques including text mining, information retrieval, sentiment analysis (see also Multimodal sentiment analysis) and deep learning.
Session-based recommender systems
These recommender systems use the interactions of a user within a session to generate recommendations. Session-based recommender systems are used at YouTube and Amazon. They are particularly useful when the history (such as past clicks and purchases) of a user is not available or not relevant in the current user session. Domains where session-based recommendations are particularly relevant include video, e-commerce, travel, music and more. Most instances of session-based recommender systems rely on the sequence of recent interactions within a session without requiring any additional details (historical, demographic) of the user. Techniques for session-based recommendations are mainly based on generative sequential models such as recurrent neural networks, Transformers, and other deep learning-based approaches.
Reinforcement learning for recommender systems
The recommendation problem can be seen as a special instance of a reinforcement learning problem whereby the user is the environment upon which the agent, the recommendation system, acts in order to receive a reward, for instance, a click or engagement by the user. One aspect of reinforcement learning that is of particular use in the area of recommender systems is the fact that the models or policies can be learned by providing a reward to the recommendation agent. In contrast to traditional learning techniques, which rely on less flexible supervised learning approaches, reinforcement learning recommendation techniques make it possible to train models that can be optimized directly on metrics of engagement and user interest.
Multi-criteria recommender systems
Multi-criteria recommender systems (MCRS) can be defined as recommender systems that incorporate preference information upon multiple criteria. Instead of developing recommendation techniques based on a single criterion value, the overall preference of user u for the item i, these systems try to predict a rating for unexplored items of u by exploiting preference information on multiple criteria that affect this overall preference value. Several researchers approach MCRS as a multi-criteria decision making (MCDM) problem, and apply MCDM methods and techniques to implement MCRS systems. See this chapter for an extended introduction.
Risk-aware recommender systems
The majority of existing approaches to recommender systems focus on recommending the most relevant content to users using contextual information, yet do not take into account the risk of disturbing the user with unwanted notifications. It is important to consider the risk of upsetting the user by pushing recommendations in certain circumstances, for instance, during a professional meeting, early morning, or late at night. Therefore, the performance of the recommender system depends in part on the degree to which it has incorporated the risk into the recommendation process. One option to manage this issue is DRARS, a system which models the context-aware recommendation as a bandit problem. This system combines a content-based technique and a contextual bandit algorithm.
Mobile recommender systems
Mobile recommender systems make use of internet-accessing smartphones to offer personalized, context-sensitive recommendations. This is a particularly difficult area of research, as mobile data is more complex than data that recommender systems often have to deal with. It is heterogeneous and noisy, exhibits spatial and temporal auto-correlation, and has validation and generality problems.
There are three factors that could affect the mobile recommender systems and the accuracy of prediction results: the context, the recommendation method and privacy. Additionally, mobile recommender systems suffer from a transplantation problem – recommendations may not apply in all regions (for instance, it would be unwise to recommend a recipe in an area where all of the ingredients may not be available).
One example of a mobile recommender system are the approaches taken by companies such as Uber and Lyft to generate driving routes for taxi drivers in a city. This system uses GPS data of the routes that taxi drivers take while working, which includes location (latitude and longitude), time stamps, and operational status (with or without passengers). It uses this data to recommend a list of pickup points along a route, with the goal of optimizing occupancy times and profits.
Hybrid recommender systems
Most recommender systems now use a hybrid approach, combining collaborative filtering, content-based filtering, and other approaches. There is no reason why several different techniques of the same type could not be hybridized. Hybrid approaches can be implemented in several ways: by making content-based and collaborative-based predictions separately and then combining them; by adding content-based capabilities to a collaborative-based approach (and vice versa); or by unifying the approaches into one model. Several studies that empirically compared the performance of hybrid methods with pure collaborative and content-based methods demonstrated that hybrid methods can provide more accurate recommendations than pure approaches. Hybrid methods can also be used to overcome some of the common problems in recommender systems, such as cold start and the sparsity problem, as well as the knowledge engineering bottleneck in knowledge-based approaches.
Netflix is a good example of the use of hybrid recommender systems. The website makes recommendations by comparing the watching and searching habits of similar users (i.e., collaborative filtering) as well as by offering movies that share characteristics with films that a user has rated highly (content-based filtering).
Some hybridization techniques include:
Weighted: Combining the scores of different recommendation components numerically (a minimal sketch follows this list).
Switching: Choosing among recommendation components and applying the selected one.
Mixed: Recommendations from different recommenders are presented together to give the recommendation.
Feature Combination: Features derived from different knowledge sources are combined and given to a single recommendation algorithm.
Feature Augmentation: Computing a feature or set of features, which is then part of the input to the next technique.
Cascade: Recommenders are given strict priority, with the lower priority ones breaking ties in the scoring of the higher ones.
Meta-level: One recommendation technique is applied and produces some sort of model, which is then the input used by the next technique.
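As an illustration of the weighted technique (the scores and weights below are hypothetical), component scores can be combined as a simple weighted sum:

    # Minimal weighted-hybrid sketch: blend scores of two components.
    # Component scores and weights are hypothetical.
    collaborative = {"item1": 0.9, "item2": 0.4, "item3": 0.7}
    content_based = {"item1": 0.6, "item2": 0.8, "item3": 0.5}
    w_cf, w_cb = 0.7, 0.3  # e.g. trust collaborative filtering more

    def hybrid_score(item):
        return (w_cf * collaborative.get(item, 0.0)
                + w_cb * content_based.get(item, 0.0))

    ranking = sorted(collaborative, key=hybrid_score, reverse=True)
    print(ranking)  # ['item1', 'item3', 'item2']

In practice, the weights themselves can be learned, for example by optimizing them against held-out user feedback.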
The Netflix Prize
One of the events that energized research in recommender systems was the Netflix Prize. From 2006 to 2009, Netflix sponsored a competition, offering a grand prize of $1,000,000 to the team that could take an offered dataset of over 100 million movie ratings and return recommendations that were 10% more accurate than those offered by the company's existing recommender system. This competition energized the search for new and more accurate algorithms. On 21 September 2009, the grand prize of US$1,000,000 was awarded to the BellKor's Pragmatic Chaos team under tiebreaking rules.
The most accurate algorithm in 2007 used an ensemble method of 107 different algorithmic approaches, blended into a single prediction. As stated by the winners, Bell et al.:
Predictive accuracy is substantially improved when blending multiple predictors. Our experience is that most efforts should be concentrated in deriving substantially different approaches, rather than refining a single technique. Consequently, our solution is an ensemble of many methods.
Many benefits accrued to the web due to the Netflix project. Some teams took their technology and applied it to other markets. Some members of the team that finished second place founded Gravity R&D, a recommendation-engine company that is active in the RecSys community. 4-Tell, Inc. created a Netflix project–derived solution for e-commerce websites.
A number of privacy issues arose around the dataset offered by Netflix for the Netflix Prize competition. Although the data sets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the data sets with film ratings on the Internet Movie Database. As a result, in December 2009, an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated United States fair trade laws and the Video Privacy Protection Act by releasing the datasets. This, as well as concerns from the Federal Trade Commission, led to the cancellation of a second Netflix Prize competition in 2010.
Performance measures
Evaluation is important in assessing the effectiveness of recommendation algorithms. To measure the effectiveness of recommender systems, and compare different approaches, three types of evaluations are available: user studies, online evaluations (A/B tests), and offline evaluations.
The commonly used metrics are the mean squared error and root mean squared error, the latter having been used in the Netflix Prize. The information retrieval metrics such as precision and recall or DCG are useful to assess the quality of a recommendation method. Diversity, novelty, and coverage are also considered as important aspects in evaluation. However, many of the classic evaluation measures are highly criticized.
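As an illustration (with hypothetical ratings and relevance judgments), the two kinds of metrics can be computed as follows:

    # Minimal sketch of two common evaluation metrics: RMSE for rating
    # prediction, precision@k for top-k recommendation. Data hypothetical.
    from math import sqrt

    def rmse(actual, predicted):
        errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
        return sqrt(sum(errors) / len(errors))

    def precision_at_k(recommended, relevant, k):
        top_k = recommended[:k]
        return sum(1 for item in top_k if item in relevant) / k

    print(rmse([4, 3, 5], [3.5, 3.0, 4.0]))                  # ~0.645
    print(precision_at_k(["a", "b", "c"], {"a", "c"}, k=3))  # ~0.667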
Evaluating the performance of a recommendation algorithm on a fixed test dataset will always be extremely challenging as it is impossible to accurately predict the reactions of real users to the recommendations. Hence any metric that computes the effectiveness of an algorithm in offline data will be imprecise.
User studies are rather small scale. A few dozen or a few hundred users are presented with recommendations created by different recommendation approaches, and then the users judge which recommendations are best.
In A/B tests, recommendations are shown to typically thousands of users of a real product, and the recommender system randomly picks at least two different recommendation approaches to generate recommendations. The effectiveness is measured with implicit measures of effectiveness such as conversion rate or click-through rate.
Offline evaluations are based on historic data, e.g. a dataset that contains information about how users previously rated movies.
The effectiveness of recommendation approaches is then measured based on how well a recommendation approach can predict the users' ratings in the dataset. While a rating is an explicit expression of whether a user liked a movie, such information is not available in all domains. For instance, in the domain of citation recommender systems, users typically do not rate a citation or recommended article. In such cases, offline evaluations may use implicit measures of effectiveness. For instance, it may be assumed that a recommender system is effective if it is able to recommend as many articles as possible that are contained in a research article's reference list. However, this kind of offline evaluation is viewed critically by many researchers. For instance, it has been shown that results of offline evaluations have low correlation with results from user studies or A/B tests. A dataset popular for offline evaluation has been shown to contain duplicate data and thus to lead to wrong conclusions in the evaluation of algorithms. Often, results of so-called offline evaluations do not correlate with actually assessed user satisfaction. This is probably because offline training is highly biased toward the highly reachable items, and offline testing data is highly influenced by the outputs of the online recommendation module. Researchers have concluded that the results of offline evaluations should be viewed critically.
Beyond accuracy
Typically, research on recommender systems is concerned with finding the most accurate recommendation algorithms. However, there are a number of factors that are also important.
Diversity – Users tend to be more satisfied with recommendations when there is a higher intra-list diversity, e.g. items from different artists.
Recommender persistence – In some situations, it is more effective to re-show recommendations, or let users re-rate items, than showing new items. There are several reasons for this. Users may ignore items when they are shown for the first time, for instance, because they had no time to inspect the recommendations carefully.
Privacy – Recommender systems usually have to deal with privacy concerns because users have to reveal sensitive information. Building user profiles using collaborative filtering can be problematic from a privacy point of view. Many European countries have a strong culture of data privacy, and every attempt to introduce any level of user profiling can result in a negative customer response. Much research has been conducted on ongoing privacy issues in this space. The Netflix Prize is particularly notable for the detailed personal information released in its dataset. Ramakrishnan et al. have conducted an extensive overview of the trade-offs between personalization and privacy and found that the combination of weak ties (an unexpected connection that provides serendipitous recommendations) and other data sources can be used to uncover identities of users in an anonymized dataset.
User demographics – Beel et al. found that user demographics may influence how satisfied users are with recommendations. In their paper they show that elderly users tend to be more interested in recommendations than younger users.
Robustness – When users can participate in the recommender system, the issue of fraud must be addressed.
Serendipity – Serendipity is a measure of "how surprising the recommendations are". For instance, a recommender system that recommends milk to a customer in a grocery store might be perfectly accurate, but it is not a good recommendation because it is an obvious item for the customer to buy. "[Serendipity] serves two purposes: First, the chance that users lose interest because the choice set is too uniform decreases. Second, these items are needed for algorithms to learn and improve themselves".
Trust – A recommender system is of little value for a user if the user does not trust the system. Trust can be built by a recommender system by explaining how it generates recommendations, and why it recommends an item.
Labelling – User satisfaction with recommendations may be influenced by the labeling of the recommendations. For instance, in the cited study click-through rate (CTR) for recommendations labeled as "Sponsored" were lower (CTR=5.93%) than CTR for identical recommendations labeled as "Organic" (CTR=8.86%). Recommendations with no label performed best (CTR=9.87%) in that study.
Reproducibility
Recommender systems are notoriously difficult to evaluate offline, with some researchers claiming that this has led to a reproducibility crisis in recommender systems publications. A recent survey of a small number of selected publications applying deep learning or neural methods to the top-k recommendation problem, published in top conferences (SIGIR, KDD, WWW, RecSys, IJCAI), has shown that on average less than 40% of articles could be reproduced by the authors of the survey, with as little as 14% in some conferences. Overall, the studies identified 26 articles; only 12 of them could be reproduced by the authors, and 11 of them could be outperformed by much older and simpler, properly tuned baselines on offline evaluation metrics. The articles consider a number of potential problems in today's research scholarship and suggest improved scientific practices in that area.
More recent work on benchmarking a set of the same methods came to qualitatively very different results, whereby neural methods were found to be among the best performing methods. Deep learning and neural methods for recommender systems have been used in the winning solutions of several recent recommender system challenges (WSDM, RecSys Challenge).
Moreover, neural and deep learning methods are widely used in industry, where they are extensively tested. The topic of reproducibility is not new in recommender systems. By 2011, Ekstrand, Konstan, et al. criticized that "it is currently difficult to reproduce and extend recommender systems research results" and that evaluations are "not handled consistently". Konstan and Adomavicius conclude that "the Recommender Systems research community is facing a crisis where a significant number of papers present results that contribute little to collective knowledge […] often because the research lacks the […] evaluation to be properly judged and, hence, to provide meaningful contributions." As a consequence, much research about recommender systems can be considered as not reproducible. Hence, operators of recommender systems find little guidance in the current research for answering the question of which recommendation approaches to use in a recommender system. Said & Bellogín conducted a study of papers published in the field, as well as benchmarking some of the most popular frameworks for recommendation, and found large inconsistencies in results, even when the same algorithms and data sets were used. Some researchers demonstrated that minor variations in the recommendation algorithms or scenarios led to strong changes in the effectiveness of a recommender system. They conclude that seven actions are necessary to improve the current situation: "(1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research."
See also
Rating site
Cold start
Collaborative filtering
Collective intelligence
Content discovery platform
Enterprise bookmarking
Filter bubble
Personalized marketing
Preference elicitation
Product finder
Configurator
Pattern recognition
References
Further reading
Books
Kim Falk (January 2019), Practical Recommender Systems, Manning Publications.
Scientific articles
Prem Melville, Raymond J. Mooney, and Ramadass Nagarajan. (2002) Content-Boosted Collaborative Filtering for Improved Recommendations. Proceedings of the Eighteenth National Conference on Artificial Intelligence (AAAI-2002), pp. 187–192, Edmonton, Canada, July 2002.
External links
Hangartner, Rick, "What is the Recommender Industry?", MSearchGroove, December 17, 2007.
ACM Conference on Recommender Systems
Recsys group at Politecnico di Milano
Data Science: Data to Insights from MIT (recommendation systems)
Arch Linux
Arch Linux is a Linux distribution created for computers with x86-64 processors.
Arch Linux adheres to the KISS principle ("Keep It Simple, Stupid"). The project attempts to make minimal distribution-specific changes (and therefore suffer minimal breakage with updates), to be pragmatic rather than ideological in its design choices, and to focus on customizability rather than user-friendliness.
Pacman, a package manager written specifically for Arch Linux, is used to install, remove and update software packages. Arch Linux uses a rolling release model, meaning there are no "major releases" of completely new versions of the system; a regular system update is all that is needed to obtain the latest Arch software; the installation images released every month by the Arch team are simply up-to-date snapshots of the main system components.
Arch Linux has comprehensive documentation, which consists of a community wiki known as the ArchWiki.
History
Inspired by CRUX, another minimalist distribution, Judd Vinet started the Arch Linux project in March 2002. The name was chosen because Vinet liked the word's meaning of "the principal," as in "arch-enemy".
Arch Linux was originally designed only for 32-bit x86 CPUs; the first x86_64 installation ISO was released in April 2006.
Vinet led Arch Linux until 1 October 2007, when he stepped down due to lack of time, transferring control of the project to Aaron Griffin.
The migration to systemd as its init system started in August 2012, and it became the default on new installations in October 2012. It replaced the SysV-style init system used since the distribution's inception.
On 24 February 2020, Aaron Griffin announced that due to his limited involvement with the project, he would, after a voting period, transfer control of the project to Levente Polyak. This change also led to a new 2-year term period being added to the Project Leader position.
The end of i686 support was announced in January 2017, with the February 2017 ISO being the last one including i686 and making the architecture unsupported in November 2017. Since then, the community derivative Arch Linux 32 can be used for i686 hardware.
In March 2021, Arch Linux developers began considering porting Arch Linux packages to the x86-64-v3 microarchitecture level, which roughly corresponds to the Intel Haswell era of processors.
In April 2021, Arch Linux installation images began including a guided installation script by default.
In January 2022, the linux-firmware package began compressing firmware by default, which significantly reduced the required disk space.
Repository security
Until Pacman version 4.0.0, Arch Linux's package manager lacked support for signed packages. Packages and metadata were not verified for authenticity by Pacman during the download-install process. Without package authentication checking, tampered-with or malicious repository mirrors could compromise the integrity of a system. Pacman 4 allowed verification of the package database and packages, but it was disabled by default. In November 2011, package signing became mandatory for new package builds, and since 21 March 2012 every official package has been signed.
In June 2012, package signing verification became official and is now enabled by default in the installation process.
Design and principles
Arch is largely based on binary packages. Packages target x86-64 microprocessors to assist performance on modern hardware. A ports/ebuild-like system is also provided for automated source compilation, known as the Arch Build System.
Arch Linux focuses on simplicity of design, meaning that the main focus involves creating an environment that is straightforward and relatively easy for the user to understand directly, rather than providing polished point-and-click style management tools — the package manager, for example, does not have an official graphical front-end. This is largely achieved by encouraging the use of succinctly commented, clean configuration files that are arranged for quick access and editing. This has earned it a reputation as a distribution for "advanced users" who are willing to use the command line.
Installation
The Arch Linux website supplies ISO images that can be run from CD or USB. After a user partitions and formats their drive, a simple command line script (pacstrap) is used to install the base system. The installation of additional packages which are not part of the base system (for example, desktop environments), can be done with either pacstrap, or Pacman after booting (or chrooting) into the new installation.
An alternative to using CD or USB images for installation is to use the static version of the package manager Pacman, from within another Linux-based operating system. The user can mount their newly formatted drive partition, and use pacstrap (or Pacman with the appropriate command-line switch) to install base and additional packages with the mountpoint of the destination device as the root for its operations. This method is useful when installing Arch Linux onto USB flash drives, or onto a temporarily mounted device which belongs to another system.
Regardless of the selected installation type, further actions need to be taken before the new system is ready for use, most notably by installing a bootloader and configuring the new system with a system name, network connection, language settings, and graphical user interface.
Arch Linux does not schedule releases for specific dates but uses a "rolling release" system where new packages are provided throughout the day. Its package management allows users to easily keep systems updated.
Occasionally, manual interventions are required for certain updates, with instructions posted on the news section of the Arch Linux website.
Guided automated install script
An experimental guided installer, archinstall, has been included since 2021. It allows users to easily install and configure Arch Linux, including drivers, disk partitioning, network configuration, account setup, and installation of desktop environments.
Package management
Arch Linux's only supported binary platform is x86_64. The Arch package repositories and User Repository (AUR) contain 58,000 binary and source packages, approaching Debian's 68,000 packages; however, the two distributions' approaches to packaging differ, making direct comparisons difficult. For example, six of Arch's 58,000 packages provide the software AbiWord, and three of those are user-repository packages that replace the canonical AbiWord package with an alternative build type or version (such as sourcing from the latest commit to AbiWord's source control repository), whereas Debian installs a single version of AbiWord across seven packages. The Arch User Repository also contains a writerperfect package which installs several document format converters, while Debian provides each of the more than 20 converters in its own subpackage.
Pacman
To facilitate regular package changes, Pacman (a contraction of "package manager") was developed by Judd Vinet to provide Arch with its own package manager to track dependencies. It is written in C.
All packages are managed using the Pacman package manager. Pacman handles package installation, upgrades, downgrades, and removal, and features automatic dependency resolution. The packages for Arch Linux are obtained from the Arch Linux package tree and are compiled for the x86-64 architecture. Pacman uses binary packages in the tar.zst format (zstd-compressed tar archives), with .pkg placed before the extension to indicate a Pacman package (giving .pkg.tar.zst).
As well as Arch Linux, Pacman is also used for installing packages under MSYS2 (a fork of Cygwin) on Windows.
Repositories
The following official binary repositories exist:
core, which contains all the packages needed to set up a base system. Packages in this repository include kernel packages and shell languages.
extra, which holds packages not required for the base system, including desktop environments and programs.
community, which contains packages built and voted on by the community; includes packages that have sufficient votes and have been adopted by a "trusted user".
multilib, a centralized repository for x86-64 users to more readily support 32-bit applications in a 64-bit environment. Packages in this repository include Steam and Wine.
Additionally, there are testing repositories which include binary package candidates for other repositories. Currently, the following testing repositories exist:
testing, with packages for core and extra.
community-testing, with packages for community.
multilib-testing, with packages for multilib.
The staging and community-staging repositories are used for some rebuilds to avoid broken packages in testing. The developers recommend not using these repositories for any reason.
There are also two other repositories that include the newest version of certain desktop environments.
gnome-unstable, which contains packages of a new version of the software from GNOME before being released into testing.
kde-unstable, which contains packages of a new version of KDE software before being released into testing.
The unstable repository was dropped in July 2008 and most of the packages moved to other repositories. In addition to the official repositories, there are a number of unofficial user repositories.
The most well-known unofficial repository is the Arch User Repository, or AUR, hosted on the Arch Linux site. However, the AUR does not host binary packages, hosting instead a collection of build scripts known as PKGBUILDs.
The Arch Linux repositories contain both libre and nonfree software, and the default Arch Linux kernel contains nonfree proprietary blobs, hence the distribution is not endorsed by the GNU project. The linux-libre kernel can be installed from the AUR.
Arch Build System (ABS)
The Arch Build System (ABS) is a ports-like source packaging system that compiles source tarballs into binary packages, which are installed via Pacman. The Arch Build System provides a directory tree of shell scripts, called PKGBUILDs, that enable any and all official Arch packages to be customized and compiled. Rebuilding the entire system using modified compiler flags is also supported by the Arch Build System. The Arch Build System makepkg tool can be used to create custom pkg.tar.zst packages from third-party sources. The resulting packages are also installable and trackable via Pacman.
Arch User Repository (AUR)
In addition to the repositories, the Arch User Repository (AUR) provides user-made PKGBUILD scripts for packages not included in the repositories. These PKGBUILD scripts simplify building from source by explicitly listing and checking for dependencies and configuring the install to match the Arch architecture. Arch User Repository helper programs can further streamline the downloading of PKGBUILD scripts and associated building process. However, this comes at the cost of executing PKGBUILDs not validated by a trusted person; as a result, Arch developers have stated that the utilities for automatic finding, downloading and executing of PKGBUILDs will never be included in the official repositories.
Users can create packages compatible with Pacman using the Arch Build System and custom PKGBUILD scripts. This functionality has helped support the Arch User Repository, which consists of user contributed packages to supplement the official repositories.
The Arch User Repository provides the community with packages that are not included in the repositories. Reasons include:
Licensing issues: software that cannot be redistributed, but is free to use, can be included in the Arch User Repository since all that is hosted by the Arch Linux website is a shell script that downloads the actual software from elsewhere. Examples include proprietary freeware such as Google Earth and RealPlayer.
Modified official packages: the Arch User Repository also contains many variations on the official packaging as well as beta versions of software that is contained within the repositories as stable releases.
Rarity of the software: rarely used programs have not been added to the official repositories (yet).
Betas or "nightly" versions of the software which are very new and thus unstable. Examples include the "firefox-nightly" package, which gives new daily builds of the Firefox web browser.
PKGBUILDs for any software can be contributed by ordinary users and any PKGBUILD that is not confined to the Arch User Repository for policy reasons can be voted into the community repositories.
Derivatives
There are several projects working on porting the Arch Linux ideas and tools to other kernels, including PacBSD (formerly ArchBSD) and Arch Hurd, which are based on the FreeBSD and GNU Hurd kernels, respectively. There is also the Arch Linux ARM project, which aims to port Arch Linux to ARM-based devices, including the Raspberry Pi, as well as the Arch Linux 32 project, which continued support for systems with 32-bit only CPUs after the mainline Arch Linux project dropped support for the architecture in November 2017.
Various distributions are focused around providing an Arch base with an easier install process, such as EndeavourOS and Manjaro.
SteamOS 3.0, the version of SteamOS used in the Steam Deck, is based on Arch Linux.
Logo
The current Arch Linux logo was designed by Thayer Williams in 2007 as part of a contest to replace the previous logo.
Reception
OSNews reviewed Arch Linux in 2002 and has published five later reviews of the distribution.
LWN.net wrote a review of Arch Linux in 2005 and has published two later reviews.
Tux Machines reviewed Arch Linux in 2007.
Chris Smart from DistroWatch Weekly wrote a review about Arch Linux in January 2009. DistroWatch Weekly reviewed Arch Linux again in September 2009 and in December 2015.
Linux maintainer Greg Kroah-Hartman has stated that he uses Arch and that it "works really really well"; he also praised the ArchWiki, the distribution's closeness to upstream development, and its feedback loop with the community.
See also
Comparison of Linux distributions
List of Linux distributions
Notes
References
External links
Arch Linux on GitHub
on Libera.chat
IA-32 Linux distributions
Pacman-based Linux distributions
Rolling Release Linux distributions
X86-64 Linux distributions
Linux distributions
ESDI
ESDI may refer to:
ESDi School of Design, at University Ramon Llull, Barcelona, Spain
Enhanced Small Disk Interface, a computer disk interface
European Security and Defence Identity, a European initiative in NATO overseen by the Western European Union
Escola Superior de Desenho Industrial, at Rio de Janeiro State University, Brazil
X86 virtualization
x86 virtualization is the use of hardware-assisted virtualization capabilities on an x86/x86-64 CPU.
In the late 1990s x86 virtualization was achieved by complex software techniques, necessary to compensate for the processor's lack of hardware-assisted virtualization capabilities while attaining reasonable performance. In 2005 and 2006, both Intel (VT-x) and AMD (AMD-V) introduced limited hardware virtualization support that allowed simpler virtualization software but offered very few speed benefits. Greater hardware support, which allowed substantial speed improvements, came with later processor models.
Software-based virtualization
The following discussion focuses only on virtualization of the x86 architecture protected mode.
In protected mode the operating system kernel runs at a higher privilege such as ring 0, and applications at a lower privilege such as ring 3. In software-based virtualization, a host OS has direct access to hardware while the guest OSs have limited access to hardware, just like any other application of the host OS. One approach used in x86 software-based virtualization to overcome this limitation is called ring deprivileging, which involves running the guest OS at a ring higher (lesser privileged) than 0.
Three techniques made virtualization of protected mode possible:
Binary translation is used to rewrite certain ring 0 instructions, such as POPF, in terms of ring 3 instructions; these instructions would otherwise fail silently or behave differently when executed above ring 0, making classic trap-and-emulate virtualization impossible. To improve performance, the translated basic blocks need to be cached in a coherent way that detects code patching (used in VxDs for instance), the reuse of pages by the guest OS, or even self-modifying code.
A number of key data structures used by a processor need to be shadowed. Because most operating systems use paged virtual memory, and granting the guest OS direct access to the MMU would mean loss of control by the virtualization manager, some of the work of the x86 MMU needs to be duplicated in software for the guest OS using a technique known as shadow page tables. This involves denying the guest OS any access to the actual page table entries by trapping access attempts and emulating them instead in software. The x86 architecture uses hidden state to store segment descriptors in the processor, so once the segment descriptors have been loaded into the processor, the memory from which they have been loaded may be overwritten and there is no way to get the descriptors back from the processor. Shadow descriptor tables must therefore be used to track changes made to the descriptor tables by the guest OS.
I/O device emulation: Unsupported devices on the guest OS must be emulated by a device emulator that runs in the host OS.
These techniques incur some performance overhead due to lack of MMU virtualization support, as compared to a VM running on a natively virtualizable architecture such as the IBM System/370.
On traditional mainframes, the classic type 1 hypervisor was self-standing and did not depend on any operating system or run any user applications itself. In contrast, the first x86 virtualization products were aimed at workstation computers, and ran a guest OS inside a host OS by embedding the hypervisor in a kernel module that ran under the host OS (type 2 hypervisor).
There has been some controversy whether the x86 architecture with no hardware assistance is virtualizable as described by Popek and Goldberg. VMware researchers pointed out in a 2006 ASPLOS paper that the above techniques made the x86 platform virtualizable in the sense of meeting the three criteria of Popek and Goldberg, albeit not by the classic trap-and-emulate technique.
A different route was taken by other systems like Denali, L4, and Xen, known as paravirtualization, which involves porting operating systems to run on the resulting virtual machine, which does not implement the parts of the actual x86 instruction set that are hard to virtualize. The paravirtualized I/O has significant performance benefits as demonstrated in the original SOSP'03 Xen paper.
The initial version of x86-64 (AMD64) did not allow for a software-only full virtualization due to the lack of segmentation support in long mode, which made the protection of the hypervisor's memory impossible, in particular, the protection of the trap handler that runs in the guest kernel address space. Revision D and later 64-bit AMD processors (as a rule of thumb, those manufactured in 90 nm or less) added basic support for segmentation in long mode, making it possible to run 64-bit guests in 64-bit hosts via binary translation. Intel did not add segmentation support to its x86-64 implementation (Intel 64), making 64-bit software-only virtualization impossible on Intel CPUs, but Intel VT-x support makes 64-bit hardware assisted virtualization possible on the Intel platform.
On some platforms, it is possible to run a 64-bit guest on a 32-bit host OS if the underlying processor is 64-bit and supports the necessary virtualization extensions.
Hardware-assisted virtualization
In 2005 and 2006, Intel and AMD (working independently) created new processor extensions to the x86 architecture. The first generation of x86 hardware virtualization addressed the issue of privileged instructions. The issue of low performance of virtualized system memory was addressed with MMU virtualization that was added to the chipset later.
Central processing unit
Virtual 8086 mode
Based on painful experiences with the 80286 protected mode, which by itself was not suitable for running concurrent DOS applications well, Intel introduced the virtual 8086 mode in their 80386 chip, which offered virtualized 8086 processors on the 386 and later chips. Hardware support for virtualizing the protected mode itself, however, became available only 20 years later.
AMD virtualization (AMD-V)
AMD developed its first generation virtualization extensions under the code name "Pacifica", and initially published them as AMD Secure Virtual Machine (SVM), but later marketed them under the trademark AMD Virtualization, abbreviated AMD-V.
On May 23, 2006, AMD released the Athlon 64 ("Orleans"), the Athlon 64 X2 ("Windsor") and the Athlon 64 FX ("Windsor") as the first AMD processors to support this technology.
AMD-V capability also features on the Athlon 64 and Athlon 64 X2 family of processors with revisions "F" or "G" on socket AM2, Turion 64 X2, and Opteron 2nd generation and third-generation, Phenom and Phenom II processors. The APU Fusion processors support AMD-V. AMD-V is not supported by any Socket 939 processors. The only Sempron processors which support it are APUs and Huron, Regor, Sargas desktop CPUs.
AMD Opteron CPUs beginning with the Family 0x10 Barcelona line, and Phenom II CPUs, support a second generation hardware virtualization technology called Rapid Virtualization Indexing (formerly known as Nested Page Tables during its development), later adopted by Intel as Extended Page Tables (EPT).
As of 2019, all Zen-based AMD processors support AMD-V.
The CPU flag for AMD-V is "svm". This may be checked in BSD derivatives via dmesg or sysctl and in Linux via /proc/cpuinfo. Instructions in AMD-V include VMRUN, VMLOAD, VMSAVE, CLGI, VMMCALL, INVLPGA, SKINIT, and STGI.
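For illustration, the Linux check just described can be performed programmatically; the following Python sketch (an invented example, not taken from any cited source) scans /proc/cpuinfo for the AMD-V "svm" flag, and also for Intel's "vmx" flag covered in the next section:

# Minimal sketch: look for the AMD-V ("svm") or Intel VT-x ("vmx")
# CPU flags in /proc/cpuinfo on a Linux system.
def hardware_virt_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"svm", "vmx"} & flags
    return set()

print(hardware_virt_flags() or "no virtualization flag (it may be disabled in firmware)")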
With some motherboards, users must enable AMD SVM feature in the BIOS setup before applications can make use of it.
Intel virtualization (VT-x)
Previously codenamed "Vanderpool", VT-x represents Intel's technology for virtualization on the x86 platform. On November 13, 2005, Intel released two models of Pentium 4 (Model 662 and 672) as the first Intel processors to support VT-x. The CPU flag for VT-x capability is "vmx"; in Linux, this can be checked via /proc/cpuinfo, or in macOS via sysctl machdep.cpu.features.
"VMX" stands for Virtual Machine Extensions, which adds 13 new instructions: VMPTRLD, VMPTRST, VMCLEAR, VMREAD, VMWRITE, VMCALL, VMLAUNCH, VMRESUME, VMXOFF, VMXON, INVEPT, INVVPID, and VMFUNC. These instructions permit entering and exiting a virtual execution mode where the guest OS perceives itself as running with full privilege (ring 0), but the host OS remains protected.
Almost all newer server, desktop and mobile Intel processors support VT-x, with some of the Intel Atom processors as the primary exception. With some motherboards, users must enable Intel's VT-x feature in the BIOS setup before applications can make use of it.
Intel started to include Extended Page Tables (EPT), a technology for page-table virtualization, with the Nehalem architecture, released in 2008. In 2010, Westmere added support for launching the logical processor directly in real mode, a feature called "unrestricted guest", which requires EPT to work.
Since the Haswell microarchitecture (announced in 2013), Intel started to include VMCS shadowing as a technology that accelerates nested virtualization of VMMs.
The virtual machine control structure (VMCS) is a data structure in memory that exists exactly once per VM, while it is managed by the VMM. With every change of the execution context between different VMs, the VMCS is restored for the current VM, defining the state of the VM's virtual processor. As soon as more than one VMM or nested VMMs are used, a problem appears in a way similar to what required shadow page table management to be invented, as described above. In such cases, VMCS needs to be shadowed multiple times (in case of nesting) and partially implemented in software in case there is no hardware support by the processor. To make shadow VMCS handling more efficient, Intel implemented hardware support for VMCS shadowing.
VIA virtualization (VIA VT)
VIA Nano 3000 Series Processors and higher support VIA VT virtualization technology compatible with Intel VT-x. EPT is present in Zhaoxin ZX-C, a descendant of VIA QuadCore-E & Eden X4 similar to Nano C4350AL.
Interrupt virtualization (AMD AVIC and Intel APICv)
In 2012, AMD announced their Advanced Virtual Interrupt Controller (AVIC), targeting interrupt overhead reduction in virtualization environments. This technology, as announced, does not support x2APIC. As of 2016, AVIC is available on the AMD family 15h models 6Xh (Carrizo) processors and newer.
Also in 2012, Intel announced a similar technology for interrupt and APIC virtualization, which did not have a brand name at its announcement time. Later, it was branded as APIC virtualization (APICv), and it became commercially available in the Ivy Bridge EP series of Intel CPUs, which is sold as Xeon E5-26xx v2 (launched in late 2013) and as Xeon E5-46xx v2 (launched in early 2014).
Graphics processing unit
Graphics virtualization is not part of the x86 architecture. Intel Graphics Virtualization Technology (GVT) provides graphics virtualization as part of more recent Gen graphics architectures. Although AMD APUs implement the x86-64 instruction set, they implement AMD's own graphics architectures (TeraScale, GCN and RDNA) which do not support graphics virtualization. Larrabee was the only graphics microarchitecture based on x86, but it likely did not include support for graphics virtualization.
Chipset
Memory and I/O virtualization is performed by the chipset. Typically these features must be enabled by the BIOS, which must be able to support them and also be set to use them.
I/O MMU virtualization (AMD-Vi and Intel VT-d)
An input/output memory management unit (IOMMU) allows guest virtual machines to directly use peripheral devices, such as Ethernet, accelerated graphics cards, and hard-drive controllers, through DMA and interrupt remapping. This is sometimes called PCI passthrough.
An IOMMU also allows operating systems to eliminate bounce buffers needed to allow themselves to communicate with peripheral devices whose memory address spaces are smaller than the operating system's memory address space, by using memory address translation. At the same time, an IOMMU also allows operating systems and hypervisors to prevent buggy or malicious hardware from compromising memory security.
Both AMD and Intel have released their IOMMU specifications:
AMD's I/O Virtualization Technology, "AMD-Vi", originally called "IOMMU"
Intel's "Virtualization Technology for Directed I/O" (VT-d), included in most high-end (but not all) newer Intel processors since the Core 2 architecture.
In addition to the CPU support, both motherboard chipset and system firmware (BIOS or UEFI) need to fully support the IOMMU I/O virtualization functionality for it to be usable. Only the PCI or PCI Express devices supporting function level reset (FLR) can be virtualized this way, as it is required for reassigning various device functions between virtual machines. If a device to be assigned does not support Message Signaled Interrupts (MSI), it must not share interrupt lines with other devices for the assignment to be possible.
All conventional PCI devices routed behind a PCI/PCI-X-to-PCI Express bridge can be assigned to a guest virtual machine only all at once; PCI Express devices have no such restriction.
Network virtualization (VT-c)
Intel's "Virtualization Technology for Connectivity" (VT-c).
PCI-SIG Single Root I/O Virtualization (SR-IOV)
PCI-SIG Single Root I/O Virtualization (SR-IOV) provides a set of general (non-x86 specific) I/O virtualization methods based on PCI Express (PCIe) native hardware, as standardized by PCI-SIG:
Address translation services (ATS) supports native IOV across PCI Express via address translation. It requires support for new transactions to configure such translations.
Single-root IOV (SR-IOV or SRIOV) supports native IOV in existing single-root complex PCI Express topologies. It requires support for new device capabilities to configure multiple virtualized configuration spaces.
Multi-root IOV (MR-IOV) supports native IOV in new topologies (for example, blade servers) by building on SR-IOV to provide multiple root complexes which share a common PCI Express hierarchy.
In SR-IOV, the most common of these, a host VMM configures supported devices to create and allocate virtual "shadows" of their configuration spaces so that virtual machine guests can directly configure and access such "shadow" device resources. With SR-IOV enabled, virtualized network interfaces are directly accessible to the guests, avoiding involvement of the VMM and resulting in high overall performance; for example, SR-IOV achieves over 95% of the bare metal network bandwidth in NASA's virtualized datacenter and in the Amazon Public Cloud.
See also
Comparison of application virtual machines
Comparison of platform virtualization software
Hardware-assisted virtualization
Hypervisor
I/O virtualization
Network virtualization
Operating system-level virtualization
Timeline of virtualization development
Virtual machine
List of IOMMU-supporting hardware
Second Level Address Translation (SLAT)
Message Signaled Interrupts (MSI)
References
External links
Everything You Need to Know About the Intel Virtualization Technology
A special course at the University of San Francisco on Intel EM64T and VT Extensions (2007)
2 day open source & open access class on writing a VT-x VMM
X86 architecture
Hardware virtualization
Object-oriented programming
Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects", which can contain data and code: data in the form of fields (often known as attributes or properties), and code, in the form of procedures (often known as methods).
A feature of objects is that an object's own procedures can access and often modify the data fields of itself (objects have a notion of this or self). In OOP, computer programs are designed by making them out of objects that interact with one another. OOP languages are diverse, but the most popular ones are class-based, meaning that objects are instances of classes, which also determine their types.
Many of the most widely used programming languages (such as C++, Java, Python, etc.) are multi-paradigm and they support object-oriented programming to a greater or lesser degree, typically in combination with imperative, procedural programming. Significant object-oriented languages include Java, C++, C#, Python, R, PHP, Visual Basic.NET, JavaScript, Ruby, Perl, SIMSCRIPT, Object Pascal, Objective-C, Dart, Swift, Scala, Kotlin, Common Lisp, MATLAB, and Smalltalk.
History
Terminology invoking "objects" and "oriented" in the modern sense of object-oriented programming made its first appearance at MIT in the late 1950s and early 1960s. In the environment of the artificial intelligence group, as early as 1960, "object" could refer to identified items (LISP atoms) with properties (attributes);
Alan Kay later cited a detailed understanding of LISP internals as a strong influence on his thinking in 1966.
Another early MIT example was Sketchpad created by Ivan Sutherland in 1960–1961; in the glossary of the 1963 technical report based on his dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction.
Also, an MIT ALGOL version, AED-0, established a direct link between data structures ("plexes", in that dialect) and procedures, prefiguring what were later termed "messages", "methods", and "member functions".
Simula introduced important concepts that are today an essential part of object-oriented programming, such as class and object, inheritance, and dynamic binding.
The object-oriented Simula programming language was used mainly by researchers involved with physical modelling, such as models to study and improve the movement of ships and their content through cargo ports.
In the 1970s, the first version of the Smalltalk programming language was developed at Xerox PARC by Alan Kay, Dan Ingalls and Adele Goldberg. Smalltalk-72 included a programming environment and was dynamically typed, and at first was interpreted, not compiled. Smalltalk became noted for its application of object orientation at the language-level and its graphical development environment. Smalltalk went through various versions and interest in the language grew. While Smalltalk was influenced by the ideas introduced in Simula 67 it was designed to be a fully dynamic system in which classes could be created and modified dynamically.
In the 1970s, Smalltalk influenced the Lisp community to incorporate object-based techniques that were introduced to developers via the Lisp machine. Experimentation with various extensions to Lisp (such as LOOPS and Flavors introducing multiple inheritance and mixins) eventually led to the Common Lisp Object System, which integrates functional programming and object-oriented programming and allows extension via a Meta-object protocol. In the 1980s, there were a few attempts to design processor architectures that included hardware support for objects in memory but these were not successful. Examples include the Intel iAPX 432 and the Linn Smart Rekursiv.
In 1981, Goldberg edited the August issue of Byte Magazine, introducing Smalltalk and object-oriented programming to a wider audience. In 1986, the Association for Computing Machinery organised the first Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), which was unexpectedly attended by 1,000 people. In the mid-1980s Objective-C was developed by Brad Cox, who had used Smalltalk at ITT Inc., and Bjarne Stroustrup, who had used Simula for his PhD thesis, eventually went to create the object-oriented C++. In 1985, Bertrand Meyer also produced the first design of the Eiffel language. Focused on software quality, Eiffel is a purely object-oriented programming language and a notation supporting the entire software lifecycle. Meyer described the Eiffel software development method, based on a small number of key ideas from software engineering and computer science, in Object-Oriented Software Construction. Essential to the quality focus of Eiffel is Meyer's reliability mechanism, Design by Contract, which is an integral part of both the method and language.
In the early and mid-1990s object-oriented programming developed as the dominant programming paradigm when programming languages supporting the techniques became widely available. These included Visual FoxPro 3.0, C++, and Delphi. Its dominance was further enhanced by the rising popularity of graphical user interfaces, which rely heavily upon object-oriented programming techniques. An example of a closely related dynamic GUI library and OOP language can be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-oriented, dynamic messaging extension to C based on Smalltalk. OOP toolkits also enhanced the popularity of event-driven programming (although this concept is not limited to OOP).
At ETH Zürich, Niklaus Wirth and his colleagues had also been investigating such topics as data abstraction and modular programming (although this had been in common use in the 1960s or earlier). Modula-2 (1978) included both, and their succeeding design, Oberon, included a distinctive approach to object orientation, classes, and such.
Object-oriented features have been added to many previously existing languages, including Ada, BASIC, Fortran, Pascal, and COBOL. Adding these features to languages that were not initially designed for them often led to problems with compatibility and maintainability of code.
More recently, a number of languages have emerged that are primarily object-oriented, but that are also compatible with procedural methodology. Two such languages are Python and Ruby. Probably the most commercially important recent object-oriented languages are Java, developed by Sun Microsystems, as well as C# and Visual Basic.NET (VB.NET), both designed for Microsoft's .NET platform. Each of these two frameworks shows, in its own way, the benefit of using OOP by creating an abstraction from implementation. VB.NET and C# support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other language.
Features
Object-oriented programming uses objects, but not all of the associated techniques and structures are supported directly in languages that claim to support OOP. The features listed below are common among languages considered to be strongly class- and object-oriented (or multi-paradigm with OOP support), with notable exceptions mentioned.
Shared with non-OOP languages
Variables that can store information formatted in a small number of built-in data types like integers and alphanumeric characters. This may include data structures like strings, lists, and hash tables that are either built-in or result from combining variables using memory pointers.
Procedures – also known as functions, methods, routines, or subroutines – that take input, generate output, and manipulate data. Modern languages include structured programming constructs like loops and conditionals.
Modular programming support provides the ability to group procedures into files and modules for organizational purposes. Modules are namespaced so identifiers in one module will not conflict with a procedure or variable sharing the same name in another file or module.
Objects and classes
Languages that support object-oriented programming (OOP) typically use inheritance for code reuse and extensibility in the form of either classes or prototypes. Those that use classes support two main concepts:
Classes – the definitions for the data format and available procedures for a given type or class of object; may also contain data and procedures (known as class methods) themselves, i.e. classes contain the data members and member functions
Objects – instances of classes
Objects sometimes correspond to things found in the real world. For example, a graphics program may have objects such as "circle", "square", "menu". An online shopping system might have objects such as "shopping cart", "customer", and "product". Sometimes objects represent more abstract entities, like an object that represents an open file, or an object that provides the service of translating measurements from U.S. customary to metric.
Each object is said to be an instance of a particular class (for example, an object with its name field set to "Mary" might be an instance of class Employee). Procedures in object-oriented programming are known as methods; variables are also known as fields, members, attributes, or properties. This leads to the following terms:
Class variables – belong to the class as a whole; there is only one copy of each one
Instance variables or attributes – data that belongs to individual objects; every object has its own copy of each one
Member variables – refers to both the class and instance variables that are defined by a particular class
Class methods – belong to the class as a whole and have access to only class variables and inputs from the procedure call
Instance methods – belong to individual objects, and have access to instance variables for the specific object they are called on, inputs, and class variables
Objects are accessed somewhat like variables with complex internal structure, and in many languages are effectively pointers, serving as actual references to a single instance of said object in memory within a heap or stack. They provide a layer of abstraction which can be used to separate internal from external code. External code can use an object by calling a specific instance method with a certain set of input parameters, read an instance variable, or write to an instance variable. Objects are created by calling a special type of method in the class known as a constructor. A program may create many instances of the same class as it runs, which operate independently. This is an easy way for the same procedures to be used on different sets of data.
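As a minimal illustration of these terms (an invented example, not drawn from any particular source), consider the following Python class:

class Employee:
    # Class variable: one copy, shared by the class as a whole.
    headcount = 0

    def __init__(self, name, position):
        # Instance variables: every object gets its own copy.
        self.name = name
        self.position = position
        Employee.headcount += 1

    # Instance method: operates on one specific object via `self`.
    def describe(self):
        return f"{self.name} ({self.position})"

    # Class method: has access only to class variables and its inputs.
    @classmethod
    def total(cls):
        return cls.headcount

mary = Employee("Mary", "Engineer")   # the constructor creates an instance
print(mary.describe())                # -> Mary (Engineer)
print(Employee.total())               # -> 1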
Object-oriented programming that uses classes is sometimes called class-based programming, while prototype-based programming does not typically use classes. As a result, significantly different yet analogous terminology is used to define the concepts of object and instance.
In some languages classes and objects can be composed using other concepts like traits and mixins.
Class-based vs prototype-based
In class-based languages the classes are defined beforehand and the objects are instantiated based on the classes. If two objects apple and orange are instantiated from the class Fruit, they are inherently fruits and it is guaranteed that you may handle them in the same way; e.g. a programmer can expect the existence of the same attributes such as color or sugar_content or is_ripe.
In prototype-based languages the objects are the primary entities. No classes even exist. The prototype of an object is just another object to which the object is linked. Every object has one prototype link (and only one). New objects can be created based on already existing objects chosen as their prototype. You may call two different objects apple and orange a fruit, if the object fruit exists, and both apple and orange have fruit as their prototype. The idea of the fruit class doesn't exist explicitly, but as the equivalence class of the objects sharing the same prototype. The attributes and methods of the prototype are delegated to all the objects of the equivalence class defined by this prototype. The attributes and methods owned individually by the object may not be shared by other objects of the same equivalence class; e.g. the attribute sugar_content may be unexpectedly not present in apple. Only single inheritance can be implemented through the prototype.
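Although Python is class-based, the delegation just described can be sketched with a fallback attribute lookup; the following is a toy model of prototype-style delegation, not the implementation of any real prototype-based language:

class ProtoObject:
    # Toy object that delegates missing attributes to a single prototype.
    def __init__(self, prototype=None, **slots):
        self.prototype = prototype
        self.slots = dict(slots)

    def __getattr__(self, name):
        # Invoked only when normal lookup fails: walk the prototype chain.
        if name in self.slots:
            return self.slots[name]
        if self.prototype is not None:
            return getattr(self.prototype, name)
        raise AttributeError(name)

fruit = ProtoObject(color="unknown", is_ripe=False)
apple = ProtoObject(prototype=fruit, color="red")
print(apple.color)    # "red": owned by apple itself
print(apple.is_ripe)  # False: delegated to the fruit prototype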
Dynamic dispatch/message passing
It is the responsibility of the object, not any external code, to select the procedural code to execute in response to a method call, typically by looking up the method at run time in a table associated with the object. This feature is known as dynamic dispatch. If the call variability relies on more than the single type of the object on which it is called (i.e. at least one other parameter object is involved in the method choice), one speaks of multiple dispatch.
A method call is also known as message passing. It is conceptualized as a message (the name of the method and its input parameters) being passed to the object for dispatch.
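A hedged Python sketch of this idea: the "message" is just a method name resolved on the receiving object at run time, so the object itself selects the code that runs (all names below are invented for illustration):

class Duck:
    def speak(self):
        return "quack"

class Dog:
    def speak(self):
        return "woof"

def send(receiver, message, *args):
    # Deliver a "message": look the method up on the object at run time.
    method = getattr(receiver, message)   # dynamic dispatch by name
    return method(*args)

for animal in (Duck(), Dog()):
    print(send(animal, "speak"))   # quack, then woof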
Encapsulation
Encapsulation is a design pattern in which data are visible only to semantically related functions, so as to prevent misuse. The success of data encapsulation leads to frequent incorporation of data hiding as a design principle in object oriented and pure functional programming.
If a class does not allow calling code to access internal object data and permits access through methods only, this is a strong form of abstraction or information hiding known as encapsulation. Some languages (Java, for example) let classes enforce access restrictions explicitly, for example denoting internal data with the private keyword and designating methods intended for use by code outside the class with the public keyword. Methods may also be designated public, private, or intermediate levels such as protected (which allows access from the same class and its subclasses, but not objects of a different class). In other languages (like Python) this is enforced only by convention (for example, private methods may have names that start with an underscore). Encapsulation prevents external code from being concerned with the internal workings of an object. This facilitates code refactoring, for example allowing the author of the class to change how objects of that class represent their data internally without changing any external code (as long as "public" method calls work the same way). It also encourages programmers to put all the code that is concerned with a certain set of data in the same class, which organizes it for easy comprehension by other programmers. Encapsulation is a technique that encourages decoupling.
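A brief sketch of the convention-based approach just described, using an invented BankAccount class: internal state carries a leading underscore and is reached only through public methods, so the representation can change without breaking callers.

class BankAccount:
    def __init__(self, balance=0):
        self._balance = balance   # "private" by convention only

    # Public interface: callers never touch _balance directly.
    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        # The internal representation could change (e.g., to integer
        # cents) without breaking any code that calls balance().
        return self._balance

acct = BankAccount()
acct.deposit(100)
print(acct.balance())   # 100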
Composition, inheritance, and delegation
Objects can contain other objects in their instance variables; this is known as object composition. For example, an object in the Employee class might contain (either directly or through a pointer) an object in the Address class, in addition to its own instance variables like "first_name" and "position". Object composition is used to represent "has-a" relationships: every employee has an address, so every Employee object has access to a place to store an Address object (either directly embedded within itself, or at a separate location addressed via a pointer).
Languages that support classes almost always support inheritance. This allows classes to be arranged in a hierarchy that represents "is-a-type-of" relationships. For example, class Employee might inherit from class Person. All the data and methods available to the parent class also appear in the child class with the same names. For example, class Person might define variables "first_name" and "last_name" with method "make_full_name()". These will also be available in class Employee, which might add the variables "position" and "salary". This technique allows easy re-use of the same procedures and data definitions, in addition to potentially mirroring real-world relationships in an intuitive way. Rather than utilizing database tables and programming subroutines, the developer utilizes objects the user may be more familiar with: objects from their application domain.
Subclasses can override the methods defined by superclasses. Multiple inheritance is allowed in some languages, though this can make resolving overrides complicated. Some languages have special support for mixins, though in any language with multiple inheritance, a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes. For example, class UnicodeConversionMixin might provide a method unicode_to_ascii() when included in class FileReader and class WebPageScraper, which don't share a common parent.
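The following sketch combines the three mechanisms above, reusing some of the example names from this section (the method bodies and the JsonMixin are invented placeholders):

import json

class Address:
    def __init__(self, city):
        self.city = city

class Person:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def make_full_name(self):
        return f"{self.first_name} {self.last_name}"

class JsonMixin:
    # A mixin: not an "is-a-type-of" relationship, just shared behaviour.
    def to_json(self):
        return json.dumps(vars(self), default=vars)

class Employee(Person, JsonMixin):   # inheritance: an Employee is-a Person
    def __init__(self, first_name, last_name, position, address):
        super().__init__(first_name, last_name)
        self.position = position
        self.address = address       # composition: an Employee has-an Address

e = Employee("Mary", "Major", "Engineer", Address("Oslo"))
print(e.make_full_name())   # inherited from Person
print(e.to_json())          # provided by the mixin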
Abstract classes cannot be instantiated into objects; they exist only for the purpose of inheritance into other "concrete" classes that can be instantiated. In Java, the final keyword can be used to prevent a class from being subclassed.
The doctrine of composition over inheritance advocates implementing has-a relationships using composition instead of inheritance. For example, instead of inheriting from class Person, class Employee could give each Employee object an internal Person object, which it then has the opportunity to hide from external code even if class Person has many public attributes or methods. Some languages, like Go, do not support inheritance at all.
The "open/closed principle" advocates that classes and functions "should be open for extension, but closed for modification".
Delegation is another language feature that can be used as an alternative to inheritance.
Polymorphism
Subtyping – a form of polymorphism – allows calling code to be agnostic as to which class in the supported hierarchy it is operating on – the parent class or one of its descendants. Meanwhile, the same operation name among objects in an inheritance hierarchy may behave differently.
For example, objects of type Circle and Square are derived from a common class called Shape. The Draw function for each type of Shape implements what is necessary to draw itself while calling code can remain indifferent to the particular type of Shape being drawn.
This is another type of abstraction that simplifies code external to the class hierarchy and enables strong separation of concerns.
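Rendered as a minimal Python sketch (the drawing bodies are placeholders), the Shape example looks like this:

class Shape:
    def draw(self):
        raise NotImplementedError

class Circle(Shape):
    def draw(self):
        return "drawing a circle"

class Square(Shape):
    def draw(self):
        return "drawing a square"

# Calling code stays agnostic about which subclass it holds:
for shape in (Circle(), Square()):
    print(shape.draw())   # each subclass supplies its own behaviour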
Open recursion
In languages that support open recursion, object methods can call other methods on the same object (including themselves), typically using a special variable or keyword called this or self. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof.
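A short sketch of open recursion in Python, with invented names: render(), defined in the base class, calls self.header(), which is late-bound and may resolve to a subclass override.

class Report:
    def render(self):
        # `self` is late-bound: header() below may resolve to a
        # subclass override even though render() is defined here.
        return f"{self.header()}\nbody"

    def header(self):
        return "REPORT"

class FancyReport(Report):
    def header(self):               # overrides the method render() calls
        return "*** FANCY REPORT ***"

print(Report().render())        # REPORT, then body
print(FancyReport().render())   # *** FANCY REPORT ***, then body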
OOP languages
Simula (1967) is generally accepted as being the first language with the primary features of an object-oriented language. It was created for making simulation programs, in which what came to be called objects were the most important information representation. Smalltalk (1972 to 1980) is another early example, and the one with which much of the theory of OOP was developed. Concerning the degree of object orientation, the following distinctions can be made:
Languages called "pure" OO languages, because everything in them is treated consistently as an object, from primitives such as characters and punctuation, all the way up to whole classes, prototypes, blocks, modules, etc. They were designed specifically to facilitate, even enforce, OO methods. Examples: Ruby, Scala, Smalltalk, Eiffel, Emerald, JADE, Self, Raku.
Languages designed mainly for OO programming, but with some procedural elements. Examples: Java, Python, C++, C#, Delphi/Object Pascal, VB.NET.
Languages that are historically procedural languages, but have been extended with some OO features. Examples: PHP, Perl, Visual Basic (derived from BASIC), MATLAB, COBOL 2002, Fortran 2003, ABAP, Ada 95, Pascal.
Languages with most of the features of objects (classes, methods, inheritance), but in a distinctly original form. Examples: Oberon (Oberon-1 or Oberon-2).
Languages with abstract data type support which may be used to resemble OO programming, but without all features of object-orientation. This includes object-based and prototype-based languages. Examples: JavaScript, Lua, Modula-2, CLU.
Chameleon languages that support multiple paradigms, including OO. Tcl stands out among these for TclOO, a hybrid object system that supports both prototype-based programming and class-based OO.
OOP in dynamic languages
In recent years, object-oriented programming has become especially popular in dynamic programming languages. Python, PowerShell, Ruby and Groovy are dynamic languages built on OOP principles, while Perl and PHP have been adding object-oriented features since Perl 5 and PHP 4, and ColdFusion since version 6.
The Document Object Model of HTML, XHTML, and XML documents on the Internet has bindings to the popular JavaScript/ECMAScript language. JavaScript is perhaps the best known prototype-based programming language, which employs cloning from prototypes rather than inheriting from a class (contrast to class-based programming). Another scripting language that takes this approach is Lua.
OOP in a network protocol
The messages that flow between computers to request services in a client-server environment can be designed as the linearizations of objects defined by class objects known to both the client and the server. For example, a simple linearized object would consist of a length field, a code point identifying the class, and a data value. A more complex example would be a command consisting of the length and code point of the command and values consisting of linearized objects representing the command's parameters. Each such command must be directed by the server to an object whose class (or superclass) recognizes the command and is able to provide the requested service. Clients and servers are best modeled as complex object-oriented structures. Distributed Data Management Architecture (DDM) took this approach and used class objects to define objects at four levels of a formal hierarchy:
Fields defining the data values that form messages, such as their length, code point and data values.
Objects and collections of objects similar to what would be found in a Smalltalk program for messages and parameters.
Managers similar to IBM i Objects, such as a directory to files and files consisting of metadata and records. Managers conceptually provide memory and processing resources for their contained objects.
A client or server consisting of all the managers necessary to implement a full processing environment, supporting such aspects as directory services, security and concurrency control.
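As a hedged illustration of the field-level linearization described at the start of this section, the sketch below packs a length field, a class-identifying code point, and a data value; the two-byte layout and the code point value are invented for the example and are not DDM's actual wire format.

import struct

def linearize(code_point, payload: bytes) -> bytes:
    # The length field (2 bytes) covers the whole object:
    # length + code point + data value.
    length = 2 + 2 + len(payload)
    return struct.pack(">HH", length, code_point) + payload

def delinearize(buf: bytes):
    length, code_point = struct.unpack(">HH", buf[:4])
    return code_point, buf[4:length]

msg = linearize(0x1234, b"hello")   # 0x1234: made-up class code point
print(delinearize(msg))             # (4660, b'hello')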
The initial version of DDM defined distributed file services. It was later extended to be the foundation of Distributed Relational Database Architecture (DRDA).
Design patterns
Challenges of object-oriented design are addressed by several approaches. Most common is known as the design patterns codified by Gamma et al. More broadly, the term "design patterns" can be used to refer to any general, repeatable solution pattern to a commonly occurring problem in software design. Some of these commonly occurring problems have implications and solutions particular to object-oriented development.
Inheritance and behavioral subtyping
It is intuitive to assume that inheritance creates a semantic "is a" relationship, and thus to infer that objects instantiated from subclasses can always be safely used instead of those instantiated from the superclass. This intuition is unfortunately false in most OOP languages, in particular in all those that allow mutable objects. Subtype polymorphism as enforced by the type checker in OOP languages (with mutable objects) cannot guarantee behavioral subtyping in any context. Behavioral subtyping is undecidable in general, so it cannot be implemented by a program (compiler). Class or object hierarchies must be carefully designed, considering possible incorrect uses that cannot be detected syntactically. This issue is known as the Liskov substitution principle.
Gang of Four design patterns
Design Patterns: Elements of Reusable Object-Oriented Software is an influential book published in 1994 by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, often referred to humorously as the "Gang of Four". Along with exploring the capabilities and pitfalls of object-oriented programming, it describes 23 common programming problems and patterns for solving them.
As of April 2007, the book was in its 36th printing.
The book describes the following patterns:
Creational patterns (5): Factory method pattern, Abstract factory pattern, Singleton pattern, Builder pattern, Prototype pattern
Structural patterns (7): Adapter pattern, Bridge pattern, Composite pattern, Decorator pattern, Facade pattern, Flyweight pattern, Proxy pattern
Behavioral patterns (11): Chain-of-responsibility pattern, Command pattern, Interpreter pattern, Iterator pattern, Mediator pattern, Memento pattern, Observer pattern, State pattern, Strategy pattern, Template method pattern, Visitor pattern
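As one small, hedged illustration of the behavioral patterns listed above, a minimal Observer in Python (all names invented):

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # Push the event to every registered observer.
        for observer in self._observers:
            observer.update(event)

class Logger:
    def update(self, event):
        print(f"logged: {event}")

subject = Subject()
subject.attach(Logger())
subject.notify("state changed")   # -> logged: state changed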
Object-orientation and databases
Both object-oriented programming and relational database management systems (RDBMSs) are extremely common in software development. Since relational databases don't store objects directly (though some RDBMSs have object-oriented features to approximate this), there is a general need to bridge the two worlds. The problem of bridging object-oriented programming accesses and data patterns with relational databases is known as object-relational impedance mismatch. There are a number of approaches to cope with this problem, but no general solution without downsides. One of the most common approaches is object-relational mapping, as found in IDE languages such as Visual FoxPro and libraries such as Java Data Objects and Ruby on Rails' ActiveRecord.
There are also object databases that can be used to replace RDBMSs, but these have not been as technically and commercially successful as RDBMSs.
Real-world modeling and relationships
OOP can be used to associate real-world objects and processes with digital counterparts. However, not everyone agrees that OOP facilitates direct real-world mapping (see Criticism section) or that real-world mapping is even a worthy goal; Bertrand Meyer argues in Object-Oriented Software Construction that a program is not a model of the world but a model of some part of the world; "Reality is a cousin twice removed". At the same time, some principal limitations of OOP have been noted.
For example, the circle-ellipse problem is difficult to handle using OOP's concept of inheritance.
However, Niklaus Wirth (who popularized the adage now known as Wirth's law: "Software is getting slower more rapidly than hardware becomes faster") said of OOP in his paper, "Good Ideas through the Looking Glass", "This paradigm closely reflects the structure of systems 'in the real world', and it is therefore well suited to model complex systems with complex behaviours" (contrast KISS principle).
Steve Yegge and others noted that natural languages lack the OOP approach of strictly prioritizing things (objects/nouns) before actions (methods/verbs). This may cause OOP solutions to be more convoluted than their procedural counterparts.
OOP and control flow
OOP was developed to increase the reusability and maintainability of source code. Transparent representation of the control flow had no priority and was meant to be handled by a compiler. With the increasing relevance of parallel hardware and multithreaded coding, developing transparent control flow becomes more important, which is hard to achieve with OOP.
Responsibility- vs. data-driven design
Responsibility-driven design defines classes in terms of a contract; that is, a class should be defined around a responsibility and the information that it shares. This is contrasted by Wirfs-Brock and Wilkerson with data-driven design, where classes are defined around the data structures that must be held. The authors hold that responsibility-driven design is preferable.
SOLID and GRASP guidelines
SOLID is a mnemonic invented by Michael Feathers which spells out five software engineering design principles:
Single responsibility principle
Open/closed principle
Liskov substitution principle
Interface segregation principle
Dependency inversion principle
GRASP (General Responsibility Assignment Software Patterns) is another set of guidelines advocated by Craig Larman.
Criticism
The OOP paradigm has been criticised for a number of reasons, including not meeting its stated goals of reusability and modularity, and for overemphasizing one aspect of software design and modeling (data/objects) at the expense of other important aspects (computation/algorithms).
Luca Cardelli has claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take longer to compile, and that OOP languages have "extremely poor modularity properties with respect to class extension and modification", and tend to be extremely complex. The latter point is reiterated by Joe Armstrong, the principal inventor of Erlang.
A study by Potok et al. has shown no significant difference in productivity between OOP and procedural approaches.
Christopher J. Date stated that critical comparison of OOP to other technologies, relational in particular, is difficult because of the lack of an agreed-upon and rigorous definition of OOP; however, Date and Darwen have proposed a theoretical foundation for OOP that uses OOP as a kind of customizable type system to support RDBMS.
In an article, Lawrence Krubner claimed that compared to other languages (LISP dialects, functional languages, etc.) OOP languages have no unique strengths, and inflict a heavy burden of unneeded complexity.
Alexander Stepanov has compared object orientation unfavourably to generic programming.
Paul Graham has suggested that OOP's popularity within large companies is due to "large (and frequently changing) groups of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one programmer from "doing too much damage".
Leo Brodie has suggested a connection between the standalone nature of objects and a tendency to duplicate code in violation of the don't repeat yourself principle of software development.
Steve Yegge has also drawn unfavourable comparisons between OOP and functional programming.
Rich Hickey, creator of Clojure, described object systems as overly simplistic models of the real world. He emphasized the inability of OOP to model time properly, which is getting increasingly problematic as software systems become more concurrent.
Eric S. Raymond, a Unix programmer and open-source software advocate, has been critical of claims that present object-oriented programming as the "One True Solution", and has written that object-oriented programming languages tend to encourage thickly layered programs that destroy transparency. Raymond compares this unfavourably to the approach taken with Unix and the C programming language.
Rob Pike, a programmer involved in the creation of UTF-8 and Go, has called object-oriented programming "the Roman numerals of computing" and has said that OOP languages frequently shift the focus from data structures and algorithms to types. Furthermore, he cites an instance of a Java professor whose "idiomatic" solution to a problem was to create six new classes, rather than to simply use a lookup table.
Formal semantics
Objects are the run-time entities in an object-oriented system. They may represent a person, a place, a bank account, a table of data, or any item that the program has to handle.
There have been several attempts at formalizing the concepts used in object-oriented programming. The following concepts and constructs have been used as interpretations of OOP concepts:
coalgebraic data types
recursive types
encapsulated state
inheritance
records are the basis for understanding objects if function literals can be stored in fields (as in functional programming languages), but the actual calculi need to be considerably more complex to incorporate essential features of OOP. Several extensions of System F<: that deal with mutable objects have been studied; these allow both subtype polymorphism and parametric polymorphism (generics).
Attempts to find a consensus definition or theory behind objects have not proven very successful (however, see Abadi & Cardelli, A Theory of Objects for formal definitions of many OOP concepts and constructs), and the proposed definitions often diverge widely. For example, some definitions focus on mental activities, and some on program structuring. One of the simpler definitions is that OOP is the act of using "map" data structures or arrays that can contain functions and pointers to other maps, all with some syntactic and scoping sugar on top. Inheritance can be performed by cloning the maps (sometimes called "prototyping").
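A minimal sketch of that "maps with functions" reading, with all names invented: each object is a small table of name-to-function entries plus a link to the object it was cloned from, and method lookup simply walks that chain.

#include <stdio.h>
#include <string.h>

#define MAX_SLOTS 8

typedef struct Object Object;
typedef void (*Method)(Object *self);

struct Slot { const char *name; Method fn; };

struct Object {
    struct Slot slots[MAX_SLOTS];
    int         count;
    Object     *proto;   /* "inheritance": delegate lookups to the prototype */
};

static void set_method(Object *o, const char *name, Method fn) {
    o->slots[o->count].name = name;
    o->slots[o->count].fn   = fn;
    o->count++;
}

/* Walk the prototype chain until a slot with the given name is found. */
static Method lookup(Object *o, const char *name) {
    for (; o != NULL; o = o->proto)
        for (int i = 0; i < o->count; i++)
            if (strcmp(o->slots[i].name, name) == 0)
                return o->slots[i].fn;
    return NULL;
}

static void speak(Object *self) { (void)self; puts("generic noise"); }
static void bark(Object *self)  { (void)self; puts("woof"); }

int main(void) {
    Object animal = { .count = 0, .proto = NULL };
    set_method(&animal, "speak", speak);

    Object dog = { .count = 0, .proto = &animal };  /* "clone" of animal */
    lookup(&dog, "speak")(&dog);                    /* inherited: generic noise */

    set_method(&dog, "speak", bark);                /* override in the child map */
    lookup(&dog, "speak")(&dog);                    /* woof */
    return 0;
}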
See also
Comparison of programming languages (object-oriented programming)
Comparison of programming paradigms
Component-based software engineering
Design by contract
Object association
Object database
Object model reference
Object modeling language
Object-oriented analysis and design
Object-relational impedance mismatch (and The Third Manifesto)
Object-relational mapping
Systems
CADES
Common Object Request Broker Architecture (CORBA)
Distributed Component Object Model
Distributed Data Management Architecture
Jeroo
Modeling languages
IDEF4
Interface description language
Lepus3
UML
References
Further reading
External links
Introduction to Object Oriented Programming Concepts (OOP) and More by L.W.C. Nirosh
Discussion on Cons of OOP
OOP Concepts (Java Tutorials)
Programming paradigms
Norwegian inventions
Unified Extensible Firmware Interface
The Unified Extensible Firmware Interface (UEFI) is a publicly available specification that defines a software interface between an operating system and platform firmware. UEFI replaces the legacy Basic Input/Output System (BIOS) firmware interface originally present in all IBM PC-compatible personal computers, with most UEFI firmware implementations providing support for legacy BIOS services. UEFI can support remote diagnostics and repair of computers, even with no operating system installed.
Intel developed the original Extensible Firmware Interface (EFI) specification. Some of the EFI's practices and data formats mirror those of Microsoft Windows. In 2005, UEFI superseded EFI 1.10 (the final release of EFI). The Unified EFI Forum is the industry body that manages the UEFI specification.
History
The original motivation for EFI came during early development of the first Intel–HP Itanium systems in the mid-1990s. BIOS limitations (such as 16-bit real mode, 1 MB of addressable memory space, assembly language programming, and PC AT hardware) had become too restrictive for the larger server platforms Itanium was targeting. The effort to address these concerns began in 1998 and was initially called the Intel Boot Initiative; it was later renamed the Extensible Firmware Interface (EFI).
In July 2005, Intel ceased its development of the EFI specification at version 1.10, and contributed it to the Unified EFI Forum, which has developed the specification as the Unified Extensible Firmware Interface (UEFI). The original EFI specification remains owned by Intel, which exclusively provides licenses for EFI-based products, but the UEFI specification is owned by the UEFI Forum.
Version 2.0 of the UEFI specification was released on 31 January 2006. It added cryptography and security.
Version 2.1 of the UEFI specification was released on 7 January 2007. It added network authentication and the user interface architecture ('Human Interface Infrastructure' in UEFI).
The latest UEFI specification, version 2.9, was published in March 2021.
The first open source UEFI implementation, Tiano, was released by Intel in 2004. Tiano has since been superseded by EDK and EDK2, and is now maintained by the TianoCore community.
In December 2018, Microsoft announced Project Mu, a fork of TianoCore EDK2 used in Microsoft Surface and Hyper-V products. The project promotes the idea of Firmware as a Service.
In October 2018, Arm announced Arm ServerReady, a compliance certification program enabling generic off-the-shelf operating systems and hypervisors to run on Arm-based servers. The program requires the system firmware to comply with the Server Base Boot Requirements (SBBR), which mandate UEFI, ACPI and SMBIOS compliance. In October 2020, Arm announced the extension of the program to the edge and IoT market, renaming it Arm SystemReady. Arm SystemReady defined the Base Boot Requirements (BBR) specification, which currently provides three recipes, two of which are related to UEFI: SBBR, which requires UEFI, ACPI and SMBIOS compliance suitable for enterprise-level operating environments such as Windows, Red Hat Enterprise Linux and VMware ESXi; and EBBR, which requires compliance with a set of UEFI interfaces as defined in the Embedded Base Boot Requirements, suitable for embedded environments such as Yocto. Many Linux and BSD distributions support both recipes.
Advantages
The interface defined by the EFI specification includes data tables that contain platform information, and boot and runtime services that are available to the OS loader and OS. UEFI firmware provides several technical advantages over a traditional BIOS system:
Ability to boot a disk containing large partitions (over 2 TB) with a GUID Partition Table (GPT)
Flexible pre-OS environment, including network capability, a GUI, and multi-language support
32-bit (for example IA-32, ARM32) or 64-bit (for example x64, AArch64) pre-OS environment
C language programming
Modular design
Backward and forward compatibility
Compatibility
Processor compatibility
As of version 2.5, processor bindings exist for Itanium, x86, x86-64, ARM (AArch32) and ARM64 (AArch64). Only little-endian processors can be supported. Unofficial UEFI support is under development for POWERPC64 by implementing TianoCore on top of OPAL, the OpenPOWER abstraction layer, running in little-endian mode. Similar projects exist for MIPS and RISC-V. As of UEFI 2.7, RISC-V processor bindings have been officially established for 32-, 64- and 128-bit modes.
Standard PC BIOS is limited to a 16-bit processor mode and 1 MB of addressable memory space, resulting from the design based on the IBM 5150 that used a 16-bit Intel 8088 processor. In comparison, the processor mode in a UEFI environment can be either 32-bit (x86-32, AArch32) or 64-bit (x86-64, Itanium, and AArch64). 64-bit UEFI firmware implementations support long mode, which allows applications in the preboot environment to use 64-bit addressing to get direct access to all of the machine's memory.
UEFI requires the firmware and operating system loader (or kernel) to be size-matched; that is, a 64-bit UEFI firmware implementation can load only a 64-bit operating system (OS) boot loader or kernel (unless the CSM-based legacy boot is used), and the same applies to 32-bit. After the system transitions from "Boot Services" to "Runtime Services", the operating system kernel takes over. At this point, the kernel can change processor modes if it desires, but doing so bars usage of the runtime services (unless the kernel switches back again). As of version 3.15, the Linux kernel supports booting 64-bit kernels on 32-bit UEFI firmware implementations running on x86-64 CPUs, provided the UEFI boot loader supports the UEFI handover protocol. The UEFI handover protocol deduplicates the UEFI initialization code between the kernel and UEFI boot loaders, leaving the initialization to be performed only by the Linux kernel's UEFI boot stub.
Disk device compatibility
In addition to the standard PC disk partition scheme that uses a master boot record (MBR), UEFI also works with the GUID Partition Table (GPT) partitioning scheme, which is free from many of the limitations of MBR. In particular, the MBR limits on the number and size of disk partitions (up to four primary partitions per disk, and up to 2 TB per disk) are relaxed. More specifically, GPT allows for a maximum disk and partition size of 8 ZiB.
Linux
Support for GPT in Linux is enabled by turning on the option CONFIG_EFI_PARTITION (EFI GUID Partition Support) during kernel configuration. This option allows Linux to recognize and use GPT disks after the system firmware passes control over the system to Linux.
For reverse compatibility, Linux can use GPT disks in BIOS-based systems for both data storage and booting, as both GRUB 2 and Linux are GPT-aware. Such a setup is usually referred to as BIOS-GPT. As GPT incorporates the protective MBR, a BIOS-based computer can boot from a GPT disk using a GPT-aware boot loader stored in the protective MBR's bootstrap code area. In the case of GRUB, such a configuration requires a BIOS boot partition for GRUB to embed its second-stage code, due to the absence of the post-MBR gap in GPT-partitioned disks (which is taken over by the GPT's Primary Header and Primary Partition Table). Commonly 1 MB in size, this partition has its own Globally Unique Identifier (GUID) in the GPT scheme and is used by GRUB only in BIOS-GPT setups. From GRUB's perspective, no such partition type exists in the case of MBR partitioning. This partition is not required if the system is UEFI-based, because no embedding of the second-stage code is needed in that case.
UEFI systems can access GPT disks and boot directly from them, which allows Linux to use UEFI boot methods. Booting Linux from GPT disks on UEFI systems involves creation of an EFI system partition (ESP), which contains UEFI applications such as bootloaders, operating system kernels, and utility software. Such a setup is usually referred to as UEFI-GPT, and the ESP is recommended to be at least 512 MB in size and formatted with a FAT32 filesystem for maximum compatibility.
For backward compatibility, most UEFI implementations also support booting from MBR-partitioned disks, through the Compatibility Support Module (CSM) that provides legacy BIOS compatibility. In that case, booting Linux on UEFI systems is the same as on legacy BIOS-based systems.
Microsoft Windows
The 64-bit versions of Windows Vista SP1 and later and 32-bit versions of Windows 8, 8.1, and 10 can boot from a GPT disk which is larger than 2 TB.
Features
Services
EFI defines two types of services: boot services and runtime services. Boot services are available only while the firmware owns the platform (i.e., before the ExitBootServices() call), and they include text and graphical consoles on various devices, and bus, block and file services. Runtime services are still accessible while the operating system is running; they include services such as date, time and NVRAM access.
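The handoff between the two service classes is visible in the sequence an OS loader performs before taking over the machine. The following hedged sketch uses EDK2 conventions (the gBS boot services table); the function name ExitToRuntime and the buffer headroom heuristic are illustrative assumptions, not mandated by the specification.

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Library/MemoryAllocationLib.h>

EFI_STATUS ExitToRuntime(IN EFI_HANDLE ImageHandle)
{
    UINTN                 MapSize = 0;
    UINTN                 MapKey;
    UINTN                 DescSize;
    UINT32                DescVersion;
    EFI_MEMORY_DESCRIPTOR *Map = NULL;
    EFI_STATUS            Status;

    // A first call with a zero-sized buffer reports the required size.
    gBS->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);
    MapSize += 2 * DescSize;   // headroom: the allocation below may grow the map
    Map = AllocatePool(MapSize);
    if (Map == NULL) {
        return EFI_OUT_OF_RESOURCES;
    }

    Status = gBS->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);
    if (EFI_ERROR(Status)) {
        return Status;
    }

    // Once this succeeds, boot services are gone; only runtime services remain.
    return gBS->ExitBootServices(ImageHandle, MapKey);
}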
Graphics Output Protocol (GOP) services
The Graphics Output Protocol (GOP) provides runtime services; see also the Graphics features section below. The operating system is permitted to directly write to the framebuffer provided by GOP during runtime mode.
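As a hedged sketch of what that permission means in practice, the fragment below locates the GOP through the boot services table and writes a single pixel directly into the linear framebuffer; it assumes EDK2-style libraries and a 32 bits-per-pixel video mode.

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/GraphicsOutput.h>

EFI_STATUS EFIAPI UefiMain(IN EFI_HANDLE ImageHandle, IN EFI_SYSTEM_TABLE *SystemTable)
{
    EFI_GRAPHICS_OUTPUT_PROTOCOL *Gop;
    EFI_STATUS Status;

    // Protocols are discovered at run time rather than linked against.
    Status = gBS->LocateProtocol(&gEfiGraphicsOutputProtocolGuid,
                                 NULL, (VOID **)&Gop);
    if (EFI_ERROR(Status)) {
        return Status;
    }

    // Assumes a linear 32-bpp framebuffer; production code would check
    // Gop->Mode->Info->PixelFormat first.
    UINT32 *FrameBuffer = (UINT32 *)(UINTN)Gop->Mode->FrameBufferBase;
    UINT32 Stride = Gop->Mode->Info->PixelsPerScanLine;

    FrameBuffer[100 * Stride + 100] = 0x00FF0000;  // one red pixel at (100,100)
    return EFI_SUCCESS;
}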
UEFI Memory map services
SMM services
ACPI services
SMBIOS services
Device tree services (for RISC processors)
Variable services
UEFI variables provide a way to store data, in particular non-volatile data. Some UEFI variables are shared between platform firmware and operating systems. Variable namespaces are identified by GUIDs, and variables are key/value pairs. For example, UEFI variables can be used to keep crash messages in NVRAM after a crash for the operating system to retrieve after a reboot.
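A minimal sketch of the crash-message idea, using the EDK2 runtime services table; the variable name CrashNote, the helper name and the vendor GUID are all invented for illustration.

#include <Uefi.h>
#include <Library/UefiRuntimeServicesTableLib.h>

// Hypothetical vendor GUID: variables are namespaced by GUID, so each
// vendor generates its own rather than reusing a well-known one.
EFI_GUID gExampleVendorGuid =
    { 0x12345678, 0x1234, 0x5678, { 0x9a, 0xbc, 0xde, 0xf0, 0x12, 0x34, 0x56, 0x78 } };

EFI_STATUS SaveCrashNote(VOID)
{
    UINT8 Note[] = "panic in driver X";

    // Non-volatile + runtime access: survives reboot and stays visible to the OS.
    return gRT->SetVariable(L"CrashNote", &gExampleVendorGuid,
                            EFI_VARIABLE_NON_VOLATILE |
                            EFI_VARIABLE_BOOTSERVICE_ACCESS |
                            EFI_VARIABLE_RUNTIME_ACCESS,
                            sizeof(Note), Note);
}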
Time services
UEFI provides time services. Time services include support for time zone and daylight saving fields, which allow the hardware real-time clock to be set to local time or UTC. On machines using a PC-AT real-time clock, by default the hardware clock still has to be set to local time for compatibility with BIOS-based Windows, unless a recent version is used and an entry in the Windows registry is set to indicate the use of UTC.
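A short sketch of reading the clock through the runtime services (EDK2 conventions assumed; PrintClock is an invented helper name):

#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/UefiRuntimeServicesTableLib.h>

EFI_STATUS PrintClock(VOID)
{
    EFI_TIME   Time;
    EFI_STATUS Status = gRT->GetTime(&Time, NULL);

    if (EFI_ERROR(Status)) {
        return Status;
    }
    // TimeZone is minutes from UTC, or EFI_UNSPECIFIED_TIMEZONE if unset,
    // which is how the local-time-vs-UTC ambiguity above manifests.
    Print(L"%04u-%02u-%02u %02u:%02u:%02u (TimeZone=%d)\n",
          Time.Year, Time.Month, Time.Day,
          Time.Hour, Time.Minute, Time.Second, Time.TimeZone);
    return EFI_SUCCESS;
}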
Applications
Beyond loading an OS, UEFI can run UEFI applications, which reside as files on the EFI System Partition. They can be executed from the UEFI Shell, by the firmware's boot manager, or by other UEFI applications. UEFI applications can be developed and installed independently of the original equipment manufacturers (OEMs).
A type of UEFI application is an OS boot loader, such as GRUB, rEFInd, Gummiboot, or Windows Boot Manager, which loads some OS files into memory and executes them. Also, an OS boot loader can provide a user interface to allow the selection of another UEFI application to run. Utilities like the UEFI Shell are also UEFI applications.
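At its core, chain-loading another UEFI application rests on two boot services, LoadImage() and StartImage(). A hedged sketch, assuming the device path of the target application has already been constructed elsewhere:

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Protocol/DevicePath.h>

EFI_STATUS ChainLoad(IN EFI_HANDLE ParentImage,
                     IN EFI_DEVICE_PATH_PROTOCOL *DevicePath)
{
    EFI_HANDLE Child;
    EFI_STATUS Status;

    // Load the image from the device path into memory...
    Status = gBS->LoadImage(FALSE, ParentImage, DevicePath, NULL, 0, &Child);
    if (EFI_ERROR(Status)) {
        return Status;
    }
    // ...then transfer control; this returns only when the child exits.
    return gBS->StartImage(Child, NULL, NULL);
}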
Protocols
EFI defines protocols as a set of software interfaces used for communication between two binary modules. All EFI drivers must provide services to others via protocols. EFI protocols are similar to BIOS interrupt calls.
Device drivers
In addition to standard instruction set architecture-specific device drivers, EFI provides for an ISA-independent device driver stored in non-volatile memory as EFI byte code (EBC). System firmware has an interpreter for EBC images. In that sense, EBC is analogous to Open Firmware, the ISA-independent firmware used in PowerPC-based Apple Macintosh and Sun Microsystems SPARC computers, among others.
Some architecture-specific (non-EFI Byte Code) EFI drivers for some device types can have interfaces for use by the OS. This allows the OS to rely on EFI for drivers to perform basic graphics and network functions before, and if, operating-system-specific drivers are loaded.
In other cases, an EFI driver can be a filesystem driver that allows booting from other types of disk volumes. Examples include efifs for 37 file systems (based on GRUB2 code), used by Rufus for chain-loading NTFS ESPs.
Graphics features
The EFI 1.0 specification defined a UGA (Universal Graphic Adapter) protocol as a way to support graphics features. UEFI did not include UGA and replaced it with GOP (Graphics Output Protocol).
UEFI 2.1 defined a "Human Interface Infrastructure" (HII) to manage user input, localized strings, fonts, and forms (in the HTML sense). These enable original equipment manufacturers (OEMs) or independent BIOS vendors (IBVs) to design graphical interfaces for pre-boot configuration.
Most early UEFI firmware implementations were console-based. Today many UEFI firmware implementations are GUI-based.
EFI system partition
An EFI system partition, often abbreviated to ESP, is a data storage device partition that is used in computers adhering to the UEFI specification. Accessed by the UEFI firmware when a computer is powered up, it stores UEFI applications and the files these applications need to run, including operating system boot loaders. Supported partition table schemes include MBR and GPT, as well as El Torito volumes on optical discs. For use on ESPs, UEFI defines a specific version of the FAT file system, which is maintained as part of the UEFI specification and independently from the original FAT specification, encompassing the FAT32, FAT16 and FAT12 file systems. The ESP also provides space for a boot sector as part of the backward BIOS compatibility.
Booting
UEFI booting
Unlike the legacy PC BIOS, UEFI does not rely on boot sectors, defining instead a boot manager as part of the UEFI specification. When a computer is powered on, the boot manager checks the boot configuration and, based on its settings, executes the specified OS boot loader or operating system kernel (usually the boot loader). The boot configuration is defined by variables stored in NVRAM, including variables that indicate the file system paths to OS loaders or OS kernels.
OS boot loaders can be automatically detected by UEFI, which enables easy booting from removable devices such as USB flash drives. This automated detection relies on standardized file paths to the OS boot loader, with the path varying depending on the computer architecture. The format of the file path is defined as <EFI_SYSTEM_PARTITION>/BOOT/BOOT<MACHINE_TYPE_SHORT_NAME>.EFI; for example, the file path to the OS loader on an x86-64 system is <EFI_SYSTEM_PARTITION>/BOOT/BOOTX64.EFI, and BOOTAA64.EFI is used on the ARM64 architecture.
Booting UEFI systems from GPT-partitioned disks is commonly called UEFI-GPT booting. Despite the fact that the UEFI specification requires MBR partition tables to be fully supported, some UEFI firmware implementations immediately switch to BIOS-based CSM booting depending on the type of the boot disk's partition table, effectively preventing UEFI booting from being performed from an EFI System Partition on MBR-partitioned disks. Such a boot scheme is commonly called UEFI-MBR.
It is also common for a boot manager to have a textual user interface so the user can select the desired OS (or setup utility) from a list of available boot options.
CSM booting
To ensure backward compatibility, UEFI firmware implementations on PC-class machines could support booting in legacy BIOS mode from MBR-partitioned disks through the Compatibility Support Module (CSM) that provides legacy BIOS compatibility. In this scenario, booting is performed in the same way as on legacy BIOS-based systems, by ignoring the partition table and relying on the content of a boot sector.
BIOS-style booting from MBR-partitioned disks is commonly called BIOS-MBR, regardless of it being performed on UEFI or legacy BIOS-based systems. Furthermore, booting legacy BIOS-based systems from GPT disks is also possible, and such a boot scheme is commonly called BIOS-GPT.
The Compatibility Support Module allows legacy operating systems and some legacy option ROMs that do not support UEFI to still be used. It also provides required legacy System Management Mode (SMM) functionality, called CompatibilitySmm, as an addition to features provided by the UEFI SMM. An example of such a legacy SMM functionality is providing USB legacy support for keyboard and mouse, by emulating their classic PS/2 counterparts.
In November 2017, Intel announced that it planned to phase out support for CSM by 2020.
Network booting
The UEFI specification includes support for booting over network via the Preboot eXecution Environment (PXE). PXE booting network protocols include Internet Protocol (IPv4 and IPv6), User Datagram Protocol (UDP), Dynamic Host Configuration Protocol (DHCP), Trivial File Transfer Protocol (TFTP) and iSCSI.
OS images can be remotely stored on storage area networks (SANs), with Internet Small Computer System Interface (iSCSI) and Fibre Channel over Ethernet (FCoE) as supported protocols for accessing the SANs.
Version 2.5 of the UEFI specification adds support for accessing boot images over the HTTP protocol.
Secure Boot
The UEFI 2.3.1 Errata C specification (or higher) defines a protocol known as Secure Boot, which can secure the boot process by preventing the loading of UEFI drivers or OS boot loaders that are not signed with an acceptable digital signature. The mechanical details of how precisely these drivers are to be signed are not specified. When Secure Boot is enabled, it is initially placed in "setup" mode, which allows a public key known as the "platform key" (PK) to be written to the firmware. Once the key is written, Secure Boot enters "User" mode, where only UEFI drivers and OS boot loaders signed with the platform key can be loaded by the firmware. Additional "key exchange keys" (KEK) can be added to a database stored in memory to allow other certificates to be used, but they must still have a connection to the private portion of the platform key. Secure Boot can also be placed in "Custom" mode, where additional public keys can be added to the system that do not match the private key.
Secure Boot is supported by Windows 8 and 8.1, Windows Server 2012 and 2012 R2, Windows 10, Windows Server 2016, 2019, and 2022, and Windows 11, VMware vSphere 6.5 and a number of Linux distributions including Fedora (since version 18), openSUSE (since version 12.3), RHEL (since version 7), CentOS (since version 7), Debian (since version 10), and Ubuntu (since version 12.04.2). FreeBSD support is in a planning stage.
UEFI shell
UEFI provides a shell environment, which can be used to execute other UEFI applications, including UEFI boot loaders. Apart from that, commands available in the UEFI shell can be used for obtaining various other information about the system or the firmware, including getting the memory map (memmap), modifying boot manager variables (bcfg), running partitioning programs (diskpart), loading UEFI drivers, and editing text files (edit).
Source code for a UEFI shell can be downloaded from Intel's TianoCore UDK/EDK2 project. A pre-built ShellBinPkg is also available. Shell v2 works best in UEFI 2.3+ systems and is recommended over Shell v1 in those systems. Shell v1 should work in all UEFI systems.
Methods used for launching the UEFI shell depend on the manufacturer and model of the system motherboard. Some provide a direct option in the firmware setup for launching the shell; in that case, a compiled x86-64 version of the shell needs to be made available as <EFI_SYSTEM_PARTITION>/SHELLX64.EFI. Other systems have an embedded UEFI shell that can be launched by an appropriate key-press combination. For other systems, the solution is either creating an appropriate USB flash drive or manually adding (bcfg) a boot option associated with the compiled version of the shell.
Commands
The following is a list of commands supported by the EFI shell.
help
guid
set
alias
dh
unload
map
mount
cd
echo
pause
ls
mkdir
mode
cp
comp
rm
memmap
type
dmpstore
load
ver
err
time
date
stall
reset
vol
attrib
cls
bcfg
edit
Edd30
dblk
pci
mm
mem
EddDebug
Extensions
Extensions to UEFI can be loaded from virtually any non-volatile storage device attached to the computer. For example, an original equipment manufacturer (OEM) can distribute systems with an EFI system partition on the hard drive, which would add additional functions to the standard UEFI firmware stored on the motherboard's ROM.
UEFI Capsule
UEFI Capsule defines a Firmware-to-OS firmware update interface, marketed as modern and secure. Windows 8, Windows 8.1, Windows 10 and Fwupd for Linux support the UEFI Capsule.
Hardware
Like BIOS, UEFI initializes and tests system hardware components (e.g. Memory training, PCIe link training, USB link training), and then loads the boot loader from mass storage device or network booting. In x86 systems, the UEFI firmware is usually stored in the NOR flash chip of the motherboard.
UEFI classes
UEFI machines can have one of the following "classes", which were used to help ease the transition to UEFI. Intel ended Legacy BIOS support in 2020. Starting from the 10th Gen Intel Core, Intel no longer provides a Legacy Video BIOS for the iGPU (Intel Graphics Technology). Legacy boot requires a Legacy Video BIOS, which can still be provided by a video card.
Class 0: Legacy BIOS
Class 1: UEFI in CSM-only mode (i.e. no UEFI booting)
Class 2: UEFI with CSM
Class 3: UEFI without CSM
Class 3+: UEFI with Secure Boot Enabled
Boot stages
SEC – Security Phase
This is the first stage of the UEFI boot but may have platform specific binary code that precedes it. (e.g., Intel ME, AMD PSP, CPU microcode). It consists of minimal code written in assembly language for the specific architecture. It initializes a temporary memory (often CPU cache as RAM) and serves as the system's software root of trust with the option of verifying PEI before hand-off.
PEI – Pre-EFI Initialization
The second stage of UEFI boot consists of a dependency-aware dispatcher that loads and runs PEI modules (PEIMs) to handle early hardware initialization tasks such as main memory initialization and firmware recovery operations. Additionally, it is responsible for discovery of the current boot mode and handling many ACPI S0ix/ACPI S3 operations. In the case of ACPI S0ix/ACPI S3 resume, it is responsible for restoring many hardware registers to a pre-sleep state. PEI also uses CPU cache as RAM.
DXE – Driver Execution Environment
This stage consists of C modules and a dependency-aware dispatcher. With main memory now available, the CPU, chipset, mainboard and boot devices are initialized in DXE and BDS.
BDS – Boot Device Select
BDS is a part of the DXE. In this stage, boot devices are initialized, UEFI drivers or Option ROMs of PCI devices are executed according to system configuration, and boot options are processed.
TSL – Transient System Load
This is the stage between boot device selection and hand-off to the OS. At this point one may enter the UEFI shell, or execute a UEFI application such as the OS boot loader.
RT – Runtime
The UEFI hands off to the operating system (OS) after ExitBootServices() is executed. A UEFI-compatible OS is now responsible for exiting boot services, triggering the firmware to unload all no-longer-needed code and data, leaving only runtime services code/data, e.g. SMM and ACPI. A typical modern OS will prefer to use its own programs (such as kernel drivers) to control hardware devices.
When a legacy OS is used, CSM will handle this call ensuring the system is compatible with legacy BIOS expectations.
Implementation and adoption
Intel EFI
Intel's implementation of EFI is the Intel Platform Innovation Framework, codenamed Tiano. Tiano runs on Intel's XScale, Itanium, x86-32 and x86-64 processors, and is proprietary software, although a portion of the code has been released under the BSD license or Eclipse Public License (EPL) as TianoCore. TianoCore can be used as a payload for coreboot.
Phoenix Technologies' implementation of UEFI is branded as SecureCore Technology (SCT). American Megatrends offers its own UEFI firmware implementation known as Aptio, while Insyde Software offers InsydeH2O.
In December 2018, Microsoft released an open source version of its TianoCore EDK2-based UEFI implementation from the Surface line, Project Mu.
Das U-Boot
An implementation of the UEFI API was introduced into the Universal Boot Loader (Das U-Boot) in 2017. On the ARMv8 architecture Linux distributions use the U-Boot UEFI implementation in conjunction with GNU GRUB for booting (e.g. SUSE Linux), the same holds true for OpenBSD. For booting from iSCSI iPXE can be used as a UEFI application loaded by U-Boot.
Platforms using EFI/UEFI
Intel's first Itanium workstations and servers, released in 2000, implemented EFI 1.02.
Hewlett-Packard's first Itanium 2 systems, released in 2002, implemented EFI 1.10; they were able to boot Windows, Linux, FreeBSD and HP-UX; OpenVMS added UEFI capability in June 2003.
In January 2006, Apple Inc. shipped its first Intel-based Macintosh computers. These systems used EFI instead of Open Firmware, which had been used on its previous PowerPC-based systems. On 5 April 2006, Apple first released Boot Camp, which produces a Windows drivers disk and a non-destructive partitioning tool to allow the installation of Windows XP or Vista without requiring a reinstallation of Mac OS X. A firmware update was also released that added BIOS compatibility to its EFI implementation. Subsequent Macintosh models shipped with the newer firmware.
During 2005, more than one million Intel systems shipped with Intel's implementation of UEFI. New mobile, desktop and server products, using Intel's implementation of UEFI, started shipping in 2006. For instance, boards that use the Intel 945 chipset series use Intel's UEFI firmware implementation.
Since 2005, EFI has also been implemented on non-PC architectures, such as embedded systems based on XScale cores.
The EDK (EFI Developer Kit) includes an NT32 target, which allows EFI firmware and EFI applications to run within a Windows application. However, no direct hardware access is allowed by EDK NT32, which means only a subset of EFI applications and drivers can be executed on the EDK NT32 target.
In 2008, more x86-64 systems adopted UEFI. While many of these systems still allowed booting only BIOS-based OSes via the Compatibility Support Module (CSM) (thus not appearing to the user to be UEFI-based), other systems started to allow booting UEFI-based OSes. Examples include the IBM x3450 server, MSI motherboards with ClickBIOS, and HP EliteBook notebook PCs.
In 2009, IBM shipped System x machines (x3550 M2, x3650 M2, iDataPlex dx360 M2) and BladeCenter HS22 with UEFI capability. Dell shipped PowerEdge T610, R610, R710, M610 and M710 servers with UEFI capability. More commercially available systems are mentioned in a UEFI whitepaper.
In 2011, major vendors (such as ASRock, Asus, Gigabyte, and MSI) launched several consumer-oriented motherboards using the Intel 6-series LGA 1155 chipset and AMD 9 Series AM3+ chipsets with UEFI.
With the release of Windows 8 in October 2012, Microsoft's certification requirements now require that computers include firmware that implements the UEFI specification. Furthermore, if the computer supports the "Connected Standby" feature of Windows 8 (which allows devices to have power management comparable to smartphones, with an almost instantaneous return from standby mode), then the firmware is not permitted to contain a Compatibility Support Module (CSM). As such, systems that support Connected Standby are incapable of booting Legacy BIOS operating systems.
In October 2017, Intel announced that it would remove legacy PC BIOS support from all its products by 2020, in favor of UEFI Class 3.
Operating systems
An operating system that can be booted from a (U)EFI is called a (U)EFI-aware operating system, as defined by the (U)EFI specification. Here the term booted from a (U)EFI means directly booting the system using a (U)EFI operating system loader stored on any storage device. The default location for the operating system loader is <EFI_SYSTEM_PARTITION>/BOOT/BOOT<MACHINE_TYPE_SHORT_NAME>.EFI, where the short name of the machine type can be IA32, X64, IA64, ARM or AA64. Some operating system vendors may have their own boot loaders, and they may also change the default boot location.
The Linux kernel has been able to use EFI at boot time since the early 2000s, using the elilo EFI boot loader or, more recently, EFI versions of GRUB. GRUB+Linux also supports booting from a GUID partition table without UEFI. The distribution Ubuntu added support for UEFI Secure Boot as of version 12.10. Furthermore, the Linux kernel can be compiled with the option to run as an EFI bootloader on its own through the EFI boot stub feature.
HP-UX has used (U)EFI as its boot mechanism on IA-64 systems since 2002.
OpenVMS has used EFI on IA-64 since its initial evaluation release in December 2003, and for production releases since January 2005. The x86-64 port of OpenVMS also uses UEFI to boot the operating system.
Apple uses EFI for its line of Intel-based Macs. Mac OS X v10.4 Tiger and Mac OS X v10.5 Leopard implement EFI v1.10 in 32-bit mode even on newer 64-bit CPUs, but full support arrived with OS X v10.8 Mountain Lion.
The Itanium versions of Windows 2000 (Advanced Server Limited Edition and Datacenter Server Limited Edition) implemented EFI 1.10 in 2002. MS Windows Server 2003 for IA-64, MS Windows XP 64-bit Edition and Windows 2000 Advanced Server Limited Edition, all of which are for the Intel Itanium family of processors, implement EFI, a requirement of the platform through the DIG64 specification.
Microsoft introduced UEFI for x64 Windows operating systems with Windows Vista SP1 and Windows Server 2008; however, only UGA (Universal Graphic Adapter) 1.1 or legacy BIOS INT 10h output is supported, and the Graphics Output Protocol (GOP) is not. Therefore, PCs running 64-bit versions of Windows Vista SP1, Windows Vista SP2, Windows 7, Windows Server 2008 and Windows Server 2008 R2 are compatible with UEFI Class 2. 32-bit UEFI was originally not supported, since vendors had little interest in producing native 32-bit UEFI firmware because of the mainstream status of 64-bit computing. Windows 8 finally introduced further optimizations for UEFI systems, including Graphics Output Protocol (GOP) support, a faster startup, 32-bit UEFI support, and Secure Boot support. Microsoft began requiring UEFI to run Windows with Windows 11.
On 5 March 2013, the FreeBSD Foundation awarded a grant to a developer seeking to add UEFI support to the FreeBSD kernel and bootloader. The changes were initially stored in a discrete branch of the FreeBSD source code, but were merged into the mainline source on 4 April 2014 (revision 264095); the changes include support in the installer as well. UEFI boot support for amd64 first appeared in FreeBSD 10.1 and for arm64 in FreeBSD 11.0.
Oracle Solaris 11.1 and later support UEFI boot for x86 systems with UEFI firmware version 2.1 or later. GRUB 2 is used as the boot loader on x86.
OpenBSD 5.9 introduced UEFI boot support for 64-bit x86 systems using its own custom loader, OpenBSD 6.0 extended that support to include ARMv7.
Use of UEFI with virtualization
HP Integrity Virtual Machines provides UEFI boot on HP Integrity Servers. It also provides a virtualized UEFI environment for the guest UEFI-aware OSes.
Intel hosts an Open Virtual Machine Firmware project on SourceForge.
VMware Fusion 3 software for Mac OS X can boot Mac OS X Server virtual machines using UEFI.
VMware Workstation prior to version 11 unofficially supports UEFI, but it must be manually enabled by editing the .vmx file. VMware Workstation version 11 and above supports UEFI, independently of whether the physical host system is UEFI-based. VMware Workstation 14 (and accordingly, Fusion 10) adds support for the Secure Boot feature of UEFI.
The vSphere ESXi 5.0 hypervisor officially supports UEFI. Version 6.5 adds support for Secure Boot.
VirtualBox has implemented UEFI since version 3.1, but its use is limited to Unix/Linux operating systems and Windows 8 and later (it does not work with Windows Vista x64 or Windows 7 x64).
QEMU/KVM can be used with the Open Virtual Machine Firmware (OVMF) provided by TianoCore.
The VMware ESXi version 5 hypervisor, part of VMware vSphere, supports virtualized UEFI as an alternative to the legacy PC BIOS inside a virtual machine.
The second generation of the Microsoft Hyper-V virtual machine supports virtualized UEFI.
Google Cloud Platform Shielded VMs support virtualized UEFI to enable Secure Boot.
Applications development
EDK2 Application Development Kit (EADK) makes it possible to use standard C library functions in UEFI applications. EADK can be freely downloaded from Intel's TianoCore UDK/EDK2 SourceForge project. As an example, a port of the Python interpreter is made available as a UEFI application by using the EADK. The development has moved to GitHub since UDK2015.
A minimalistic "hello, world" C program written using EADK looks similar to its usual C counterpart:
#include <Uefi.h>
#include <Library/UefiLib.h>
#include <Library/ShellCEntryLib.h>
EFI_STATUS EFIAPI ShellAppMain(IN UINTN Argc, IN CHAR16 **Argv)
{
Print(L"hello, world\n");
return EFI_SUCCESS;
}
Criticism
Numerous digital rights activists have protested against UEFI.
Ronald G. Minnich, a co-author of coreboot, and Cory Doctorow, a digital rights activist, have criticized UEFI as an attempt to remove the ability of the user to truly control the computer. It does not solve the BIOS's long-standing problems of requiring two different drivers—one for the firmware and one for the operating system—for most hardware.
Open-source project TianoCore also provides UEFI interfaces. TianoCore lacks the specialized drivers that initialize chipset functions, which are instead provided by coreboot, of which TianoCore is one of many payload options. The development of coreboot requires cooperation from chipset manufacturers to provide the specifications needed to develop initialization drivers.
Secure Boot
In 2011, Microsoft announced that computers certified to run its Windows 8 operating system had to ship with Microsoft's public key enrolled and Secure Boot enabled. Following the announcement, the company was accused by critics and free software/open source advocates (including the Free Software Foundation) of trying to use the Secure Boot functionality of UEFI to hinder or outright prevent the installation of alternative operating systems such as Linux. Microsoft denied that the Secure Boot requirement was intended to serve as a form of lock-in, and clarified its requirements by stating that x86-based systems certified for Windows 8 must allow Secure Boot to enter custom mode or be disabled, but not on systems using the ARM architecture. Windows 10 allows OEMs to decide whether or not Secure Boot can be managed by users of their x86 systems.
Other developers raised concerns about the legal and practical issues of implementing support for Secure Boot on Linux systems in general. Former Red Hat developer Matthew Garrett noted that conditions in the GNU General Public License version 3 may prevent the use of the GNU GRand Unified Bootloader without a distribution's developer disclosing the private key (however, the Free Software Foundation has since clarified its position, assuring that the responsibility to make keys available was held by the hardware manufacturer), and that it would also be difficult for advanced users to build custom kernels that could function with Secure Boot enabled without self-signing them. Other developers suggested that signed builds of Linux with another key could be provided, but noted that it would be difficult to persuade OEMs to ship their computers with the required key alongside the Microsoft key.
Several major Linux distributions have developed different implementations for Secure Boot. Garrett himself developed a minimal bootloader known as a shim, which is a precompiled, signed bootloader that allows the user to individually trust keys provided by Linux distributions. Ubuntu 12.10 uses an older version of shim pre-configured for use with Canonical's own key that verifies only the bootloader and allows unsigned kernels to be loaded; developers believed that the practice of signing only the bootloader is more feasible, since a trusted kernel is effective at securing only the user space, and not the pre-boot state for which Secure Boot is designed to add protection. That also allows users to build their own kernels and use custom kernel modules as well, without the need to reconfigure the system. Canonical also maintains its own private key to sign installations of Ubuntu pre-loaded on certified OEM computers that run the operating system, and plans to enforce a Secure Boot requirement as well, requiring both a Canonical key and a Microsoft key (for compatibility reasons) to be included in their firmware. Fedora also uses shim, but requires that both the kernel and its modules be signed as well.
It has been disputed whether the operating system kernel and its modules must be signed as well; while the UEFI specifications do not require it, Microsoft has asserted that their contractual requirements do, and that it reserves the right to revoke any certificates used to sign code that can be used to compromise the security of the system. In Windows, only WHQL-signed kernel drivers are allowed if Secure Boot is enabled. In February 2013, another Red Hat developer attempted to submit a patch to the Linux kernel that would allow it to parse Microsoft's Authenticode signing using a master X.509 key embedded in PE files signed by Microsoft. However, the proposal was criticized by Linux creator Linus Torvalds, who attacked Red Hat for supporting Microsoft's control over the Secure Boot infrastructure.
On 26 March 2013, the Spanish free software development group Hispalinux filed a formal complaint with the European Commission, contending that Microsoft's Secure Boot requirements on OEM systems were "obstructive" and anti-competitive.
At the Black Hat conference in August 2013, a group of security researchers presented a series of exploits in specific vendor implementations of UEFI that could be used to exploit Secure Boot.
In August 2016 it was reported that two security researchers had found the "golden key" security key Microsoft uses in signing operating systems. Technically, no key was exposed, however, an exploitable binary signed by the key was. This allows any software to run as though it was genuinely signed by Microsoft and exposes the possibility of rootkit and bootkit attacks. This also makes patching the fault impossible, since any patch can be replaced (downgraded) by the (signed) exploitable binary. Microsoft responded in a statement that the vulnerability only exists in ARM architecture and Windows RT devices, and has released two patches; however, the patches do not (and cannot) remove the vulnerability, which would require key replacements in end user firmware to fix.
Many Linux distributions support UEFI Secure Boot now, such as RHEL (RHEL 7 and later), CentOS (CentOS 7 and later), Ubuntu, Fedora, Debian (Debian 10 and later), OpenSUSE, SUSE Linux.
Firmware problems
The increased prominence of UEFI firmware in devices has also led to a number of technical problems blamed on their respective implementations.
Following the release of Windows 8 in late 2012, it was discovered that certain Lenovo computer models with Secure Boot had firmware that was hardcoded to allow only executables named "Windows Boot Manager" or "Red Hat Enterprise Linux" to load, regardless of any other setting. Other problems were encountered by several Toshiba laptop models with Secure Boot that were missing certain certificates required for its proper operation.
In January 2013, a bug surrounding the UEFI implementation on some Samsung laptops was publicized, which caused them to be bricked after installing a Linux distribution in UEFI mode. While potential conflicts with a kernel module designed to access system features on Samsung laptops were initially blamed (also prompting kernel maintainers to disable the module on UEFI systems as a safety measure), Matthew Garrett discovered that the bug was actually triggered by storing too many UEFI variables to memory, and that the bug could also be triggered under Windows under certain conditions. In conclusion, he determined that the offending kernel module had caused kernel message dumps to be written to the firmware, thus triggering the bug.
See also
OpenBIOS
UEFI Platform Initialization (UEFI PI)
Advanced Configuration and Power Interface (ACPI)
System Management BIOS (SMBIOS)
Trusted Platform Module (TPM)
UEFITool
Notes
References
Further reading
External links
UEFI Specifications
Intel-sponsored open-source EFI Framework initiative
Intel EFI/UEFI portal
Microsoft UEFI Support and Requirements for Windows Operating Systems
How Windows 8 Hybrid Shutdown / Fast Boot feature works
Securing the Windows 10 Boot Process
LoJax: First UEFI rootkit found in the wild, courtesy of the Sednit group
Unified Emulator Format
Unified Emulator Format (UEF) is a container format for the compressed storage of audio tapes, ROMs, floppy discs and machine state snapshots for the 8-bit range of computers manufactured by Acorn Computers. First implemented by Thomas Harte's ElectrEm emulator and related tools, it is now supported by major emulators of Acorn machines and carried by two online archives of Acorn software numbering thousands of titles.
UEF attempts to concisely reproduce media borne signals rather than simply the data represented by them, the intention being an accurate archive of original media rather than merely a capability to reproduce files stored on them. A selection of metadata can be included, such as compatibility ratings, position markers, images of packaging and the text of instruction manuals.
The Acorn machines implement the Kansas City standard (KCS) for tape data encoding and as a result the file format is suitable for creating backups of original media for several non-Acorn machines. As of version 0.10 the file format carries BASICODE signals as well.
TZX is a chunked format with similar scope for the ZX Spectrum series.
History
Before the development of the UEF, archives of Acorn computer software on the World Wide Web had adopted a convention of hosting ZIP archives of the raw files on a tape, each raw file accompanied by a sidecar file, with extension .inf, carrying the load and execution addresses from the file header. The INF convention, described and implemented by Wouter Scholten in bbcim (1995), extends the output format of the *INFO command (Acorn DFS, ADFS) to cover CRCs and the order of files on tape. While it works adequately for storing user files, it does not preserve the baud rate of the recording, precise timing information or the non-standard data streams used in copy protected titles.
In the case of disc-based software, it became increasingly convenient to send a sector dump of the disc instead, and by the time of the UEF's introduction the file extensions .ssd and .dsd were already established for single-sided and double-sided raw images of DFS discs, respectively. Distributed bare or in a ZIP archive, they remain popular on archive sites.
Aims
In a 2010 post to the Stardot forum, Harte explained at length his reasons for creating the format: being the first to address emulation of the Acorn Electron and its primary medium, tape, Harte wanted a fine-grained and technically optimal representation of media, compared to existing ad hoc formats; and to package the multiple media elements of a software release into a single file, so that downloading a UEF is "more like obtaining the original product". He went on to observe that it was the tools in use, and "user need", that determined the actual uses to which the UEF had been put.
Structure
A UEF file consists of a fixed length header that identifies itself, followed by a linked list of chunks containing the data of interest. The header comprises the magic string UEF File!, a terminating null character, and the two-byte version number of the UEF specification in use. A reading application needs to pay attention to the version number, as the unit of measurement in some chunks differs according to the specification version, and one chunk has been redefined between versions.
Each chunk consists of a two-byte ID which determines its meaning, the length of the body in four bytes, and the body itself. An application can readily skip the bodies of chunks it does not need to process. After the last chunk the file simply ends. Currently, UEF chunks do not nest.
The whole UEF file, including the header, may optionally be compressed in gzip format. By examining the start of the file for a gzip or UEF header, a decompression library can be invoked as appropriate.
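A minimal reader for this layout might look like the sketch below. It assumes the file has already been decompressed, assumes the two version bytes are stored minor-then-major, skips every chunk body, and omits error handling for truncated files.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Read an n-byte little-endian integer. */
static uint32_t read_le(FILE *f, int n) {
    uint32_t v = 0;
    for (int i = 0; i < n; i++)
        v |= (uint32_t)fgetc(f) << (8 * i);
    return v;
}

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    char magic[10];
    if (fread(magic, 1, 10, f) != 10 || memcmp(magic, "UEF File!", 10) != 0) {
        fprintf(stderr, "not an uncompressed UEF (perhaps gzip-wrapped?)\n");
        return 1;
    }
    uint32_t minor = read_le(f, 1), major = read_le(f, 1);
    printf("UEF version %u.%02u\n", major, minor);

    for (;;) {
        int c = fgetc(f);
        if (c == EOF) break;            /* the file simply ends after the last chunk */
        ungetc(c, f);
        uint32_t id  = read_le(f, 2);   /* 2-byte chunk ID */
        uint32_t len = read_le(f, 4);   /* 4-byte body length */
        printf("chunk 0x%04x, %u bytes\n", id, len);
        fseek(f, (long)len, SEEK_CUR);  /* skip bodies we don't interpret */
    }
    fclose(f);
    return 0;
}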
Content
The Unified Emulator Format models software on cassette as a contiguous sequence of segments, which may be carrier tones, the modulated asynchronous signals of ordinary data blocks, security cycles (modulated synchronous signals, said to be an "identification feature") or gaps where no recognised signal is present. Tape UEF chunks are concatenated in the order they appear, to build up the representation of a whole recording. When generated from a real source tape, each waveform on the tape corresponds directly to a tape chunk, such that the source can be accurately reconstructed (with any non-encodable signals replaced by gaps of equal length.)
Standard Acorn streams (chunk ID: 0x0100) are encoded so that their bytes reappear in the UEF chunk body. From version 0.10, direct support is extended to all asynchronous formats (0x0104) including the 8,N,2 format of BASICODE. Otherwise there is a generic chunk (0x0102) to accommodate any arbitrary sequence of bits. Security wave chunks (0x0114) also carry bit streams, encoded in a different form to allow the half-length one bits observed in commercial recordings to be represented.
There are some modal variables affecting the interpretation of these chunks: the baud rate, 1200 baud for Acorn signals or 300 baud for KCS; the exact carrier frequency, which determines the playing time of the reconstructed tape; and the phase of the signal. The latter two may change within a published recording, and their absolute values depend on the tape player, amplifier and sound card used to digitise the signal.
A UEF file can contain markers to separate the tapes of a multiple-tape distribution, and the sides of each tape; positions of interest within each side can also be marked.
Discs are stored as raw sector dumps of each surface, along with their geometry and a byte identifying the file system. Previous versions of the specification had provisions to encode discs at the byte stream level, or the magnetic domain level. With SSD and DSD sector dumps serving standard BBC discs well, and the mature FDI format catering for copy-protected software, the disc image function of UEF is little used.
Sideways ROMs are likewise stored as raw data, plus an indication of their purpose and a ROM slot recommendation. Again the user base prefers bare ROM dumps for archival.
State snapshot UEF files include standardised chunks to store the major portions of an Acorn Electron or BBC Micro's state: main, shadow and expansion bus memory, the CPU and the WD1770 floppy drive controller; also the Electron ULA and the Slogger Master RAM Board, a common Electron add-on. A patch memory chunk rewrites a block of memory at any address, allowing the UEF format to package pokes. To store state elements not accommodated in the standard chunks, emulators can define their own chunks. A private use area of chunk IDs is reserved for this or any other purpose, although some emulators save state under invalid chunk IDs in the public space.
Multiplexed data is an extension for emulators, used by ElectrEm but without a published specification.
One salient application mentioned by Harte is to superimpose "new graphics on old games", and a single example, a 256-colour enhanced Daredevil Dennis, is available from StairwayToHell.com to run in ElectrEm.
Multiplexed data chunks are intended to follow ordinary data chunks in any of the above classes, supplementing the data. Their contents are not meant to be visible to the Acorn computer, whether real or emulated, but otherwise their meaning has not been specified.
Chunks providing content information include the file origin chunk, which identifies the application that generated the UEF file. Inlay scan chunks, intended as a file preview, hold a raw bitmap of the cover art although anything beyond a thumbnail can take up more data than a typical game. The UEF author can also provide the text of an instruction booklet or a URL for more information, a short title for display, minimum machine specification and keyboard mapping for the enclosed software; and where a game does not use the whole screen, the coordinates of the visible area can be given. A minority of UEF files available online contain anything in this class but an origin chunk.
A UEF file can contain multiple classes of data at once, as Harte intended; it is not possible to know which classes it contains without scanning the whole file. In its file selection box ElectrEm displays an icon according to the first data class chunk it finds.
Applications
MakeUEF
MakeUEF is a Windows application written by Thomas Harte and expanded by Fraser Ross to convert audio samples into UEF files. Two grades are offered. An 'amateur' version reads WAV files or a live signal played to the sound card, and transcribes only standard data blocks with accuracy. The 'professional' grade accepts only CSW files, which represent waves preprocessed into rectangular pulse trains, but it encodes all audio information supported by the UEF specification.
MakeUEF claims to have been the sole creator of all UEF files available on the Web before November 2004, the month of its version 1.0 release. Although the file format was more capable, supporting "gap lengths" since February 2001 at the latest, only "program data" was retained by MakeUEF prior to version 1.0. From November 2004 the fidelity of MakeUEF improved and the file specification was further refined, and an extension of .hq.uef ("high quality") was adopted to reflect this. The AcornPreservation.org archive only carries the HQ.UEF variety as well as the CSW source files. Its sister site StairwayToHell.com accepts 'amateur' UEF translations and files produced by pre-1.0 MakeUEF. The latter site hosts 1,494 transcriptions of BBC Micro cassette titles and at least 800 of Electron titles.
Others
Several emulators of Acorn machines support UEF natively, to read and write tape data (at original speed or faster) and store state snapshots. Examples include ElectrEm, BeebEm and B-Em.
FreeUEF by Thomas Harte and the UEFReader Java Sound plugin convert a UEF file to a wave suitable for recording on tape or playing back to a physical computer.
UberCassette is a cross-platform, multi-format encoder that emits UEF files from samples of Acorn cassettes.
The UEFwalk Perl script validates and extracts data from UEF files.
The XVUEF patch extends the Xv image editor to support the little-used inlay scan chunks of the UEF.
Use on real BBC Micros
The GoMMC and GoSDC hardware extensions, produced by John Kortink from 2004, provide a virtual cassette playing capability. The accompanying PC tools import the cassette data from UEF files and store the extracted cassette stream on a memory card.
In February 2012, Martin Barr released version 5.0 of UPURS, a ROM-based suite of utilities to aid data transfer to real BBC Microcomputers. The release introduced UPCFS, a tool claimed to be compatible with 86% of existing decompressed UEF files, allowing them to be transferred to a real BBC Micro over a custom User Port cable that presents an RS-232-capable connection to a PC.
References
External links
Specifications document
Computer file formats
Odakyu 30000 series EXE
The Odakyu 30000 series, branded "EXE/EXEα" ("Excellent Express/Excellent Express Alpha"), is an electric multiple unit (EMU) train type operated by the private railway operator Odakyu Electric Railway in Japan on Odakyu Odawara Line and Odakyu Enoshima Line "Romancecar" services since 1996.
Design
Seven 4+6-car trainsets (70 vehicles) were built between 1996 and 1999 to replace ageing Odakyu 3100 series NSE trains. Unlike earlier Romancecar trainsets, which used articulated carriages, the 30000 series sets have 20 m long bogie cars. The inner driving cabs of the 4+6-car formations have gangway doors.
Passenger access is provided by sliding doors, with wider doors on cars 2, 5, and 8 for wheelchair accessibility.
Operations
The 30000 series trains are used on Odakyu Odawara Line Hakone services between Shinjuku Station in Tokyo and Hakone-Yumoto Station in Kanagawa Prefecture (about 88 km), as well as Sagami and Homeway services. They are also used on Odakyu Enoshima Line Enoshima services between Shinjuku and Katase-Enoshima Station.
Trainsets were introduced on combined Hakone and Enoshima services, with trains dividing at an intermediate station en route. 10-car Hakone services to Hakone-Yumoto also divide, with just the 6-car sets continuing onward to Hakone-Yumoto.
Formations
The fleet consists of seven 4+6-car trainsets, formed as follows, with car 1 at the western end. All seven sets are based at Ebina Depot.
Unrefurbished sets
Cars 2, 3, 5, 8, and 9 are each fitted with a single-arm pantograph.
Only one bogie on car 9 is motored.
Refurbished sets
Refurbished trainsets are formed as follows, with five motored cars per ten-car formation.
Cars 2, 3, 5, 8, and 9 are each fitted with a PT7113-A single-arm pantograph.
Interior
Passenger accommodation consists of monoclass unidirectional 2+2 abreast seating, with wide seats and a seating pitch of . The first eight half-sets delivered had green-coloured seats in the six-car sets (evoking the forests of Hakone) and blue-coloured seats in the four-car sets (evoking the sea of Enoshima), but from 1999 onward, the seats in all sets were standardized with grey and brown seat covers. Wheelchair spaces are located in cars 5 and 8.
Refreshment counters are provided in cars 3 and 9. Toilets are provided in cars 2, 5, and 8, and the toilet in car 5 is a universal access type.
History
The first trains entered revenue service on 23 March 1996.
Build history
The fleet was built between 1996 and 1999 in three batches as follows.
Refurbishment
The fleet underwent a programme of refurbishment from fiscal 2016, with the first 4+6-car trainset treated returning to service in March 2017, rebranded "EXEα".
Refurbishment was carried out by Nippon Sharyo, with the design overseen by Noriaki Okabe Architecture Network. It includes the following changes:
Redesigned interiors and seating
Replacement of Japanese-style squat toilets with Western-style toilets (Toto "Washlet" type)
Additional luggage racks
LED lighting in passenger saloons
Installation of security cameras in vestibule and passenger saloons
Fully enclosed traction motors
Conversion of one former trailer car (car 3) to a motored car, and the addition of a second motored bogie to car 9, which previously only had one motored bogie.
The first train set to be refurbished, four-car set 30051, was returned to Odakyu from the Nippon Sharyo factory in Toyokawa, Aichi, in November 2016.
In popular culture
The Odakyu 30000 series EXE is featured as a non-driveable train in the Microsoft Train Simulator computer game.
References
External links
"Romancecar lineup"
Electric multiple units of Japan
30000 series EXE
Nippon Sharyo rolling stock
Kawasaki multiple units
Train-related introductions in 1996
Multilingual User Interface
Multilingual User Interface (MUI) is the name of a Microsoft technology for Microsoft Windows, Microsoft Office and other applications that allows for the installation of multiple interface languages on a single system. On a system with MUI, each user would be able to select their own preferred display language. MUI technology was introduced with Windows 2000 and has been used in every release since (up to Windows 10). The MUI technology was covered by an international patent titled "Multilingual User Interface for an Operating System". The inventors are Bjorn C. Rettig, Edward S. Miller, Gregory Wilson, and Shan Xu.
Functionally, MUI packs for a certain product perform the same task as localized versions of that product, but with some key technical differences. While both localized versions of software and MUI versions display menus and dialogs in the targeted language, only localized versions have translated file and folder names. A localized version of Windows translates the base operating system, as well as all included programs, including file and folder names, object names, strings in the registry, and any other internal strings used by Windows, into a particular language. Localized versions of Windows support upgrading from a previous localized version, and user interface resources are completely localized, which is not the case for MUI versions of a product. MUI versions of a product do not contain translated administrative functions such as registry entries and items in Microsoft Management Console. The advantage of using MUIs over localized versions is that each user on a computer can use a different language MUI without requiring different versions of software to be installed and dealing with the conflicts that could arise as a result. For example, using MUI technology, any version of Windows can host Windows applications in any other language.
MUI in Windows 2000 and Windows XP
MUI products for these versions were available only through volume agreements from Microsoft. They were not available through retail channels. However, some OEMs distributed the product.
List of languages in Windows XP
Up to Windows XP, MUI packs for a product are applied on top of an English version to provide a localized user experience. There are a total of 5 sets of MUI packs.
Set 1
German
French
Japanese
Korean
Chinese (Simplified)
Chinese (Traditional)
Set 2
Arabic
Hebrew
Spanish
Italian
Swedish
Dutch
Portuguese (Brazil)
Set 3
Norwegian
Danish
Finnish
Russian
Czech
Set 4
Polish
Hungarian
Portuguese (Portugal)
Turkish
Greek
Set 5
Bulgarian
Estonian
Croatian
Latvian
Lithuanian
Romanian
Slovak
Slovenian
Thai
MUI in Windows Vista and Windows 7
Windows Vista
Windows Vista further advanced MUI technology with support for single, language-neutral, language-independent binary files supporting multiple language skins, with the language-specific resources contained in separate binaries. The MUI architecture separates the language resources for the user interface from the binary code of the operating system. This separation makes it possible to change languages completely without changing the core binaries of Windows Vista, or to have multiple languages installed on the same computer while using the same core binaries. Languages are applied as language packs containing the resources required to localize part of or the entire user interface in Windows Vista.
MUI packs are available to Windows Vista Enterprise users and as an Ultimate Extras to Windows Vista Ultimate users.
Beginning with Windows Vista, the associated set of MUI APIs was also made available to developers for application development.
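As a sketch of what those APIs look like from application code, the following queries the user's preferred display languages through GetUserPreferredUILanguages in kernel32 (available from Windows Vista onward); the ctypes wrapping is one illustrative way to call it, and it naturally runs only on Windows:

import ctypes

MUI_LANGUAGE_NAME = 0x8                     # request names such as "en-US"
kernel32 = ctypes.WinDLL("kernel32")

num = ctypes.c_ulong()
size = ctypes.c_ulong(0)
# First call with a NULL buffer reports the required buffer size.
kernel32.GetUserPreferredUILanguages(MUI_LANGUAGE_NAME, ctypes.byref(num),
                                     None, ctypes.byref(size))
buf = ctypes.create_unicode_buffer(size.value)
kernel32.GetUserPreferredUILanguages(MUI_LANGUAGE_NAME, ctypes.byref(num),
                                     buf, ctypes.byref(size))
# The buffer holds NUL-separated language names, double-NUL terminated.
languages = [s for s in buf[:size.value].split("\x00") if s]
print(languages)                            # e.g. ['en-US', 'fr-FR']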
At launch, the following 16 language packs were released:
Danish
German
English
Spanish
French
Italian
Dutch
Norwegian
Portuguese (Brazil)
Finnish
Swedish
Russian
Korean
Chinese (Simplified)
Chinese (Traditional)
Japanese
On October 23, 2007, the remaining 19 language packs were released:
Czech
Estonian
Croatian
Latvian
Lithuanian
Hungarian
Polish
Portuguese (Portugal)
Romanian
Slovak
Slovenian
Serbian
Turkish
Greek
Bulgarian
Ukrainian
Hebrew
Arabic
Thai
Windows 7
MUI packs are also available to Windows 7 Enterprise and Ultimate edition users. Beginning with Windows 7, Microsoft started referring to MUIs as "Language Packs," although this isn't to be confused with Language Interface Packs (LIP).
MUI in Windows 8/8.1/RT and Windows 10
Beginning with Windows 8/RT, most editions of Windows are able to download and install all Language Packs, with a few exceptions:
In Single Language editions of Windows, only one language pack is allowed to be installed, the same behavior as editions of Windows 7 and earlier that are not Enterprise or Ultimate.
In OEM editions of Windows, the exact language packs that are preinstalled or available for download depend on the device manufacturer and country/region of purchase (and the mobile operator for devices with cellular connectivity); this is a mixture of a local-market feature and a feature available everywhere. There may be multiple display languages preinstalled on the device by the manufacturer and/or wireless carrier, but each manufacturer and/or wireless carrier installs two different sets of languages: one set of preloaded languages and one set of languages that can be installed by the end user. This rule has applied to Windows Phones since Windows Phone 7 and to PCs since Windows 8 (as Windows 8 and Windows Phone 8 share the same Windows NT kernel); it was dropped in Windows 10 version 1803, but quietly reinstated as of version 1809. An end user can install a retail license on top of an OEM license by performing a clean install through the Media Creation Tool to circumvent the region locking and install any display language they want.
The Windows update process does not affect the currently installed display languages in any way, but it may give the end user access to newly released language packs made available by the OEM (PCs only). However, when installing a new feature update, it may change the display language back to the one set during the initial setup process. For example, if the Samsung ATIV Smart PC on AT&T is upgraded from Windows 8.1 to Windows 10 Anniversary Update (not necessarily done in one go), it will now be able to install Portuguese (Brazil), Vietnamese, Chinese (Simplified and Traditional), and Japanese in addition to English, Spanish, French, German, Italian, and Korean (the last three languages can be downloaded by the end user at the time of its launch), just like with the Galaxy S8 series and the Verizon-based Galaxy Book.
On the other hand, a Samsung Galaxy Book device does not support Afrikaans as a display language, because Samsung apps do not officially support Afrikaans. Furthermore, cellular variants of the Galaxy Book laptops sold in North America support fewer display languages than their Wi-Fi-only counterparts, just like on their smartphones.
Certain language packs like English (Australia) and English (Canada) are only supported on the Xbox consoles and the Surface Duo.
Some LIP packs require certain MUI packs (base languages) to be present or compatible. If that base language is not present or compatible, then that LIP cannot be installed on that device.
Beginning with Windows 10 version 1803, Microsoft started referring to language packs as "Local Experience Packs" (LXPs), but they still work in the same way. In addition to downloading from Windows Settings, these 110 LXPs are also available for download through the Microsoft Store app and through the web interface, the latter enabling remote installation for consumer editions of Windows. However, as with any other application from the Microsoft Store, only the LXPs that are compatible with that Windows device are shown through the Microsoft Store app. These LXPs receive updates through the Microsoft Store, outside of the normal Windows update cycle.
List of supported languages
As of Windows 11, the following language packs are supported.
PCs
Mobile
The multilingual user interface for Windows Phones did not appear until version 7.0.
See also
GNU gettext
Language Interface Pack (LIP)
References
External links
Windows MUI Knowledge Center (archived)
Windows Language Packs
Windows administration
DECSYSTEM-20
The DECSYSTEM-20 was a 36-bit Digital Equipment Corporation PDP-10 mainframe computer running the TOPS-20 operating system (products introduced in 1977).
PDP-10 computers running the TOPS-10 operating system were labeled DECsystem-10 as a way of differentiating them from the PDP-11. Later on, those systems running TOPS-20 (on the KL10 PDP-10 processors) were labeled DECSYSTEM-20 (the block capitals being the result of a lawsuit brought against DEC by Singer, which once made a computer called "The System Ten"). The DECSYSTEM-20 was sometimes called PDP-20, although this designation was never used by DEC.
Models
The following models were produced:
DECSYSTEM-2020: KS10 bit-slice processor with up to 512 kilowords of solid state RAM (The ADP OnSite version of the DECSYSTEM-2020 supported 1 MW of RAM)
DECSYSTEM-2040: KL10 ECL processor with up to 1024 kilowords of magnetic core RAM
DECSYSTEM-2050: KL10 ECL processor with 2k words of cache and up to 1024 kilowords of RAM
DECSYSTEM-2060: KL10 ECL processor with 2k words of cache and up to 4096 kilowords of solid state memory
DECSYSTEM-2065: DECSYSTEM-2060 with MCA25 pager (double-sized (1024 entry) two-way associative hardware page table)
The only significant differences the user could see between a DECsystem-10 and a DECSYSTEM-20 were the operating system and the color of the paint. Most (but not all) machines sold to run TOPS-10 were painted "Blasi Blue", whereas most TOPS-20 machines were painted "Terracotta" (often mistakenly called "Chinese Red" or orange; the actual name of the color on the paint cans was Terra Cotta).
There were some significant internal differences between the earlier KL10 Model A processors, used in the earlier DECsystem-10s running on KL10 processors, and the later KL10 Model Bs, used for the DECSYSTEM-20s. Model As used the original PDP-10 memory bus, with external memory modules. The later Model B processors used in the DECSYSTEM-20 used internal memory, mounted in the same cabinet as the CPU. The Model As also had different packaging; they came in the original tall PDP-10 cabinets, rather than the short ones used later on for the DECSYSTEM-20.
The last released implementation of DEC's 36-bit architecture was the single cabinet DECSYSTEM-2020, using a KS10 processor.
The DECSYSTEM-20 was primarily designed and used as a small mainframe for timesharing: multiple users would concurrently log on to individual user accounts and share use of the main processor to compile and run applications. Separate disk allocations were maintained for all users by the operating system, and various levels of protection could be maintained for System, Owner, Group, and World users. A model 2060, for example, could typically host 40 to 60 simultaneous users before exhibiting noticeably reduced response time.
Remaining machines
The Living Computer Museum of Seattle, Washington maintains a 2065 running TOPS-10, which is available to interested parties via SSH upon registration (at no cost) at their website.
References
C. Gordon Bell, Alan Kotok, Thomas N. Hasting, Richard Hill, "The Evolution of the DECsystem-10", in C. Gordon Bell, J. Craig Mudge, John E. McNamara, Computer Engineering: A DEC View of Hardware Systems Design (Digital Equipment, Bedford, 1979)
Frank da Cruz, Christine Gianone, The DECSYSTEM-20 at Columbia University 1977–1988
Further reading
Storage Organization and Management in TENEX. Daniel L. Murphy. AFIPS Proceedings, 1972 FJCC.
"DECsystem-10/DECSYSTEM-20 Processor Reference Manual". 1982.
"Manuals for DEC 36-bit computers".
"Introduction to DECSYSTEM-20 Assembly Language Programming" (Ralph E. Gorin, 1981, )
External links
PDP-10 Models—Explains all the various KL-10 models in detail
Columbia University DECSYSTEM-20
Login into the Living Computer Museum, a portal into the Paul Allen collection of timesharing and interactive computers, including an operational DECSYSTEM-20 KL-10 2065
36-bit computers
DEC mainframe computers
Computer-related introductions in 1977
IEFBR14
IEFBR14 is an IBM mainframe utility program. It runs in all IBM mainframe environments derived from OS/360, including z/OS. It is a placeholder that returns the exit status zero, similar to the true command on UNIX-like systems.
Purpose
Allocation (also called Initiation)
On OS/360 and derived mainframe systems, most programs never specify files (usually called datasets) directly, but instead reference them indirectly through the Job Control Language (JCL) statements that invoke the programs. These data definition (or "DD") statements can include a "disposition" (DISP=...) parameter that indicates how the file is to be managed — whether a new file is to be created or an old one re-used; and whether the file should be deleted upon completion or retained; etc.
IEFBR14 was created because, while DD statements can create or delete files easily, they cannot do so without a program to be run, owing to a peculiarity of the Job Management system: the Initiator always requires that a program actually be executed, even if that program is effectively a null statement. The program used in the JCL does not actually need to use the files to cause their creation or deletion — the DD DISP=... specification does all the work.
IEFBR14 can thus be used to create or delete a data set using JCL.
Deallocation (also called Termination)
A secondary reason to run IEFBR14 was to unmount devices (usually tapes or disks) that had been left mounted from a previous job, perhaps because of an error in that job's JCL or because the job ended in error. In either event, the system operators would often need to demount the devices, and a started task – DEALLOC – was often provided for this purpose.
Simply entering the command
S DEALLOC
at the system console would run the started task, which consisted of just one step. However, due to the design of Job Management, DEALLOC must actually exist in the system's procedure library, SYS1.PROCLIB, lest the start command fail.
Also, all such started tasks must be a single jobstep as the "Started Task Control" (STC) module within the Job Management component of the operating system only accepts single-step jobs, and it fails all multi-step jobs, without exception.
//STEP01 EXEC PGM=IEFBR14
Parsing and validation
At least on z/OS, branching off to execute another program would cause the calling program to be evaluated for syntax errors at that point.
Naming
The "IEF" derives from a convention on mainframe computers that programs supplied by IBM were grouped together by function or creator and that each group shared a three-letter prefix. In OS/360, the first letter was almost always "I", and the programs produced by the Job Management group (including IEFBR14) all used the prefix "IEF". Other common prefixes included "IEB" for dataset utility programs, "IEH" for system utility programs, and "IEW" for program linkage and loading. Other major components were (and still are) "IEA" (Operating System Supervisor) and "IEC" (Input/Output Supervisor).
As explained below, "BR 14" was the essential function of the program, to simply return to the operating system. This portion of a program name was often mnemonic — for example, IEBUPDTE was the dataset utility (IEB) that applied updates (UPDTE) to source code files, and IEHINITT was the system utility (IEH) that initialized (INIT) magnetic tape labels (T).
As explained further in "Usage" below, the name "BR14" comes from the IBM assembler-language instruction "Branch (to the address in) Register 14", which by convention is used to "return from a subroutine". Most early users of OS/360 were familiar with IBM Assembler Language and would have recognized this at once.
Usage
Example JCL would be :
//IEFBR14 JOB ACCT,'DELETE DATASET',MSGCLASS=J,CLASS=A
//STEP0001 EXEC PGM=IEFBR14
//DELDD DD DSN=xxxxx.yyyyy.zzzzz,
// DISP=(MOD,DELETE,DELETE),UNIT=DASD
To create a Partitioned Data Set:
//TZZZ84R JOB NOTIFY=&SYSUID,MSGCLASS=X
//STEP01 EXEC PGM=IEFBR14
//DD1 DD DSN=TKOL084.DEMO,DISP=(NEW,CATLG,DELETE),
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=80,DSORG=PO),
// SPACE=(TRK,(1,1,1),RLSE),
// UNIT=SYSDA
Implementation
IEFBR14 initially consisted of a single instruction, a "Branch to Register 14". The mnemonic used in the IBM assembler was BR, hence the name: IEF BR 14. BR 14 is identically equivalent to BCR 15,14 (Branch Always [mask = 15 = always] to the address contained in general purpose register 14). BR is a pseudo-instruction for BCR 15. The system assembler accepts many such pseudo-instructions as logical equivalents of the canonical System/360 instructions. The canonical instance of BR 14 is BCR 15,14.
The linkage convention for OS/360 and its descendants requires that a program be invoked with register 14 containing the address to return control to when complete, and register 15 containing the address at which the called program is loaded into memory; at completion, the program loads a return code in register 15, and then branches to the address contained in register 14. But, initially IEFBR14 was not coded with these characteristics in mind, as IEFBR14 was initially used as a dummy control section, one which simply returned to the caller, not as an executable module.
The original version of the program did not alter register 15 at all as its original application was as a placeholder in certain load modules which were generated during Sysgen (system generation), not as an executable program, per se. Since IEFBR14 was always invoked by the functional equivalent of the canonical BALR 14,15 instruction, the return code in register 15 was always non-zero. Later, a second instruction was to be added to clear the return code so that it would exit with a determinate status, namely zero. Initially, programmers were not using all properties of the Job Control Language, anyway, so an indeterminate return code was not a problem. However, subsequently programmers were indeed using these properties, so a determinate status became mandatory. This modification to IEFBR14 did not in any way impact its original use as a placeholder.
The machine code for the modified program is:
SR R15,R15 put zero completion code into register 15
BR R14 branch to the address in register 14 (which is actually an SVC 3 instruction in the Communications Vector Table)
The equivalent machine code, eliminating the BR for clarity, is:
SR R15,R15 put zero completion code into register 15
SVC 3 issue EXIT SVC to terminate the jobstep
This makes perfect sense as the OS/360 Initiator initially "attaches" the job-step task using the ATTACH macro-instruction (SVC 42), and "unwinding" the effect of this ATTACH macro (it being a Type 2 SVC instruction) must be a complementary instruction, namely an EXIT macro (necessarily a Type 1 SVC instruction, SVC 3).
See also
/bin/true - the UNIX-equivalent "do nothing" program
References
Trombetta, Michael & Finkelstein Sue Carolyn (1985). "OS JCL and utilities". Addison Wesley. page 152.
IBM mainframe operating systems
Sony Tablet
Xperia Tablet (former code names Sony S1 and Sony S2), formerly known as Sony Tablet, is the brand name of a series of tablet computers. The first models ran Google's Android 3.1 Honeycomb operating system, but more recent models run Android 4.1.2. The first models were informally announced on 26 April 2011, under their code names, by the Sony Corporation at the Sony IT Mobile Meeting. They featured touchscreens, two cameras (a rear-facing 5 MP and a front-facing 0.3 MP), an infrared sensor, and Wi-Fi. They also support PlayStation Suite and DLNA, and are 3G/4G compatible. The retail price in the U.S. at the time of release was US$499–599; in Europe, prices were at €499. To increase the number of apps available and provide marketing support for both tablets, Sony and Adobe Systems held a $200,000 competition targeting app developers. The series was formally launched in Berlin and Tokyo on 31 August 2011. The latest in the series is the Xperia Z4 Tablet.
History
On April 26, 2011, Sony announced that it would be developing two Android tablets, codenamed S1 and S2. The S1 (which became the Tablet S) was said to be "optimized for rich media entertainment" while the S2 (later Tablet P) would be "ideal for mobile communication and entertainment".
Promotional videos
On 15 June 2011, Sony released the first in a series of five videos titled "Two Will", promoting and featuring the Tablets in an elaborately designed Rube Goldberg Machine. The episodes are entitled:
Prologue
The First Impression
Going smoothly
Filled with fun
Together anywhere
Tablet S
The Sony Tablet S (former code name Sony S1) has one touchscreen display in a slate layout, and a unique wrap design inspired by the way some people fold magazines while reading them. In landscape orientation, the unit along the top is about three times thicker than along the bottom, forming a mild slant. It was released on 11 September 2011 as the first available member of the Sony Tablet series. The suggested retail prices are $499 for the 16 GB model and $599 for the 32 GB model. In early reviews in late 2011, the units compared favorably to similar high-end tablets.
Tablet P
Xperia Tablet S
The Xperia Tablet S was announced at Internationale Funkausstellung Berlin (IFA) 2012 and released in the USA on September 7, 2012. It comes in three storage configurations: 16, 32 or 64 GB. It retains the same 9.4-inch diagonal screen size with a resolution of 1280 x 800, but refines the wrap design from a wedge shape to a more understated form. The initial release shipped with Android Ice Cream Sandwich 4.0; Sony promised to release Android Jelly Bean 4.1 sometime in mid-April.
Xperia Tablet Z
The Xperia Tablet Z was announced at Mobile World Congress in January 2013. Major changes include a move from the Tegra-based processor to the quad-core Snapdragon S4 Pro CPU, a larger screen size of 10.1 inches and an upgraded resolution of 1920 x 1200. It comes in the same 16/32/64GB configurations, with up to 64 GB of microSD expansion. In addition, it includes an MHL port.
Xperia Tablet Z2
The Xperia Tablet Z2 was released in 2014 with a 10.1-inch 1920x1200 screen, 16 GB of internal storage and 3 GB of RAM, shipping with Android 4.4 (upgradable to 5.1). The device also supports microSD cards.
Xperia Tablet Z3 Compact
The Xperia Tablet Z3 Compact was released in 2014 with an 8.0-inch 1920x1200 screen, 16 GB of internal storage and 3 GB of RAM, shipping with Android 4.4 (upgradable to 6.0). The device also supports microSD cards.
Xperia Z4 Tablet
The Xperia Z4 Tablet was released in 2015 with a 10.1-inch 2560x1600 screen, 32 GB of internal storage and 3 GB of RAM, shipping with Android 5.0 (upgradable to 7.0). The device also supports microSD cards.
See also
Comparison of tablet computers
References
External links
Android (operating system) devices
Tablet computers
Products introduced in 2011
Sony hardware
Sony products
COMSPEC
COMSPEC or %COMSPEC% is one of the environment variables used in DOS, OS/2 and Windows, which normally points to the command line interpreter: by default COMMAND.COM in DOS, Windows 95, 98, and ME, or CMD.EXE in OS/2 and Windows NT. The variable name is written in all-uppercase under DOS and OS/2. Under Windows, which also supports lowercase environment variable names, the variable name is COMSPEC inside the DOS emulator NTVDM and for any DOS programs, and ComSpec under CMD.EXE.
The variable's contents can be displayed by typing echo %COMSPEC% or set comspec at the command prompt.
The environment variable by default points to the full path of the command line interpreter. It can also be made by a different company or be a different version.
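A common consumer of the variable is any program that needs to spawn the command interpreter without hard-coding its path. A minimal sketch, in which the fallback path and the echoed command are illustrative only:

import os
import subprocess

shell = os.environ.get("COMSPEC", r"C:\Windows\System32\cmd.exe")
# /C tells the interpreter to run the given command line and exit.
subprocess.run([shell, "/C", "echo COMSPEC is %COMSPEC%"])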
Another use of this environment variable, on a computer with no hard disk that must boot from a floppy disk, is in configuring a RAM disk. The COMMAND.COM file is copied to the RAM disk during boot and the COMSPEC environment variable is set to its new location on the RAM disk. The boot disk can then be removed without needing to be reinserted after a large application has finished, because the command line interpreter is reloaded from the RAM disk instead of the boot disk.
References
External links
Creating a customized Command Prompt shortcut - Example of COMSPEC usage
Windows administration
DOS environment variables
Windows environment variables
Mixed criticality
A mixed criticality system is a system containing computer hardware and software that can execute several applications of different criticality, such as safety-critical and non-safety critical, or of different Safety Integrity Level (SIL). Different criticality applications are engineered to different levels of assurance, with high criticality applications being the most costly to design and verify. These kinds of systems are typically embedded in a machine such as an aircraft whose safety must be ensured.
Principle
Traditional safety-critical systems had to be tested and certified in their entirety to show that they were safe to use. However, many such systems are composed of a mixture of safety-critical and non-critical parts, as for example when an aircraft contains a passenger entertainment system that is isolated from the safety-critical flight systems. Some issues to address in mixed criticality systems include real-time behaviour, memory isolation, data and control coupling.
Computer scientists have developed techniques for handling systems which thus have mixed criticality, but there are many challenges remaining especially for multi-core hardware.
Priority and Criticality
A common source of error is confusing priority assignment with criticality management. Priority defines an ordering between tasks or messages within a system, whereas criticality defines classes of messages whose parameters can differ between use cases. For example, during crash avoidance or obstacle anticipation, a car's camera sensors may suddenly emit messages more often, creating an overload in the system. Such overload cases are where mixed criticality must operate: the system has to select which messages it will guarantee absolutely, as illustrated by the sketch below.
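A minimal sketch of that distinction, with illustrative names only (not drawn from any particular standard): priority orders what sits in the queue, while criticality decides what may be shed once the system declares overload.

from dataclasses import dataclass

@dataclass
class Message:
    priority: int        # ordering within the queue
    criticality: int     # 0 = low, 1 = high
    payload: bytes

def admit(queue, msg, capacity, overloaded):
    """Admission control: criticality gates entry, priority orders service."""
    if overloaded and msg.criticality == 0:
        return False                         # shed low-criticality traffic
    if len(queue) >= capacity:
        return False
    queue.append(msg)
    queue.sort(key=lambda m: m.priority)     # lower number = served first
    return True

q = []
admit(q, Message(0, 1, b"brake-camera"), capacity=8, overloaded=True)   # kept
admit(q, Message(1, 0, b"infotainment"), capacity=8, overloaded=True)   # shed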
Research projects
EU funded research projects on mixed criticality include:
MultiPARTES
DREAMS
PROXIMA
CONTREX
SAFURE
CERTAINTY
VIRTICAL
T-CREST
PROARTIS
ACROSS (Artemis)
EMC2 (Artemis)
RECOMP Artemis
ARAMIS and ARAMIS II
IMPReSS
UK EPSRC funded research projects on mixed criticality include:
MCC
Several research projects have decided to present their research results at the EU-funded Mixed-Criticality Forum
Workshops and Seminars
Workshops and seminars on Mixed Criticality Systems include:
1st International Workshop on Mixed Criticality Systems (WMC 2013)
2nd International Workshop on Mixed Criticality Systems (WMC 2014)
3rd International Workshop on Mixed Criticality Systems (WMC 2015)
4th International Workshop on Mixed Criticality Systems (WMC 2016)
Dagstuhl Seminar on Mixed Criticality on Multicore/Manycore Platforms (2015)
Dagstuhl Seminar on Mixed Criticality on Multicore/Manycore Platforms (2017)
References
External links
Karlsruhe Institute of Technology: Mixed Criticality in Safety-Critical Systems
Washington University in St Louis: A Research Agenda for Mixed-Criticality Systems
Software engineering
Safety engineering
HiSoft Systems
HiSoft Systems is a software company based in the UK, creators of a range of programming tools for microcomputers in 1980s and 1990s. Their first products were Pascal and Assembler implementations for the NASCOM 1 and 2 kit-based computers, followed by Pascal and C for computers, as well as a BASIC compiler for this platform and a C compiler for CP/M. While compilers for the were typical products for this platform, with integrated editor, compiler and runtime environment fitting in RAM together with program's source, the C compiler for CP/M was typical for this operating system, batch operated, with separate compilation and linking stages.
Their most well-known products were the Devpac assembler IDE environments (earlier known as GenST and GenAm for the Atari ST and Amiga, respectively). The Devpac IDE was a full editor/assembler/debugger environment written entirely in 68k assembler and was a favourite tool among programmers on the Atari GEM platform.
HiSoft also sold HiSoft BASIC, HiSoft C Interpreter for the Atari ST, Aztec C, Personal Pascal and FTL Modula-2.
The business was created in 1980 and was based in Dunstable, Bedfordshire before relocating to the village of Greenfield in the same county.
In November 2001, HiSoft's staff were employed by Maxon Computer Limited, the UK arm of MAXON Computer GmbH. to work on Cinema 4D.
David Link, the founder and owner, ran a café () in the village of Emsworth for a year until July 2007 and a restaurant/bar/guest house() in Shanklin, Isle of Wight, from 2010 until January 2015.
HiSOFT continues, with David Link at the helm, and is now actively involved in designing websites and selling the Ambur POS system.
References
External links
HiSoft Systems
World of Spectrum HiSoft archive
ZX Spectrum
Software companies of the United Kingdom
Amiga
Atari ST
Software companies established in 1980
1980 establishments in the United Kingdom | Operating System (OS) | 1,356 |
IObit Malware Fighter
IObit Malware Fighter (introduced in 2004) is an anti-malware and anti-virus program for the Microsoft Windows operating system (Windows XP and later). It is designed to remove and protect against malware, including, but not limited to: Trojans, rootkits, and ransomware.
Overview
IObit Malware Fighter has a freeware version, which can run alongside the user's existing anti-virus solution. In the paid edition, the product comes with anti-virus protection. As of version 6, released in 2018, the product includes the Bitdefender engine in its commercial version, along with IObit's own anti-malware engine. New features of that release include an improved user interface; "Safe Box", created to protect specific folders from unauthorized access; and "MBR Guard", which protects the user's system from malicious attacks such as Petya and cryptocurrency-mining scripts.
Releases
In 2010, the first beta for IObit Malware Fighter 1.0 was released to the public.
In 2013, IObit Malware Fighter 2 was released. In this version, IObit debuted their "cloud security" component, in which the user can upload a file to the cloud to determine whether it is malicious or not. In 2015, version 3 was released, and then, in 2016, version 4, which added the Bitdefender anti-virus engine in its commercial edition.
In 2017, version 5 was released. Among its new features was an anti-ransomware component.
In May 2018, version 6 was released, adding new features including Safe Box and MBR Guard.
Reception
In November 2011, the free and paid versions of IObit Malware Fighter were reviewed by Bright Hub, in which the reviewer was unable to recommend the product, citing poor malware protection.
In May 2013, IObit Malware Fighter received a "dismal" score, half a star out of five, for its paid version by PC Magazine.
In December 2013, the paid version of IObit Malware Fighter received a 1 out of 5 star rating from Softpedia.
In March 2015, the commercial version of IObit Malware Fighter 3 received a negative review from PC Magazine, with the reviewer calling the product "useless".
IObit Malware Fighter received a 4 out of 5 star editors rating on CNET's Download.com.
In May 2017, PC Magazine gave the paid version of IObit Malware Fighter a 2 out of 5 star rating.
In July 2017, TechRadar gave the IObit Malware Fighter paid version a two and a half star rating, in which the reviewer complained about the product's overall protection against malware.
In May 2018, IObit Malware Fighter 6 received a negative review from ReviewedByPro.com website with the reviewer stating that the software “is not capable on protecting the entire family, or heavy internet users as the defenses are not very reliable and security features do not work well”.
See also
IObit Uninstaller
References
External links
Spyware removal
Windows-only software
Windows security software
Computer security software
2004 software
Antivirus software
Memory-disk synchronization
Memory-disk synchronization is a process used in computers that immediately writes to disk any data queued for writing in volatile memory. Data is often held this way for efficiency's sake, since writing to disk is a much slower process than writing to RAM. Disk synchronization is needed when the computer is going to be shut down, or occasionally when a particularly important piece of data has just been written.
In Unix-like systems, a disk synchronization may be requested by any user with the sync command.
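Programs can request the same flush for a single file through system calls; the sketch below uses Python's wrappers around them, with "journal.log" as a hypothetical file name. fsync covers the ordinary write path, while flush on a memory map issues the corresponding msync call.

import mmap
import os

# Ordinary write path: fsync forces buffered data to stable storage,
# the per-file analogue of the sync command.
fd = os.open("journal.log", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"critical record\n")
os.fsync(fd)
os.close(fd)

# Memory-mapped path: flush issues msync for the dirty pages.
fd = os.open("journal.log", os.O_RDWR)
with mmap.mmap(fd, 0) as region:
    region[0:8] = b"CRITICAL"
    region.flush()
os.close(fd)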
See also
mmap, a POSIX-compliant Unix system call that maps files or devices into memory
msync, a POSIX-compliant Unix system call that forcefully flush memory to disk and synchronize
Computer memory
Output device
An output device is any piece of computer hardware equipment which converts information into a human-perceptible form or, historically, into a physical machine-readable form for use with other non-computerized equipment. It can be text, graphics, tactile, audio, or video. Examples include monitors, printers, speakers, headphones, projectors, GPS devices, optical mark readers, and braille readers.
In the industrial setting, output devices also include "printers" for paper tape and punched cards, especially where the tape or cards are subsequently used to control industrial equipment, such as an industrial loom with electrical robotics which is not fully computerized.
Monitors
A display device is the most common form of output device. It presents output visually on a computer screen. Because the output appears on the screen only temporarily and can easily be altered or erased, it is sometimes referred to as soft copy. The display device for a desktop PC is called a monitor.
With all-in-one PCs, notebook computers, handheld PCs and other devices, the term display screen is used for the display device. Display devices are also used in home entertainment systems, mobile systems, cameras and video games.
Display devices form images by illuminating a desired configuration of pixels. Raster display devices are organized in the form of a 2-dimensional matrix with rows and columns.
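That row-and-column organization is what lets software address any pixel with a simple linear offset into the framebuffer. A sketch under assumed parameters (the 4-byte-per-pixel layout and the stride value are illustrative):

def pixel_offset(x, y, stride, bytes_per_pixel=4):
    """Byte offset of pixel (x, y) in a linear raster framebuffer."""
    return y * stride + x * bytes_per_pixel

# Pixel (100, 50) on a 1920-pixel-wide surface with 4 bytes per pixel.
print(pixel_offset(100, 50, stride=1920 * 4))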
Headless operation
A computer can still function without an output device, as is commonly done with servers, where the primary interaction is typically over a data network. A number of protocols exist over serial ports or LAN cables to determine operational status, and to gain control over low-level configuration from a remote location without having a local display device. If the server is configured with a video output, it is often possible to connect a temporary display device for maintenance or administration purposes while the server continues to operate normally; sometimes several servers are multiplexed to a single display device through a KVM switch or equivalent.
Types of display (monitor)
There are two types of monitors: monochrome and color. Monochrome monitors actually display two colors, one for the foreground and one for the background; the colors can be black and white, green and black, or amber and black. A color monitor is a display device capable of displaying many colors, anywhere from 16 to over 1 million.
Monochrome display
A monochrome monitor is a type of CRT computer display which was very common in the early days of computing, from the 1960s through the 1980s, before color monitors became popular. The most important component in the monitor is the picture tube, a cathode ray tube (CRT). CRT monitors use cathode-ray-tube technology to display images, so they are large, bulky and heavy, like conventional old televisions, which also used CRT technology to display their images. To form the image on the screen, an electron gun sealed inside a large glass tube fires electrons at a phosphor-coated screen to light up the appropriate pixels in the appropriate color. The phosphors glow only for a limited period of time after exposure to the electrons, so the monitor image must be redrawn (refreshed) on a continual basis. Typical refresh rates are between 60 and 85 times per second.
They are still widely used in applications such as computerized cash register systems. Green screen was the common name for a monochrome monitor using a green "P1" phosphor screen.
Colored display
Color monitors are sometimes called RGB monitors, because they accept three separate signals (red, green, and blue). In contrast, a monochrome monitor can display only two colors, one for the background and one for the foreground. Color monitors implement the RGB color model by using three different phosphors that appear red, green, and blue when activated. By placing the phosphors directly next to each other, and activating them with different intensities, color monitors can create a virtually unlimited number of colors. In practice, however, the real number of colors that any monitor can display is controlled by the video adapter.
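The arithmetic behind those palette sizes is straightforward: the number of addressable colors is the per-channel level count raised to the third power, as this small sketch shows.

for bits_per_channel in (4, 6, 8):
    colors = (2 ** bits_per_channel) ** 3
    print(f"{bits_per_channel} bits per channel -> {colors:,} colors")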
Monitor display types include:
CRT display monitors
TFT (thin-film transistor)
flat panel
LCD (Liquid Crystal Display)
OLED
LED
See also
Input Device
CPU
References
Laptop (disambiguation)
A laptop is a personal computer for mobile use.
Laptop may also refer to:
Laptop (2008 film), a 2008 Malayalam film
Laptop (2012 film), a 2012 Bengali film
Laptop (band), an American band with Jesse Hartman
See also
Microsoft Office 2007
Microsoft Office 2007 (codenamed Office 12) is a version of Microsoft Office, a family of office suites and productivity software for Windows, developed and published by Microsoft. It was released to manufacturing on November 3, 2006; it was subsequently made available to volume license customers on November 30, 2006, and later to retail on January 30, 2007, shortly after the completion of Windows Vista. The ninth major release of Office for Windows, Office 2007 was preceded by Office 2003 and succeeded by Office 2010. The Mac OS X equivalent, Microsoft Office 2008 for Mac, was released on January 15, 2008.
Office 2007 introduced a new graphical user interface called the Fluent User Interface, which uses ribbons and an Office menu instead of menu bars and toolbars. Office 2007 also introduced Office Open XML file formats as the default file formats in Excel, PowerPoint, and Word. The new formats are intended to facilitate the sharing of information between programs, improve security, reduce the size of documents, and enable new recovery scenarios.
Office 2007 is incompatible with Windows 2000 and earlier versions of Windows. Office 2007 is compatible with Windows XP SP2 or later, Windows Server 2003 SP1 or later, Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012, Windows 8.1, Windows Server 2012 R2 and Windows 10. It is not officially supported on Windows 11. It is the last version of Microsoft Office to support the 64-bit versions of Windows XP and Windows Server 2003, the 32-bit versions of Windows XP SP2 and Windows Server 2003 SP1, and Windows Vista versions below SP1; the following version, Microsoft Office 2010, only supports Windows XP 32-bit SP3 or later, Windows Server 2003 32-bit SP2 or later and Windows Vista SP1 or later.
Office 2007 includes new applications and server-side tools, including Microsoft Office Groove, a collaboration and communication suite for smaller businesses, which was originally developed by Groove Networks before being acquired by Microsoft in 2005. Also included is Office SharePoint Server 2007, a major revision to the server platform for Office applications, which supports Excel Services, a client-server architecture for supporting Excel workbooks that are shared in real time between multiple machines, and are also viewable and editable through a web page.
With Microsoft FrontPage discontinued, Microsoft SharePoint Designer, which is aimed towards development of SharePoint portals, becomes part of the Office 2007 family. Its designer-oriented counterpart, Microsoft Expression Web, is targeted for general web development. However, neither application has been included in Office 2007 software suites.
Speech recognition functionality has been removed from the individual programs in the Office 2007 suite, as Windows Speech Recognition was integrated into Windows Vista. Windows XP users must install a previous version of Office to use speech recognition features.
According to Forrester Research, as of May 2010, Microsoft Office 2007 is used in 81% of enterprises it surveyed (its sample comprising 115 North American and European enterprise and SMB decision makers).
Mainstream support for Office 2007 ended on October 9, 2012, and extended support ended on October 10, 2017. On August 27, 2021, Microsoft announced that Outlook 2007 and Outlook 2010 would be cut off from connecting to Microsoft 365 Exchange servers on November 1, 2021.
Development
Microsoft announced Beta 1 of Office 2007 with the revelation of the ribbon on March 9, 2006, at CeBIT in Germany.
Beta 2 was announced by Bill Gates at WinHEC 2006, and was initially released to the public at no cost from Microsoft's web site. However, because of an unprecedented number of downloads, a fee of $1.50 was introduced for each product downloaded after August 2, 2006. The beta was updated on September 14, 2006, in Beta 2 Technical Refresh (Beta2TR). It included an updated user interface, better accessibility support, improvements in the robustness of the platform, and greater functionality.
Office 2007 was released to volume licensing customers on November 30, 2006, and to the general public on January 30, 2007. Mainstream support ended on October 9, 2012. Extended support ended on October 10, 2017.
Service packs
Since the initial release of Microsoft Office 2007, three service packs containing updates as well as additional features have been released. Microsoft Office 2007 Service Packs are cumulative, so previous Service Packs are not a prerequisite for installation.
Microsoft Office 2007 Service Pack 1 was released on December 11, 2007. Official documentation claims that SP1 is not simply a rollup of publicly released patches, but that it also contains fixes for a total of 481 issues throughout the entire Office suite. Microsoft Office 2007 Service Pack 2 was released on April 28, 2009. It added improved support for ODF, XPS and PDF standards, and included several bug fixes. Microsoft Office 2007 Service Pack 3 was released on October 25, 2011.
Editions
Office Customization Tool is used to customize the installation of Office 2007 by creating a Windows Installer patch file (.MSP), replacing the Custom Installation Wizard and Custom Deployment Wizard included in earlier versions of the Office Resource Kit, which created a Windows Installer Transform (.MST).
Volume licensing
Eligible employees of companies with volume license agreements for Microsoft Office receive additional tools, including enterprise content management, electronic forms, Information Rights Management capabilities and copies for use on a home computer.
New features
User interface
The new user interface (UI), officially known as Fluent User Interface, has been implemented in the core Microsoft Office applications: Word, Excel, PowerPoint, Access, and in the item inspector used to create or edit individual items in Outlook. These applications have been selected for the UI overhaul because they center around document authoring. The rest of the applications in the suite changed to the new UI in subsequent versions. The default font used in this edition is Calibri. Original prototypes of the new user interface were revealed at MIX 2008 in Las Vegas.
Office button
The Office 2007 button, located on the top-left of the window, replaces the File menu and provides access to functionality common across all Office applications, including opening, saving, printing, and sharing a file. It can also close the application. Users can also choose color schemes for the interface. A notable accessibility improvement is that the Office button follows Fitts's law.
Ribbon
The ribbon, a panel that houses a fixed arrangement of command buttons and icons, organizes commands as a set of tabs, each grouping relevant commands. The ribbon is present in Microsoft Word 2007, Excel 2007, PowerPoint 2007, Access 2007 and some Outlook 2007 windows. The ribbon is not user customizable in Office 2007. Each application has a different set of tabs that exposes functions that the application offers. For example, while Excel has a tab for the graphing capabilities, Word does not; instead it has tabs to control the formatting of a text document. Within each tab, various related options may be grouped together. The ribbon is designed to make the features of the application more discoverable and accessible with fewer mouse clicks as compared to the menu-based UI used prior to Office 2007. Moving the mouse scroll wheel while on any of the tabs on the ribbon cycles through the tabs. The ribbon can be minimized by double clicking the active tab's title, such as the Home tab. Office 2007 does not natively support removing, modifying or replacing the ribbon. Third party add-ins, however, can bring menus and toolbars back to Office 2007 or customize the ribbon commands. Add-ins that restore menus and toolbars include Classic Menu for Office, ToolbarToggle, and Ubitmenu. Others, like RibbonCustomizer, enable the customization of ribbons. Office 2010 does allow user customization of the ribbon out of the box.
Contextual Tabs
Some tabs, called Contextual Tabs, appear only when certain objects are selected. Contextual Tabs expose functionality specific only to the object with focus. For example, selecting a picture brings up the Pictures tab, which presents options for dealing with the picture. Similarly, focusing on a table exposes table-related options in a specific tab. Contextual Tabs remain hidden except when an applicable object is selected.
Live Preview
Microsoft Office 2007 also introduces a feature called Live Preview, which temporarily applies formatting on the focused text or object when any formatting button is moused-over. The temporary formatting is removed when the mouse pointer is moved from the button. This allows users to have a preview of how the option would affect the appearance of the object, without actually applying it.
Mini Toolbar
The new Mini Toolbar is a small toolbar with basic formatting commands that appears within the document editing area, much like a context menu. When the mouse selects part of the text, the Mini Toolbar appears close to the selected text. It remains semi-transparent until the mouse pointer is hovered over it, to avoid obstructing what is underneath. The Mini Toolbar can also be made to appear by right-clicking in the editing area or via the keyboard, in which case it appears near the cursor, above or below the traditional context menu. The Mini Toolbar is not customizable in Office 2007, but can be turned off.
Quick Access Toolbar
The Quick Access toolbar (by default) sits in the title bar and serves as a repository of most used functions, such as save, undo/redo and print. It is customizable, although this feature is limited, compared to toolbars in previous Office versions. Any command available in the entire Office application can be added to the Quick Access toolbar, including commands not available on the ribbon as well as macros. Keyboard shortcuts for any of the commands on the toolbar are also fully customizable, similar to previous Office versions.
Other UI features
Super-tooltips, or screentips, that can house formatted text and even images, are used to provide detailed descriptions of what most buttons do.
A zoom slider present in the bottom-right corner, allowing for dynamic and rapid magnification of documents.
The status bar is fully customizable. Users can right click the status bar and add or remove what they want the status bar to display.
SmartArt
SmartArt, found under the Insert tab in the ribbon in PowerPoint, Word, Excel, and Outlook, is a new group of editable and formatted diagrams. There are 115 preset SmartArt graphics layout templates in categories such as list, process, cycle, and hierarchy. When an instance of a SmartArt is inserted, a Text Pane appears next to it to guide the user through entering text in the hierarchical levels. Each SmartArt graphic, based on its design, maps the text outline, automatically resized for best fit, onto the graphic. There are a number of "quick styles" for each graphic that apply largely different 3D effects to the graphic, and the graphic's shapes and text can be formatted through shape styles and WordArt styles. In addition, SmartArt graphics change their colors, fonts, and effects to match the document's theme.
File formats
Office Open XML
Microsoft Office 2007 introduced a new file format, called Office Open XML, as the default file format. Such files are saved using an extra X letter in their extension (.docx/xlsx/pptx/etc.). However, it can still save documents in the old format, which is compatible with previous versions. Alternatively, Microsoft has made available a free add-on known as the Microsoft Office Compatibility Pack that lets Office 2000, XP, and 2003 open, edit, and save documents created under the newer 2007 format.
Office Open XML is based on XML and uses the ZIP file container. According to Microsoft, documents created in this format are up to 75% smaller than the same documents saved with previous Microsoft Office file formats, owing to the ZIP data compression.
Files containing macros are saved with an extra M letter in their extension instead (.docm/xlsm/pptm/etc.).
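Because the container is an ordinary ZIP archive, a document's parts can be inspected with standard tooling. A minimal sketch, with "report.docx" as a hypothetical file name:

import zipfile

with zipfile.ZipFile("report.docx") as z:
    for name in z.namelist():
        print(name)      # e.g. word/document.xml, docProps/core.xml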
PDF
Initially, Microsoft promised to support exporting to Portable Document Format (PDF) in Office 2007. However, due to legal objections from Adobe Systems, Office 2007 originally did not offer PDF support out of the box, but rather as a separate free download. However, starting with Service Pack 2, Office allows users to natively export PDF files.
XPS
Office 2007 documents can also be exported as XPS documents. This capability is included in Service Pack 2; prior to that, it was available as a free plug-in in a separate download.
OpenDocument
Microsoft backs an open-source effort to support OpenDocument in Office 2007, as well as earlier versions (back to Office 2000), through a converter add-in for Word, Excel, and PowerPoint, and also a command-line utility. As of 2008, the project supports conversion between ODF and Office Open XML file formats for all three applications. According to the ODF Alliance, this support falls short and substantial improvements are still needed for interoperability in real-world situations.
Third-party plugins able to read from and write to the ISO-standard Open Document Format (ODF) are available as a separate download.
Office 2007 Service Pack 2 adds native support for the OpenDocument Format. The ODF Alliance has released test results on the ODF support of Office 2007 SP2, concluding that Office's ODF support, in both SP2 and other add-ons, has "serious shortcomings that, left unaddressed, would break the open standards based interoperability that the marketplace, especially governments, is demanding". In particular, SP2 has no support for encrypted ODF files and has limited interoperability with other ODF spreadsheet implementations.
The ISO/IEC 26300 OpenDocument standard specifies encryption of files, based on SHA-1, Blowfish, and the RFC 2898 key-derivation function (PBKDF2).
Microsoft Office 2007 SP2 does not support reading or writing encrypted (password-protected) ODF files. Users are presented with the message: "cannot use password protection using the ODF format."
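For illustration, ODF 1.x derives the Blowfish key roughly as follows: the password is hashed with SHA-1, then run through the RFC 2898 key-derivation function (PBKDF2 with HMAC-SHA1) using a salt and iteration count stored in the package manifest. A minimal sketch of the key-derivation step using only the Python standard library; the salt and iteration count here are illustrative values, not read from a real file:

```python
import hashlib

password = "secret"
salt = b"\x01" * 16      # in a real package, read from META-INF/manifest.xml
iterations = 1024        # the count commonly written by OpenOffice.org

# Hash the password first, then derive the cipher key with PBKDF2 (RFC 2898).
start_key = hashlib.sha1(password.encode("utf-8")).digest()
key = hashlib.pbkdf2_hmac("sha1", start_key, salt, iterations, dklen=16)
print(key.hex())         # 16-byte key for Blowfish in CFB mode
```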
The ISO/IEC 26300 OpenDocument standard includes (and references) no spreadsheet formula language in its specification. Office 2007 SP2 uses the spreadsheet formula language specified in the ISO/IEC 29500 Office Open XML open standard when creating ODF documents. According to the ODF Alliance report, "ODF spreadsheets created in Excel 2007 SP2 do not in fact conform to ODF 1.1 because Excel 2007 incorrectly encodes formulas with cell addresses. Section 8.3.1 of ODF 1.1 says that addresses in formulas "start with a "[" and end with a "]"." In Excel 2007 cell addresses were not enclosed with the necessary square brackets." The ISO/IEC 26300 specification states that the semantics and syntax of formulas depend on the namespace used, which is implementation-dependent, leaving the syntax implementation-defined as well.
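To make the bracket issue concrete, the sketch below contrasts the bracketed ODF cell-address syntax with the unbracketed form Excel 2007 SP2 wrote. The formula strings and the oooc: namespace prefix are illustrative of what OpenOffice.org 2.x produced; the exact prefix varies by producer:

```python
import re

odf_formula   = "oooc:=SUM([.A1:.A10])"  # bracketed, as OpenOffice.org writes it
excel_formula = "=SUM(A1:A10)"           # unbracketed, as Excel 2007 SP2 wrote it

# ODF 1.1 section 8.3.1: cell addresses in formulas are enclosed in "[" ... "]".
bracketed = re.compile(r"\[\.[A-Z]+[0-9]+(?::\.[A-Z]+[0-9]+)?\]")
for formula in (odf_formula, excel_formula):
    print(formula, "->", "conforming" if bracketed.search(formula) else "no brackets")
```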
Microsoft stated that it would consider adding support for an official ODF formula language (OpenFormula) once a future version of the ISO/IEC 26300 standard specification includes one.
Microsoft's ODF spreadsheet support in SP2 is not fully interoperable with other implementations of OpenDocument, such as IBM Lotus Symphony, which uses the non-standardized OpenOffice.org 2.x formula language, and OpenOffice.org 3.x, which uses a draft of OpenFormula. The company had previously reportedly stated that "where ODF 1.1 is ambiguous or incomplete, the Office implementation can be guided by current practice in OpenOffice.org, mainly, and other implementations including KOffice and AbiWord. Peter Amstein and the Microsoft Office team are reluctant to make liberal use of extension mechanisms, even though provided in ODF 1.1. They want to avoid all appearance of an embrace-extend attempt."
The European Union investigated Microsoft Office's OpenDocument Format support to see whether it provided consumers with greater choice.
Metadata
In Office 2007, Microsoft introduced the Document Inspector, an integrated metadata removal tool that strips Word, Excel, and PowerPoint documents of information such as author names, comments, and other metadata.
User assistance system
In Microsoft Office 2007, the Office Assistants were eliminated in favor of a new online help system. One of its features is the extensive use of Super Tooltips, which explain in about one paragraph what each function does. Some of them also use diagrams or pictures. These appear and disappear like normal tooltips, and replace normal tooltips in many areas. The Help content also directly integrates searching and viewing of Office Online articles.
Collaboration features
SharePoint
Microsoft Office 2007 includes features geared towards collaboration and data sharing. As such, it features server components for applications such as Excel, which work in conjunction with SharePoint Services to provide a collaboration platform. SharePoint works with Microsoft Office SharePoint Server 2007, which is used to host a SharePoint site and uses IIS and ASP.NET 2.0. The Excel server component exposes Excel Services, which allows any worksheet to be created, edited, and maintained via web browsers. It comprises Excel Web Access, the client-side component used to render the worksheet in a browser; Excel Calculation Service, the server-side component that populates the worksheet with data and performs calculations; and Excel Web Services, which extends Excel functionality into individual web services. SharePoint can also be used to host Word documents for collaborative editing, by sharing a document. SharePoint can likewise hold PowerPoint slides in a Slide Library, from which the slides can be used as a formatting template, and it automatically notifies users of a slide when the source slide is modified. Also, by using SharePoint, PowerPoint can manage shared review of presentations. Any SharePoint-hosted document can be accessed from the application that created it or from other applications, such as a browser or Microsoft Office Outlook.
Groove
Microsoft Office 2007 also includes Groove, which brings collaborative features to a peer-to-peer paradigm. Groove can host documents, including presentations, workbooks, and others, created in Microsoft Office 2007 applications in a shared workspace, which can then be used in collaborative editing of documents. Groove can also be used to manage workspace sessions, including access control of the workspace. To collaborate on one or more documents, a workspace must be created, and those who are to work on it must then be invited. Any files shared in the workspace are automatically shared among all participants. The application also provides real-time messaging, including one-to-one as well as group messaging, and presence features, as well as monitoring of workspace activities with alerts, which are raised when pre-defined sets of activities are detected. Groove also provides conflict resolution for conflicting edits. Schedules for a collaboration can be set using a built-in shared calendar, which can also be used to keep track of the progress of a project. However, the calendar is not compatible with Microsoft Outlook.
Themes and Quick Styles
Microsoft Office 2007 places more emphasis on Document Themes and Quick Styles. The Document Theme defines the colors, fonts, and graphic effects for a document. Almost everything that can be inserted into a document is automatically styled to match the overall document theme, creating a consistent design. The new Office Theme file format (.THMX) is shared between Word, Excel, PowerPoint, and Outlook email messages. Similar themes are also available for data reports in Access and Project or shapes in Visio.
Quick Styles are galleries with a range of styles based on the current theme. There are quick styles galleries for text, tables, charts, SmartArt, WordArt and more. The style range goes from simple/light to more graphical/darker.
Application-specific changes
Word
New style sheets (Quick Styles) and the ability to switch easily among them.
Default font is now Calibri instead of Times New Roman, which was used in previous versions of Microsoft Office.
Word count is listed by default in the status bar and updates dynamically as the user types.
New contextual spell checker, signified by a wavy blue underline in addition to the traditional wavy red underline for misspellings and wavy green underline for grammar errors, sometimes catches incorrect usage of correctly spelled words, such as in "I think we will loose this battle".
Translation tool tip option available for English (U.S.), French (France), and Spanish (International Sort). When selected, hovering the mouse cursor over a word displays its translation in the particular language. Non-English versions have different sets of languages. Other languages can be added by using a separate multilingual pack.
Automated generation of citations and bibliographies according to defined style rules, including APA, Chicago, and MLA. Changing the style updates all references automatically. Word can connect to web services to access online reference databases.
Redesigned native mathematical equation support with TeX-like linear input/edit language or GUI. Also supports the Unicode Plain Text Encoding of Mathematics.
Preset gallery of cover pages with fields for Author, Title, Date, Abstract, etc. Cover pages follow the theme of the document (found under the Page Layout tab).
Document comparison engine updated to support moves and differences in tables, with an easy-to-follow tri-pane view of the original document, the new document, and the differences.
Full screen reading layout that shows two pages at a time with maximal screen usage, plus a few critical tools for reviewing.
Building Blocks, which let users save frequently used content so that it is easily accessible for reuse. Building Blocks can contain data-mapped controls to allow for form building or structured document authoring.
The ability to save multiple versions of a document (which had existed since Word 97) has been removed.
Blog entries can be authored in Word itself and uploaded directly to a blog. Supported blogging sites include Windows Live Spaces, WordPress, SharePoint, Blogger, Telligent Community etc.
Drops the Insert/Picture/From Scanner or Camera function, which can be re-added manually.
Drops the "Bullets and Numbering" dialog boxes and rich, easily controlled range of options for formatting Outline Numbered lists
Proofing tools were introduced for the Indian languages Bengali, Malayalam, Odia, Konkani, and Assamese.
Outlook
As a major change in Outlook 2007, Exchange 5.5 support has been dropped. Like Evolution, Outlook Express and Entourage, Outlook now works only with Exchange 2000 and above.
Word becomes the default text editor for this and all subsequent versions.
Outlook now indexes (using the Windows Search APIs) the e-mails, contacts, tasks, calendar entries, RSS feeds and other items, to speed up searches. As such, it features word-wheeled search, which displays results as characters are typed.
Search folders, which are saved searches, have been updated to include RSS feeds as well. Search folders can be created with specific search criteria, specifying the subject, type, and other attributes of the information being searched. When a search folder is opened, all matching items for the search are automatically retrieved and grouped.
Outlook now supports text messages (SMS) when used in conjunction with Exchange Server 2007 Unified Messaging.
Outlook includes a reader for RSS feeds, which use the Windows Common Feeds Store. RSS subscription URLs can be shared via e-mails; updates can also be pushed to a mobile device.
Outlook now supports working with multiple calendars simultaneously. It includes a side-by-side view for calendars, where each calendar is displayed in a different tab, allowing easy comparison. Outlook also supports web calendars. Calendars can be shared with other users, and exported as an HTML file for viewing by others who do not have Outlook installed.
Calendar view shows which tasks are due.
Flagged e-mails and notes can also be converted to Task items.
Outlook includes a To Do Bar, which consolidates appointment, calendar, and task items in a single view.
Outlook includes a Windows SideShow Calendar Gadget
Online or offline editing of all Microsoft Office 2007 documents via a SharePoint site. All edits are automatically synchronized.
Contacts can be shared among users, via e-mail, Exchange Server or a SharePoint site.
Attachment preview allows users to view Office e-mail attachments in the reading pane rather than having to open another program.
HTML in e-mails is now rendered using the Microsoft Word rendering engine, which disallows several HTML tags, such as object, script, and iframe, along with several CSS properties.
Microsoft Office Outlook can also include an optional Business Contact Manager (included on a separate installation disc in Office 2007 Small Business and above), which allows management of business contacts and their sales and marketing activities. Phone calls, e-mails, appointments, notes, and other business metrics can be managed for each contact. It can also keep track of billable time for each contact on the Outlook Calendar. Based on these data, a consolidated report view can be generated by Microsoft Office Outlook with Business Contact Manager. The data can be further analyzed using Microsoft Office Excel, and shared using SharePoint Services.
Excel
Support for up to 1,048,576 rows and 16,384 columns (the last column being XFD) in a single worksheet, with 32,767 characters in a single cell, i.e. 17,179,869,184 cells and up to 562,932,773,552,128 characters per worksheet (see the column-letter sketch after this list).
Conditional Formatting introduces support for three new features: Color Scales, Icon Sets, and Data Bars.
Color Scales, which automatically color the background of a group of cells with different colors according to the values.
Icon Sets, which precede the text in a cell with an icon that represents some aspect of the cell's value relative to other values in a group of cells, can also be applied. Icons can be applied conditionally, showing up only when certain criteria are met, such as a cross appearing on an invalid value, where the condition for invalidity can be specified by the user.
Data Bars show the contribution of a cell's value within the group as a gradient bar in the cell's background.
Column titles can optionally show options to control the layout of the column.
Multithreaded calculation of formulae, to speed up large calculations, especially on multi-core/multi-processor systems.
User Defined Functions (UDFs), which are custom functions written to supplement Excel's set of built-in functions, support the increased number of cells and columns. UDFs can now also be multithreaded. Server-side UDFs are based on .NET managed code.
Importing data from external sources, such as a database, has been upgraded. Data can also be imported from formatted tables and reports, which do not have a regular grid structure.
Formula AutoComplete automatically suggests function names, arguments, and named ranges based on the characters entered, and completes them if desired. Formulae can also refer to a table.
CUBE functions, which allow importing data, including aggregated sets, from data analysis services such as SQL Server Analysis Services.
Page Layout view, to author spreadsheets in a way that mirrors the formatting of the printed document
PivotTables, which are used to create analysis reports out of sets of data, now support hierarchical data: a row in the table is displayed with a "+" icon which, when clicked, reveals the rows beneath it, which can themselves be hierarchical. PivotTables can also be sorted and filtered independently, and conditional formatting can be used to highlight trends in the data.
Filters now include a Quick Filter option allowing the selection of multiple items from a drop-down list of items in the column. The option to filter based on color has been added to the available choices.
Excel features a new charting engine, which supports advanced formatting, including 3D rendering, transparencies and shadows. Chart layouts can also be customized to highlight various trends in the data.
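As a worked example of the grid limits listed above, column numbers map to letters in bijective base-26 (A–Z, then AA–AZ, and so on), so column 16,384 spells XFD. A minimal sketch, assuming 1-based column numbers:

```python
def column_letters(n: int) -> str:
    """Convert a 1-based column number to Excel column letters (bijective base-26)."""
    letters = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters

print(column_letters(16384))    # XFD, the last column in Excel 2007
print(1048576 * 16384)          # 17179869184 cells per worksheet
print(1048576 * 16384 * 32767)  # 562932773552128 characters at most
```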
PowerPoint
Improvements to text rendering, to support text-based graphics.
Rendering of 3D graphics.
Support for many more sound file formats such as .mp3 and .wma.
Support for tables and enhanced support for table pasting from Excel.
Slide Library, which lets users reuse any slide or presentation as a template. Any presentation or slide can be published to the Slide Library.
Any custom-designed slide library can be saved.
Presentations can be digitally signed.
Improved Presenter View.
Added support for widescreen slides.
Allows addition of custom placeholders.
Drops function for Insert/Picture/From Scanner or Camera.
Allows duplication of a slide by right-clicking it, without having to go through Copy and Paste.
OneNote
OneNote now supports multiple notebooks.
Notebooks can be shared across multiple computers. Anyone can edit even while not connected and changes are merged automatically across machines when a connection is made. Changes are labeled with author and change time/date.
Notebooks can be synchronized between two or more machines without an Internet connection when stored on removable media such as an SD card or USB flash drive; changes made to a notebook on a machine when removable media is disconnected will be synced with the notebook when the media is reinserted to that machine. To prevent potential synchronization issues associated with drive mapping during reinsertion, OneNote 2007 associates notebooks with unique device identifiers rather than drive letters to ensure that changes are successfully synced.
Notebook templates.
Word-wheeled search is also present in OneNote, which also indexes notes.
Synchronization of tasks with Outlook 2007. Outlook can also send mail to OneNote, or open pages in OneNote that are linked to tasks, contacts, or appointments/meetings.
Support for tables. Using tabs to create tabular structure automatically converts it to a table.
Optical character recognition is performed on images (e.g., brochures, photos, prints, scans, screen clips) so that any text that appears in them is searchable. Handwritten text on a tablet PC is also searchable.
Words that are included in audio and video recordings are also tagged and indexed, so that they can be searched.
Notes can have hyperlinks among themselves, or from outside OneNote to a specific point on a page.
Embedding documents in notes.
Extensibility support for add-ins.
Drawing tools for creating diagrams in OneNote.
Typing an arithmetic expression followed by "=" displays the result of the calculation (see the sketch after this list).
OneNote Import Printer Driver, where any application can print to a virtual printer for OneNote and the "printed" document is imported to the notebook; OCR is performed on the text that appears in the printed document to facilitate searches.
OneNote Mobile is included for Smartphones and some PocketPC devices. Syncs notes two-way with OneNote. Takes text, voice, and photo notes.
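The inline-calculation feature mentioned in the list above can be illustrated with a small analogue: evaluate a typed arithmetic expression once it ends with "=". This is not OneNote's implementation, just a safe Python sketch:

```python
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def evaluate(node):
    """Recursively evaluate a parsed expression restricted to plain arithmetic."""
    if isinstance(node, ast.Constant):   # numeric literal
        return node.value
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    if isinstance(node, ast.UnaryOp):
        return OPS[type(node.op)](evaluate(node.operand))
    raise ValueError("only plain arithmetic is allowed")

typed = "3*(2+4)="
if typed.endswith("="):
    result = evaluate(ast.parse(typed[:-1], mode="eval").body)
    print(typed + str(result))           # prints: 3*(2+4)=18
```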
Access
Access now includes support for a broader range of data types, including documents and images.
Whenever any table is updated, all reports referencing the table are also updated.
Dropdown lists for a table can be modified in place.
Lookup Fields, which get their values by "looking up" some value in a table, have been updated to support multi-valued lookups.
Many new preset schemata are included.
Access can synchronize with Windows SharePoint Services 3.0 and Office SharePoint Server 2007. This feature enables a user to use Access reports while using a server-based, backed-up, IT managed version of the data.
Publisher
Templates are automatically filled out with information such as the company name and logo, wherever applicable.
Frequently used content can be stored in the Content Store for quick access.
A document can be automatically converted from one publication type, such as a newsletter, to another, such as a web page.
Save as PDF supports commercial printing quality PDF.
Catalog Merge can create publication content automatically by retrieving data, including text, images and other supported types, from an external data source.
Design Checker, which is used to find design inconsistencies, has been updated.
InfoPath
InfoPath designed forms can now be used from a browser, provided the server is running InfoPath Forms Services in SharePoint 2007 or Office Forms Server.
A form can be sent out to people via e-mail. Such forms can be filled out from Outlook 2007 itself.
Automatic conversion of forms in Word and Excel to InfoPath forms. Forms can also be exported to Excel.
Forms can be published to a network share or to SharePoint Server.
Adding data validation, using validation formulae, and conditional formatting features without manually writing code.
Print Layout view, for designing forms in a view that mirrors the printed layout. Such forms can be opened using Word as well.
Ability to use Microsoft SQL Server, Microsoft Office Access, or other databases as back-end data repository.
Multiple views for the same form, to expose different features to different classes of users.
Template Parts, used to group Office InfoPath controls for later reuse. Template Parts retain their XML schema.
Visio
PivotDiagrams, which are used to visualize data, show data groups and hierarchical relationships.
Visual modification of PivotDiagrams by dragging data around levels, to restructure the data relationships.
PivotDiagrams can display aggregate statistical summaries of the data.
Shapes can be linked with external data sources; when linked, the shapes are formatted according to the data. The data, and hence the shapes, are updated periodically. Such shapes can also be formatted manually using the Data Graphics feature.
AutoConnect: easily links two shapes.
Data Link: links data to shapes.
Data Graphics: dynamic objects (text and images) linked with external data.
New theme behavior and new shapes.
Project
Ability to create custom templates.
Any change in the project plan or schedule highlights everything else that is affected.
Analyze changes without actually committing them. Changes can also be done and undone programmatically, to automate analysis of different changes.
Improved cost resource management and analysis for projects.
Project data can be used to automatically create charts and diagrams in Microsoft Office Excel and Microsoft Office Visio, respectively.
The project schedule can be managed as a 3D Gantt chart.
Sharing project data with the help of SharePoint Services.
SharePoint Designer
Microsoft Office SharePoint Designer 2007 is a new addition to the Office suite, replacing the discontinued FrontPage for users of SharePoint. People who do not use SharePoint can use Microsoft Expression Web.
Supports features and constructs that expose SharePoint functionality
Supports ASP.NET 2.0 and Windows Workflow Foundation
Support for creating workflows and data reports, from external data sources
Can optionally render XML using XSLT
Server components
SharePoint Server 2007
Microsoft Office SharePoint Server 2007 allows sharing and collaborative editing of Office 2007 documents. It allows central storage and management of Office documents throughout the enterprise. These documents can be accessed either by the applications which created them, by Microsoft Office Outlook 2007, or by a web browser. Documents can also be managed through pre-defined policies that let users create and publish shared content through a SharePoint site.
SharePoint Server allows centralized searching of all Office documents that it manages, thereby making data more accessible. It also provides access control for documents. Specialized server components can plug into SharePoint Server to extend its functionality, such as Excel Services, which exposes Excel's data-analysis capabilities as a service. Data from other data sources can also be merged with Office data.
SharePoint also lets users personalize the SharePoint sites, filtering content they are interested in. SharePoint documents can also be locally cached by clients for offline editing; the changes are later merged.
Forms Server 2007
Microsoft Office Forms Server 2007 allows InfoPath forms to be accessed and filled out using any browser, including mobile phone browsers. Forms Server 2007 also supports using a database or other data source as the back-end for the form. Additionally, it allows centralized deployment and management of forms. Forms hosted on Forms Server 2007 also support data validation and conditional formatting, as do their InfoPath counterparts. It also supports advanced controls such as repeating sections and repeating tables. However, some InfoPath controls cannot be used if the form must be hosted on a Forms server.
Groove Server 2007
Microsoft Office Groove Server 2007 is for centrally managing all deployments of Microsoft Office Groove 2007 in the enterprise. It enables the use of Active Directory for Groove user accounts and the creation of Groove Domains with individual policy settings. It allows Groove workspaces to be hosted on the server, and the files in the workspaces to be made available for collaborative editing via the Groove client. It also includes the Groove Server Data Bridge component to allow communication between data stored at both Groove clients and servers and external applications.
Project Server 2007
Microsoft Office Project Server 2007 allows one to centrally manage and coordinate projects. It allows budget and resource tracking, and activity plan management. The project data and reports can also be further analyzed using Cube Building Service. The project management data can be accessed from a browser as well.
Project Portfolio Server 2007
Microsoft Office Project Portfolio Server 2007 allows creation of a project portfolio, including workflows, hosted centrally, so that the information is available throughout the enterprise, even from a browser. It also aids in centralized data aggregation regarding the project planning and execution, and in visualizing and analyzing the data to optimize the project plan. It can also support multiple portfolios per project, to track different aspects of it. It also includes reporting tools to create consolidated reports out of the project data.
PerformancePoint Server 2007
Microsoft PerformancePoint Server allows users to monitor, analyze, and plan their business as well as drive alignment, accountability, and actionable insight across the entire organization. It includes features for scorecards, dashboards, reporting, analytics, budgeting and forecasting, among others.
Removed features
The following Office 2003 features were removed in Office 2007:
Fully customizable toolbars and menus for all of its applications; the Quick Access Toolbar and the ribbon have limited customizability. Office 2010 reintroduced ribbon UI customizability.
Office Assistant
Speech recognition (included as part of Windows Vista and later)
Handwriting recognition and ink features (included as part of Windows Vista and later)
Ability to slipstream service packs into the original setup files (administrative installation images)
Office Web Components
Save My Settings Wizard
Choice of local installation source, which allowed users to decide whether to keep a locally cached copy of installation source files or remove it. Setup files are now cached locally regardless of user preference and cannot be removed; they are recreated by Office 2007 if removed.
Several deployment-related Resource Kit utilities. Some primary deployment tools ship with Office 2007 itself.
Office FileSearch object and File Search functionality from the File menu.
Criticism
Redesigned user interface
Even though the ribbon can be hidden, PC World wrote that the new "ribbon" interface crowds the Office work area, especially for notebook users. Others have called its large icons distracting. Essentially, the GUI-type interface of the ribbon contrasts sharply with the older menus that were organized according to the typical functions undertaken in paper-based offices: for instance, the old "File" menu dealt with opening, (re-)naming, saving, and printing a file, and the old "Edit" menu dealt with making changes to the content of the file. As a result, users who were more familiar with the logic of the old menus would be somewhat frustrated with the new, more visually oriented ribbon. The ribbon cannot be moved from the top to the side of the page, as floating toolbars could be.
Some users with experience using previous versions of Microsoft Office have complained about having to find features in the ribbon. Others state that, having learned the new interface, they can create "professional-looking" documents more quickly. Microsoft has released a series of small programs, help sheets, videos, and add-ins to help users learn the new interface more quickly.
Patenting controversy
Microsoft contractor Mike Gunderloy left Microsoft partially over his disagreement with the company's "sweeping land grab" including its attempt to patent the ribbon interface. He says "Microsoft itself represents a grave threat to the future of software development through its increasing inclination to stifle competition through legal shenanigans." He says that by leaving Microsoft, he is "no longer contributing to the eventual death of programming."
Office Open XML
The new XML-based document file format in Microsoft Office 2007 is incompatible with previous versions of Microsoft Office unless an add-on is installed for the older version.
PC World has stated that upgrading to Office 2007 presents dangers to certain data, such as templates, macros, and mail messages.
The Microsoft Word 2007 equation editor, which uses a form of MathML called Office MathML (OMML), is also incompatible with that of Microsoft Word 2003 and previous versions. Upon converting Microsoft Word 2007 .docx files to .doc files, equations are rendered as graphics. On June 6, 2007, Inera Inc. revealed that Science and Nature refused to accept manuscripts prepared in Microsoft Word 2007 .docx format; subsequently, Inera informed Microsoft that Word 2007's file format impairs usability for scholarly publishing. Nature still does not support the Office Open XML format; Science, however, accepts the format but discourages its use.
Bibliographies
The new Word 2007 features for bibliographies only support a small number of fixed citation styles. Using XSLT, new styles can be added. Some extra styles, such as the standard Association for Computing Machinery publication format, are made freely available by third parties.
See also
Comparison of office suites
List of office suites
Microsoft Office 2007 filename extensions
Notes
References
2006 software
Office 2007
Products and services discontinued in 2017
Atari Message Information System
The Atari Message Information System (AMIS) was one of the first BBS (bulletin board system) software packages available for the Atari 8-bit family of computers. It was known to crash frequently and could not be left unattended for more than a few days; the autorun.sys file containing the modem handler was the cause. Versions of the AMIS BBS modified to use the modem handler (written by Atari) supplied with the Atari XM301 modem were deemed much more stable.
The original AMIS BBS software was written in the BASIC programming language by Tom Giese, a member of MACE (Michigan Atari Computer Enthusiasts). The program included instructions for building a "ring detector" circuit for the board maintainer's modem (the Atari 1030 modem) to enable it to answer incoming calls; modems at the time could usually make outgoing calls but not receive incoming ones, the one exception being the Atari XM301 modem, which had a ring detector built in.
A sector editor was required for the BBS maintainer to manually allocate message space on their disk, one hex byte at a time.
As of March 2021, there is still an active AMIS BBS, called Amis XE, that one can connect to using telnet (amis86.ddns.net:9000) or a web client provided by the Telnet BBS Guide.
Alternate versions
The software was released into the public domain, and was heavily modified by enthusiasts and BBS maintainers. As such, several versions of AMIS exist, including:
Standard AMIS – original version by Tom Giese
MACE AMIS – from the Michigan Atari Computer Enthusiasts, by Larry Burdeno and Jim Steinbrecher
Fast AMIS
Carnival BBS
Comet AMIS – by Matt Pritchard & Tom Johnson of Algonac, Michigan; originally designed for the MPP modem, which used the joystick port rather than the regular I/O or 850 Interface ports. At the time this was a very popular low-cost modem with no software written for it, until John Demar developed a driver that enabled software to communicate with the joystick port as if it were the I/O port. Comet AMIS was later modified for use with other types of standard modems. The final version featured many automated tasks, usage logs, passwords, private mail, multiple message bases, and support for hard drives and MYDOS, and was on the cutting edge of AMIS/Atari 8-bit BBS technology.
TODAMIS 1.0 – for 1030/XM301 modems, written in 1986 by Trent Dudley
AMIS XM301 – a heavily modified version of AMIS, written in Basic XE (from Optimized Systems Software) by one of the original AMIS programmers, Mike Mitchell (Baudville BBS, Garden City, MI), and newcomer Mike Olin (Molin's Den BBS, Northville, MI).
Reed Audio BBS – a modified version of Carnival BBS that added multiple-forum support and support for the Atari 1030 modem by way of a hardware ring detector (relay). Created by Todd Gordanier in 1986.
FujiAmis – a modified version of AMIS that includes modem configuration for the FujiNet device, telnet IP-based deployment, SpartaDOS conversion to HSIO, and an unlimited message base. Created by Robert Sherman in 2020.
Amis XE – a modified version of FujiAmis that works with emulators and FujiNet modem configuration, with telnet IP-based deployment, optimized in Basic XE. Created by Robert Sherman in 2021.
References
External links
Screenshot of the Welcome Screen for Mike Mitchell's Baudville BBS
Screenshot of the Logon Screen from Fuji Amis BBS
Screenshot of the Welcome Screen from Fuji Amis BBS
Screenshot of the SysOp's Waitcall Screen from Fuji Amis BBS
Bulletin board system software
Atari 8-bit family software
Intel System Development Kit
Each time Intel launched a new microprocessor, they simultaneously provided a System Development Kit (SDK) allowing engineers, university students, and others to familiarise themselves with the new processor's concepts and features. The SDK single-board computers allowed the user to enter object code from a keyboard or upload it through a communication port, and then test run the code. The SDK boards provided a system monitor ROM to operate the keyboard and other interfaces. Kits varied in their specific features but generally offered optional memory and interface configurations, a serial terminal link, audio cassette storage, and EPROM program memory. Intel's Intellec development system could download code to the SDK boards.
In addition, Intel sold a range of larger-scale development systems which ran their proprietary operating systems and hosted development tools (assemblers and, later, compilers) targeting their processors. These included the Microcomputer Development System (MDS), Personal Development System (PDS), In-Circuit Emulators (ICE), device programmers, and so on. Most of these were rendered obsolete when the IBM PC became a de facto standard, and by other standardised technologies such as JTAG.
SIM4-01
The SIM4-01 prototyping board holds a complete MCS-4 microcomputer set including the first microprocessor, the Intel 4004, introduced in 1971.
SIM8-01
The SIM8-01 prototyping board, holding an MCS-8 microcomputer set based on the Intel 8008, was released in 1972.
SDK-80
The 8080 System Design Kit (SDK-80) of 1975 provided a training and prototype vehicle for evaluation of the Intel 8080 microcomputer system (MCS-80), clocked at 2.048 MHz. (The basic 8080 instruction cycle time was 1.95 μs, which was four clock cycles.) The SDK-80 allowed interface to an existing application or custom interface development. A monitor ROM was provided.
RAM 256 bytes expandable to 1 KB
ROM 2 KB expandable to 4 KB
SIZE / WEIGHT 12 (W) × 0.5 (D) × 6.75 (H) inch
I/O ports: parallel (24 lines expandable to 48 lines), serial up to 4800 baud
Documentation
User's Manual
SDK-85
The SDK-85 MCS-85 System Design Kit was a single board microcomputer system kit using the Intel 8085 processor, clocked at 3 MHz with a 1.3 μs instruction cycle time. It contained all components required to complete construction of the kit, including LED display, keyboard, resistors, caps, crystal, and miscellaneous hardware. A preprogrammed ROM was supplied with a system monitor. The kit included a 6-digit LED display and a 24-key keyboard for direct insertion, examination, and execution of a user's program. It also had a serial transistor interface for a 20 mA current loop Teletype using the bit-serial SID and SOD pins on the CPU. The maximum user RAM for programs and data, on the factory standard kit, was limited to 0xC2 or 194 decimal bytes. The full 256 bytes was available on the expansion RAM. User programs could call subroutines in the monitor ROM for functions such as: Serial In/Out, CRLF, Read Keyboard, Write Display, time delay, convert binary to two character hexadecimal etc.
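As an aside, the last monitor routine mentioned, converting a byte to two hexadecimal characters, is easy to mirror in a modern language. A minimal Python sketch of the behavior (the ROM routine itself was 8085 machine code):

```python
def to_hex2(value: int) -> str:
    """Render one byte as two hexadecimal characters, like the SDK-85 monitor routine."""
    return format(value & 0xFF, "02X")

print(to_hex2(0xC2))  # "C2" -- 0xC2 is also the 194-byte user-RAM limit noted above
print(to_hex2(10))    # "0A"
```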
RAM 256 bytes expandable to 512 bytes with another 8155 RAM / 22 programmable IO lines. The 14-bit programmable Timer/Counter was used for system single-step control. The expansion Timer/Counter was available.
ROM 2 KB expandable to 4 KB with another 8755 EPROM / 16 programmable IO lines in the expansion socket.
SIZE / WEIGHT 30.5 (W) × 25.7 (D) × 1.3 (H) cm.
Documentation
User's Manual
SDK-86
The SDK-86 MCS-86 System Design Kit is a complete single board 8086 microcomputer system in kit form. It contains all necessary components to complete construction of the kit, including LED display, keyboard, resistors, caps, crystal, and miscellaneous hardware. Included are preprogrammed ROMs containing a system monitor for general software utilities and system diagnostics. The complete kit includes an 8-digit LED display and a mnemonic 24-key keyboard for direct insertion, examination, and execution of a user's program. In addition, it can be directly interfaced with a teletype terminal, CRT terminal, or the serial port of an Intellec system. The SDK-86 is a high performance prototype system with designed-in flexibility for simple interface to the user's application.
The SDK-86 (System Design Kit) was the first available computer using the Intel 8086 microprocessor. It was sold as a single board kit at a cheaper price than a single 8086 chip because Intel thought that the success of a microprocessor depends on its evaluation by as many users as possible. All major components were socketed and the kit could be assembled by anyone having a limited technical knowledge thanks to a clear and complete assembly manual. The system could be used with the on-board keyboard and display or connected to a serial video terminal.
The internal ROM monitor offered the following commands:
S (Substitute Memory): Displays / Modifies memory locations
X (Examine / Modify registers) : Displays / Modifies 8086 registers
D (Display memory): Displays memory content
M (Move): Moves block of memory data
I (Port Input): Receives data from input port
O (Port Output): Send data to output port
G (Go): Execute user program
N (Single Step): Execute single program instruction
R (Read File): Read object file from tape to memory
W (Write File): Writes block of memory to tape
Technical Information:
NAME SDK-86
MANUFACTURER Intel
TYPE Design Kit Microcomputer
ORIGIN US
YEAR 1979
BUILT IN LANGUAGE ROM Monitor
KEYBOARD Hexadecimal 24 keys
CPU Intel 8086
Freq. 2.5 or 5 MHz (jumper selectable)
RAM 2 KB expandable to 4 KB
ROM 8 KB (Monitor)
TEXT MODES 8-digit led
I/O ports: Processor bus, parallel and serial I/O
POWER SUPPLY + 5V, -12V external AC adaptor
PRICE $780
Documentation
Assembly Manual
SDK-86 User's Manual
Intel 8086 CPU User's Manual
ECK-88
The Intel ECK-88 Educational Component Kit, released in 1979, was based on the 8088 processor.
HSE-49
The HSE-49 emulator of 1979 was a stand-alone development tool with an on-board 33-key keypad, 8-character display, two 8039 microcontrollers, 2 KB of user-program RAM, a serial port and cable, and a ROM-based monitor which supervised the emulator operation and user interface. The emulator provided a means for executing and debugging programs for the 8048/8049 family of microcontrollers at speeds up to 11 MHz. It interfaced to a user-designed system through an emulation cable and 40-pin plug, which replaced the MCS-48 device in the user's system. Using the HSE-49 keypad, a designer could run programs in real-time or single-step modes, set up to 8000 breakpoint flags, and display or change the contents of user program memory, internal and external data memory, and internal MCS-48 hardware registers. When linked to a host Intellec development system, the HSE-49 emulator's system-debugging capabilities, combined with the development system's program assembly and storage facilities, provided the tools required for total product development.
Freq. 11 MHz
RAM 2 KB
VRAM None
ROM 2 KB
SIZE / WEIGHT 14 (W) × 0.5 (D) × 10 (H) inch / 4.0 Ib
I/O ports: Emulation Cable and Plug & 20 mA Current Loop or RS232 (jumper selectable)
SDK-186
Technical Information:
NAME SDK-186
MANUFACTURER Intel
TYPE Design Kit Microcomputer
ORIGIN US
KEYBOARD None
CPU Intel 80186
COPROCESSOR Intel 8087
Documentation
SDK-286
Technical Information:
NAME SDK-286
MANUFACTURER Intel
TYPE Design Kit Microcomputer
ORIGIN US
BUILT IN LANGUAGE Monitor in ROM
CPU Intel 80286
COPROCESSOR Intel 8087
Documentation
SDK-51
The SDK-51 MCS-51 System Design Kit, released in 1982, contains all of the components of a single-board computer based on Intel's 8051 single-chip microcomputer, clocked at 12 MHz. The SDK-51 uses the external ROM version of the 8051 (8031). It provides a serial port which can support either RS232 or current loop configurations, and also an audio cassette interface to save and load programs. Unlike some of Intel's other SDKs (e.g. SDK-85, SDK-86), the built-in monitor can only be controlled via the built-in QWERTY keyboard and cannot be commanded via the serial port. However, memory dumps and disassembly listings can be dumped out to the serial port, and it can also be used to transfer data to/from a connected PC in the form of Intel hex files.
RAM up to 16 KB (1KB factory fitted)
ROM up to 8 KB expansion
SIZE / WEIGHT 12 (W) × 14 (D) × 2 (H) inch
I/O ports: parallel (32 lines), serial (RS232/current loop) up to 9600 baud
KEYBOARD Standard Qwerty layout with additional 12 button keypad
DISPLAY 24 alpha/numeric 18 segment LEDs
OS 8K Monitor in ROM
POWER SUPPLY External 5V 3A/ +12V, -12V 100mA power supply unit
PERIPHERALS Expansion area on board
PRICE $1200 in the US
Documentation
Assembly Manual
User Manual
EV80C196KB Microcontroller Evaluation Board
Intel EV80C196KB Microcontroller Evaluation Board
Technical Information:
NAME Intel EV80C196KB Microcontroller Evaluation Board
MANUFACTURER Intel
TYPE Evaluation Board For Microcomputer
ORIGIN US
YEAR 1985?
CPU Intel 80C196KB
COPROCESSOR None
SIZE / WEIGHT ?? (L) × ?? (w) × ?? (H) inch
OS Monitor in ROM
Documentation
User's Manual
See also
Intellec microcomputer development systems
References
External links
More info. about Intel SDKs
Intel products
Macintosh IIsi
The Macintosh IIsi is a personal computer designed, manufactured and sold by Apple Computer, Inc. from October 1990 to March 1993. Introduced as a lower-cost alternative to the other Macintosh II family of desktop models, it was popular for home use, as it offered more expandability and performance than the Macintosh LC, which was introduced at the same time. Like the LC, it has built-in sound support, as well as support for color displays, with a maximum screen resolution of 640 × 480 in eight-bit color.
The IIsi remained on the market for two and a half years and was discontinued shortly after the introduction of its replacement, the Centris 610.
Hardware
The IIsi's case design is a compact desktop unit not used for any other Macintosh model, one of the few Macintosh designs for which this is true. Positioned below the Macintosh IIci as Apple's entry-level professional model, the IIsi was made cheaper by redesigning the motherboard to use a different memory controller, deleting all but one of the expansion card slots (leaving a single Processor Direct Slot), and removing the Level 2 cache slot.
It shipped with either a 40 MB or 80 MB internal hard disk, and a 1.44 MB floppy disk drive. The MC68882 FPU was an optional upgrade, mounted on a special plug-in card. Ports included SCSI, two serial ports, an ADB port, a floppy drive port, and 3.5 mm stereo headphone output and microphone input sockets.
A bridge card was available for the IIsi to convert the Processor Direct slot to a standard internal NuBus card slot, compatible with other machines in the Macintosh II family. The bridge card included a math co-processor to improve floating-point performance. The NuBus card was mounted horizontally above the motherboard.
To cut costs, the IIsi's video shared the main system memory, which also had the effect of slowing down video considerably, especially as the IIsi had 1 MB of slow RAM soldered to the motherboard. David Pogue's book Macworld Macintosh Secrets observed that one could speed up video considerably if one set the disk cache size large enough to force the computer to draw video RAM from faster RAM installed in the SIMM banks.
The IIsi also suffers from sound difficulties: over time, the speaker contacts can fail, causing the sound to periodically drop out. This problem was caused by the very modular construction of the computer, where the mono loudspeaker is on a daughterboard under the motherboard, with springy contacts. Speaker vibrations led to fretting of the touching surfaces. The problem could be solved by removing the motherboard and using a pencil eraser to clean the contacts of the daughterboard holding the loudspeaker. As the IIsi is the only Macintosh to use this case design, these issues were never corrected in a subsequent model. The IIsi was designed to be easily and cheaply manufactured, such that no tools were required to put one together – everything is held in place with clips or latches.
Because of its heritage as a cut-down IIci, it was a simple modification to substitute a new clock crystal to increase the system's clock rate to 25 MHz for a slight increase in performance and a large increase in video rendering speed.
Trivia
Charles Bukowski was an enthusiastic user of the IIsi.
References
External links
Macintosh IIsi teardown at ifixit.com
si
IIsi
IIsi
IIsi
Macintosh case designs
Computer-related introductions in 1990
Client (computing)
In computing, a client is a piece of computer hardware or software that accesses a service made available by a server as part of the client–server model of computer networks. The server is often (but not always) on another computer system, in which case the client accesses the service by way of a network.
A client is a computer or a program that, as part of its operation, sends requests to another program or computer in order to access a service made available by a server (which may or may not be located on another computer). For example, web browsers are clients that connect to web servers and retrieve web pages for display. Email clients retrieve email from mail servers. Online chat uses a variety of clients, which vary depending on the chat protocol in use. Multiplayer or online video games may run as a client on each computer. The term "client" may also be applied to computers or devices that run the client software, or to users that use the client software.
A client is part of the client–server model, which is still used today. Clients and servers may be computer programs run on the same machine that connect via inter-process communication techniques. Combined with Internet sockets, programs may connect to a service operating on a possibly remote system through the Internet protocol suite. Servers wait for potential clients to initiate connections that they may accept.
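As a minimal illustration of a client initiating a connection to a waiting server, the sketch below opens a TCP socket to a hypothetical echo service and exchanges one message. The host name and port are placeholders; substitute a server that is actually listening:

```python
import socket

# "example.com" and port 7 (the classic echo service) are placeholders.
with socket.create_connection(("example.com", 7), timeout=5) as conn:
    conn.sendall(b"hello\n")   # the client initiates the exchange
    reply = conn.recv(1024)    # read up to 1 KB of the server's response
    print(reply.decode("utf-8", errors="replace"))
```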
The term was first applied to devices that were not capable of running their own stand-alone programs, but could interact with remote computers via a network. These computer terminals were clients of the time-sharing mainframe computer.
Types
In one classification, client computers and devices are either thick clients, thin clients, or diskless nodes.
Thick
A thick client, also known as a rich client or fat client, is a client that performs the bulk of any data processing operations itself, and does not necessarily rely on the server. The personal computer is a common example of a fat client, because of its relatively large set of features and capabilities and its light reliance upon a server. For example, a computer running an art program (such as Krita or Sketchup) that ultimately shares the result of its work on a network is a thick client. A computer that runs almost entirely standalone, apart from sending and receiving files over a network, is commonly called a workstation.
Thin
A thin client is a minimal sort of client. Thin clients use the resources of the host computer. A thin client generally only presents processed data provided by an application server, which performs the bulk of any required data processing. A device running a web application (such as Office Web Apps) is a thin client.
Diskless node
A diskless node is a mixture of the above two client models. Similar to a fat client, it processes locally, but relies on the server for storing persistent data. This approach offers features from both the fat client (multimedia support, high performance) and the thin client (high manageability, flexibility). A device running an online version of the video game Diablo III is an example of a diskless node.
References
Peer-to-peer computing
Limitations on exclusive rights: Computer programs
Limitations on exclusive rights: Computer programs is the title of the current form of section 117 of the U.S. Copyright Act (17 U.S.C. § 117). In United States copyright law, it provides users with certain adaptation rights for computer software that they own.
Background
The current form of section 117 is the result of a recommendation by CONTU, the National Commission on New Technological Uses of Copyrighted Works. The U.S. Congress established CONTU to study and make recommendations on modifying the 1976 Copyright Act to deal with new technologies, particularly computer software, that Congress had not addressed when it passed the 1976 Act. CONTU operated from 1975 to 1978, and its principal recommendation to Congress was to revise the wording of section 117. Its report stated:
Because the placement of a work into a computer is the preparation of a copy, the law should provide that persons in rightful possession of copies of programs be able to use them freely without fear of exposure to copyright liability. Obviously, creators, lessors, licensors, and vendors of copies of programs intend that they be used by their customers, so that rightful users would but rarely need a legal shield against potential copyright problems. It is easy to imagine, however, a situation in which the copyright owner might desire, for good reason or none at all, to force a lawful owner or possessor of a copy to stop using a particular program. One who rightfully possesses a copy of a program, therefore, should be provided with a legal right to copy it to that extent which will permit its use by that possessor. This would include the right to load it into a computer and to prepare archival copies of it to guard against destruction or damage by mechanical or electrical failure. But this permission would not extend to other copies of the program. Thus, one could not, for example, make archival copies of a program and later sell some while retaining some for use. The sale of a copy of a program by a rightful possessor to another must be of all rights in the program, thus creating a new rightful possessor and destroying that status as regards the seller.
The revisions recommended by CONTU were approved with one important change. Instead of "rightful possessor" of a computer program Congress used the word "owner" of a computer program. It is not clear why this change was made. This one change resulted in a state of affairs in which software vendors began to take the position that customers do not own their software but rather only "license" it. The courts have split on whether the assertion in software agreements that the customer does not own the software, and has only a right to use it in accordance with the license agreement, is legally enforceable.
Users' rights under § 117
Section 117 is a limitation on the rights granted to holders of copyright on computer programs. The limitation allows the owner of a particular copy of a copyrighted computer program to make copies or adaptations of the program for any of several reasons:
Utilization of the program. The user is allowed to install the software to his hard disk and run the software in random-access memory.
Making backup and archival copies. The user is allowed to make copies of the software to protect himself from loss in the event of the original distribution media being damaged.
Making copies of software in order to repair or maintain machines, provided that the copies used in repairing the machine is destroyed after the repair or maintenance is complete.
The law allows copies created for the above purposes to be transferred when the software is sold, but only along with the copy from which they were made. Adaptations cannot be transferred without permission from the copyright holder.
Reverse engineering
While it is not part of section 117, it is also lawful to reverse engineer software for compatibility purposes. Sec. 103(f) of the DMCA (17 U.S.C. § 1201(f)) says that a person in legal possession of a program is permitted to reverse-engineer and circumvent its protection against copying if this is necessary to achieve "interoperability", a term broadly covering the ability of other devices and programs to interact with it, make use of it, and transfer data to and from it in useful ways. A limited exemption exists that allows the knowledge thus gained to be shared and used for interoperability purposes.
More generally, it has been held that reverse engineering is a fair use. In Sega v. Accolade, the Ninth Circuit held that making copies in the course of reverse engineering is a fair use, when it is the only way to get access to the "ideas and functional elements" in the copyrighted code, and when "there is a legitimate reason for seeking such access."
See also
Software patents under United States patent law
References
External links
Text of section 117
CONTU Final Draft
Computer law
United States copyright law
Back Orifice
Back Orifice (often shortened to BO) is a computer program designed for remote system administration. It enables a user to control a computer running the Microsoft Windows operating system from a remote location. The name is a play on Microsoft's BackOffice Server software. It can also control multiple computers at the same time using imaging.
Back Orifice has a client–server architecture. A small and unobtrusive server program is on one machine, which is remotely manipulated by a client program with a graphical user interface on another computer system. The two components communicate with one another using the TCP and/or UDP network protocols. In reference to the Leet phenomenon, this program commonly runs on port 31337.
The program debuted at DEF CON 6 on August 1, 1998 and was the brainchild of Sir Dystic, a member of the U.S. hacker organization Cult of the Dead Cow. According to the group, its purpose was to demonstrate the lack of security in Microsoft's Windows 9x series of operating systems.
Although Back Orifice has legitimate purposes, such as remote administration, other factors make it suitable for illicit uses. The server can hide from cursory looks by users of the system. Since the server can be installed without user interaction, it can be distributed as the payload of a Trojan horse.
For those and other reasons, the antivirus industry immediately categorized the tool as malware and appended Back Orifice to their quarantine lists. Despite this fact, it was widely used by script kiddies because of its simple GUI and ease of installation.
Two sequel applications followed it: Back Orifice 2000, released in 1999, and Deep Back Orifice, by the French Canadian hacking group QHA.
See also
Back Orifice 2000
Sub7
Trojan horse (computing)
Malware
Backdoor (computing)
Rootkit
MiniPanzer and MegaPanzer
File binder
References
External links
Common trojan horse payloads
Windows remote administration software
Cult of the Dead Cow software
Remote administration software | Operating System (OS) | 1,367 |
BIOS interrupt call
BIOS interrupt calls are a facility that operating systems and application programs use to invoke the facilities of the Basic Input/Output System software on IBM PC compatible computers. Traditionally, BIOS calls are mainly used by DOS programs and some other software such as boot loaders (including, mostly historically, relatively simple application software that boots directly and runs without an operating system—especially game software). BIOS runs in the real address mode (real mode) of the x86 CPU, so programs that call BIOS either must also run in real mode or must switch from protected mode to real mode before calling BIOS and then switch back again. For this reason, modern operating systems that use the CPU in protected mode or long mode generally do not use BIOS interrupt calls to support system functions, although they use them to probe and initialize hardware during booting. Because real mode is subject to the 1 MB memory limitation, modern boot loaders (e.g. GRUB2, Windows Boot Manager) use unreal mode or protected mode (executing BIOS interrupt calls in virtual 8086 mode, but only for OS booting) to access up to 4 GB of memory.
In all computers, software instructions control the physical hardware (screen, disk, keyboard, etc.) from the moment the power is switched on. In a PC, the BIOS, pre-loaded in ROM on the motherboard, takes control immediately after the CPU is reset, including during power-up, when a hardware reset button is pressed, or when a critical software failure (a triple fault) causes the mainboard circuitry to automatically trigger a hardware reset. The BIOS tests the hardware and initializes its state; finds, loads, and runs the boot program (usually an OS boot loader, or historically ROM BASIC); and provides basic hardware control to the software running on the machine, which is usually an operating system (with application programs) but may be a directly booting single software application.
For its part, IBM provided all the information needed to use its BIOS fully or to directly utilize the hardware and avoid BIOS completely, when programming the early IBM PC models (prior to the PS/2). From the beginning, programmers had the choice of using BIOS or not, on a per-hardware-peripheral basis. IBM did strongly encourage the authorship of "well-behaved" programs that accessed hardware only through BIOS INT calls (and DOS service calls), to support compatibility of software with current and future PC models having dissimilar peripheral hardware, but IBM understood that for some software developers and hardware customers, a capability for user software to directly control the hardware was a requirement. In part, this was because a significant subset of all the hardware features and functions was not exposed by the BIOS services. For two examples (among many), the MDA and CGA adapters are capable of hardware scrolling, and the PC serial adapter is capable of interrupt-driven data transfer, but the IBM BIOS supports neither of these useful technical features.
Today, the BIOS in a new PC still supports most, if not all, of the BIOS interrupt function calls defined by IBM for the IBM AT (introduced in 1984), along with many more newer ones, plus extensions to some of the originals (e.g. expanded parameter ranges) promulgated by various other organizations and collaborative industry groups. This, combined with a similar degree of hardware compatibility, means that most programs written for an IBM AT can still run correctly on a new PC today, assuming that the faster speed of execution is acceptable (which it typically is for all but games that use CPU-based timing). Despite the considerable limitations of the services accessed through the BIOS interrupts, they have proven extremely useful and durable to technological change.
Purpose of BIOS calls
BIOS interrupt calls perform hardware control or I/O functions requested by a program, return system information to the program, or do both. A key element of the purpose of BIOS calls is abstraction - the BIOS calls perform generally defined functions, and the specific details of how those functions are executed on the particular hardware of the system are encapsulated in the BIOS and hidden from the program. So, for example, a program that wants to read from a hard disk does not need to know whether the hard disk is an ATA, SCSI, or SATA drive (or in earlier days, an ESDI drive, or an MFM or RLL drive with perhaps a Seagate ST-506 controller, perhaps one of the several Western Digital controller types, or with a different proprietary controller of another brand). The program only needs to identify the BIOS-defined number of the drive it wishes to access and the address of the sector it needs to read or write, and the BIOS will take care of translating this general request into the specific sequence of elementary operations required to complete the task through the particular disk controller hardware that is connected to that drive. The program is freed from needing to know how to control at a low level every type of hard disk (or display adapter, or port interface, or real-time clock peripheral) that it may need to access. This both makes programming operating systems and applications easier and makes the programs smaller, reducing the duplication of program code, as the functionality that is included in the BIOS does not need to be included in every program that needs it; relatively short calls to the BIOS are included in the programs instead. (In operating systems where the BIOS is not used, service calls provided by the operating system itself generally fulfill the same function and purpose.)
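For instance, reading a sector through the classic INT 13h disk service requires only the BIOS-defined drive number, the cylinder/head/sector address, and a buffer for the data; the BIOS handles the controller-specific details. The following is a minimal sketch, assuming real mode, a diskette in the first drive, and an assumed label buffer in a segment already loaded into ES (disk_error is likewise an assumed label):

mov ah, 0x02       ; function 02h: read sectors into memory
mov al, 1          ; read one sector
mov ch, 0          ; cylinder 0
mov cl, 1          ; sector 1 (CHS sector numbers start at 1)
mov dh, 0          ; head 0
mov dl, 0x00       ; drive 0 = first diskette drive (0x80 = first hard disk)
mov bx, buffer     ; ES:BX = address of the caller's data buffer
int 0x13           ; call INT 13h, BIOS disk service
jc  disk_error     ; carry flag set on failure; AH holds a status code

The same few instructions work whether the drive is attached to an ST-506, ESDI, ATA, or other controller, which is the abstraction described above.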
The BIOS also frees computer hardware designers (to the extent that programs are written to use the BIOS exclusively) from being constrained to maintain exact hardware compatibility with old systems when designing new systems, in order to maintain compatibility with existing software. For example, the keyboard hardware on the IBM PCjr works very differently than the keyboard hardware on earlier IBM PC models, but to programs that use the keyboard only through the BIOS, this difference is nearly invisible. (As a good example of the other side of this issue, a significant share of the PC programs in use at the time the PCjr was introduced did not use the keyboard through BIOS exclusively, so IBM also included hardware features in the PCjr to emulate the way the original IBM PC and IBM PC XT keyboard hardware works. The hardware emulation is not exact, so not all programs that try to use the keyboard hardware directly will work correctly on the PCjr, but all programs that use only the BIOS keyboard services will.)
In addition to giving access to hardware facilities, BIOS provides added facilities that are implemented in the BIOS software. For example, the BIOS maintains separate cursor positions for up to eight text display pages and provides for TTY-like output with automatic line wrap and interpretation of basic control characters such as carriage return and line feed, whereas the CGA-compatible text display hardware has only one global display cursor and cannot automatically advance the cursor, use the cursor position to address the display memory (so as to determine which character cell will be changed or examined), or interpret control characters. For another example, the BIOS keyboard interface interprets many keystrokes and key combinations to keep track of the various shift states (left and right Shift, Ctrl, and Alt), to call the print-screen service when Shift+PrtScrn is pressed, to reboot the system when Ctrl+Alt+Del is pressed, to keep track of the lock states (Caps Lock, Num Lock, and Scroll Lock) and, in AT-class machines, control the corresponding lock-state indicator lights on the keyboard, and to perform other similar interpretive and management functions for the keyboard. In contrast, the ordinary capabilities of the standard PC and PC-AT keyboard hardware are limited to reporting to the system each primitive event of an individual key being pressed or released (i.e. making a transition from the "released" state to the "depressed" state or vice versa), performing a commanded reset and self-test of the keyboard unit, and, for AT-class keyboards, executing a command from the host system to set the absolute states of the lock-state indicators (LEDs).
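As a minimal sketch of the added value of the BIOS keyboard service (assuming real mode), a single call returns a fully interpreted keystroke, with the shift-state handling described above already applied:

mov ah, 0x00       ; function 00h: wait for a keystroke and read it
int 0x16           ; call INT 16h, BIOS keyboard service
                   ; returns AH = scan code, AL = ASCII character (0 if none)

Obtaining the same result from the raw keyboard hardware would require the program itself to handle the individual make/break scan codes and track all of the shift and lock states.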
Calling BIOS: BIOS software interrupts
Operating systems and other software communicate with the BIOS software, in order to control the installed hardware, via software interrupts. A software interrupt is a specific variety of the general concept of an interrupt. An interrupt is a mechanism by which the CPU can be directed to stop executing the main-line program and immediately execute a special program, called an Interrupt Service Routine (ISR), instead. Once the ISR finishes, the CPU continues with the main program. On x86 CPUs, when an interrupt occurs, the ISR to call is found by looking it up in a table of ISR starting-point addresses (called "interrupt vectors") in memory: the Interrupt Vector Table (IVT). An interrupt is invoked by its type number, from 0 to 255, and the type number is used as an index into the Interrupt Vector Table, and at that index in the table is found the address of the ISR that will be run in response to the interrupt. A software interrupt is simply an interrupt that is triggered by a software command; therefore, software interrupts function like subroutines, with the main difference that the program that makes a software interrupt call does not need to know the address of the ISR, only its interrupt number. This has advantages for modularity, compatibility, and flexibility in system configuration.
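In real mode the IVT normally begins at physical address 0, and each vector occupies 4 bytes: a 16-bit offset followed by a 16-bit segment. As a minimal sketch, the following reads the vector for INT 10h (assuming the table has not been relocated, which is the usual case under DOS):

xor ax, ax
mov es, ax            ; ES = 0000h, the segment of the Interrupt Vector Table
mov bx, 4 * 0x10      ; each vector is 4 bytes; this is the INT 10h entry
mov cx, [es:bx]       ; CX = offset of the INT 10h service routine
mov dx, [es:bx + 2]   ; DX = segment of the INT 10h service routine

The INT instruction performs this lookup automatically; programs only read or write the table directly when inspecting or hooking a vector.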
BIOS interrupt calls can be thought of as a mechanism for passing messages between BIOS and BIOS client software such as an operating system. The messages request data or action from BIOS and return the requested data, status information, and/or the product of the requested action to the caller. The messages are broken into categories, each with its own interrupt number, and most categories contain sub-categories, called "functions" and identified by "function numbers". A BIOS client passes most information to BIOS in CPU registers, and receives most information back the same way, but data too large to fit in registers, such as tables of control parameters or disk sector data for disk transfers, is passed by allocating a buffer (i.e. some space) in memory and passing the address of the buffer in registers. (Sometimes multiple addresses of data items in memory may be passed in a data structure in memory, with the address of that structure passed to BIOS in registers.) The interrupt number is specified as the parameter of the software interrupt instruction (in Intel assembly language, an "INT" instruction), and the function number is specified in the AH register; that is, the caller sets the AH register to the number of the desired function. In general, the BIOS services corresponding to each interrupt number operate independently of each other, but the functions within one interrupt service are handled by the same BIOS program and are not independent. (This last point is relevant to reentrancy.)
The BIOS software usually returns to the caller with an error code if not successful, or with a status code and/or requested data if successful. The data itself can be as small as one bit or as large as 65,536 bytes of whole raw disk sectors (the maximum that will fit into one real-mode memory segment). BIOS has been expanded and enhanced over the years many times by many different corporate entities, and unfortunately the result of this evolution is that not all the BIOS functions that can be called use consistent conventions for formatting and communicating data or for reporting results. Some BIOS functions report detailed status information, while others may not even report success or failure but just return silently, leaving the caller to assume success (or to test the outcome some other way). Sometimes it can also be difficult to determine whether or not a certain BIOS function call is supported by the BIOS on a certain computer, or what the limits of a call's parameters are on that computer. (For some invalid function numbers, or valid function numbers with invalid values of key parameters—particularly with an early IBM BIOS version—the BIOS may do nothing and return with no error code; then it is the [inconvenient but inevitable] responsibility of the caller either to avoid this case by not making such calls, or to positively test for an expected effect of the call rather than assuming that the call was effective. Because BIOS has evolved extensively in many steps over its history, a function that is valid in one BIOS version from some certain vendor may not be valid in an earlier or divergent BIOS version from the same vendor or in a BIOS version—of any relative age—from a different vendor.)
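One widely used convention for probing support is an installation check: the caller passes a signature value, and a BIOS that implements the function proves it by transforming the signature. A minimal sketch of the INT 13h Extensions installation check follows (the label no_extensions is an assumption):

mov ah, 0x41       ; function 41h: INT 13h Extensions installation check
mov bx, 0x55AA     ; required signature parameter
mov dl, 0x80       ; drive 0x80 = first hard disk
int 0x13           ; call INT 13h, BIOS disk service
jc  no_extensions  ; carry flag set: extensions not supported
cmp bx, 0xAA55     ; a supporting BIOS returns the reversed signature
jne no_extensions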
Because BIOS interrupt calls use CPU register-based parameter passing, the calls are oriented to being made from assembly language and cannot be directly made from most high-level languages (HLLs). However, a high level language may provide a library of wrapper routines which translate parameters from the form (usually stack-based) used by the high-level language to the register-based form required by BIOS, then back to the HLL calling convention after the BIOS returns. In some variants of C, BIOS calls can be made using inline assembly language within a C module. (Support for inline assembly language is not part of the ANSI C standard but is a language extension; therefore, C modules that use inline assembly language are less portable than pure ANSI standard C modules.)
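A minimal sketch of such a wrapper, written for an assumed 16-bit small-memory-model C compiler with cdecl conventions (the name bios_putchar and the exact stack layout are assumptions, not a specific vendor's library):

_bios_putchar:           ; assumed C prototype: void bios_putchar(int c);
push bp                  ; set up a stack frame
mov  bp, sp
mov  al, [bp + 4]        ; fetch the character argument from the C stack
mov  ah, 0x0e            ; function 0Eh: teletype output
int  0x10                ; call INT 10h, BIOS video service
pop  bp
ret                      ; under cdecl, the caller removes the argument

The wrapper moves the stack-passed argument into the register the BIOS expects, which is exactly the translation described above.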
Invoking an interrupt
Invoking an interrupt can be done using the INT x86 assembly language instruction. For example, to print a character to the screen using BIOS interrupt 0x10, the following x86 assembly language instructions could be executed:
mov ah, 0x0e ; function number = 0Eh : Display Character
mov al, '!' ; AL = code of character to display
int 0x10 ; call INT 10h, BIOS video service
Interrupt table
A list of common BIOS interrupt classes can be found below. Note that some BIOSes (particularly old ones) do not implement all of these interrupt classes.
The BIOS also uses some interrupts to relay hardware event interrupts to programs which choose to receive them or to route messages for its own use. The table below includes only those BIOS interrupts which are intended to be called by programs (using the "INT" assembly-language software interrupt instruction) to request services or information.
INT 18h: execute BASIC
Traditionally jumped to an implementation of Cassette BASIC (provided by Microsoft) stored in Option ROMs. This call would typically be invoked if the BIOS was unable to identify any bootable disk volumes on startup.
At the time the original IBM PC (IBM machine type 5150) was released in 1981, the BASIC in ROM was a key feature. Contemporary popular personal computers such as the Commodore 64 and the Apple II line also had Microsoft Cassette BASIC in ROM (though Commodore renamed their licensed version Commodore BASIC), so in a substantial portion of its intended market, the IBM PC needed BASIC to compete. As on those other systems, the IBM PC's ROM BASIC served as a primitive diskless operating system, allowing the user to load, save, and run programs, as well as to write and refine them. (The original IBM PC was also the only PC model from IBM that, like its aforementioned two competitors, included cassette interface hardware. A base model IBM PC had only 16 KiB of RAM and no disk drives [of any kind], so the cassette interface and BASIC in ROM were essential to make the base model usable. An IBM PC with less than 32 KiB of RAM is incapable of booting from disk. Of the five 8 KiB ROM chips in an original IBM PC, totaling 40 KiB, four contain BASIC and only one contains the BIOS; when only 16 KiB of RAM are installed, the ROM BASIC accounts for over half of the total system memory [4/7ths, to be precise].)
As time went on and BASIC was no longer shipped on all PCs, this interrupt would simply display an error message indicating that no bootable volume was found (famously, "No ROM BASIC", or more explanatory messages in later BIOS versions); in other BIOS versions it would prompt the user to insert a bootable volume and press a key, and then after the user pressed a key it would loop back to the bootstrap loader (INT 19h) to try booting again.
Digital's Rainbow 100B used INT 18h to call its BIOS, which was incompatible with the IBM BIOS. Turbo Pascal, Turbo C and Turbo C++ repurposed INT 18h for memory allocation and paging. Other programs also reused this vector for their own purposes.
BIOS hooks
DOS
On DOS systems, IO.SYS or IBMBIO.COM hooks INT 13h for floppy disk change detection, tracking formatting calls, correcting DMA boundary errors, and working around problems in IBM's ROM BIOS "01/10/84" with model code 0xFC before the first call.
Bypassing BIOS
Many modern operating systems (such as Linux and Windows NT) bypass BIOS interrupt calls after startup: the OS kernel switches the CPU into protected mode or long mode at boot and prefers to use its own programs (such as kernel drivers) to control the attached hardware directly. The reason for this was primarily that these operating systems run the processor in protected mode or long mode, whereas making a BIOS interrupt call requires switching to real mode or unreal mode, or using virtual 8086 mode, and all of these modes are slow. However, there are also serious security reasons not to switch to real mode, and the real mode BIOS code has limitations both in functionality and speed that motivate operating system designers to find a replacement for it. In fact, the speed limitations of the BIOS made it common even in the DOS era for programs to circumvent it in order to avoid its performance limitations, especially for video graphics display and fast serial communication. The problems with BIOS functionality include limitations in the range of functions defined, inconsistency in the subsets of those functions supported on different computers, and variations in the quality of BIOSes (i.e. some BIOSes are complete and reliable, others are abridged and buggy). By taking matters into their own hands and avoiding reliance on BIOS, operating system developers can eliminate some of the risks and complications they face in writing and supporting system software. On the other hand, by doing so those developers become responsible for providing "bare-metal" driver software for every different system or peripheral device they intend for their operating system to work with (or for inducing the hardware producers to provide those drivers). Thus it should be apparent that compact operating systems developed on small budgets would tend to use BIOS heavily, while large operating systems built by huge groups of software engineers with large budgets would more often opt to write their own drivers instead of using BIOS—that is, even without considering the compatibility problems of BIOS and protected mode.
See also
DOS interrupt call
Interrupt descriptor table
Input/Output Base Address
Ralf Brown's Interrupt List
References
The x86 Interrupt List (a.k.a. RBIL, Ralf Brown's Interrupt List)
Embedded BIOS User's Manual
PhoenixBIOS 4.0 User's Manual
IBM Personal System/2 and Personal Computer BIOS Interface Technical Reference, IBM, 1988,
System BIOS for IBM PCs, Compatibles, and EISA Computers, Phoenix Technologies, 1991,
Programmer's Guide to the AMIBIOS, American Megatrends, 1993,
The Programmer's PC Sourcebook by Thom Hogan, Microsoft Press, 1991
BIOS
Interrupts
Application programming interfaces | Operating System (OS) | 1,368 |
Adoption (software implementation)
In computing, adoption means the transition (conversion) from an old system to a target system in an organization (or more broadly, by anyone).
A company working with an old software system may want to change to a new system that is more efficient, has more capacity, and so on. The new system then needs to be adopted, after which it can be used by its users.
There are several adoption strategies that can be used to implement a system in an organization. The main strategies are big bang adoption, parallel adoption and phased adoption. "Big bang" is a metaphor for the cosmological theory of the same name, in which the start of the cosmos happened at one moment in time. This is also the case with the big bang adoption approach, in which the new system is supposed to be adopted wholesale on one date. In the case of parallel adoption, the old and the new system are run in parallel initially, so that all the users can get used to the new system, but still can do their work using the old system if they want to or need to do so. Phased adoption means that the adoption happens in several phases, so that after each phase the system is a little closer to being fully adopted by the organization.
Selecting an adoption strategy
The adoption strategy has to be selected before adoption begins, and is chosen based on the goals to be achieved and on the type of system to be implemented. The three types of adoption, Big Bang, parallel adoption and phased adoption, range from an instant switch to a strategy where users progressively start using the new system over a certain period of time (which can be weeks, months or even years).
The actual selection is done by prioritizing the goals to be achieved and then matching a strategy against it (Eason, 1988). Eason defines the following goals:
Possible requirement of a “critical mass” to make the system work.
If a large critical mass is, or might be, needed for the system to work effectively (e.g. due to network effects), a big bang strategy might be the answer. (Rogers, 1995)
Need for risk control, if risk is involved.
Minimising risk to the ongoing operation of the organization can be very important. Parallel and phased introductions might help to control these risks, depending on the situation.
Need for facilitation of the change.
The organization has to be ready for the changeover. Socio-technical preparations such as training sessions and ready-made scenarios must be clear.
Pace of change
If the new system is designed to meet new requirements, such as business process reengineering, the relevant consideration is the speed at which the organization is changing over to the new processes or attempting to meet those requirements.
Local design needs
The system might need to be adjusted to the users' needs. In this case, the chosen strategy must provide the opportunity to do so.
Table 1: Eason matrix
The actual selection of an adoption strategy depends on far more factors than these goals, but they create a window through which to choose one of the types. Other criteria are called variables (Gallivan, 1996). Gallivan suggests that the appropriate adoption type depends on:
Innovativeness of the individuals: attributes of those who are to adopt the innovation/system
The type of innovation: is it a process or a product innovation?
Attributes of the innovation itself: preparedness, communicability and divisibility
The implementation complexity: how complex is the implementation, and what is its extent?
These variables are of a higher level than the criteria of Eason and should be handled as such. Based on table 1 and on the higher-level variables mentioned by Gallivan, one can select an appropriate strategy.
Preparing an organization for adoption
Figure 1: Organization preparation process
In order to prepare the organization for the adoption of the new system, the changes that will take place need to be determined. This is necessary in order to have a plan or overview of the changeover, and can be done by creating requirements for the system. Once the management has determined the requirements in a report of determined changes, they need to agree upon them to be able to move on with the change process. If there is no agreement, the management needs to keep discussing the requirements until they do agree. Once agreement is achieved and the agreement contract is signed, the organization can take further steps. The test phase can then be prepared, in which the validity of the data that will be used is checked and trials are held (Eason, 1988).
In parallel, it is highly recommended that a comprehensive user adoption plan be prepared working together with the business and the affected users. This plan should consider all pre- and post- system rollout communications; user training & documentation; any internal marketing efforts that will be undertaken to drive adoption such as system branding or swag; as well as troubleshooting assistance during the rollout (i.e. extended help desk hours and/or a hotline, and identification of key contacts for each affected business area).
See also
SAP Implementation
References
Eason, K. (1988) Information technology and organizational change, New York: Taylor and Francis
Gallivan, M.J., (1996) Strategies for implementing new software processes: An evaluation of a contingency framework, SIGCPR/SIGMIS ’96, Denver Colorado
Rogers, E.M. (1995), Diffusion of innovations, New York: Free Press
Dodson, J. (2011), 4 Stops to Navigating the Treacherous Highway of Enterprise Software Adoption, Washington
Information systems | Operating System (OS) | 1,369 |
Systems design
Systems design is the process of defining the architecture, product design, modules, interfaces, and data for a system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development. There is some overlap with the disciplines of systems analysis, systems architecture and systems engineering.
Overview
If the broader topic of product development "blends the perspective of marketing, design, and manufacturing into a single approach to product development," then design is the act of taking the marketing information and creating the design of the product to be manufactured. Systems design is therefore the process of defining and developing systems to satisfy specified requirements of the user.
The basic study of system design is the understanding of component parts and their subsequent interaction with one another.
Until the 1990s, systems design had a crucial and respected role in the data processing industry. In the 1990s, standardization of hardware and software resulted in the ability to build modular systems. The increasing importance of software running on generic platforms has enhanced the discipline of software engineering.
Architectural design
The architectural design of a system emphasizes the design of the system architecture that describes the structure, behavior and more views of that system and analysis.
Logical design
The logical design of a system pertains to an abstract representation of the data flows, inputs and outputs of the system. This is often conducted via modelling, using an abstract (and sometimes graphical) model of the actual system. Logical design includes entity-relationship diagrams (ER diagrams).
Physical design
The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed.
In physical design, the following requirements about the system are decided.
Input requirement,
Output requirements,
Storage requirements,
Processing requirements,
System control and backup or recovery.
Put another way, the physical portion of system design can generally be broken down into three sub-tasks:
User Interface Design
Data Design
Process Design
User Interface Design is concerned with how users add information to the system and with how the system presents information back to them. Data Design is concerned with how the data is represented and stored within the system. Finally, Process Design is concerned with how data moves through the system, and with how and where it is validated, secured and/or transformed as it flows into, through and out of the system. At the end of the system design phase, documentation describing the three sub-tasks is produced and made available for use in the next phase.
Physical design, in this context, does not refer to the tangible physical design of an information system. To use an analogy, a personal computer's physical design involves input via a keyboard, processing within the CPU, and output via a monitor, printer, etc. It would not concern the actual layout of the tangible hardware, which for a PC would be a monitor, CPU, motherboard, hard drive, modems, video/graphics cards, USB slots, etc.
Physical design also involves detailed design of the user interface and the product database structure, along with the processing and control procedures. A hardware/software specification is developed for the proposed system.
Related disciplines
Benchmarking – is an effort to evaluate how current systems perform
Computer programming and debugging in the software world, or detailed design in the consumer, enterprise or commercial world - specifies the final system components.
Hardware architecture and design - In engineering, hardware architecture refers to the identification of a system's physical components and their interrelationships
Design – designers will produce one or more 'models' of what they see a system eventually looking like, with ideas from the analysis section either used or discarded. A document will be produced with a description of the system, but nothing is specific – they might say 'touchscreen' or 'GUI operating system', but not mention any specific brands;
Requirements analysis – analyzes the needs of the end users or customers
System architecture – creates a blueprint for the design with the necessary structure and behavior specifications for the hardware, software, people and data resources. In many cases, multiple architectures are evaluated before one is selected.
System testing – evaluates the system's actual functionality in relation to expected or intended functionality, including all integration aspects.
Alternative design methodologies
Rapid application development (RAD)
Rapid application development (RAD) is a methodology in which a system designer produces prototypes for an end-user. The end-user reviews the prototype, and offers feedback on its suitability. This process is repeated until the end-user is satisfied with the final system.
Joint application design (JAD)
Joint application design (JAD) is a methodology which evolved from RAD, in which a system designer consults with a group consisting of the following parties:
Executive sponsor
System Designer
Managers of the system
JAD involves a number of stages, in which the group collectively develops an agreed pattern for the design and implementation of the system.
See also
Arcadia (engineering)
Architectural pattern (computer science)
Configuration design
Electronic design automation (EDA)
Electronic system-level (ESL)
Embedded system
Graphical system design
Hypersystems
Modular design
Morphological analysis (problem-solving)
SCSD (School Construction Systems Development) project
System information modelling
System development life cycle (SDLC)
System engineering
System thinking
TRIZ
References
Further reading
Bentley, Lonnie D., Kevin C. Dittman, and Jeffrey L. Whitten. System analysis and design methods. (1986, 1997, 2004).
Hawryszkiewycz, Igor T. Introduction to system analysis and design. Prentice Hall PTR, 1994.
Levin, Mark Sh. Modular system design and evaluation. Springer, 2015.
External links
Interactive System Design. Course by Chris Johnson, 1993
Course by Prof. Birgit Weller, 2020
Computer systems
Electronic design automation
Software design | Operating System (OS) | 1,370 |
IBM System/32
The IBM System/32 (IBM 5320) introduced in January 1975 was a midrange computer with built-in display screen, disk drives, printer, and database report software. It was used primarily by small to midsize businesses for accounting applications. RPG II was the primary programming language for the machine.
Overview
The 16-bit single-user System/32, also known as the IBM 5320, was introduced in 1975, and it was the successor to the IBM System/3 model 6 in the IBM midrange computer line. IBM described it as "the first system to incorporate hardware and comprehensive application software." The New York Times described the 32 as "a compact computer for first‐time users with little or no computer programming experience." Within 40 months, "the System/32 had surpassed the IBM System/3 as the most installed IBM computer."
The computer looked like a large office desk with a very small six-line by forty-character display and a keyboard similar to an IBM keypunch. Having the appearance of a computerized desk, the System/32 was nicknamed the "Bionic Desk" after The Six Million Dollar Man (bionic man), a popular U.S. TV program when the computer was introduced in 1975. The 32 had a built-in line printer that directly faced the operator when seated, and could print reports, memos, billing statements, address labels, etc.
It was introduced on January 7, 1975, and withdrawn from marketing on October 17, 1984. Migration to the IBM System/34 was generally simple because source code was compatible and programs just needed recompilation.
Processor
The System/32 featured a 16-bit processor with a 200 ns cycle time known as the Control Storage Processor (CSP). Whereas the System/3 used a hardwired processor, the System/32 implemented the System/3 instruction set in microcode. The System/32 processor utilized a vertical microcode format, with each microinstruction occupying 16 bits of control storage. There were 19 different microinstruction opcodes; however, certain microinstructions could carry out different operations depending on which bits were set in the rest of the microinstruction, with the result that there were about 70 distinct operations available. An optional set of Scientific Macroinstructions was also available, which was used to support a Fortran compiler by implementing support for floating-point arithmetic in microcode. Some IBM engineers, including Glenn Henry and Frank Soltis, have retrospectively described the System/32's microcode as resembling a RISC instruction set.
The performance of the System/3 emulation was poor, which led IBM to implement performance critical parts of the SCP operating system directly in microcode. The later System/34 and System/36 systems addressed this problem by using two different processors - the System/32 CSP architecture was used exclusively for operating system, I/O control and floating point code, whereas user code ran on the Main Storage Processor (MSP) which implemented the System/3 instruction set directly in hardware without microcode. The use of microcode to implement instruction set emulation as well as performance-critical operating system components had some influence on the design of the microcode layers in the later System/38.
Memory/storage
It had 16, 24, or 32 kilobytes of main memory, and 4 or 8 kilobytes of control storage. The larger control store was an optional extra, and was needed to support the scientific instruction set.
A single hard drive was available in one of three sizes:
5 MB
9 MB
13 MB
The system included an eight-inch floppy drive that could also read floppies from the IBM 3740 family.
Only one side of the 77-track floppy diskette was used. Each track held 26 128-byte sectors. An extended format was offered by IBM, and it permitted 512 bytes per sector. Even so, that came to an 8-inch floppy holding less than one third of a megabyte.
System/32 operator
When keying input data, the operator viewed the character display, which was also common to the then-current IBM 3740 family of systems for data entry to floppy disk media.
A computer specialist was not required for the operation of System/32.
System software
Some terms associated with the System/32's software include:
SCP (System Control Program, the operating system of the System/32),
SEU (Source Entry Utility, the programming editor),
DFU (Data File Utility, a query and report generator),
OCL (Operations Control Language, the command-line language), and
#LIBRARY (the directory or disk partition in which executable code was stored).
See also
IBM System/3
IBM System/34
IBM System/36
IBM System/38
References
External links
A System/32 restoration project
Video of Corestore Museum System/32 performing IMPL/IPL from disk
Insightful newsgroup post about System/32 and System/34 architecture
Photographs
System/32
S/32 front view with one panel open
S/32 rear view with both panels open
Large Image of IBM 5320
System 32
Computer-related introductions in 1975
16-bit computers | Operating System (OS) | 1,371 |
Sugon
Sugon (), officially Dawning Information Industry Company Limited, is a supercomputer manufacturer based in the People's Republic of China. The company is a spin-off from research done at the Chinese Academy of Sciences (CAS), and still has close links to it.
History
The company is a development of work done at the Institute of Computer Science, CAS. Under the Chinese government's 863 Program, for the research and development of high technology products, the group launched their first supercomputer (Dawning No. 1) in 1993. In 1996 the group launched the Dawning Company to allow the transfer of research computers into the market.
The company was tasked with developing further supercomputers under the 863 program, which led to the Dawning 5000A and 6000 computers.
The company was listed on the Shanghai Stock Exchange in 2014. CAS still retains stock in the company.
U.S. sanctions
According to the United States Department of Defense the company has links to the People's Liberation Army and, in 2019, Sugon was added to the Bureau of Industry and Security's Entity List due to U.S. national security concerns. In November 2020, the then President of the United States Donald Trump issued an executive order prohibiting any American company or individual from owning shares in companies that the United States Department of Defense has listed as having links to the People's Liberation Army, which included Sugon.
Supercomputers
Dawning was the company's initial name; it was later changed to Sugon. The computers were originally known by their Dawning moniker, but Sugon names also appear in the literature. The model series is as below.
Dawning No.1
The first supercomputer created was Dawning No.1 (Shuguang Yihao, 曙光一号), which received state certification in October 1993. This supercomputer achieved 640 million FLOPS and used four Motorola 88100 CPUs together with eight 88200 cache/memory-management units; more than 20 were built. The operating system was UNIX System V.
Dawning 1000
The Dawning 1000 was Sugon's second generation supercomputer, and was originally named Dawning No.2 (Shuguang Erhao, 曙光二号). Dawning 1000 was released in 1995, and received state certification on 11 May 1995. The family of supercomputers could achieve 2.5 GFLOPS. This series of the Dawning family consists of the Dawning 1000A and 1000L.
Dawning 2000
The Dawning 2000 was initially released in 1996, and could achieve a peak performance of 4 GFLOPS. A further variant, the Dawning 2000-I, was released in 1998 with a peak performance of 20 GFLOPS. The final model in the series, the Dawning 2000-II, was released in 1999 with a peak performance of 111.7 GFLOPS.
The Dawning 2000 passed state certification on 28 January 2000. The supercomputer model was designed as a cluster to achieve over 100 GFLOPS peak performance. The number of CPUs used was greatly increased to 164 in comparison with older models, and like earlier models, the operating system is UNIX.
Dawning 3000
The Dawning 3000 passed state certification on 9 March 2001. Like the Dawning 2000, the system was designed as a cluster, and could achieve 400 GFLOPS peak performance. The number of CPUs increased to 280, and the system consists of ten 2-meter tall racks, weighing 5 tons total. Power consumption is 25 kW, and one of the tasks it was used for was the part of human genome mapping that China was responsible for.
Dawning 4000A
The fifth member of the Dawning family, Dawning 4000A, debuted as one of the top 10 fastest supercomputers in the world on the TOP500 list, capable of 806.1 billion FLOPS. The system, at the Shanghai Supercomputer Center, utilizes over 2,560 AMD Opteron processors, and can reach speeds of 8 teraflops.
Dawning 5000
The Dawning 5000 series was initially planned to use indigenous Loongson processors. However, the Shanghai Supercomputer Center required Microsoft Windows support, whereas Loongson only ran under Linux.
The resulting Dawning 5000A uses 7,680 1.9 GHz AMD Opteron Quad-core processors, resulting in 30,720 cores, with an Infiniband interconnecting network. The computer occupies an area of 75 square meters and the power consumption is 700 kW. The supercomputer is capable of 180 teraflops and received state certification in June 2008.
The Dawning 5000A was ranked 10th in the November 2008 TOP500 list. Additionally at the time, it was also the largest system using Windows HPC Server 2008 for this benchmark. This system is also installed at the Shanghai Supercomputer Center and runs with SUSE Linux Enterprise Server 10.
Dawning 6000
The Dawning 6000 was announced in 2011, at 300 TFLOPS, incorporating 3000 8-core Loongson 3B processors at 3.2 GFLOP/W. It is the "first supercomputer made exclusively of Chinese components" and has a projected speed of over a PFLOP (one quadrillion operations per second). For comparison, the fastest supercomputer as of June 2014 runs at 33 PFLOPS. The same announcement said that a petascale supercomputer was under development and that the launch was anticipated in 2012 or 2013.
See also
Shanghai Supercomputer Center
Nebulae - Dawning TC3600
References
External links
Supercomputers
Computer hardware companies
Supercomputing in China
Defence companies of the People's Republic of China | Operating System (OS) | 1,372 |
System Architect
Unicom System Architect is an enterprise architecture tool that is used by the business and technology departments of corporations and government agencies to model their business operations and the systems, applications, and databases that support them. System Architect is used to build architectures using various frameworks including TOGAF, ArchiMate, DoDAF, MODAF, NAF and standard method notations such as SysML, UML, BPMN, and relational data modeling. System Architect is developed by UNICOM Systems, a division of UNICOM Global, a United States-based company.
Overview
Enterprise architecture (EA) is a mechanism for understanding all aspects of the organization, and planning for change. Those aspects include business transformation, business process rationalization, business or capability-driven solution development, application rationalization, transformation of IT to the cloud, server consolidation, service management and deployment, building systems of systems architectures, and so forth. Most simply, users use EA and System Architect to build diagrammatic and textual models of any and all aspects of their organization, including the who, what, where, when, why, and how things are done so they can understand the current situation, and plan for the future. Parts of the EA can be harvested from existing sources of information in the organization—auto-discovery of network architectures, database architectures, etc. The users building the models are typically enterprise architects, business architects, business analysts, data architects, software architects, and so forth. This information can be viewed by all stakeholders of the organization — including its workers, management, and outside vendors (depending on the level of access they have been granted to the information), through generation of the information to a static website, or enabling direct web-access to the information in the repository. The stakeholders can use this information to get answers to questions about the organization's architecture in the form of visual diagrams and reports that produce textual information, pie charts, and other dashboards.
System Architect is widely used in developing defense architectures. The Architecture Development and Analysis Survey, conducted by MITRE Corporation for the US Office of the Assistant Secretary of Defense for Networks & Information Integration (OASD NII) and revealed at the CISA worldwide conference on December 1, 2005, reported that out of 96 programs building DoDAF architectures responding to the survey, 77% used System Architect, either by itself (48%) or in conjunction with another modeling tool (29%).
System Architect has been referenced in textbooks written in the field of enterprise architecture, UML, and data modeling, and was also used to build some or all of the models that appear in some of these books.
History
System Architect was initially created and developed by Jan Popkin under the auspices of Popkin Software. System Architect was one of the first Windows-based computer-aided software engineering (CASE) tools. It evolved through the years to become an enterprise architecture modeling tool — one that enables the end user to utilize many notations and methods to model aspects of their organization in a repository, and disseminate this information to a large audience.
Telelogic acquired Popkin Software in April, 2005 and IBM acquired Telelogic in 2008.
After acquisition of Telelogic, IBM included System Architect (and all other Telelogic products) in the Rational division, named after Rational Software, which it acquired in 2003. On January 1, 2016, IBM announced that UNICOM Global had acquired System Architect from IBM, and that its core development and support team, which originated at Popkin Software, was joining UNICOM Systems to continue to build the product line.
Features
System Architect includes support for:
Enterprise Architecture Frameworks and Reference Models
TOGAF 9.2
ArchiMate v3.1
Unified Architecture Framework (UAF) (the latest version of UPDM)
DoDAF 2.02
MODAF 1.2
NATO Architecture Framework (NAF) 4.0
IAF v4 Integrated Architecture Framework
Federal Enterprise Architecture Framework 2.0 (FEAF 2.0) via Federal EA add-on option (formerly called iRMA)
Zachman Framework
SysML 1.6
UML 2.5 (comprehensive) and UML 2.0 'Lite'
Business Process Modeling Notation (BPMN) 2.0
Simulation of BPMN models through SA/Simulator add-on
BPEL Generation
Service oriented architecture
Balanced Scorecard
OMG Business Motivation Model (BMM) via Enterprise Direction diagram
Cause-Effect Analysis and Gap analysis through Network-style Explorer diagram
Landscape and Heatmap analysis Heat map through Landscape-style Explorer diagram
Analytics
Functional decomposition
Organizational chart
Network architecture modeling
Roadmapping
Application Portfolio Management and Service oriented architecture (SOA) development through SOA add-on
Application Portfolio Management, IT portfolio management, and decision-based trade-off analysis via:
Integration with Unicom Focal Point
Relational Data Modeling - Logical Entity-relationship model and Physical diagramming
Reverse engineering and/or generation of database schema via integration with IBM Infosphere Data Architect
Object-relational mapping
Data flow diagramming
IDEF
Cross-Reference Matrices
Underlying Repository in SQL Server 2012, SQL Server 2008, or SQL Server 2008 Express
Multi-user network environment
SQL-based query reporting language
Role-based access control
Native Requirements management
Interface to DOORS for Requirements management
Extensibility through:
Customizable Metamodeling
Visual Basic for Applications (VBA) for extending functionality
Model-to-model transformations
Report Generation via:
Native Report Generator using SQL-like language
Integration with IBM Rational Publishing Engine
Business Intelligence Dashboard Analysis via:
Cognos-based Business Intelligence (BI) reporting bundled with product
Integration with IBM Rational Insight
Governance of Files and Assets Associated with EA via:
Integration with IBM Rational Asset Manager (RAM)
Web access to Enterprise Architecture repository via:
HTML Generator
Report-based website generation via SA/Publisher add-on
Live web read/write access to repository via SA/XT product
Integrations:
IBM Cognos BI 10.2 via System Architect BI Integrator that ships with product
Microsoft_Power_BI via System Architect BI Integrator that ships with product
Tableau_Software via System Architect BI Integrator that ships with product
Unicom Focal Point (bidirectional integration provided at no extra charge)
IBM Tivoli TADDM/CCMDB (off-the-shelf assets tailored for the customer by services engagement)
WebSphere Business Modeler (export of BPMN diagrams from SA to WBM via no extra charge add-in)
Rational DOORS (no-extra-charge integration provided with SA)
Rational Software Architect (RSA) (bidirectional import/export of UML diagrams; no-extra-charge integration is in RSA tool)
Rational Rhapsody (export of DoDAF information from SA to Rhapsody; no-extra-charge integration is in Rhapsody tool)
Rational Change (no-extra-charge integration provided with SA)
Rational Publishing Engine (RPE) for cross-IBM-Rational-product reporting
Lanner Group Ltd Witness (paid add-on called SA Simulator III)
SAP for process architecture information via IntelliCorp (Software) LiveCapture paid add-on
SAP and other ERP systems (Siebel and PeopleSoft) for data architecture information via SA/ERP paid add-on
Microsoft Office products:
Powerpoint (SA Presentation Integration provided as add-in, uses REST technology for synchronization of Powerpoint and EA repository)
Visio (import from Visio to SA provided at no charge via macro )
Microsoft Word and Excel (auto-generation of Word documents and Excel files from EA repository via VBA macros provided in SA, and via reporting engine)
Technical overview
Graphic models and their underlying information are created and stored in a relational database in the latest versions of Microsoft SQL Server or SQL Server Express. This database is considered a repository of information and in System Architect parlance is called an encyclopedia.
Users add information to the database via definition dialogs, or by importing it from sources of record such as spreadsheets or other tools, and visualize the information on diagrammatic models. As definition information is changed, diagrams depicting the information change to reflect the underlying model information, and vice versa. This is termed in the industry as 'data centric' behavior, which forms the core tenet of Model Based Systems Engineering (MBSE). Users work alone or together in teams on the network. In this multi-user environment, as one user opens a definition or diagram to edit it, other users get a read-only version of this artifact. Options exist to let users check out multiple definitions so that they can work on sections of the architecture without anyone else modifying them, and to let administrators freeze definitions so that they are 'set in stone'. Users may also work in a stand-alone configuration on their laptop or workstation using SQL Server Express, which is bundled with the product.
A SQL-based query reporting language enables users to build and run reports to answer questions about the information they have modeled, such as what business processes are related to what organizational goals, what applications are used to perform what business processes, what business processes operate on what data entities, what user has modified what information on what date, and so forth.
Information is captured in the repository against a metamodel that acts as a template for the information to capture and how it is all related. Users may choose industry-standard metamodels, such as those for TOGAF, DoDAF 2, ArchiMate, SysML, UML, etc. Users may customize this metamodel, to change or add to the template of information they wish to capture and how things are interrelated.
Models are typically published to a website so that they can be viewed by a wide audience. An add-on tool called SA/Publisher is used to publish websites based on SQL-based queries of the repository using System Architect’s reporting language.
System Architect DoDAF, UAF, MODAF, and NAF
System Architect provides support for the diagrams, matrices, and work products required to be captured for the US Department of Defense Architecture Framework (DoDAF) version 2.02 (as well as features of the never-officially-released 2.03 version), the Unified Architecture Framework (UAF), the NATO Architecture Framework (NAF) version 3, older versions of DoDAF—DoDAF 1.5 standard and DoDAF 1.5 ABM (supporting the Activity Based Method as specified by MITRE), and the UK Ministry of Defence Architecture Framework (MODAF).
System Architect ArchiMate
Starting with release 11.4.4.1, System Architect has native support for ArchiMate 3.0 through a licensed add-on. This is a different add-on than a "Ready For Rational" plugin produced by IBM Business Partner Corso for the ArchiMate 2.0 language.
SA/XT
System Architect XT (where XT denotes eXtended Team) is a sister product to System Architect rich client, providing a pure web interface to read and write access to the repository via a browser. SA XT enables remote users with a web browser to browse the repository, run reports against it to ask it questions, and add information into it, including adding definition information, and editing or creating diagrams.
References
Related Links
UNICOM System Architect product page
UNICOM Customer Portal
SystemArchitect.info fan portal page
Community Product Page and EA Resource Wiki on IBM mydeveloperWorks
IBM Knowledge Center for Rational System Architect
System Architect Linked In User Group -- Run by Users
System Architect User Forum on IBM developerWorks
System Architect on Twitter
System Architect on Facebook
System Architect Train on YouTube.com
Demonstrations of Product on IBM developerWorks
Free Trial of Product
Add-ons on developerWorks, including Visio Mapper Macro and NGOSS Support
Latest Product Patches on IBM's Fix Central - select 'Rational' product group, and 'IBM Rational System Architect' for 11.3 and later, or 'Telelogic System Architect' for 11.2
Section 508 Compatibility (VPAT) Statement -- Request via form
IBM Rational Tools Aid Smart Device Makers, by Charles Babcock
eWeek Review — System Architect Turns Ten, by Peter Coffee
Data Administration Newsletter Review — System Architect, by Terry Moriarty
Q&A with Jan Popkin
Enterprise Architecture and System Architect
The Open Group's showcase of TOGAF-certified tools
Data modeling resource site offers data models built in System Architect
Learn more about Gartner Magic Quadrant on Enterprise Architecture Tools
Learn more about Gartner Magic Quadrant on Business Process Analysis Tools
IBM software
Data modeling tools
UML tools
Enterprise architecture
Mitre Corporation
Divested IBM products | Operating System (OS) | 1,373 |
Multi Emulator Super System
Multi Emulator Super System (MESS) is an emulator for various consoles and computer systems, based on the MAME core. It used to be a standalone program (which has since been discontinued), but is now integrated into MAME (which is actively developed). MESS emulates portable and console gaming systems, computer platforms, and calculators. The project strives for accuracy and portability and therefore is not always the fastest emulator for any one particular system. Its accuracy makes it also useful for homebrew game development.
As of April 2015 MESS supported 994 unique systems with 2,106 total system variations. However, not all of the systems in MESS are functional; some are marked as non-working or are in development. MESS was first released in 1998 and has been under constant development since.
MAME and MESS were once separate applications, but were later developed and released together from a single source repository. MAMEDEV member David Haywood maintained and distributed UME (Universal Machine Emulator) which combined much of the functionality of MAME and MESS in a single application. On May 27, 2015, MESS was formally integrated with MAME and became a part of MAME.
License
MESS was distributed under the MAME License, which allowed for the redistribution of binary files and source code, either modified or unmodified, but disallowed selling MESS or using it commercially. The license is similar to other copyleft licenses in requiring that the rights and obligations provided in the license must remain intact when MESS or derivative works are distributed.
In addition to the MAME License, the MESS Team required that: "MESS must be distributed only in the original archives. You are not allowed to distribute a modified version, nor to remove and/or add files to the archive. Adding one text file to advertise your web site is tolerated only if your site contributes original material to the emulation scene." The MAME license required source code to be included with versions of MESS that were modified from the original source, while the MESS legal page stated that when distributing binary files "you should also distribute the source code. If you can't do that, you must provide a pointer to a place where the source can be obtained."
While MESS was available in both binary and source code forms, the restrictions on commercial exploitation caused it to fall outside the Free Software Foundation's definition of free software. Similarly, MESS was not considered open-source software when appraised against the criteria of the Open Source Definition.
Challenges
Generally the emulation only includes raw hardware logic, such as for the CPU and RAM, and specialized DSPs such as tone generators or video sprites. The MESS emulator does not include any programming code stored in ROM chips from the emulated computer, since this may be copyrighted software.
Obtaining the ROM data oneself, directly from the hardware being emulated, can be extremely difficult, technical, expensive, and even destructive, since it may require decapping or desoldering integrated circuit chips from the device's circuit board. A desoldered IC is placed into a chip reader connected to a USB or serial port of another computer, with pin sockets on the reader specifically designed to match the chip package in question, to perform a memory dump of the ROM to a data file.
Removal of a soldered chip is often far easier than reinstalling it, especially for extremely small surface mount technology chips, and the emulated device in question may be effectively destroyed beyond recovery after the ROM has been removed for reading.
However, if one has a working system, it may be far easier to dump the ROM data to tape, disk, etc. and transfer the data file to one's target machine.
Uses
In 2013 the Internet Archive began to provide abandonware games browser-playable via JSMESS (a JavaScript port of the MESS emulator), for instance, the Atari 2600 game E.T. the Extra-Terrestrial.
See also
List of computer system emulators
List of video game emulators
References
External links
MESS User Manual
JSMESS
Historical Software at Internet Archive
Arcade Database: a database containing details of every game supported by MAME/MESS, including past versions. There are images, videos, programs for downloading extra files, advanced searches, graphics and many other resources.
Classic Mac OS emulation software
Linux emulation software
Macintosh platform emulators
MacOS emulation software
Multi-emulators
Nintendo Entertainment System emulators
PlayStation emulators
Video game emulation
Windows emulation software
X86 emulators | Operating System (OS) | 1,374 |
HTC Shift
HTC Shift (code name: Clio) is an Ultra-Mobile PC by HTC.
Features
Dual Operating System
Microsoft Windows Vista Business 32-Bit (notebook mode)
SnapVUE (PDA mode)
Processor
Intel A110 Stealey CPU 800 MHz (for Windows Vista)
ARM11 CPU (for SnapVUE)
Memory and Storage
1 GB RAM (notebook mode)
64 MB RAM (PDA mode)
40/60 GB HDD
SD card slot
Intel GMA 950 graphics
Communications
Quad band GSM / GPRS / EDGE (data only): GSM 850, GSM 900, GSM 1800, GSM 1900
Triband UMTS / HSDPA (data only): UMTS 850, UMTS 1900, UMTS 2100
Wi-Fi 802.11 b/g
Bluetooth v2.0
USB port
7" display
Active TFT touchscreen, 16M colors
800 x 480 pixels (Wide-VGA), 7 inches
QWERTY keyboard
Handwriting recognition
Fingerprint Recognition
Ringtones
MP3
Dual speakers
Upgrading
In November 2011 the team from DistantEarth succeeded in loading the Windows 8 Developer Preview onto the HTC Shift.
References
External links
HTC Shift forums on XDA-Developers
HTC Shift on pof blog: technical information about HTC Shift
HTC Source: a news blog dedicated to HTC devices
HTC Shift X9500 on Pocketables: Many photos, features, and reviews
TechCast Reviews the HTC Shift
Mobile computers
Shift | Operating System (OS) | 1,375 |
Language Interface Pack
In Microsoft terminology, a Language Interface Pack (LIP) is a skin for localizing a Windows operating system in languages such as Lithuanian, Serbian, Hindi, Marathi, Kannada, Tamil, and Thai. Based on Multilingual User Interface (MUI) technology, a LIP requires the software to have a base language installed and provides users with an approximately 80 percent localized user experience by translating a reduced set of user interface elements. Unlike MUI packs, which are available only to Microsoft volume license customers and for specific SKUs of Windows Vista, a Language Interface Pack is available for free and can be installed on a licensed copy of Microsoft Windows or Office with a fixed "base language". In other words, if the desired additional language has incomplete localization, users may add it for free; if the language has complete localization, the user must pay for it by licensing a premium version of Windows. (In Windows Vista and Windows 7, only the Enterprise and Ultimate editions are "multilingual".)
Typically, a Language Interface Pack is designed for regional markets that do not have full MUI packs or fully localized versions of a product. It is an intermediate localized solution that enables computer users to adapt their software to display many commonly used features in their native language. Each new Language Interface Pack is built using the glossary created by the Community Glossary Project in cooperation with the local government, academia, and local linguistic experts.
References
Software add-ons | Operating System (OS) | 1,376 |
Multiple instruction, single data
In computing, multiple instruction, single data (MISD) is a type of parallel computing architecture where many functional units perform different operations on the same data. Pipeline architectures belong to this type, though a purist might say that the data is different after processing by each stage in the pipeline. Fault-tolerant systems that execute the same instructions redundantly in order to detect and mask errors, in a manner known as task replication, may also be considered to belong to this type. Applications for this architecture are much less common than for MIMD and SIMD, as the latter two are often more appropriate for common data-parallel techniques; specifically, they allow better scaling and use of computational resources. However, the Space Shuttle flight control computers are one prominent example of MISD in computing.
Systolic arrays
Systolic arrays (and the closely related wavefront processors), first described by H. T. Kung and Charles E. Leiserson, are an example of MISD architecture. In a typical systolic array, parallel input data flows through a network of hard-wired processor nodes which, somewhat like neurons in the human brain, combine, process, merge or sort the input data into a derived result.
Systolic arrays are often hard-wired for a specific operation, such as "multiply and accumulate", to perform massively parallel integration, convolution, correlation, matrix multiplication or data sorting tasks. A systolic array typically consists of a large monolithic network of primitive computing nodes, which can be hardwired or software-configured for a specific application. The nodes are usually fixed and identical, while the interconnect is programmable. More general wavefront processors, by contrast, employ sophisticated and individually programmable nodes which may or may not be monolithic, depending on the array size and design parameters. Because the wave-like propagation of data through a systolic array resembles the pulse of the human circulatory system, the name systolic was coined from medical terminology.
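As a concrete illustration of the multiply-and-accumulate dataflow just described, the following Python sketch simulates an output-stationary systolic array performing matrix multiplication. It is a behavioral model only; the skewed input schedule and register names are illustrative and not tied to any particular hardware design.

import numpy as np

def systolic_matmul(A, B):
    # Behavioral model of an n x n output-stationary systolic array.
    # Each processing element (PE) holds one entry of the result, performs
    # one multiply-accumulate per cycle, and forwards its A operand to the
    # right and its B operand downward.
    n = A.shape[0]
    C = np.zeros((n, n))
    a_reg = np.zeros((n, n))               # A operands currently held in the PEs
    b_reg = np.zeros((n, n))               # B operands currently held in the PEs
    for t in range(3 * n - 2):             # cycles for the skewed wavefront to drain
        a_reg[:, 1:] = a_reg[:, :-1].copy()    # propagate A rightwards
        b_reg[1:, :] = b_reg[:-1, :].copy()    # propagate B downwards
        for i in range(n):                 # inject skewed rows of A at the left edge
            k = t - i                      # row i starts feeding at cycle i
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
        for j in range(n):                 # inject skewed columns of B at the top edge
            k = t - j                      # column j starts feeding at cycle j
            b_reg[0, j] = B[k, j] if 0 <= k < n else 0.0
        C += a_reg * b_reg                 # every PE multiply-accumulates in parallel
    return C

A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0).reshape(3, 3)
assert np.allclose(systolic_matmul(A, B), A @ B)

The skewing ensures that PE(i, j) sees A[i, k] and B[k, j] on the same cycle for every k, so each node accumulates exactly one entry of the product without ever touching external memory.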
A significant benefit of systolic arrays is that all operand data and partial results are contained within (passing through) the processor array. There is no need to access external buses, main memory, or internal caches during each operation, as with standard sequential machines. The sequential limits on parallel performance dictated by Amdahl's law also do not apply in the same way because data dependencies are implicitly handled by the programmable node interconnect.
Therefore, systolic arrays are extremely good at artificial intelligence, image processing, pattern recognition, computer vision, and other tasks that animal brains do exceptionally well. Wavefront processors, in general, can also be very good at machine learning by implementing self-configuring neural nets in hardware.
While systolic arrays are officially classified as MISD, their classification is somewhat problematic. Because the input is typically a vector of independent values, the systolic array is not SISD. Since these input values are merged and combined into the result(s) and do not maintain their independence as they would in a SIMD vector processing unit, the array cannot be classified as such. Consequently, the array cannot be classified as a MIMD either, since MIMD can be viewed as a mere collection of smaller SISD and SIMD machines.
Finally, because the data swarm is transformed as it passes through the array from node to node, the multiple nodes are not operating on the same data, which makes the MISD classification a misnomer. The other reason why a systolic array should not qualify as a MISD is the same as the one which disqualifies it from the SISD category: The input data is typically a vector, not a single data value, although one could argue that any given input vector is a single dataset.
The above notwithstanding, systolic arrays are often offered as a classic example of MISD architecture in textbooks on parallel computing and in engineering classes. If the array is viewed from the outside as atomic, it should perhaps be classified as SFMuDMeR = single function, multiple data, merged result(s).
Footnotes
Flynn's taxonomy
Parallel computing
Misd
de:Flynnsche Klassifikation#MISD (Multiple Instruction, Single Data) | Operating System (OS) | 1,377 |
Program Segment Prefix
The Program Segment Prefix (PSP) is a data structure used in DOS systems to store the state of a program. It resembles the Zero Page in the CP/M operating system. Its key fields are laid out as follows:

Offset  Size (bytes)  Contents
00h     2             INT 20h instruction (old-style program exit)
02h     2             Segment address of the first byte beyond the memory allocated to the program
0Ah     4             Termination (INT 22h) handler address
0Eh     4             Ctrl-Break (INT 23h) handler address
12h     4             Critical error (INT 24h) handler address
2Ch     2             Segment address of the program's environment block
5Ch     16            First default file control block (FCB)
6Ch     20            Second default file control block (FCB)
80h     1             Length of the command-line tail
81h     127           Command-line tail, terminated by a carriage return (0Dh)
The PSP is most often used to get the command line arguments of a DOS program; for example, the command "FOO.EXE /A /F" executes FOO.EXE with the arguments '/A' and '/F'.
If the PSP entry for the command line length is non-zero and the pointer to the environment segment is neither 0000h nor FFFFh, programs should first try to retrieve the command line from the environment variable %CMDLINE% before extracting it from the PSP. This way, it is possible to pass command lines longer than 126 characters to applications.
The segment address of the PSP is passed in the DS register when the program is executed. It can also be determined later by using Int 21h function 51h or Int 21h function 62h. Either function will return the PSP address in register BX.
Alternatively, in .COM programs loaded at offset 100h, one can address the PSP directly just by using the offsets listed above. Offset 000h points to the beginning of the PSP, 0FFh points to the end, etc.
For example, the following code displays the command line arguments:
org 100h ; .COM file: CS, DS, ES and SS all point at the PSP segment
; INT 21h function 09h requires a '$'-terminated string,
; so first terminate the command tail with '$'
xor bx,bx
mov bl,[80h] ; BL = length of command tail (PSP offset 80h)
cmp bl,7Eh
ja exit ; guard against overrunning the 128-byte tail area
mov byte [bx+81h],'$' ; place '$' just past the last character
; print the command tail
mov ah,9 ; INT 21h function 09h: display string
mov dx,81h ; DS:DX -> command tail in the PSP
int 21h
exit:
mov ax,4C00h ; INT 21h function 4Ch: terminate with return code 0
int 21h
In DOS 1.x, it was necessary for the CS (code segment) register to contain the same segment as the PSP at program termination. Standard programming practice was therefore to push the DS register (which is loaded with the PSP segment at entry) followed by a zero word onto the stack at program start, and to terminate the program with a RETF instruction, which would pop those two values off the stack and jump to offset 0 of the PSP, where an INT 20h instruction resides.
; save the PSP segment (in DS at entry) and a zero offset, so that
; a far return will jump to PSP:0000h, which holds an INT 20h instruction
push ds
xor ax,ax
push ax
; move to the default data group (@data)
mov ax,@data
mov ds,ax
; print the message at mess1 (INT 21h function 09h); mess1 is assumed
; to be declared in @data, e.g. mess1 db 'Hello world!$'
mov dx,offset mess1
mov ah,9
int 21h
retf ; far return to PSP:0000h -> INT 20h
If the executable was a .COM file, this procedure was unnecessary and the program could be terminated merely with a direct INT 20h instruction, or by calling INT 21h function 00h. However, the programmer still had to ensure that the CS register contained the segment address of the PSP at program termination. Thus,
jmp start ; skip over the inline data
mess1 db 'Hello world!$'
start:
mov dx,mess1 ; DS:DX -> '$'-terminated message
mov ah,9 ; INT 21h function 09h: display string
int 21h
int 20h ; terminate (CS already holds the PSP segment)
In DOS 2.x and higher, program termination was accomplished instead with INT 21h function 4Ch which did not require the CS register to contain the segment value of the PSP.
See also
Zero page (CP/M)
CALL 5 (DOS)
Stack frame (Unix)
Process directory (Multics)
Process identifier (PID)
this (computer programming)
Self-reference
References
External links
Accessing Command Line Arguments (Microsoft.com)
DOS technology | Operating System (OS) | 1,378 |
Sabre (computer system)
Sabre Global Distribution System, owned by Sabre Corporation, is used by travel agents and companies around the world to search, price, book, and ticket travel services provided by airlines, hotels, car rental companies, rail providers and tour operators. Sabre aggregates airlines, hotels, online and offline travel agents and travel buyers.
Overview
The system's parent company is organized into three business units:
Sabre Travel Network: global distribution system
Sabre Airline Solutions: airline technology
Sabre Hospitality Solutions: hotel technology solutions
Sabre is headquartered in Southlake, Texas, and has employees in various locations around the world.
History
The company's history starts with SABRE (Semi-automated Business Research Environment), a computer reservation system which was developed to automate the way American Airlines booked reservations.
In the 1950s, American Airlines was facing a serious challenge in its ability to quickly handle airline reservations in an era that witnessed high growth in passenger volumes in the airline industry. Before the introduction of SABRE, the airline's system for booking flights was entirely manual, having developed from the techniques originally developed at its Little Rock, Arkansas, reservations center in the 1920s. In this manual system, a team of eight operators would sort through a rotating file with cards for every flight. When a seat was booked, the operators would place a mark on the side of the card, and knew visually whether it was full. This part of the process was not all that slow, at least when there were not that many planes, but the entire end-to-end task of looking for a flight, reserving a seat, and then writing up the ticket could take up to three hours in some cases, and 90 minutes on average. The system also had limited room to scale. It was limited to about eight operators because that was the maximum that could fit around the file. To handle more queries the only solution was to add more layers of hierarchy to filter down requests into batches.
American Airlines had already attacked the problem to some degree, and was in the process of introducing their new Magnetronic Reservisor, an electromechanical computer, in 1952 to replace the card files. This computer consisted of a single magnetic drum, each memory location holding the number of seats left on a particular flight. Using this system, a large number of operators could access information simultaneously, so the ticket agents could be told via phone if a seat was available. On the downside, a staff member was needed at each end of the phone line, and handling the ticket took considerable effort and filing. Something much more highly automated was needed if American Airlines was going to enter the jet age, booking many times more seats.
During the testing phase of the Reservisor a high-ranking IBM salesman, Blair Smith, was flying on an American Airlines flight from Los Angeles back to IBM in New York City in 1953. He found himself sitting next to American Airlines president C. R. Smith. Noting that they shared a family name, they began talking.
Just prior to this chance meeting, IBM had been working with the United States Air Force on their Semi Automatic Ground Environment (SAGE) project. SAGE used a series of large computers to coordinate the message flow from radar sites to interceptors, dramatically reducing the time needed to direct an attack on an incoming bomber. The system used teleprinter machines located around the world to feed information into the system, which then sent orders back to teleprinters located at the fighter bases. It was one of the first online systems.
It was not lost on either man that the basic idea of the SAGE system was perfectly suited to American Airlines' booking needs. Teleprinters would be placed at American Airlines' ticketing offices to send in requests and receive responses directly, without the need for anyone on the other end of the phone. The number of available seats on the aircraft could be tracked automatically, and if a seat was available the ticket agent could be notified instantly. Booking simply took one more command, updating the availability and, if desired, could be followed by printing a ticket.
Only 30 days later IBM sent a research proposal to American Airlines, suggesting that they join forces to study the problem. A team was set up consisting of IBM engineers led by John Siegfried and a large number of American Airlines' staff led by Malcolm Perry, taken from booking, reservations, and ticket sales, calling the effort the Semi-Automated Business Research Environment, or SABRE.
A formal development arrangement was signed in 1957. The first experimental system went online in 1960, based on two IBM 7090 mainframes in a new data center located in Briarcliff Manor, New York. The system was a success; by that point, it had cost the astonishing sum of $40 million to develop and install (about $350 million in 2000 dollars). The SABRE system by IBM in the 1960s was specified to process a very large number of transactions, such as handling 83,000 daily phone calls. The system took over all booking functions in 1964, when the name had changed to SABRE.
In 1972, SABRE was migrated to IBM System/360 systems in a new underground location in Tulsa, Oklahoma. Max Hopper joined American Airlines in 1972 as director of SABRE, and pioneered its use. Originally used only by American Airlines, the system was expanded to travel agents in 1976.
With SABRE up and running, IBM offered its expertise to other airlines, and soon developed Deltamatic for Delta Air Lines on the IBM 7074, and PANAMAC for Pan American World Airways using an IBM 7080. In 1968, they generalized their work into the PARS (Programmed Airline Reservation System), which ran on any member of the IBM System/360 family and thus could support any sized airline. The operating system component of PARS evolved into ACP (Airlines Control Program), and later to TPF (Transaction Processing Facility). Application programs were originally written in assembly language, later in SabreTalk, a proprietary dialect of PL/I, and now in C and C++.
By the 1980s, SABRE offered airline reservations through the CompuServe Information Service, the Prodigy internet service, and GEnie under the Eaasy SABRE brand. This service was extended to America Online (AOL) in the 1990s.
American and Sabre separated on March 15, 2000. Sabre had been a publicly traded corporation, Sabre Holdings, stock symbol TSG on the New York Stock Exchange, until taken private in March 2007. The corporation introduced a new logo and changed from the all-caps acronym "SABRE" to the mixed-case "Sabre Holdings" when the new corporation was formed. The Travelocity website, introduced in 1996, was owned by Sabre Holdings; Travelocity was acquired by Expedia in January 2015. Sabre Holdings' three remaining business units, Sabre Travel Network, Sabre Airline Solutions and Sabre Hospitality, today make up a global travel technology company.
Other airline systems
In 1982, Advertising Age reported that "United Airlines operates a similar system, Apollo, while Eastern operates Mars and Delta operates Datas." Braniff International's Cowboy system was considered by Electronic Data Systems for building an airline-neutral system.
Controversy
A 1982 study by American Airlines found that travel agents selected the flight appearing on the first line more than half the time. Ninety-two percent of the time, the selected flight was on the first screen. This provided a huge incentive for American to manipulate its ranking formula, or even corrupt the search algorithm outright, to favor American flights.
At first this was limited to juggling the relative importance of factors such as the length of the flight, how close the actual departure time was to the desired time, and whether the flight had a connection, but with each success American became bolder. In late 1981, New York Air added a flight from La Guardia to Detroit, challenging American in an important market. Before long, the new flights suddenly started appearing at the bottom of the screen. Its reservations dried up, and it was forced to cut back from eight Detroit flights a day to none.
On one occasion, Sabre deliberately withheld Continental's discount fares on 49 routes where American competed. A Sabre staffer had been directed to work on a program that would automatically suppress any discount fares loaded into the computer system.
Congress investigated these practices in 1983, and Bob Crandall, president of American, was the most vocal defender of the systems. "The preferential display of our flights, and the corresponding increase in our market share, is the competitive raison d'être for having created the system in the first place," he told them. Unimpressed, in 1984 the United States government outlawed screen bias.
Even after biases were eliminated, travel agents using the system leased and serviced by American were significantly more likely to choose American over other airlines. The same was true of United and its Apollo system. The airlines referred to this phenomenon as the "halo" effect.
The fairness rules were eliminated or allowed to expire in 2010. By then, none of the major distribution systems was majority owned by the airlines.
In 1987 Sabre's success of selling to European travel agents was inhibited by the refusal of big European carriers led by British Airways to grant the system ticketing authority for their flights even though Sabre had obtained IATA Billing and Settlement Plan (BSP) clearance for the UK in 1986. American brought High Court action which alleged that after the arrival of Sabre on its doorstep British Airways immediately offered financial incentives to travel agents who continued to use Travicom and would tie any override commissions to it. Travicom was created by Videcom, British Airways and British Caledonian and launched in 1976 as the world's first multi-access reservations system based on Videcom technology which eventually became part of Galileo UK. It connected 49 subscribing international airlines (including British Airways, British Caledonian, TWA, Pan American World Airways, Qantas, Singapore Airlines, Air France, Lufthansa, SAS, Air Canada, KLM, Alitalia, Cathay Pacific and JAL) to thousands of travel agents in the UK. It allowed agents and airlines to communicate via a common distribution language and network, handling 97% of UK airline business trade bookings by 1987.
British Airways eventually bought out the stakes in Travicom held by Videcom and British Caledonian, to become the sole owner. Although Sabre's vice-president in London, David Schwarte, made representations to the U.S. Department of Transportation and the British Monopolies Commission, British Airways defended the use of Travicom as a truly non-discriminatory system in flight selection because an agent had access to some 50 carriers worldwide, including Sabre, for flight information.
See also
Travelocity
List of global distribution systems
Passenger name record
Code sharing
Electronic Recording Machine, Accounting (ERMA) – another pioneering early system. ERMA, SAGE and SABRE helped legitimize computers in business.
Real-time operating system – SABRE was one of the first such systems
Perry O. Crawford Jr.
Travel technology
Galileo CRS
Navitaire
Travelport
References
Robert V. Head, "Getting Sabre off the Ground", IEEE Annals of the History of Computing, vol. 24, no. 4, pp. 32–39, Oct.–Dec. 2002.
Further reading
Robert V. Head, Real-Time Business Systems, Holt, Rinehart and Winston, New York, 1964. "This book embodies many of the lessons learned about new technology application management while working on the ERMA and Sabre systems".
D.G. Copeland, R.O. Mason, and J.L. McKenney, "Sabre: The Development of Information-Based Competence and Execution of Information-Based Competition," IEEE Annals of the History of Computing, vol. 17, no. 3, Fall 1995, pp. 30–57.
R.D. Norby, "The American Airlines Sabre System," in James D. Gallagher, Management Information Systems and the Computer, Am. Management Assoc. Research Study, 1961, pp. 150–176.
IBM General Information Manual, 9090 Airlines Reservation System, 1961.
External links
Oral history interview with R. Blair Smith. Charles Babbage Institute, University of Minnesota, Minneapolis. Smith discusses how a chance meeting with C. R. Smith, president of American Airlines, eventually led to the development of the SABRE system.
The Mad Men's Best Friend Was SABRE on Wired.com
Sabre at IBM100
Sabre Holdings
Virtually There public site for viewing reservations made through Sabre.
Some History features a history of ACP/TPF the Operating System used on SABRE
Travel technology
Assembly language software
American Airlines
Computer reservation systems | Operating System (OS) | 1,379 |
Cisco Eos
Cisco Eos was a software platform for Media & Entertainment (M&E) companies developed by the Cisco Media Solutions Group.
Unlike Canon's similarly named EOS product, "Eos" is not an acronym, according to Cisco, but is pronounced as a word (i.e., EE-oss).
Eos is reported to be focused on helping M&E companies connect with online fans amid the disruption caused by the digitization of content.
Overview
Cisco Eos is a hosted Software as a service platform that integrates features from multiple point solutions that enable M&E companies to "create, manage, and grow online communities" built around branded entertainment content.
Eos supports all entertainment genres and incorporates social networking, content management, site administration, and audience analytics features into a single operating environment, using a CMS similar to Joomla! or WordPress.
Leadership
Cisco Eos is developed by the Cisco Media Solutions Group, led by Cisco employee Dan Scheinman.
History
Cisco's core business is manufacturing network devices. In 2003, Cisco acquired Linksys and its line of home networking products. Cisco acquired Scientific Atlanta in 2006 and Pure Digital (the maker of Flip video cameras) in 2009.
Eos was launched as a homegrown addition to Cisco’s Consumer portfolio in January 2009 at the Consumer Electronics Show (CES).
At CES 2010, Cisco announced Eos 2.0. Eos 2.0 includes Facebook Connect integration, migration tools for companies with existing sites and communities, and public APIs for third party developers to expand and use the platform. Eos 2.0 also features member group management to enhance fan engagement and monetization.
Closure
In April 2011 it was announced that Cisco would be closing down the Eos team.
Customers
The first customer announced for Eos was Warner Music Group. The first two Atlantic Records sites powered by Cisco Eos were those of Laura Izibor and Sean Paul. Michael Nash, executive VP of digital strategy and development at WMG, said that they chose to partner with Cisco because "we are not a technology company". According to Warner, with sites powered by Cisco Eos, the average unique user spends roughly 8.4 minutes on an artist site; this level of engagement is more than 25 percent higher than on non-Eos sites.
Additional customers announced at CES 2010 included Tenth Street Entertainment and Travel Channel. According to Allen Kovac, founder and CEO of The Eleven Seven Music Group/Tenth Street Entertainment, the relationship with Cisco and the Cisco Eos platform allowed the company to expand its online presence. Adam Sutherland, VP of strategy and business development at Travel Channel, said, "The Eos platform allows us to provide tools for our community to contribute, comment and rate content."
In March 2010, Dogwoof became the first Cisco Eos customer in the U.K. Anna Godas, co-founder of Dogwoof, said "today's film industry requires Dogwoof to play a much more interactive role with both the audience and filmmakers." In May 2010, LOCOG became the second Eos customer in the U.K. and the first in the sports vertical.
Partner program
Cisco announced its Eos Partner Program at CES 2010. Initial partners include Hot Studio, The Wonder Factory, HCL, and Infosys.
Disadvantages
Eos is more expensive than most of its competitors. All data is stored on Cisco servers, which creates potential problems if a site has to be transferred.
See also
Cisco Systems
References
External links
Cisco press release
Cisco Eos Platform page
PC Mag point solution definition
Consumer Electronics Show (CES) site
Dan Scheinman
Computing platforms
Social software
EOS | Operating System (OS) | 1,380 |
System prevalence
System prevalence is a simple software architectural pattern that combines system images (snapshots) and transaction journaling to provide speed, performance scalability, transparent persistence and transparent live mirroring of computer system state.
In a prevalent system, state is kept in memory in native format, all transactions are journaled, and system images are regularly saved to disk.
System images and transaction journals can be stored in language-specific serialization format for speed or in XML format for cross-language portability.
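The pattern is small enough to sketch directly. The following Python sketch shows the essential mechanics under simplifying assumptions (single-threaded use; transactions are deterministic, picklable, module-level callables); the class and method names are illustrative and do not correspond to the Prevayler API.

import os
import pickle

class PrevalentSystem:
    # State lives entirely in RAM; every transaction is appended to a
    # journal before it is applied; a snapshot allows fast restarts.
    def __init__(self, initial_state, journal="journal.bin", snapshot="snapshot.bin"):
        self.journal, self.snapshot = journal, snapshot
        self.state = initial_state
        if os.path.exists(snapshot):          # recover the last system image
            with open(snapshot, "rb") as f:
                self.state = pickle.load(f)
        if os.path.exists(journal):           # replay journaled transactions
            with open(journal, "rb") as f:
                while True:
                    try:
                        txn = pickle.load(f)
                    except EOFError:
                        break
                    txn(self.state)

    def execute(self, txn):
        # txn must be a deterministic, picklable callable that mutates state
        with open(self.journal, "ab") as f:
            pickle.dump(txn, f)               # journal first ...
        return txn(self.state)                # ... then apply

    def take_snapshot(self):
        with open(self.snapshot, "wb") as f:
            pickle.dump(self.state, f)        # write the system image
        open(self.journal, "wb").close()      # journal can now be truncated

On restart, the constructor first restores the most recent system image and then replays any transactions journaled after it, reconstructing the in-memory state.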
The first usage of the term and generic, publicly available implementation of a system prevalence layer was Prevayler, written for Java by Klaus Wuestefeld in 2001.
Advantages
Simply keeping system state in RAM in its normal, natural, language-specific format is orders of magnitude faster and more programmer-friendly than the multiple conversions needed when it is stored to and retrieved from a DBMS.
As an example, Martin Fowler describes "The LMAX Architecture" with a transaction-journal and system-image (snapshot) based business system at its core, which can process 6 million transactions per second on a single thread.
Requirement
A prevalent system needs enough memory to hold its entire state in RAM (the "prevalent hypothesis"). Prevalence advocates claim this is continuously alleviated by decreasing RAM prices, and the fact that many business databases are small enough already to fit in memory.
Programmers need skill in working with business state natively in RAM, rather than using explicit API calls for storage and queries for retrieval.
The system's events must be capturable for journaling.
See also
Object-relational mapping
References
External links
"An Introduction to Object Prevalence", by Carlos Villela for IBM Developerworks.
"Prevalence: Transparent, Fault-Tolerant Object Persistence", by Jim Paterson for O'Reilly's OnJava.com
"Object Prevalence": Original Article by Klaus Wuestefeld published in 2001 on Advogato.
Madeleine: a Ruby implementation
Persistence | Operating System (OS) | 1,381 |
Physical computing
Physical computing involves interactive systems that can sense and respond to the world around them. While this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes, it is not commonly used to describe them. In a broader sense, physical computing is a creative framework for understanding human beings' relationship to the digital world. In practical use, the term most often describes handmade art, design or DIY hobby projects that use sensors and microcontrollers to translate analog input to a software system, and/or control electro-mechanical devices such as motors, servos, lighting or other hardware.
Physical Computing intersects the range of activities often referred to in academia and industry as electrical engineering, mechatronics, robotics, computer science, and especially embedded development.
Examples
Physical computing is used in a wide variety of domains and applications.
In Education
The advantage of physicality in education and playfulness has been reflected in diverse informal learning environments. The Exploratorium, a pioneer in inquiry based learning, developed some of the earliest interactive exhibitry involving computers, and continues to include more and more examples of physical computing and tangible interfaces as associated technologies progress.
In Art
In the art world, projects that implement physical computing include the work of Scott Snibbe, Daniel Rozin, Rafael Lozano-Hemmer, Jonah Brucker-Cohen, and Camille Utterback.
In Product Design
Physical computing practices also exist in the product and interaction design sphere, where hand-built embedded systems are sometimes used to rapidly prototype new digital product concepts in a cost-efficient way. Firms such as IDEO and Teague are known to approach product design in this way.
In Commercial Applications
Commercial implementations range from consumer devices such as the Sony Eyetoy or games such as Dance Dance Revolution to more esoteric and pragmatic uses including machine vision utilized in the automation of quality inspection along a factory assembly line. Exergaming, such as Nintendo's Wii Fit, can be considered a form of physical computing. Other implementations of physical computing include voice recognition, which senses and interprets sound waves via microphones or other soundwave sensing devices, and computer vision, which applies algorithms to a rich stream of video data typically sensed by some form of camera. Haptic interfaces are also an example of physical computing, though in this case the computer is generating the physical stimulus as opposed to sensing it. Both motion capture and gesture recognition are fields that rely on computer vision to work their magic.
In Scientific Applications
Physical computing can also describe the fabrication and use of custom sensors or collectors for scientific experiments, though the term is rarely used to describe them as such. An example of physical computing modeling is the Illustris project, which attempts to precisely simulate the evolution of the universe from the Big Bang to the present day, 13.8 billion years later.
Methods
Prototyping plays an important role in Physical Computing. Tools like Wiring, Arduino and Fritzing, as well as I-CubeX, help designers and artists quickly prototype their interactive concepts.
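A minimal sense-and-respond loop of the kind such prototypes are built around, sketched in Python for a Raspberry Pi using the RPi.GPIO library (the pin numbers are illustrative):

import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17          # sensor input (a push button), BCM numbering
LED_PIN = 27             # actuator output (an LED)

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    while True:
        pressed = GPIO.input(BUTTON_PIN) == GPIO.LOW   # sense the world
        GPIO.output(LED_PIN, pressed)                  # respond to it
        time.sleep(0.01)                               # ~100 Hz polling
except KeyboardInterrupt:
    pass
finally:
    GPIO.cleanup()

The same sense-then-actuate structure underlies most physical computing projects, whatever the sensor or output device.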
Further reading
References
External links
Arduino, a highly popular open source physical computing platform
Raspberry Pi, a complete computer with GPIOs to interact with the world; huge community, many tutorials available. Many Linux distros are available, as well as Windows IoT and OS-less unikernel RTLs such as Ultibo Core.
BeagleBone, a complete Linux computer with GPIO's, but a little less flexible
FoxBoard (and others), yet another Linux computer with GPIO, but with little information
Arieh Robotics Project Junior, a Windows 7-based physical computing PC built using Microsoft Robotics Developer Studio.
BluePD BlueSense, a physical computing platform by Blue Melon. This platform is visually programmable using the popular (open source) Pure Data system.
Daniel Rozin Artist Page, bitforms gallery, features images and video of Daniel Rozin's interactive installations and sculptures.
Dwengo, a PIC microcontroller based computing platform that comes with a Breadboard for easy prototyping.
EmbeddedLab, A research lab situated within the Department of Computer Aided Architecture Design at ETH Zürich.
Fritzing - from prototype to product: a software, which supports designers and artists to take the step from physical prototyping to actual product.
GP3, another popular choice that allows building physical systems with PCs and traditional languages (C, Basic, Java, etc.) or standalone using a point and click development tool.
Physical Computing, Interactive Telecommunications Program, New York University
Physical Computing by Dan O'Sullivan
Physical Computing, Tom Igoe's collection of resources, examples, and lecture notes for the physical computing courses at ITP.
Physical Computing, A path into electronics using an approach of “learning by making”, introducing electronic prototyping in a playful, non-technical way. (Yaniv Steiner, IDII)
Theremino, an open source modular system for interfacing transducers (sensors and actuators) via USB to PC, notebooks, netbooks, tablets and cellphones.
Applications of computer vision
User interfaces
Design
Digital art
Virtual reality
Computer systems | Operating System (OS) | 1,382 |
William Jolitz
William Frederick Jolitz (born February 22, 1957), commonly known as Bill Jolitz, is an American software programmer best known for developing the 386BSD operating system from 1989 to 1994 along with his wife Lynne Jolitz.
Jolitz received his BA in Computer Science from UC Berkeley.
He and his wife reside in Los Gatos, California.
References
External links
- personal website
The Unknown Hackers - Salon article
1957 births
BSD people
Free software programmers
Living people
University of California, Berkeley alumni | Operating System (OS) | 1,383 |
Amiga, Inc.
Amiga, Inc. is a company that formerly held some trademarks and other assets associated with the Amiga personal computer (originally developed by Amiga Corporation).
Early years
In the early 1980s Jay Miner, along with other Atari, Inc. staffers, set up another chip-set project under a new company in Santa Clara called Hi-Toro (later renamed Amiga Corporation), where they could have some creative freedom. Atari, Inc. contracted with Amiga for licensed use of the chipset in a new high-end game console, and later for use in a computer system. $500,000 was advanced to Amiga to continue development of the chipset. Amiga negotiated with Commodore International two weeks prior to the contract deadline of 30 June 1984. In August 1984, Atari Corporation, under Jack Tramiel, sued Amiga for breach of contract. The case was settled in 1987 in a closed settlement. (See "Amiga Corporation".)
In 1994, Commodore filed for bankruptcy and its assets were purchased by Escom, a German PC manufacturer, who in turn went bankrupt in 1996. The Amiga brand was then sold to another PC manufacturer, Gateway 2000, which had announced grand plans for it. However, in 1999, Gateway sold Amiga to Amino Development for almost 5 million dollars. Gateway still retained ownership to all Amiga patents.
Dispute and settlement with Hyperion
Amiga, Inc. licensed the rights to make hardware using the AmigaOne brand to a computer vendor based in the UK, Eyetech Group. However, due to poor sales Eyetech suffered substantial losses and ceased trading.
In 2007 Amiga, Inc. announced specs for a new line of Amiga computers: low end and high models. At the same time Amiga, Inc. sued Hyperion Entertainment, a company developing AmigaOS 4 for AmigaOne boards for trademark infringement in the Washington Western District Court in Seattle, USA. The company claimed Hyperion was in breach of contract, citing trademark violation and copyright infringement concerning the development and marketing of AmigaOS 4.0.
Also in 2007, Amiga, Inc. intended to become the naming-rights sponsor for a planned ice hockey arena in Kent, Washington, but failed to deliver a promised down payment.
Pentti Kouri, Chairman of the Board and a primary source of capital for Amiga, Inc., died in 2009.
On 20 September 2009 Amiga Inc and Hyperion Entertainment reached a settlement where Hyperion is granted an exclusive, perpetual, worldwide right to AmigaOS 3.1 in order to use, develop, modify, commercialize, distribute and market AmigaOS 4.x and subsequent versions of AmigaOS (including AmigaOS 5).
Licensing rights
In 2010 Commodore USA announced that it had acquired the rights to the Amiga name and would relaunch Amiga-branded desktops running AROS and Linux. Hyperion Entertainment promptly disputed this, on the basis of the 2009 settlement agreement between Hyperion and Amiga Inc. After legal threats from Hyperion, due to conditions in the Amiga Inc. settlement that Commodore USA was now subject to as an Amiga licensee, Commodore USA dropped its AROS plans and announced on its relaunched website that it would create a new OS called AMIGA Workbench 5.0 (the name was later changed to Commodore OS, since Workbench was owned by Cloanto), which was subsequently revealed to be based on Linux.
In 2011, Amiga Inc. licensed the brand name to Hong Kong based manufacturer IContain Systems, Ltd.
In 2012, Amiga Inc. completed the transfer of copyrights up to 1993 to Cloanto.
Recent events
Amiga Inc. is in dispute with Hyperion due to the release of Workbench 3.1.4 by Hyperion.
On 1 February 2019, Amiga Inc. transferred all its IP (including Amiga trademarks and remaining copyrights) to C-A Acquisition Corp., owned by Mike Battilana (director of Cloanto, company behind the Amiga Forever emulation package), later renamed to Amiga Corporation.
See also
Amiga, Inc. (South Dakota)
Amiga Corporation
Amiga computer
AmigaOne
AmigaOS 4
References
External links
Amiga, Inc. Website (Archive.org, October 2017)
Amiga companies | Operating System (OS) | 1,384 |
Macintosh SE
The Macintosh SE is a personal computer designed, manufactured, and sold by Apple Computer, from March 1987 to October 1990. It marked a significant improvement on the Macintosh Plus design and was introduced by Apple at the same time as the Macintosh II.
The SE retains the same Compact Macintosh form factor as the original Macintosh computer introduced three years earlier and uses the same design language used by the Macintosh II. An enhanced model, the SE/30, was introduced in January 1989; sales of the original SE continued. The Macintosh SE was updated in August 1989 to include a SuperDrive, with this updated version being called the "Macintosh SE FDHD" and later the "Macintosh SE SuperDrive". The Macintosh SE was replaced with the Macintosh Classic, a very similar model which retained the same central processing unit and form factor, but at a lower price point.
Overview
The Macintosh SE was introduced at the AppleWorld conference in Los Angeles on March 2, 1987. The "SE" is an initialism for "System Expansion". Its notable new features, compared to its similar predecessor, the Macintosh Plus, were:
First compact Macintosh with an internal drive bay for a hard disk (originally 20 MB or 40 MB) or a second floppy drive.
First compact Macintosh that featured an expansion slot.
First Macintosh to support the Apple Desktop Bus (ADB), previously only available on the Apple IIGS, for keyboard and mouse connections.
Improved SCSI support, providing faster data throughput (double that of the Macintosh Plus) and a standard 50-pin internal SCSI connector.
Better reliability and longer life expectancy (15 years of continuous use) due to the addition of a cooling fan.
25 percent greater speed when accessing RAM results in a lower percentage of CPU time being spent drawing the screen. In practice this results in a 10-20 percent performance improvement.
Additional fonts and kerning routines in the Toolbox ROM
Disk First Aid is included on the system disk
The SE and Macintosh II were the first Apple computers since the Apple I to be sold without a keyboard. Instead the customer was offered the choice of the new ADB Apple Keyboard or the Apple Extended Keyboard.
Apple produced ten SEs with transparent cases as prototypes for promotional shots and employees. They are extremely rare and command a premium price for collectors.
Operating system
The Macintosh SE shipped with System 4.0 and Finder 5.4; this version is specific to this computer. (The Macintosh II, which was announced at the same time but shipped a month later, includes System 4.1 and Finder 5.5.) The README file included with the installation disks for the SE and II is the first place Apple ever used the term "Macintosh System Software", and after 1998 these two versions were retroactively given the name "Macintosh System Software 2.0.1".
Hardware
Processor: Motorola 68000, 8 MHz, with an 8 MHz system bus and a 16-bit data path
RAM: The SE came with 1 MB of RAM as standard, and is expandable to 4 MB. The logic board has four 30-pin SIMM slots; memory must be installed in pairs and must be 150 ns or faster.
Video: The built-in 512 × 342 monochrome screen uses 21,888 bytes of main memory as video memory.
Storage: The SE can accommodate either one or two floppy drives, or a floppy drive and a hard drive. After-market brackets were designed to allow the SE to accommodate two floppy drives as well as a hard drive; however, this configuration was not supported by Apple. In addition, an external floppy disk drive may also be connected, making the SE the only Macintosh besides the Macintosh Portable which could support three floppy drives, though its increased storage, RAM capacity and optional internal hard drive rendered external drives less of a necessity than on its predecessors. Single-floppy SE models also featured a drive-access light in the spot where the second floppy drive would be. Hard-drive-equipped models came with a 20 MB SCSI hard disk.
Battery: Soldered into the logic board is a 3.6 V 1/2AA lithium battery, which must be present in order for basic settings to persist between power cycles. Macintosh SE machines which have sat for a long time have experienced battery corrosion and leakage, resulting in a damaged case and logic board.
Expansion: A Processor Direct Slot on the logic board allows for expansion cards, such as accelerators, to be installed. The SE can be upgraded to 50 MHz and more than 5 MB with the MicroMac accelerators. In the past other accelerators were also available such as the Sonnet Allegro. Since installing a card required opening the computer's case and exposing the user to high voltages from the internal CRT, Apple recommended that only authorized Apple dealers install the cards; the case was sealed with then-uncommon Torx screws.
Upgrades: After Apple introduced the Macintosh SE/30 in January, 1989, a logic board upgrade was sold by Apple dealers for US$1,699 as a high-cost upgrade for the SE, consisting of a new SE/30 motherboard, case front and internal chassis to accommodate the upgrade components.
ROM/Easter egg: The SE ROM size increased from 64 KB in the original Mac (and 128 KB in the Mac Plus) to 256 KB, which allowed the development team to include an Easter Egg hidden in the ROMs. By jumping to address 0x41D89A (or reading from the ROM chips), it is possible to display four images of the engineering team.
Models
Introduced March 2, 1987:
Macintosh SE with 1 Mbyte RAM and two 800k drives
Macintosh SE 1/20 with 1 Mbyte RAM, one 800k drive and 20 MB hard disk.
Introduced August 1, 1988:
Macintosh SE 1/40: The name of the Macintosh SE with a 40 MB hard disk in place of 20 MB.
Introduced August 1, 1989:
Macintosh SE FDHD: Includes the new SuperDrive, a floppy disk drive that can handle 1.4 MB High Density (HD) floppy disks. FDHD is an acronym for "Floppy Disk High Density"; later some Macintosh SE FDHDs were labeled Macintosh SE SuperDrive, to conform to Apple's marketing change with respect to their new drive. High-density floppies would become the de facto standard on both the Macintosh and PC computers from then on. An upgrade kit was sold for the original Macintosh SE which included new ROM chips and a new disk controller chip, to replace the originals.
See also
Mini vMac
References
External links
1987 Apple Computer, Inc. promotional video "Own-a-Mac - The Movie"
The Mac SE Support Pages Repair & upgrade advice. (Wayback Machine Archived Version)
Mac SE Low End Mac
SE
SE
Computer-related introductions in 1987 | Operating System (OS) | 1,385 |
LiveCode (company)
LiveCode Ltd. (formerly Runtime Revolution and Cross Worlds Computing) makes the LiveCode cross-platform development environment (formerly called Revolution) for creating applications that run on iOS, Microsoft Windows, Linux, macOS, Android and Browsers. It is similar to Apple's discontinued HyperCard.
History
LiveCode began as an expert IDE for MetaCard, a development environment and GUI toolkit originally developed for UNIX development and later ported to support Microsoft Windows and macOS compilation. Runtime Revolution Ltd acquired MetaCard in July 2003 and released subsequent versions under the Revolution brand.
MetaCard built on the success of its predecessor HyperCard. Both HyperCard and MetaCard utilized an English-like language that was arguably easier to learn than BASIC. Both RevTalk and HyperCard are development environments within the Smalltalk genre and have similar design attributes.
The language has been known by several names including Transcript, RevTalk and as of November 2010 "LiveCode". The entire product including the IDE is now officially referred to as LiveCode. The iOS version is available as of December 2010, with the Android and server versions under development.
The company is supported by a number of investors including Mike Markkula who originally invested in Apple Computer Inc in 1976 and brought that company to market.
On 11 November 2009 in San Francisco, the company officially launched version 4.0 of the Revolution programming language (renamed LiveCode in November 2010), officially bringing the revTalk language to the web.
In late 2009, the company launched the RunRev Partner Program giving all people programming in the LiveCode language the opportunity to work more closely with the core LiveCode development team. This provision of dedicated Technical Account Managers is part of the continued development of the LiveCode language and is designed to make it even more accessible.
See also
LiveCode
MetaCard
HyperCard
xTalk
References
External links
Ten Thumbs Typing Tutor, a Runtime Revolution product (formerly Learn to Type released in 1995)
LiveCode Hosting, a LiveCode hosting service
Companies established in 1998
Companies based in Edinburgh
Software companies of Scotland | Operating System (OS) | 1,386 |
IBM System/34
The IBM System/34 was an IBM midrange computer introduced in 1977. It was withdrawn from marketing in February 1985. It was a multi-user, multi-tasking successor to the single-user System/32. It included two processors, one based on the System/32 and the second based on the System/3. Like the System/32 and the System/3, the System/34 was primarily programmed in the RPG II language.
Hardware
The 5340 System Unit contained the processing unit, the disk storage and the diskette drive. It had several access doors on both sides. Inside, were swing-out assemblies where the circuit boards and memory cards were mounted. It weighed and used 220V power. The IBM 5250 series of terminals were the primary interface to the System/34.
Processors
S/34s had two processors, the Control Storage Processor (CSP) and the Main Storage Processor (MSP). The MSP was the workhorse, based on System/3 architecture; it performed the instructions in the computer programs. The CSP was the governor, a different processor with a RISC-like instruction set based on System/32 architecture; it performed system functions in the background. The CSP also executed the optional Scientific Macroinstructions, a set of emulated instructions used by the System/34 FORTRAN compiler and optionally in assembly code. The clock speed of the CPUs inside a System/34 was fixed at 1 MHz for the MSP and 4 MHz for the CSP. Special utility programs were able to make direct calls to the CSP to perform certain functions; these were usually system programs like $CNFIG, which was used to configure the computer system.
Memory and storage
The smallest S/34 had 48K of RAM and an 8.6 MB hard drive. The largest configured S/34 could support 256K of RAM and 256MB of disk space. S/34 hard drives contained a feature called "the extra cylinder," so that bad spots on the drive were detected and dynamically mapped out to good spots on the extra cylinder. Disk space on the System/34 was organized by blocks of 2560 bytes.
The System/34 supported memory paging, referring to as swapping. The System/34 could either swap out entire programs, or individual segments of a program in order to free up memory for other programs to run.
One of the machine's most distinctive features was an off-line storage mechanism that utilized diskette magazines: boxes of 8-inch floppies that the machine could load and eject in a nonsequential fashion.
Software
Operating System
The System Support Program (SSP) was the only operating system of the S/34. It contained support for multiprogramming, multiple processors, 36 devices, job queues, printer queues, security, indexed file support. Fully installed, it was about 5 MB. The Operational Control Language (OCL) was the control language of SSP.
Programming
The System/34's initial programming languages were limited to RPG II and Basic Assembler when introduced in 1977. FORTRAN was fully available six months after the 34's introduction, and COBOL was available as a PRPQ. BASIC was introduced later.
Successor systems
The IBM System/38 was intended to be the successor of the System/34 and the earlier System/3x systems. However, due to the delays in the development of the System/38 and the high cost of the hardware once complete, IBM developed the simpler and cheaper System/36 platform which was more widely adopted than the System/38. The System/36 was an evolution of the System/34 design, but the two machines were not object-code compatible. Instead, the System/36 offered source code compatibility, allowing System/34 applications to be recompiled on a System/36 with little to no changes. Some System/34 hardware was incompatible with the System/36.
A third party product from California Software Products, Inc. named BABY/34 allowed System/34 applications to be ported to IBM PC compatible hardware running MS-DOS.
References
Further reading
External links
IBM Archives: System/34
Bitsavers Archive of System/34 Documentation
System 34
Computer-related introductions in 1977
16-bit computers | Operating System (OS) | 1,387 |
OMA Device Management
OMA Device Management is a device management protocol specified by the Open Mobile Alliance (OMA) Device Management (DM) Working Group and the Data Synchronization (DS) Working Group. The current approved specification of OMA DM is version 1.2.1, with the latest modifications to this version released in June 2008. The candidate release 2.0 was scheduled to be finalized in September 2013.
Overview
OMA DM specification is designed for management of mobile devices such as mobile phones, PDAs, and tablet computers. Device management is intended to support the following uses:
Provisioning – Configuration of the device (including first time use), enabling and disabling features
Device Configuration – Allow changes to settings and parameters of the device
Software Upgrades – Provide for new software and/or bug fixes to be loaded on the device, including applications and system software
Fault Management – Report errors from the device, query about status of device
All of the above functions are supported by the OMA DM specification, and a device may optionally implement all or a subset of these features. Since OMA DM specification is aimed at mobile devices, it is designed with sensitivity to the following:
small footprint devices, where memory and storage space may be limited
constraint on bandwidth of communication, such as in wireless connectivity
tight security, as the devices are vulnerable to software attacks; authentication and challenges are made part of the specifications
Technical description
OMA DM was originally developed by The SyncML Initiative Ltd, an industry consortium formed by many mobile device manufacturers. The SyncML Initiative was consolidated into the OMA umbrella as the scope and use of the specification expanded to include many more devices and support global operation.
Technically, the OMA DM protocol uses XML for data exchange, more specifically the sub-set defined by SyncML. The device management takes place by communication between a server (which is managing the device) and the client (the device being managed). OMA DM is designed to support and utilize any number of data transports such as:
physically over both wireline (USB, RS-232) and wireless media (GSM, CDMA, IrDA, or Bluetooth)
transport layers implemented over any of WSP (WAP), HTTP, or OBEX or similar transports
The communication protocol is a request-response protocol. Authentication and challenge of authentication are built in to ensure the server and client are communicating only after proper validation. The server and client are both stateful, meaning a specific sequence of messages is to be exchanged, and only after authentication is completed, in order to perform any task.
The communication is initiated by the OMA DM server, asynchronously, using any of the methods available such as a WAP Push or SMS. The initial message from server to client is said to be in the form of a notification, or alert message.
Once the communication is established between the server and client, a sequence of messages might be exchanged to complete a given device management task. OMA DM does provide for alerts, which are messages that can occur out of sequence, and can be initiated by either server or client. Such alerts are used to handle errors, abnormal terminations etc.
Several parameters relating to the communication such as the maximum message size can be negotiated between the server and client during the initiation of a session. In order to transfer large objects, the protocol does allow for sending them in smaller chunks.
Error recovery based on timeouts is not specified completely; hence, different implementations could differ (the protocol is not fully specified in this regard and seems to leave these details open intentionally).
The protocol specifies exchange of Packages during a session, each package consisting of several messages, and each message in turn consisting of one or more commands. The server initiates the commands and the client is expected to execute the commands and return the result via a reply message.
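As an illustration, the following Python sketch assembles a minimal package as SyncML XML - one message whose body carries a single Get command addressed at the standard ./DevInfo/Mod node of the DM tree. The session and command identifiers and URIs here are illustrative, not normative:

```python
import xml.etree.ElementTree as ET

# Build one minimal OMA DM message: a header plus a body holding a Get.
root = ET.Element("SyncML", xmlns="SYNCML:SYNCML1.2")
hdr = ET.SubElement(root, "SyncHdr")
for tag, text in [("VerDTD", "1.2"), ("VerProto", "DM/1.2"),
                  ("SessionID", "1"), ("MsgID", "1")]:
    ET.SubElement(hdr, tag).text = text
body = ET.SubElement(root, "SyncBody")
get = ET.SubElement(body, "Get")
ET.SubElement(get, "CmdID").text = "2"
item = ET.SubElement(get, "Item")
target = ET.SubElement(item, "Target")
ET.SubElement(target, "LocURI").text = "./DevInfo/Mod"  # device model node
ET.SubElement(body, "Final")  # marks the last message of the package
print(ET.tostring(root, encoding="unicode"))
```

The client would execute the Get and return the value of the addressed node in its reply message.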
References
External links
JSR 233: J2EE Mobile Device Management and Monitoring Specification
OMA Device Management Working Group
Open Mobile Alliance - Device Management Overview
Open Source OMA-DM simulator - Eclipse Koneki project
Open Mobile Alliance standards
Open standards
Computer hardware standards
XML-based standards
Networking standards
ISO/IEC 8859-1
ISO/IEC 8859-1:1998, Information technology — 8-bit single-byte coded graphic character sets — Part 1: Latin alphabet No. 1, is part of the ISO/IEC 8859 series of ASCII-based standard character encodings, first edition published in 1987. ISO 8859-1 encodes what it refers to as "Latin alphabet no. 1", consisting of 191 characters from the Latin script. This character-encoding scheme is used throughout the Americas, Western Europe, Oceania, and much of Africa. It is the basis for some popular 8-bit character sets and the first two blocks of characters in Unicode.
ISO-8859-1 was (according to the standard, at least) the default encoding of documents delivered via HTTP with a MIME type beginning with "text/" (HTML5 changed this to Windows-1252). About 1.1% of all websites (but only 5 of the top 1000) use ISO-8859-1. It is the most declared single-byte character encoding on the web, but as web browsers interpret it as the superset Windows-1252, the documents may include characters from that set.
Depending on the country, use can be much higher than the global average, e.g. for Germany at 4.6% (and including Windows-1252 at 5.1%).
ISO-8859-1 was the default encoding of the values of certain descriptive HTTP headers, and defined the repertoire of characters allowed in HTML 3.2 documents, and is specified by many other standards. This is sometimes assumed to be the encoding of text on Microsoft Windows (and Unix) if there is no byte order mark (BOM); this is only gradually being changed to UTF-8.
ISO-8859-1 is the IANA preferred name for this standard when supplemented with the C0 and C1 control codes from ISO/IEC 6429. The following other aliases are registered: iso-ir-100, csISOLatin1, latin1, l1, IBM819. Code page 28591 a.k.a. Windows-28591 is used for it in Windows. IBM calls it code page 819 or CP819 (CCSID 819). Oracle calls it WE8ISO8859P1.
Coverage
Each character is encoded as a single eight-bit code value. These code values can be used in almost any data interchange system to communicate in the following languages (though the set lacks the correct typographic quotation marks of many languages, including German and Icelandic):
Modern languages with complete coverage
Notes
Languages with incomplete coverage
ISO-8859-1 was commonly used for certain languages, even though it lacks characters used by these languages. In most cases, only a few letters are missing or they are rarely used, and they can be replaced with characters that are in ISO-8859-1 using some form of typographic approximation. The following table lists such languages.
The letter ÿ, which appears in French only very rarely, mainly in city names such as L'Haÿ-les-Roses and never at the beginning of words, is included only in lowercase form. The slot corresponding to its uppercase form is occupied by the lowercase letter ß from the German language, which did not have an uppercase form at the time when the standard was created.
Quotation marks
For some languages listed above, the correct typographical quotation marks are missing, as only «, », ", and ' are included. Also, this scheme does not provide for oriented (6- or 9-shaped) single or double quotation marks. Some fonts will display the spacing grave accent (0x60) and the apostrophe (0x27) as a matching pair of oriented single quotation marks, but this is not considered part of the modern standard.
History
ISO 8859-1 was based on the Multinational Character Set (MCS) used by Digital Equipment Corporation (DEC) in the popular VT220 terminal in 1983. It was developed within the European Computer Manufacturers Association (ECMA), and published in March 1985 as ECMA-94, by which name it is still sometimes known. The second edition of ECMA-94 (June 1986) also included ISO 8859-2, ISO 8859-3, and ISO 8859-4 as part of the specification.
The original draft of ISO 8859-1 placed French Œ and œ at code points 215 (0xD7) and 247 (0xF7), as in the MCS. However, the delegate from France, being neither a linguist nor a typographer, falsely stated that these are not independent French letters on their own, but mere ligatures (like fi or fl), supported by the delegation from the French computer manufacturer Bull, which regularly did not print French with Œ/œ in its house style at the time. An anglophone delegate from Canada insisted on retaining Œ/œ but was rebuffed by the French delegate and the team from Bull. These code points were soon filled with × and ÷ at the suggestion of the German delegation. Support for French was further reduced when it was again falsely stated that the letter ÿ is "not French", resulting in the absence of the capital Ÿ. In fact, the letter ÿ is found in a number of French proper names, and the capital letter has been used in dictionaries and encyclopedias. These characters were added to ISO/IEC 8859-15:1999. BraSCII matches the original draft.
In 1985, Commodore adopted ECMA-94 for its new AmigaOS operating system. The Seikosha MP-1300AI impact dot-matrix printer, used with the Amiga 1000, included this encoding.
In 1990, the very first version of Unicode used the code points of ISO-8859-1 as the first 256 Unicode code points.
In 1992, the IANA registered the character map ISO_8859-1:1987, more commonly known by its preferred MIME name of ISO-8859-1 (note the extra hyphen compared with ISO 8859-1), a superset of ISO 8859-1, for use on the Internet. This map assigns the C0 and C1 control codes to the otherwise unassigned code values, thus providing for 256 characters via every possible 8-bit value.
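The identity between ISO-8859-1 and the first 256 Unicode code points, and the full 256-value coverage of the IANA map, can both be checked with Python's built-in codec:

```python
# Each byte decodes to the Unicode character with the same code point,
# so ISO-8859-1 is the identity mapping on the range 0x00-0xFF.
assert all(bytes([b]).decode("iso-8859-1") == chr(b) for b in range(256))
print("é".encode("iso-8859-1"))  # b'\xe9' - U+00E9 becomes byte 0xE9
```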
Code page layout
Similar character sets
ISO/IEC 8859-15
ISO/IEC 8859-15 was developed in 1999, as an update of ISO/IEC 8859-1. It provides some characters for French and Finnish text and the euro sign, which are missing from ISO/IEC 8859-1. This required the removal of some infrequently used characters from ISO/IEC 8859-1, including fraction symbols and letter-free diacritics: ¤, ¦, ¨, ´, ¸, ¼, ½, and ¾. Ironically, three of the newly added characters (Œ, œ, and Ÿ) had already been present in DEC's 1983 Multinational Character Set (MCS), the predecessor to ISO/IEC 8859-1 (1987). Since their original code points were now reused for other purposes, the characters had to be reintroduced under different, less logical code points.
ISO-IR-204, a more minor modification, had been registered in 1998, altering ISO-8859-1 by replacing the universal currency sign (¤) with the euro sign (the same substitution made by ISO-8859-15).
Windows-1252
The popular Windows-1252 character set adds all the missing characters provided by ISO/IEC 8859-15, plus a number of typographic symbols, by replacing the rarely used C1 controls in the range 128 to 159 (hex 80 to 9F). It is very common to mislabel Windows-1252 text as being in ISO-8859-1. A common result was that all the quotes and apostrophes (produced by "smart quotes" in word-processing software) were replaced with question marks or boxes on non-Windows operating systems, making text difficult to read. Many web browsers and e-mail clients will interpret ISO-8859-1 control codes as Windows-1252 characters, and that behavior was later standardized in HTML5.
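The mislabelling effect can be reproduced with Python's codecs: bytes in the 0x80-0x9F range decode to printable punctuation under Windows-1252 but to invisible C1 control characters under strict ISO-8859-1:

```python
# "Smart quotes" encoded as Windows-1252 land in the 0x80-0x9F range.
data = "“smart quotes”".encode("cp1252")     # b'\x93smart quotes\x94'
print(data.decode("cp1252"))                 # “smart quotes” (intended)
print(repr(data.decode("iso-8859-1")))       # '\x93smart quotes\x94'
```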
Mac Roman
The Apple Macintosh computer introduced a character encoding called Mac Roman in 1984. It was meant to be suitable for Western European desktop publishing. It is a superset of ASCII, and has most of the characters that are in ISO-8859-1 and all the extra characters from Windows-1252 but in a totally different arrangement. The few printable characters that are in ISO 8859-1, but not in this set, are often a source of trouble when editing text on websites using older Macintosh browsers, including the last version of Internet Explorer for Mac.
Other
DOS had code page 850, which had all printable characters that ISO-8859-1 had (albeit in a totally different arrangement) plus the most widely used graphic characters from code page 437.
Between 1989 and 2015, Hewlett-Packard used another superset of ISO-8859-1 on many of their calculators. This proprietary character set was sometimes referred to simply as "ECMA-94" as well.
See also
Latin script in Unicode
Unicode
Universal Character Set
UTF-8
Windows code pages
ISO/IEC JTC 1/SC 2
References
External links
ISO/IEC 8859-1:1998
ISO/IEC FDIS 8859-1:1998 — 8-bit single-byte coded graphic character sets, Part 1: Latin alphabet No. 1 (draft dated February 12, 1998, published April 15, 1998)
Standard ECMA-94: 8-Bit Single Byte Coded Graphic Character Sets — Latin Alphabets No. 1 to No. 4 2nd edition (June 1986)
ISO-IR 100 Right-Hand Part of Latin Alphabet No.1 (February 1, 1986)
The Letter Database
ISO/IEC 8859
Computer-related introductions in 1987
Character sets
8859-1
George E. Felton
George Eric Felton (3 February 1921 – 14 June 2019) was a British computer scientist. He undertook pioneering work in the field of operating systems and programming software and is the father of the GEORGE Operating System. He held the world record for the computation of π.
Early life, education, and military service
George Felton was born in Paris in 1921 to English parents - his mother, Muriel Felton, worked at Bletchley Park during the war. He was brought up in Paris and Menton but moved to England following the early death of his father. Felton attended Bedford School and Magdalene College, Cambridge where he read the Mathematical Tripos. His university studies were interrupted by World War II during which Felton joined the RAF with a commission. Exploiting his interest in electronics he served as a Radar engineer and instructor. He was demobilised and returned to Cambridge in 1946. At Cambridge he met his wife Ruth Felton at meetings of The Round Country Dance society.
After commencing research in theoretical physics he switched his attention to Numerical Analysis and Programming, spurred on by his close contact with the construction of the EDSAC prototype computer in the Mathematical Laboratory, under Maurice Wilkes.
Career
In 1951 Felton joined Elliott Brothers in Borehamwood where he designed the programming systems and wrote software for the Nicholas and Elliott 402 computers. From mid-1954 at Ferranti's London Computer Centre Felton led the team developing innovative and comprehensive operating system and programming software for the Ferranti Pegasus and Orion computers. Hugh McGregor Ross records that "George Felton tells how, when Pegasus was new, he would borrow a front door key on Friday evenings so he could get in during the weekends. Then, alone in the building, he would start up the computer to sum a series to calculate the value of π to a then record 10,024 decimal places. This was a good test of the reliability of Pegasus". His computations of π were records in their day (1957). The team at Ferranti included Bill Elliott, Conway Berners-Lee, Christopher Strachey, Charles Owen, Hugh Devonald, Henry Goodman and Derek Milledge.
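Felton's record runs summed arctangent series for π using the machine's integer arithmetic. A minimal sketch of the same idea in Python, using Machin's identity π/4 = 4·arctan(1/5) − arctan(1/239) (the particular identity Felton used may have differed):

```python
def arctan_inv(x: int, one: int) -> int:
    # arctan(1/x) scaled by `one`, summed term by term with the
    # Gregory series using only integer arithmetic.
    total = term = one // x
    xsq, n, sign = x * x, 3, -1
    while term:
        term //= xsq
        total += sign * (term // n)
        sign, n = -sign, n + 2
    return total

def machin_pi(digits: int) -> str:
    one = 10 ** (digits + 10)  # ten guard digits against rounding
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    s = str(pi // 10 ** 10)    # drop the guard digits
    return s[0] + "." + s[1:]

print(machin_pi(30))  # 3.141592653589793238462643383279
```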
The business computing division of Ferranti was merged with International Computers and Tabulators (ICT) in 1963, and ICT was in turn merged with English Electric Leo Marconi (EELM) computers in 1968 to form International Computers Limited (ICL). Following the mergers Felton ran the division responsible for the operating system and basic software for the 1900 Series. The new system, named GEORGE, was based on ideas from the Orion and the spooling system of the Atlas computer.
Personal life
Felton was also a notable photographer. He was a member of the Royal Photographic Society and had regular acceptances in its International Print Exhibition. Examples of his work can be found online at the London Salon of Photography and at Arena Photographers.
He married Ruth A. R. Holt in Cambridge, 1951, and they had 4 sons. One of their sons, Matthew Felton, moved to the United States and became one of the designers of the Microsoft Windows NT operating system. Another son, Eric Felton, worked at ICL on CASE (Computer-Aided Software Engineering) tools.
Felton died in June 2019 at the age of 98.
Titles, honours, and awards
Fellow of the British Computer Society.
Fujitsu Gold Medal for Computing
Fellow of the Royal Photographic Society.
Notes
References
External links
The 1962 edition of the Pegasus Programming Manual (38 MB PDF) by G. E. Felton, M.A.
The National Archives: The Ferranti Collection including: Pegasus Programming G.E. Felton of Ferranti Ltd. (Paper)
Photograph with colleagues, including Conway Berners-Lee, (under Ferranti and ICL): George Felton is on the right of the picture, not identified in the caption
1921 births
2019 deaths
Alumni of Magdalene College, Cambridge
Computer companies of the United Kingdom
English computer scientists
Fellows of the British Computer Society
Fellows of the Royal Photographic Society
Royal Air Force officers
Royal Air Force personnel of World War II
British expatriates in France
BootVis
BootVis is a Microsoft computer application that allows "PC system designers and software developers" (not aimed at end-users) to check how long a Windows XP machine takes to boot, and then to optimize the boot process, sometimes considerably reducing the time required. BootVis has been replaced with XbootMgr, and is no longer available from Microsoft's website.
Use
BootVis defines boot and resume times as the time from when the power switch is pressed to the time at which the user is able to start a program from a desktop shortcut. The application measures time taken during Windows XP's boot or resume period. BootVis can also invoke the optimization routines built into Windows XP, such as defragmenting the files accessed during boot, to improve startup performance. This optimization is automatically done by Windows at three-day intervals.
Because the Global Logger session used by BootVis is triggered by registry entries, it runs every time that the entries appear in the registry, which has resulted in some users seeing large amounts of hard drive space consumed by the trace.log file (in C:\WINDOWS\System32\LogFiles\WMI). Upon rebooting, the file will shrink but will grow again as the computer runs. The user can run BootVis again and click Trace→Stop Tracing, which stops the file from growing and allows it to be safely deleted. The Bootvis.exe tool is no longer available from Microsoft.
Similar tools
Soluto measures the boot time and lets the user decide whether and when each program starts automatically. It uses an information database populated by input from its users.
WinBootInfo logs drivers and applications loaded during system boot, measures Windows boot times, and records CPU and I/O activity during the boot.
Boot Log XP troubleshoots boot-up problems in Windows XP and creates a new boot log file.
r2 Studios' Startup Delayer allows users to optionally delay or disable applications that would otherwise run during start up.
References
External links
Argus Boot Accelerator
TweakHound rebuke - "Bootvis, MS wassup?"
Softpedia download page
Boot Log XP
r2 Studios' Startup Delayer
Discontinued Microsoft software
Windows-only software
Computer system optimization software
IA-32
IA-32 (short for "Intel Architecture, 32-bit", sometimes also called i386) is the 32-bit version of the x86 instruction set architecture, designed by Intel and first implemented in the 80386 microprocessor in 1985. IA-32 is the first incarnation of x86 that supports 32-bit computing; as a result, the "IA-32" term may be used as a metonym to refer to all x86 versions that support 32-bit computing.
Within various programming language directives, IA-32 is still sometimes referred to as the "i386" architecture. In some other contexts, certain iterations of the IA-32 ISA are sometimes labelled i486, i586 and i686, referring to the instruction supersets offered by the 80486, the P5 and the P6 microarchitectures respectively. These updates offered numerous additions alongside the base IA-32 set, e.g. floating-point capabilities and the MMX extensions.
Intel was historically the largest manufacturer of IA-32 processors, with the second biggest supplier having been AMD. During the 1990s, VIA, Transmeta and other chip manufacturers also produced IA-32 compatible processors (e.g. WinChip). In the modern era, Intel still produces IA-32 processors under the Intel Quark microcontroller platform; however, since the 2000s, the majority of manufacturers (Intel included) moved almost exclusively to implementing CPUs based on the 64-bit variant of x86, x86-64. By specification, x86-64 offers legacy operating modes that run IA-32 code for backwards compatibility. Even given the contemporary prevalence of x86-64, as of 2018, IA-32 protected mode versions of many modern operating systems are still maintained, e.g. Microsoft Windows (until Windows 10; Windows 11 requires an x86-64-compatible processor for x86 versions) and the Debian Linux distribution. In spite of IA-32's name (a potential source of confusion), the 64-bit evolution of x86 that originated out of AMD is not known as "IA-64"; that name instead belongs to Intel's Itanium architecture.
Architectural features
The primary defining characteristic of IA-32 is the availability of 32-bit general-purpose processor registers (for example, EAX and EBX), 32-bit integer arithmetic and logical operations, 32-bit offsets within a segment in protected mode, and the translation of segmented addresses to 32-bit linear addresses. The designers took the opportunity to make other improvements as well. Some of the most significant changes (relative to the 16-bit 286 instruction set) are described below.
32-bit integer capability
All general-purpose registers (GPRs) are expanded from 16 bits to 32 bits, and all arithmetic and logical operations, memory-to-register and register-to-memory operations, etc., can operate directly on 32-bit integers. Pushes and pops on the stack default to 4-byte strides, and non-segmented pointers are 4 bytes wide.
More general addressing modes
Any GPR can be used as a base register, and any GPR other than ESP can be used as an index register, in a memory reference. The index register value can be multiplied by 1, 2, 4, or 8 before being added to the base register value and displacement.
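As a small sketch of this addressing mode, the effective address is base + index × scale + displacement, truncated to 32 bits (the register values below are illustrative):

```python
def effective_address(base: int, index: int = 0, scale: int = 1,
                      disp: int = 0) -> int:
    # IA-32 effective address: EA = base + index*scale + disp (mod 2**32).
    assert scale in (1, 2, 4, 8)
    return (base + index * scale + disp) & 0xFFFFFFFF

# The operand [EBX + ESI*4 + 8] with EBX = 0x1000 and ESI = 3:
print(hex(effective_address(0x1000, index=3, scale=4, disp=8)))  # 0x1014
```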
Additional segment registers
Two additional segment registers, FS and GS, are provided.
Larger virtual address space
The IA-32 architecture defines a 48-bit segmented address format, with a 16-bit segment number and a 32-bit offset within the segment. Segmented addresses are mapped to 32-bit linear addresses.
Demand paging
32-bit linear addresses are virtual addresses rather than physical addresses; they are translated to physical addresses through a page table. In the 80386, 80486, and the original Pentium processors, the physical address was 32 bits; in the Pentium Pro and later processors, the Physical Address Extension allowed 36-bit physical addresses, although the linear address size was still 32 bits.
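Under the classic 4 KiB paging scheme, a 32-bit linear address splits into a 10-bit page-directory index, a 10-bit page-table index, and a 12-bit page offset; a minimal sketch of the split:

```python
def split_linear(addr: int) -> tuple:
    # 10-bit directory index, 10-bit table index, 12-bit byte offset.
    return (addr >> 22) & 0x3FF, (addr >> 12) & 0x3FF, addr & 0xFFF

directory, table, offset = split_linear(0x08048123)
print(directory, table, hex(offset))  # 32 72 0x123
```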
Operating modes
See also
x86-64
IA-64
List of former IA-32 compatible processor manufacturers
References
Computer-related introductions in 1985
X86 architecture
32-bit computers
Control Panel (Windows)
The Control Panel is a component of Microsoft Windows that provides the ability to view and change system settings. It consists of a set of applets that include adding or removing hardware and software, controlling user accounts, changing accessibility options, and accessing networking settings. Additional applets are provided by third parties, such as audio and video drivers, VPN tools, input devices, and networking tools.
Overview
The Control Panel has been part of Microsoft Windows since Windows 1.0, with each successive version introducing new applets. Beginning with Windows 95, the Control Panel is implemented as a special folder, i.e. the folder does not physically exist, but only contains shortcuts to various applets such as Add or Remove Programs and Internet Options. Physically, these applets are stored as .cpl files. For example, the Add or Remove Programs applet is stored under the name appwiz.cpl in the SYSTEM32 folder.
In Windows XP, the Control Panel home screen was changed to present a categorized navigation structure reminiscent of navigating a web page. Users can switch between this Category View and the grid-based Classic View through an option that appears on either the left side or top of the window. In Windows Vista and Windows 7, additional layers of navigation were introduced, and the Control Panel window itself became the main interface for editing settings, as opposed to launching separate dialogs.
Many of the individual Control Panel applets can be accessed in other ways. For instance, Display Properties can be accessed by right-clicking on an empty area of the desktop and choosing Properties. The Control Panel can be accessed from a command prompt by typing control; optional parameters are available to open specific control panels.
On Windows 10, Control Panel is deprecated in favor of the Settings app, which was originally introduced on Windows 8 as "PC settings" to provide a touchscreen-optimized settings area using its Metro-style app platform. Some functions, particularly the ability to add and remove user accounts, were moved exclusively to this app on Windows 8 and cannot be performed from Control Panel.
As of the October 2020 update to Windows 10, trying to open the System page of Control Panel redirects users to the Windows 10 Settings application.
Microsoft has also blocked shortcuts and third-party applications that could have been used to get into the retired System page.
List of Control Panel applets
The applets listed below are components of the Microsoft Windows control panel, which allows users to define a range of settings for their computer, monitor the status of devices such as printers and modems, and set up new hardware, programs and network connections. Each applet is stored individually as a separate file (usually a .cpl file), folder or DLL, the locations of which are stored in the registry under the following keys:
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Control Panel\Cpls - This contains the string format locations of all .cpl files on the hard drive used within the control panel.
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ControlPanel\Namespace - This contains the location of the CLSID variables for all the panels not included as .cpl files. These are commonly folders or shell applets, though Windows Vista allows physical programs themselves to be registered as well. The CLSID then allows items such as the icon, infobox and category to be set, and gives the location of the file to be used.
The control panel then uses these lists to locate the applets and load them into the control panel program (control.exe) when started by the user. In addition to using the control panel, a user can also invoke the applets manually via the command processor. For instance, the syntax "Control.exe inetcpl.cpl" or "control.exe /name Microsoft.InternetOptions" will run the internet properties applet in Windows XP or Vista respectively. While both syntax examples are accepted on Windows Vista, only the former one is accepted on Windows XP.
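For illustration, a short Python sketch (using the standard winreg module, run on Windows) that enumerates the .cpl entries registered under the Cpls key described above:

```python
import winreg

# Enumerate the registered Control Panel .cpl entries.
path = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Control Panel\Cpls"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
    i = 0
    while True:
        try:
            name, value, _type = winreg.EnumValue(key, i)
        except OSError:      # raised when there are no more values
            break
        print(f"{name}: {value}")
        i += 1
```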
Standard applets
Peripheral devices
These are options in the control panel that show devices connected to the computer. They do not actually offer a direct interface to control these devices, but rather offer basic tasks such as removal procedures and links to wizards (Printers & faxes is the exception).
Such applets include:
Scanners and Cameras
Game Controllers
Portable Media Devices
Other Microsoft-distributed applets
Third-party applets
Third-party software vendors have released many applets, too many to list exhaustively.
References
External links
How to run Control Panel tools by typing a command at Microsoft.com
Computer configuration
Windows components
Software modernization
Legacy modernization, also known as software modernization or platform modernization, refers to the conversion, rewriting or porting of a legacy system to modern computer programming languages, architectures (e.g. microservices), software libraries, protocols or hardware platforms. Legacy transformation aims to retain and extend the value of the legacy investment through migration to new platforms to benefit from the advantage of the new technologies.
Strategies
Making software modernization decisions is a process within some organizational context. "Real-world" decision making in business organizations often has to be based on "bounded rationality". Besides that, there exist multiple (and possibly conflicting) decision criteria; the certainty, completeness, and availability of useful information (as a basis for the decision) is often limited.
Legacy system modernization is often a large, multi-year project. Because these legacy systems are often critical in the operations of most enterprises, deploying the modernized system all at once introduces an unacceptable level of operational risk. As a result, legacy systems are typically modernized incrementally. Initially, the system consists completely of legacy code. As each increment is completed, the percentage of legacy code decreases. Eventually, the system is completely modernized. A migration strategy must ensure that the system remains fully functional during the modernization effort.
Modernization strategies
There are different drivers and strategies for software modernization:
Architecture Driven Modernization (ADM) is the initiative to standardize views of the existing systems in order to enable common modernization activities like code analysis and comprehension, and software transformation.
Business-Focused Approach: The modernization strategy is tied to the business value added by the modernization. It implies defining the intersection of the criticality of an application to the business with its technical quality. This approach, promoted by Gartner, makes Application Portfolio Analysis (APA) a prerequisite of modernization decisions for an application portfolio: it measures software health, risks, complexity and cost, providing insight into application strengths and weaknesses.
Model Driven Engineering (MDE) is being investigated as an approach for reverse engineering and then forward engineering software code.
Renaissance Method for iteratively evaluating legacy systems, from technical, business, and organizational perspectives.
WMU (Warrants, Maintenance, Upgrade) is a model for choosing appropriate maintenance strategies based on aspired customer satisfaction level and their effects on it.
Modernization risk management
Software modernization is a risky, difficult, long, and highly intellectual process involving multiple stakeholders. The software modernization tasks are supported by various tools related to Model-driven architecture from the Object Management Group and processes such as ISO/IEC 14764:2006 or the Service-Oriented Migration and Reuse Technique (SMART). Software modernization implies various manual and automated tasks performed by specialized knowledge workers. Tools support the project participants' tasks and help organize the collaboration and sequencing of the work.
A general software modernization management approach taking risks (both technological and business objectives) explicitly into account consists of:
Analyze the existing portfolio: measure the technical quality and business value, then confront the technical quality with business goals to define the right strategy: replace, no go, low priority, or good candidate.
Identify stakeholders: all persons involved in the software modernization: developers, testers, customers, end-users, architects, …
Understand the requirements: requirements are divided in 4 categories: user, system, constraints and nonfunctional.
Create the Business Case: the business case supports the decision process in considering different approaches when decision makers need it.
Understand the system to be modernized: this is a critical step, as software documentation is rarely up-to-date and projects are made by numerous teams, both internal and external, and usually out of sight for a long time. Extracting the content of the application and its architecture design helps reason about the system.
Understand and evaluate target technology: this allows comparing and contrasting technologies and capabilities against requirements and the existing system.
Define modernization strategy: the strategy defines the transformation process. This strategy must accommodate changes happening during the modernization process (technology changes, additional knowledge, requirement evolution).
Reconcile strategy with stakeholder needs: the stakeholders involved may have varying opinions on what is important and what is the best way to proceed. It is important to reach a consensus between stakeholders.
Estimate resources: once the previous steps are defined, costs can be evaluated. This enables management to determine whether the modernization strategy is feasible given the available resources and constraints.
Modernization costs
Softcalc (Sneed, 1995a) is a model and tool for estimating costs of incoming maintenance requests, developed based on COCOMO and FPA.
EMEE (Early Maintenance Effort Estimation) is a new approach for quick maintenance effort estimation before starting the actual maintenance.
RENAISSANCE is a method to support system evolution by first recovering a stable basis using reengineering, and subsequently continuously improving the system by a stream of incremental changes. The approach integrates successfully with different project management processes.
Challenges in legacy modernization
Primary issues with a legacy system include very old systems with lack of documentation, lack of SMEs/ knowledge on the legacy systems and dearth of technology skills in which the legacy systems have been implemented. Typical legacy systems have been in existence for more than two decades. Migrating is fraught with challenges:
Lack of visibility across large application portfolios – Large IT organizations have hundreds, if not thousands, of software systems. Technology and functional knowledge are by nature distributed, diluted, and opaque. No central point of visibility for senior management and Enterprise Architects is a top issue – it is challenging to make modernization decisions about software systems without having the necessary quantitative and qualitative data about these systems across the enterprise.
Organizational change management – Users must be re-trained and equipped to use and understand the new applications and platforms effectively.
Coexistence of legacy and new systems – Organizations with a large footprint of legacy systems cannot migrate at once. A phased modernization approach needs to be adopted. However, this brings its own set of challenges like providing complete business coverage with well understood and implemented overlapping functionality, data duplication; throw-away systems to bridge legacy and new systems needed during the interim phases.
Poor management of structural quality (see software quality), resulting in a modernized application that carries more security, reliability performance and maintainability issues than the original system.
Significant modernization costs and duration - Modernization of a complex mission-critical legacy system may need large investments and the duration of having a fully running modernized system could run into years, not to mention unforeseen uncertainties in the process.
Stakeholders commitment - Main organization stakeholders must be convinced of the investment being made for modernization, since the benefits, and an immediate ROI may not be visible as compared to the modernization costs being invested.
Software Composition – It is extremely rare that developers create 100% original code these days in anything built after 2010. They are often using 3rd party and open source frameworks and software components to gain efficiency, speed, and reusability. This introduces two risks: 1.) vulnerabilities within the 3rd party code, and 2.) open source licensing risk.
Last but not least, there is no one-size-fits-all option in modernization. With a multitude of commercial and bespoke options available for modernization, it is critical for the customers, the sellers and the executors to understand the intricacies of the various modernization techniques, their best applicable implementations, their suitability in a particular context, and the best practices to follow before selecting the right modernization approach.
Modernization options
Over the years, several different options have come into being for legacy modernization – each of them met with varying success and adoption. Even now, there is a range of possibilities, as explained below, and there is no “the option” for all legacy transformation initiatives.
Application Assessment: Baselining the existing application portfolio using Software intelligence to understand software health, quality, composition, complexity, and cloud readiness to start segmenting and prioritizing applications for various modernization options.
Application Discovery: Application components are strongly interlaced, implying a requirement for understanding the complexity and resolving the interdependencies of software components.
Migration: Migration of languages (3GL or 4GL), databases (legacy to RDBMS, and one RDBMS to another), platform (from one OS to another OS), often using automated converters or Program transformation systems for high efficiency. This is a quick and cost-effective way of transforming legacy systems.
Cloud Migration: Migration of legacy applications to cloud platforms often using a methodology such as Gartner’s 5 Rs methodology to segment and prioritize apps into different models (Rehost, Refactor, Revise, Rebuild, Replace).
Re-engineering: A technique to rebuild legacy applications in new technology or platform, with same or enhanced functionality – usually by adopting Service Oriented Architecture (SOA). This is the most efficient and agile way of transforming legacy applications. This requires application-level Software intelligence with legacy systems that are not well known or documented.
Re-hosting: Running the legacy applications, with no major changes, on a different platform. Business logic is preserved as application and data are migrated into the open environment. This option only needs the replacement of middleware, hardware, operating system, and database. This is often used as an intermediate step to eliminate legacy and expensive hardware. Most common examples include mainframe applications being rehosted on UNIX or Wintel platform.
Package implementation: Replacement of legacy applications, in whole or part, with off-the-shelf software (COTS) such as ERP, CRM, SCM, Billing software etc.
Legacy code is any application based on older technologies and hardware, such as mainframes, that continues to provide core services to an organization. Legacy applications are frequently large and difficult to modify, and scrapping or replacing them often means re-engineering an organization's business processes as well. However, more and more applications that were written in so-called modern languages like Java are becoming legacy. Whereas 'legacy' languages such as COBOL top the list of what would be considered legacy, software written in newer languages can be just as monolithic, hard to modify, and thus be a candidate for modernization projects.
Re-implementing applications on new platforms in this way can reduce operational costs, and the additional capabilities of new technologies can provide access to functions such as web services and integrated development environments. Once transformation is complete and functional equivalence has been reached, the applications can be aligned more closely to current and future business needs through the addition of new functionality to the transformed application. The recent development of new technologies such as program transformation by software modernization enterprises has made the legacy transformation process a cost-effective and accurate way to preserve legacy investments and thereby avoid the costs and business impact of migration to entirely new software.
The goal of legacy transformation is to retain the value of the legacy asset on the new platform. In practice this transformation can take several forms. For example, it might involve translation of the source code, or some level of re-use of existing code plus a Web-to-host capability to provide the customer access required by the business. If a rewrite is necessary, then the existing business rules can be extracted to form part of the statement of requirements for a rewrite.
Software migration
Software migration is the process of moving from the use of one operating environment to another operating environment that is, in most cases, thought to be a better one. For example, moving from Windows NT Server to Windows 2000 Server would usually be considered a migration because it involves making sure that new features are exploited, old settings do not require changing, and taking steps to ensure that current applications continue to work in the new environment. Migration could also mean moving from Windows NT to a UNIX-based operating system (or the reverse). Migration can involve moving to new hardware, new software, or both. Migration can be small-scale, such as migrating a single system, or large-scale, involving many systems, new applications, or a redesigned network.
One can migrate data from one kind of database to another kind of database. This usually requires exporting the data into some common format that can be output from the old database and input into the new database. Since the new database may be organized differently, it may be necessary to write a program to transform the migrating files.
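A minimal Python sketch of this pattern, using SQLite on both ends and CSV as the common interchange format (the database files, table and column names are illustrative):

```python
import csv
import sqlite3

# Export a table from the old database into a common format (CSV)...
src = sqlite3.connect("legacy.db")
rows = src.execute("SELECT id, name, balance FROM accounts").fetchall()
with open("accounts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name", "balance"])  # header row
    writer.writerows(rows)

# ...then load it into the new database.
dst = sqlite3.connect("modern.db")
dst.execute("CREATE TABLE IF NOT EXISTS accounts"
            " (id INTEGER, name TEXT, balance REAL)")
with open("accounts.csv", newline="") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header
    dst.executemany("INSERT INTO accounts VALUES (?, ?, ?)", reader)
dst.commit()
```

In real migrations the transformation step usually converts types, encodings and schema layout rather than copying rows verbatim.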
When a software migration reaches functional equivalence, the migrated application can be aligned more closely to current and future business needs through the addition of new functionality to the transformed application.
The migration of installed software from an old PC to a new PC can be done with a software migration tool. Migration is also used to refer simply to the process of moving data from one storage device to another.
Articles, papers and books
Creating reusable software
Due to the evolution of technology, some companies or groups of people today do not recognize the importance of legacy systems.
Some of their functions are too important to be left unused and too expensive to reproduce. The software industry and researchers have recently paid more attention to component-based software development to enhance productivity and accelerate time to market.
Risk-managed modernization
In general, three classes of information system technology are of interest in legacy system modernization:
Technologies used to construct the legacy systems, including the languages and database systems.
Modern technologies, which often represent nirvana to those mired in decades-old technology and which hold the (often unfulfilled) promise of powerful, effective, easily maintained enterprise information systems.
Technologies offered by the legacy system vendors – These technologies provide an upgrade path for those too timid or wise to jump head-first into the latest wave of IT offerings. Legacy system vendors offer these technologies for one simple reason: to provide an upgrade path for system modernization that does not necessitate leaving the comfort of the “mainframe womb.” Although these technologies can provide a smoother road toward a modern system, they often result in an acceptable solution that falls short of the ideal.
See also
System migration
Data migration
References
Software maintenance
The Unix Programming Environment
The Unix Programming Environment, first published in 1984 by Prentice Hall, is a book written by Brian W. Kernighan and Rob Pike, both of Bell Labs and considered an important and early document of the Unix operating system.
Unix philosophy
The book addresses the Unix philosophy of small cooperating tools with standardized inputs and outputs. Kernighan and Pike give a brief description of the Unix design and the Unix philosophy:
The authors further write that their goal for this book is "to communicate the UNIX programming philosophy."
Content and topics
The book starts off with an introduction to Unix for beginners. Next, it goes into the basics of the file system and shell. The reader is led through topics ranging from the use of filters, to how to use C for programming robust Unix applications, and the basics of grep, sed, make, and awk. The book closes with a tutorial on making a programming language parser with yacc and how to use troff with ms and mm to format documents, the preprocessors tbl, eqn, and pic, and making man pages with the man macro set. The appendices cover the ed editor and the abovementioned programming language, named hoc, which stands for "high-order calculator".
Historical context
Although Unix still exists decades after the publication of this book, the book describes an already mature Unix: In 1984, Unix had already been in development for 15 years (since 1969), it had been published in a peer-reviewed journal 10 years earlier (SOSP, 1974, "The UNIX Time-Sharing System"), and at least seven official editions of its manuals had been published (see Version 7 Unix). In 1984, several commercial and academic variants of UNIX already existed (e.g., Xenix, SunOS, BSD, UNIX System V, HP-UX), and a year earlier Dennis Ritchie and Ken Thompson won the prestigious Turing Award for their work on UNIX. The book was written not when UNIX was just starting out, but when it was already popular enough to be worthy of a book published for the masses of new users that were coming in.
In retrospect, not only was 1984 not an early stage of Unix's evolution, in some respects it was the end of Unix evolution, at least in Bell Labs: The important UNIX variants had already forked from AT&T's Research Unix earlier: System V was published in 1983, BSD was based on the 1979 Seventh Edition Unix – and most commercial Unix variants were based on System V, BSD, or some combination of both. Eighth Edition Unix came out right after this book, and further development of UNIX in Bell Labs (the Ninth and Tenth Edition) never made it outside Bell Labs – until their effort evolved into Plan 9 from Bell Labs.
C programming style
The book was written before ANSI C was first drafted; the programs in it follow the older K&R style. However, the source code available on the book's website has been updated for ANSI C conformance.
Critical reception
Technical editor Ben Everard for Linux Voice praised the book for providing relevant documentation despite being 30 years old and for being a good book for an aspiring programmer who does not know much about Linux.
Editions
(paperback)
(hardback).
Notes
1984 non-fiction books
Computer programming books
Software engineering books
Unix books
Prentice Hall books
Bitfrost
Bitfrost is the security design specification for the OLPC XO, a low cost laptop intended for children in developing countries and developed by the One Laptop Per Child (OLPC) project. Bitfrost's main architect is Ivan Krstić. The first public specification was made available in February 2007.
Bitfrost architecture
Passwords
No passwords are required to access or use the computer.
System of rights
Every program, when first installed, requests certain bundles of rights, for instance "accessing the camera", or "accessing the internet". The system keeps track of these rights, and the program is later executed in an environment which makes only the requested resources available. The implementation is not specified by Bitfrost, but dynamic creation of security contexts is required. The first implementation was based on vserver; the second and current implementation is based on user IDs and group IDs (/etc/passwd is edited when an activity is started), and a future implementation might involve SELinux or some other technology.
By default, the system denies certain combinations of rights; for instance, a program would not be granted both the right to access the camera and to access the internet. Anybody can write and distribute programs that request allowable right combinations. Programs that require normally unapproved right combinations need a cryptographic signature by some authority. The laptop's user can use the built-in security panel to grant additional rights to any application.
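A hypothetical sketch of this vetting logic in Python - the permission names, denylist and function are illustrative, not the actual OLPC implementation:

```python
# Combinations denied by default unless the bundle is signed by a
# trusted authority or the user grants an override in the security panel.
DENIED_COMBINATIONS = [
    {"camera", "internet"},      # e.g. prevents covert video uploads
    {"microphone", "internet"},
]

def allowed(requested, signed=False, user_override=False):
    """Return True if the requested bundle of rights may be granted."""
    if signed or user_override:
        return True
    return not any(combo <= requested for combo in DENIED_COMBINATIONS)

print(allowed({"camera"}))              # True
print(allowed({"camera", "internet"}))  # False - denied combination
```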
Modifying the system
The users can modify the laptop's operating system, a special version of Fedora Linux running the new Sugar graphical user interface and operating on top of Open Firmware. The original system remains available in the background and can be restored.
By acquiring a developer key from a central location, a user may even modify the background copy of the system and many aspects of the BIOS. Such a developer key is only given out after a waiting period (so that theft of the machine can be reported in time) and is only valid for one particular machine.
Theft-prevention leases
The laptops request a new "lease" from a central network server once a day. These leases come with an expiry time (typically a month), and the laptop stops functioning if all its leases have expired. Leases can also be given out from local school servers or via a portable USB device. Laptops that have been registered as stolen cannot acquire a new lease.
The deploying country decides whether this lease system is used and sets the lease expiry time.
Microphone and camera
The laptop's built-in camera and microphone are hard-wired to LEDs, so that the user always knows when they are operating. This cannot be switched off by software.
Privacy concerns
Len Sassaman, a computer security researcher at the Catholic University of Leuven in Belgium and his colleague Meredith Patterson at the University of Iowa in Iowa City claim that the Bitfrost system has inadvertently become a possible tool for unscrupulous governments or government agencies to definitively trace the source of digital information and communications that originated on the laptops. This is a potentially serious issue as many of the countries which have the laptops have governments with questionable human rights records.
Notes
The specification itself mentions that the name "Bitfrost" is a play on the Norse mythology concept of Bifröst, the bridge between the world of mortals and the realm of Gods. According to the Prose Edda, the bridge was built to be strong, yet it will eventually be broken; the bridge is an early recognition of the idea that there's no such thing as a perfect security system.
See also
CapDesk
References
External links
Ivan Krstić's homepage
OLPC Wiki: Bitfrost
Bitfrost specification, version Draft-19 - release 1, 7 February 2007
High Security for $100 Laptop, Wired News, 7 February 2007
Making antivirus software obsolete - Technology Review magazine recognized Ivan Krstić, Bitfrost's main architect, as one of the world's top innovators under the age of 35 (Krstić was 21 at the time of publication) for his work on the system.
One Laptop per Child
Cryptographic software
Elonex ONEt
The Elonex ONEt is a netbook computer marketed to the education sector in the UK by Elonex. Inspired by the OLPC initiative, the low cost of the ONE, the ONEt and similar devices made this subnotebook seem an attractive proposition for educators seeking to provide every child with a highly functional laptop computer. However, initial ONEt trials by educators claimed that the lack of security, specifically the absence of any password protection at start-up, put personal information at risk, making it unsuitable for use in a school environment. It was released in September 2008, on sale to the general public and marketed as an upgrade to the ONE. It has Wi-Fi connectivity, a solid-state hard drive, three USB ports and an SD card slot.
Hardware
The hardware specifications published on 9 July 2008:
Processor, Main Memory
Ingenic JZ4730 JzRisc Processor (incorporates the XBurst CPU core)
On-board 1 GB Flash Memory, (2 GB in t+ model)
128 MiB RAM
Dimensions
Display: LCD display; 800×480 px Widescreen
Dimension (w.× l.×h.): 21×14×3 cm
Weight: 625 g
Networking
Wi-Fi
Ethernet
Peripherals, Ports
3 USB ports
Ethernet-over-twisted-pair network port
2 built-in speakers
Audio in & out
SD Card slot
Battery
Li ion 7.2 V 2.1 Ah - approximately 3 hours usage
Energy Consumption
Approximately 4.5 W
Operating System
The Elonex ONEt has a Linux (mipsel) based operating system, and the included software comprises Sky Word (Abiword 2.4.5), Sky Table (Gnumeric 1.6.3), a PDF viewer (ePDFView 0.1.6), Scientific Calculator, Dictionary, File Manager, Web Browser (BonEcho/Firefox), Email client (Sylpheed), Sky Chatting (Pidgin), FBReader, Media Player (xine based), Xip Flash Player, Image Gallery, Paint Brush, and Sound Recorder.
Although access to the root file system isn't possible through the included file manager, it is possible to get console access as the root user by installing the Xterm application from the CnM Lifestyle website. The CnM Lifestyle notebook is exactly the same as the Elonex ONEt, so all applications available for it can be installed.
(To browse the raw file system, the web browser can be directed to load file://127.0.0.1; this gives read-only access.)
Similar devices
The ONEt is similar to the CnM Mini-book from Maplin Electronics, Alpha 400 product from Bestlink or the Trendtac EPC 700 or the Skytone Alpha 400. Those devices are basically all the same and only have different OEM names.
References
External links
Loads of guidance for the ONEt and similar machines
Blog, prices, news, how-to on Alpha 400 and all 400-MHz MIPS mini-laptops.
Elonex ONEt Review UK
Netbooks
Google Pixel
Google Pixel is a brand of consumer electronic devices developed by Google that run either Chrome OS or the Android operating system. The Pixel brand was introduced in February 2013 with the first-generation Chromebook Pixel. The Pixel line includes laptops, tablets, and smartphones, as well as several accessories.
Phones
Pixel & Pixel XL
Google announced the first-generation Pixel smartphones, the Pixel and the Pixel XL, on October 4, 2016 during the #MadeByGoogle event. Google emphasized the camera on the two phones, which ranked as the best smartphone camera on DxOMarkMobile with 90 points until HTC released the U11, which also scored 90 points. This was largely due to software optimizations such as HDR+. The Pixel phones also include unlimited cloud storage for pictures on Google Photos and, for devices purchased directly from Google, an unlockable bootloader. A class-action lawsuit over faulty microphones in some devices later enabled Pixel owners to claim up to $500 in compensation.
Display: 5.0" AMOLED display with 1080×1920 pixel resolution (Pixel); 5.5" AMOLED display with 1440×2560 pixel resolution (Pixel XL)
Processor: Qualcomm Snapdragon 821
Storage: 32 GB or 128 GB
RAM: 4 GB LPDDR4
Cameras: 12.3 MP rear camera with f/2.0 lens and IR laser-assisted autofocus; 1.55 μm pixel size. 8 MP front camera with f/2.4 lens
Battery: 2,770 mAh (Pixel); 3,450 mAh (Pixel XL); both are non-removable and have fast charging
Materials: Aluminum unibody design with hybrid coating; IP53 water and dust resistance
Colors: Very Silver, Quite Black or Really Blue (Limited Edition)
Operating system: Android 7.1 Nougat; upgradable to Android 10
Pixel 2 & 2 XL
Google announced the Pixel 2 series, consisting of the Pixel 2 and Pixel 2 XL, on October 4, 2017.
Display: 5.0" AMOLED display with 1080×1920 pixel resolution (Pixel 2); 6" P-OLED display with 1440×2880 pixel resolution (Pixel 2 XL); Both displays have Corning Gorilla Glass 5
Processor: Qualcomm Snapdragon 835
Storage: 64 GB or 128 GB
RAM: 4 GB LPDDR4X
Cameras: 12.2 MP rear camera with f/1.8 lens, IR laser-assisted autofocus, optical and electronic image stabilization; 8 MP front camera with f/2.4 lens
Battery: 2,700 mAh (Pixel 2); 3,520 mAh (Pixel 2 XL); both are non-removable and have fast charging
Materials: Aluminum unibody design with hybrid coating; IP67 water and dust resistance
Colors: Just Black, Clearly White or Kinda Blue (Pixel 2); Just Black or Black & White (Pixel 2 XL)
Operating system: Android 8.0 Oreo; upgradable to Android 11
Pixel 3 & 3 XL
Google announced the Pixel 3 and Pixel 3 XL at an event on October 9, 2018, alongside several other products.
Display: Pixel 3 5.5" OLED, 2160×1080 {18:9} pixel resolution; Pixel 3 XL 6.3" OLED, 2960×1440 {18.5:9} pixel resolution; both displays have Corning Gorilla Glass 5.
Processor: Qualcomm Snapdragon 845
Storage: 64 GB or 128 GB
RAM: 4 GB LPDDR4X
Cameras: 12.2 MP rear camera with f/1.8 lens, IR laser-assisted autofocus, optical and electronic image stabilization; 8 MP front camera with f/1.8 lens and 75° lens, second front camera with 8 MP, f/2.2, fixed focus and 97° lens; stereo audio added to video recording
Battery: 2915 mAh (Pixel 3); 3430 mAh (Pixel 3 XL); both are non-removable and have fast charging and wireless charging
Materials: Aluminum frame, matte glass back, IP68 water and dust resistance
Colors: Just Black, Clearly White, and Not Pink
Operating system: Android 9 Pie; upgradable to Android 12
Pixel 3a & 3a XL
On May 7, at I/O 2019, Google announced the Pixel 3a and Pixel 3a XL, budget alternatives to the original two Pixel 3 devices.
Display: Pixel 3a 5.6" OLED, 2220×1080 {18.5:9} pixel resolution; Pixel 3a XL 6" OLED, 2160×1080 {18:9} pixel resolution; both displays have Asahi Dragontrail Glass
Processor: Qualcomm Snapdragon 670
Storage: 64 GB
RAM: 4 GB LPDDR4X
Cameras: 12.2 MP rear camera with f/1.8 lens, IR laser-assisted autofocus, optical and electronic image stabilization; 8 MP front camera with f/2.0 lens and 84° field of view
Battery: 3000 mAh (Pixel 3a); 3700 mAh (Pixel 3a XL); both are non-removable and have fast charging, but no wireless charging
Materials: Polycarbonate body
Colors: Just Black, Clearly White, Purple-ish
Operating system: Android 9 Pie, upgradable to Android 12
Pixel 4 & 4 XL
Google announced the Pixel 4 and Pixel 4 XL at an event on October 15, 2019, alongside several other products.
Display: Pixel 4 5.7" OLED, 2280×1080 {19:9} pixel resolution; Pixel 4 XL 6.3" OLED, 3040×1440 {19:9} pixel resolution; both displays have Corning Gorilla Glass 5.
Processor: Qualcomm Snapdragon 855
Storage: 64 GB or 128 GB
RAM: 6 GB LPDDR4X
Cameras: 12.2 MP sensor with f/1.8 lens & 16 MP telephoto sensor with f/2.4 lens, IR laser-assisted autofocus, optical and electronic image stabilization; 8 MP front camera with f/2.0 lens and 90° field of view
Battery: 2800 mAh (Pixel 4); 3700 mAh (Pixel 4 XL); both are non-removable and have fast charging and wireless charging
Materials: Aluminum frame, matte or glossy glass back, IP68 water and dust resistance
Colors: Just Black, Clearly White, and Oh So Orange
Operating system: Android 10, upgradable to Android 12
In 2019, Google offered a bug bounty of up to $1.5 million for the Titan M security chip built into Pixel 3, Pixel 3a and Pixel 4.
Pixel 4a & 4a (5G)
Google announced the Pixel 4a on August 3, 2020, and the Pixel 4a (5G) on September 30, 2020, as budget alternatives to the original two Pixel 4 devices.
Display: 5.8" OLED (4a) 6.2" OLED (4a 5G), 2340×1080 {19.5:9} pixel resolution; the display uses Corning Gorilla Glass 3. They both have a hole punch for the front camera.
Processor: Qualcomm Snapdragon 730G (4a); Qualcomm Snapdragon 765G (4a 5G)
Storage: 128 GB
RAM: 6 GB LPDDR4X
Camera: 12.2 MP dual-pixel sensor with f/1.7 lens, autofocus with dual-pixel phase detection, optical and electronic image stabilization. In addition, the 4a 5G has a 16 MP ultrawide sensor with f/2.2 lens. Both have an 8 MP front camera with f/2.0 lens.
Battery: 3,140 mAh (4a); 3,885 mAh typical, 3,800 mAh minimum (4a 5G); both are non-removable and support fast charging, with advertised all-day battery life
Materials: Polycarbonate body
Colors: Just Black or Barely Blue (Limited Edition) (Pixel 4a); Just Black or Clearly White (Pixel 4a 5G)
Operating system: Android 10, upgradable to Android 12 (4a); Android 11, upgradable to Android 12 (4a 5G)
Pixel 5
Google announced the Pixel 5 on September 30, 2020.
Display: 6.0" OLED, 2340×1080 {19.5:9} pixel resolution; the display uses Corning Gorilla Glass 6.
Processor: Qualcomm Snapdragon 765G
Storage: 128 GB
RAM: 8 GB LPDDR4X
Camera: 12.2 MP sensor with f/1.7 lens & 16 MP ultrawide sensor with f/2.2 lens, autofocus with dual-pixel phase detection, optical and electronic image stabilization; 8 MP front camera with f/2.0 lens.
Battery: 4,080 mAh; it is non-removable and features fast charging, wireless charging, Battery Share (reverse wireless charging), and advertised all-day battery life.
Materials: Brushed aluminum body, IP68 water and dust resistance
Colors: Just Black and Sorta Sage
Operating system: Android 11, upgradable to Android 12
Pixel 5a
Google announced the Pixel 5a on August 17, 2021.
Display: 6.34" OLED, 2400×1080 {20:9} pixel resolution; the display uses Corning Gorilla Glass 3. It has a hole punch for the front camera.
Processor: Qualcomm Snapdragon 765G
Storage: 128 GB
RAM: 6 GB LPDDR4X
Camera: 12.2 MP sensor with f/1.7 lens & 16 MP ultrawide sensor with f/2.2 lens, autofocus with dual-pixel phase detection, optical and electronic image stabilization; 8 MP front camera with f/2.0 lens.
Battery: 4,680 mAh; non-removable, with fast charging and advertised all-day battery life
Materials: Brushed aluminum body, IP67 water and dust resistance
Colors: Mostly Black
Operating system: Android 11, upgradable to Android 12
Pixel 6 & 6 Pro
Google announced the Pixel 6 and Pixel 6 Pro on October 19, 2021.
Display: Pixel 6 6.4" OLED, 1080×2400 FHD+ pixel resolution; Pixel 6 Pro 6.7" LTPO OLED, 1440×3120 QHD+ pixel resolution; both have Corning Gorilla Glass Victus.
Processor: Google Tensor
Storage: Pixel 6 128 or 256 GB; Pixel 6 Pro 128, 256, or 512 GB
RAM: 8 GB LPDDR5 (Pixel 6); 12 GB LPDDR5 (Pixel 6 Pro)
Cameras: Pixel 6: rear 50 MP sensor with f/1.85 lens and 12 MP ultrawide sensor with f/2.2 lens; front 8 MP sensor with f/2.0 lens and 84° field of view. Pixel 6 Pro: rear 50 MP sensor with f/1.85 lens, 12 MP ultrawide sensor with f/2.2 lens, and 48 MP telephoto sensor with f/3.5 lens; front 11.1 MP sensor with f/2.2 lens and 94° field of view. Both have laser-detect autofocus and optical image stabilization.
Battery: 4614 mAh (Pixel 6); 5003 mAh (Pixel 6 Pro); both are non-removable and have fast charging, wireless charging and reverse wireless charging
Materials: Aluminum frame, IP68 water and dust resistance
Colors: Pixel 6: Stormy Black, Kinda Coral, and Sorta Seafoam; Pixel 6 Pro: Stormy Black, Cloudy White, and Sorta Sunny
Operating system: Android 12, with a minimum of three years of major OS upgrades and five years of security updates.
Tablets
Pixel C
The Pixel C was announced by Google at an event on September 29, 2015, alongside the Nexus 5X and Nexus 6P phones (among other products). The Pixel C includes a USB-C port and a 3.5 mm headphone jack. The device shipped with Android 6.0.1 Marshmallow, and later received Android 7.x Nougat and Android 8.x Oreo. Google stopped selling the Pixel C in December 2017.
Display: 10.2" display with 2560×1800 pixel resolution
Processor: NVIDIA Tegra X1
Storage: 32 or 64 GB
RAM: 3 GB
Cameras: 8 MP rear camera; 2 MP front camera
Battery: 9000 mAh (non-removable)
Pixel Slate
The Pixel Slate, a 2-in-1 tablet and laptop, was announced by Google in New York City on October 9, 2018, alongside the Pixel 3 and 3 XL. The Pixel Slate includes two USB-C ports but omits the headphone jack. The device runs Chrome OS on Intel Kaby Lake processors, with options ranging from a Celeron at the low end to a Core i7 at the high end. In June 2019, Google announced that it would not develop the product line further, and cancelled two models that were under development.
Laptops
Chromebook Pixel (2013)
Google announced the first generation Chromebook Pixel in a blog post on February 21, 2013. The laptop includes an SD/multi-card reader, Mini DisplayPort, combination headphone/microphone jack, and two USB 2.0 ports. Some of the device's other features include a backlit keyboard, a "fully clickable etched glass touchpad," integrated stereo speakers, and two built-in microphones.
Display: 12.85" display with 2560×1700 pixel resolution
Processor: 3rd generation (Ivy Bridge) Intel Core i5 processor
Storage: 32 GB internal storage and 1 TB Google Drive storage for 3 years
RAM: 4 GB
Battery: 59 Wh
Chromebook Pixel (2015)
On March 11, 2015, Google announced the second generation of the Chromebook Pixel in a blog post. The laptop includes two USB-C ports, two USB 3.0 ports, an SD card slot, and a combination headphone/microphone jack. The device also has a backlit keyboard, a "multi-touch, clickable glass touchpad," built-in stereo speakers, and two built-in microphones, among other features.
Google discontinued the 2015 Chromebook Pixel on August 29, 2016.
Display: 12.85" display with 2560×1700 pixel resolution
Processor: 5th generation (Broadwell) Intel Core i5 or i7 processor
Storage: 32 or 64 GB internal storage and 1 TB Google Drive storage for 3 years
RAM: 8 or 16 GB
Battery: 72 Wh
Pixelbook
On October 4, 2017, Google launched the first generation of the Pixelbook at its Made by Google 2017 event.
Display: 12.3" display with 2400×1600 pixel resolution (235 ppi)
Processor: 7th generation (Kaby Lake) Intel Core i5 or i7 processor
Storage: 128, 256, or 512 GB internal storage
RAM: 8 or 16 GB
Pixelbook Go
On October 15, 2019, Google announced a mid-range version of the Pixelbook, named the Pixelbook Go, at its Made by Google 2019 event.
Display: 13.3" display with 1920×1080 pixel resolution (166 ppi) or "Molecular Display" 3840×2160 pixel resolution (331 ppi)
Processor: 8th generation (Amber Lake) Intel Core m3, i5 or i7 processor
Storage: 64, 128, or 256 GB internal storage
RAM: 8 or 16 GB
Battery: 47 Wh, 56 Wh (Molecular Display)
Accessories
Pixel Buds
At Google's October 2017 hardware event, a set of wireless earbuds was unveiled alongside the Pixel 2 smartphones. The earbuds are designed for phones running Android Marshmallow or higher and work with Google Assistant. In addition to audio playback and answering calls, they support translation in 40 languages through Google Translate. The earbuds can automatically pair with the Pixel 2 with the help of Google Assistant and "Nearby". The Pixel Buds are available in Just Black, Clearly White, and Kinda Blue. The earbuds have a battery capacity of 120 mAh, while the included charging case has a capacity of 620 mAh. The earbuds are priced at $159.
Pixelbook Pen
Alongside the launch of the Pixelbook in October 2017, Google announced the Pixelbook Pen, a stylus to be used with the Pixelbook. It has pressure sensitivity as well as support for Google Assistant. The Pen is powered by a replaceable AAAA battery and is priced at US$99.
Pixel Stand
In October 2018, Google announced the Pixel Stand alongside the Pixel 3 smartphones. In addition to standard 5-watt Qi wireless charging, the Pixel Stand supports 10-watt wireless charging using a proprietary Google technology. It also enables a software mode on the Pixel 3 that allows the phone to act as a smart display, similar to the Google Home Hub.
See also
Android One
Google Nexus
List of Google Play edition devices
List of Google products
References
External links
Google Store
Android (operating system)
Computer-related introductions in 2013
Pixel
Smartphones
Tablet computers
Drive mapping
Drive mapping is how operating systems, such as Microsoft Windows, associate a local drive letter (A through Z) with a shared storage area on another computer (often referred to as a file server) over a network. After a drive has been mapped, a software application on a client computer can read and write files in the shared storage area by accessing that drive letter, just as if it represented a local physical hard disk drive.
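As a minimal sketch of that transparency, the following Python lines read and copy a file through a mapped drive letter exactly as they would on a local disk; the drive letter Z: and the file names are hypothetical examples rather than part of any particular system.

from pathlib import Path

# Z: is assumed to be already mapped to a network share;
# ordinary file I/O then works unchanged.
report = Path(r"Z:\Shared Documents\report.txt")  # hypothetical mapped path
text = report.read_text(encoding="utf-8")         # read transparently over the network
report.with_name("report copy.txt").write_text(text, encoding="utf-8")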
Drive mapping
Mapped drives are storage volumes (whether local hard drives, network drives, or storage on virtual or cloud computing systems) that are represented by names, letters, or numbers, often followed by further path components, the branches of a directory tree, separated by the "\" symbol. Drive mapping is used to locate directories, files, and programs, and is relied on by end users, administrators, and other operators or groups.
Mapped drives are usually assigned a letter of the alphabet after the first few conventionally taken, such as A:\ and B:\ (floppy drives), C:\ (the system drive), and D:\ (usually an optical drive). The mapped drive letter and directory names can then be entered into an address bar or file dialog, as in the following examples:
Example 1:
C:\level\next level\following level
or
C:\BOI60471CL\Shared Documents\Multi-Media Dept
The second path might lead to something like a company's multimedia department's shared documents, which are logically represented by the entire string "C:\BOI60471CL\Shared Documents\Multi-Media Dept".
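As a small sketch of how such a path decomposes into a drive letter and successive directory levels, the following Python lines split the example above using the standard pathlib module; no network access is involved.

from pathlib import PureWindowsPath

# Break the mapped path into its drive letter and successive directory levels.
path = PureWindowsPath(r"C:\BOI60471CL\Shared Documents\Multi-Media Dept")
print(path.drive)  # 'C:'
print(path.parts)  # ('C:\\', 'BOI60471CL', 'Shared Documents', 'Multi-Media Dept')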
Mapping a drive can be complicated on a complex system. Network-mapped drives (on LANs or WANs) are available only while the host computer (file server) is also available, i.e. online; this is a requirement for using drives on a host. On most modern systems, data on mapped drives is protected by permissions, and a user needs the appropriate security authorizations to access it.
Drive mapping over a LAN usually uses the SMB protocol on Windows or the NFS protocol on Unix/Linux; drive mapping over the Internet usually uses the WebDAV protocol, which is supported on Windows, macOS, and Linux.
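To make the SMB case concrete, the following Python sketch creates and then removes a Windows drive mapping by invoking the built-in net use command; the server name \\fileserver and the share name shared are hypothetical, and the share's permissions may additionally require credentials.

import subprocess

# Map Z: to a hypothetical SMB share, persisting the mapping across logons.
subprocess.run(
    ["net", "use", "Z:", r"\\fileserver\shared", "/persistent:yes"],
    check=True,
)

# ... Z:\ can now be used like any local drive ...

# Remove the mapping when it is no longer needed.
subprocess.run(["net", "use", "Z:", "/delete"], check=True)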
See also
Mount (computing)
Drive letter assignment
SUBST - a command on the DOS, IBM OS/2 and Microsoft Windows operating systems used for substituting paths on physical and logical drives as virtual drives.
Disk formatting
References
Windows architecture
Computer peripherals