Microsoft hardware
Microsoft has been selling branded hardware since 1980, and developing devices in-house since 1982, when the Microsoft Hardware division was formed to design a computer mouse for use with Microsoft Word for DOS. Since then, Microsoft has developed computer hardware, gaming hardware and mobile hardware. It also produced drivers and other software for integrating the hardware with Microsoft Windows.
Products
ActiMates toys
Azure Kinect
Digital Sound System 80 speakers
Microsoft Band smartbands
Microsoft Broadband Networking networking products
Microsoft Cordless Phone System phones
Microsoft Fingerprint Reader biometric readers
Microsoft HoloLens smartglasses
Microsoft Keyboard keyboards
Microsoft Kin mobile phones
Microsoft LifeCam webcams
Microsoft LifeChat headsets
Microsoft Lumia smartphones
Microsoft Mach 20 accelerator board
Microsoft Mouse computer mice
Microsoft Response Point business telephone systems
Microsoft RoundTable videoconferencing devices
Microsoft SideWinder game controllers
Microsoft Surface tablet PCs
Microsoft wireless display adapters
Nokia 3-digit series feature phones
Xbox game controllers
Xbox video game consoles
Z-80 SoftCard coprocessor card
Zune portable media players
See also
Computer hardware
Macintosh hardware
References
External links
Windows hardware dev center
The good, bad and ugly history of Microsoft hardware
Fork (file system)
In a computer file system, a fork is a set of data associated with a file-system object. File systems without forks only allow a single set of data for the contents, while file systems with forks allow multiple such contents. Every non-empty file must have at least one fork, often of default type, and depending on the file system, a file may have one or more other associated forks, which in turn may contain primary data integral to the file, or just metadata.
Unlike extended attributes, a similar file system feature which is typically of fixed size, forks can be of variable size, possibly even larger than the file's primary data fork. The size of a file is the sum of the sizes of each fork.
Alternatives
On file systems without forks, one may instead use multiple separate files that are associated with each other, particularly sidecar files for metadata. However, the connection between these files is not automatically preserved by the file system, and must instead be handled by each program that works on files. Another alternative is a container file, which stores additional data within a given file format, or an archive file, which allows storing several files and metadata within a file (within a single fork). This requires that programs process the container file or archive file, rather than the file system handling forks. These alternatives require additional work by programs using the data, but benefit from portability to file systems that do not support forks.
Implementations
Apple
File system forks are associated with Apple's Hierarchical File System (HFS). Apple's HFS, and the original Apple Macintosh file system MFS, allowed a file system object to have two kinds of forks: a data fork and a resource fork.
The resource fork was designed to store non-compiled data that would be used by the system's graphical user interface (GUI), such as localizable text strings, a file's icon to be used by the Finder, or the menus and dialog boxes associated with an application. However, the feature was very flexible, so additional uses were found, such as splitting a word processing document into content and presentation and storing each part in separate resources. Because compiled software code was also stored in resources, applications often consisted of just a resource fork and no data fork.
One of HFS+'s most obscure features is that a file may have an arbitrary number of custom "named forks" in addition to the traditional data and resource forks. This feature has gone largely unused, as Apple never added support for it under Mac OS 8.1 through Mac OS X 10.3.9. Beginning with Mac OS X 10.4, a partial implementation was added to support Apple's extended inline attributes.
Until Mac OS X v10.4, users running the Unix command line utilities (such as tar) included with Mac OS X would risk data loss, as the utilities were not updated to handle the resource forks of files.
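On modern macOS systems, a file's resource fork (when present) can still be read through the special path suffix /..namedfork/rsrc. The following minimal Python sketch illustrates this; the file name is hypothetical, and the fork only exists on files that actually carry resource data:

import os

path = "OldDocument"                              # hypothetical file carrying a resource fork
rsrc = os.path.join(path, "..namedfork", "rsrc")  # special path exposing the resource fork

with open(rsrc, "rb") as fork:
    data = fork.read()
print(len(data), "bytes in the resource fork")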
Novell
Starting in 1985, Novell NetWare File System (NWFS), and its successor Novell Storage Services (NSS), were designed from the ground up to use a variety of methods to store a file's metadata. Some metadata resides in Novell Directory Services (NDS), some is stored in the directory structure on the disk, and some is stored in, as Novell terms it, 'multiple data streams' with the file itself. Multiple data streams also allow Macintosh clients to attach to and use NetWare servers.
Microsoft
NTFS, the file system introduced with Windows NT 3.1, supports file system forks known as alternate data streams (ADS). ReFS, a new file system introduced with Windows Server 2012, originally did not support ADS, but support for ADS with lengths of up to 128 KB was added in Windows 8.1 64-bit and Windows Server 2012 R2.
ADS was originally intended to add compatibility with existing operating systems that support forks. A computer program may be directed to open an ADS by specifying the name of the ADS after a colon (:) following the file path. In spite of this support, most programs, including Windows Explorer and (before Windows Vista) the dir command, ignore ADS. Windows Explorer copies ADS and warns when the target file system does not support them, but it only calculates the main stream's size and does not list the streams of a file or folder. Since Windows Vista, the dir command supports showing ADS (with the /R switch), and Windows PowerShell v3.0 and later supports manipulating ADS.
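For example, the following minimal Python sketch creates and reads back an alternate data stream simply by appending the stream name after a colon; the file name (notes.txt) and stream name (comments) are illustrative, and the code assumes it is running on Windows against an NTFS volume:

path = "notes.txt"

with open(path, "w") as f:                  # main, unnamed data stream
    f.write("visible contents")

with open(path + ":comments", "w") as f:    # alternate data stream named "comments"
    f.write("hidden annotation")

with open(path + ":comments") as f:
    print(f.read())                         # prints "hidden annotation"

# A plain directory listing, or os.path.getsize(path), reports only the main stream.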
Uses
Windows 2000 uses ADS to store thumbnails in image files, and to store summary information (such as title and author) in any file, without changing the main stream. With Windows XP, Microsoft realized that ADS are susceptible to loss when the files containing them are moved off NTFS volumes, so Windows XP stores summary information in the main stream whenever the file format supports it. Windows Vista discontinued support for adding summary information altogether, as Microsoft decided it was too sensitive for ADS to handle. The use of ADS for other purposes did not stop, however: Service Pack 2 for Windows XP introduced the Attachment Execution Service, which stores details on the origin of downloaded files in an ADS called the zone identifier, in an effort to protect users from downloaded files that may present a risk. Internet Explorer and Windows 8 extended this function through SmartScreen. Internet Explorer also uses ADS to store favicons in Internet shortcut files.
Sun
Solaris version 9 and later allows files to have forks. Forks are called extended attributes in Solaris, although they are not within the usual meaning of "extended attribute". The maximum size of a Solaris-type extended attribute is the same as the maximum size of a file, and they are read and written in the same fashion as files. Internally, they are actually stored and accessed like normal files, so their ownership and permissions can differ from those of the parent file. Sub-directories are administratively disabled, so their names cannot contain "/" characters.
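A minimal Python sketch of this interface follows; it assumes a Solaris system, and the file name, attribute name, and the fallback value used for the Solaris-specific O_XATTR open flag are assumptions rather than details from the article:

import os

O_XATTR = getattr(os, "O_XATTR", 0x4000)  # Solaris-only flag; the numeric fallback is an assumption

fd = os.open("report.txt", os.O_RDONLY)                      # open the parent file
attr = os.open("origin", os.O_RDONLY | O_XATTR, dir_fd=fd)   # open its extended attribute like a file
print(os.read(attr, 4096).decode())
os.close(attr)
os.close(fd)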
Extended attributes in Network File System Version 4 are similar to Solaris-style extended attributes.
Possible security and data loss risks
When a file system supports different forks, the applications should be aware of them, or security risks can arise. Allowing legacy software to access data without appropriate shims in place is the primary culprit for such problems.
If the different system utilities (disk explorer, antivirus software, archivers, and so on), are not aware of the different forks, the following problems can arise:
The user may never know of the presence of any alternate fork, nor the total size of the file, only that of the main data fork.
Computer viruses can hide in alternate forks on Windows and never get detected if the antivirus software is not aware of forks.
Data can be lost when files are sent via fork-unaware channels such as e-mail or file systems without fork support, when files are copied between fork-capable file systems by a program that does not support forks, or when files are compressed with software that does not support forks.
See also
Extended file attributes
References
External links
MSDN Library: File Streams
Alternate Data Streams
Alternate Data Streams in Windows
NTFS Alternate Streams
Computer file systems
Lions' Commentary on UNIX 6th Edition, with Source Code
Lions' Commentary on UNIX 6th Edition, with Source Code (1976), by John Lions, commonly referred to as the Lions Book, contains analytical commentary on the source code of the 6th Edition Unix operating system's "resident nucleus" (i.e., kernel), together with a copy of that source code, formatted and indexed by Lions, obtained from the authors at AT&T Bell Labs.
The book is itself an exemplar of the early success of UNIX as portable code for a publishing platform: Lions' work was typeset using UNIX tools, on university systems running ported UNIX code similar to that which the book documented (see page vi).
It was commonly held to be the most copied book in computer science. Despite its age, "the Lions Book" is still considered an excellent commentary on simple, high quality code.
Lions' work was most recently reprinted in 1996 by Peer-To-Peer Communications, and has been circulated, recreated or reconstructed in a number of media by other parties; see, for example, the webpage of Greg Lehey.
Synopsis
UNIX Operating System Source Code Level Six is the kernel source code, lightly edited by Lions to better separate the functionality — system initialization and process management, interrupts and system calls, basic I/O, file systems and pipes and character devices. All procedures and symbols are listed alphabetically with a cross reference.
The code as presented will run on a PDP-11/40 with RK-05 disk drive, LP-11 line printer interface, PCL-11 paper tape writer and KL-11 terminal interface, or a suitable PDP-11 emulator, such as SIMH.
A Commentary on the UNIX Operating System starts with notes on UNIX and other useful documentation (the UNIX manual pages, DEC hardware manuals and so on), a section on the architecture of the PDP-11 and a chapter on how to read C programs. The source commentary follows, divided into the same sections as the code. The book ends with suggested exercises for the student.
As Lions explains, this commentary supplements the comments in the source. It is possible to understand the code without the extra commentary, and the reader is advised to do so and only read the notes as needed. The commentary also remarks on how the code might be improved.
History
The source code and commentary were printed in book form in 1977, after first being assembled in May 1976, as a set of lecture notes for Lions's computer science courses (6.602B and 6.657G, mentioned in the introduction of the book) at the University of New South Wales.
UNSW had obtained the UNIX source code in 1975, in response to a 1974 query to Dennis Ritchie at Bell Labs. Bell Labs was a subsidiary of AT&T and, owing to AT&T's position as a regulated monopoly providing national telecommunications infrastructure, was not permitted to conduct business in other fields and so was not at liberty to profit from the sale of software; UNIX was nevertheless being provided under license by another AT&T subsidiary, Western Electric, at least by 1977.
UNIX News, the March 1977 newsletter of the UNIX users' group (which later became USENIX), announced the availability of the book to UNIX licensees. The newsletter's own strongly worded circulation restriction notice could, as a matter of civil contract, only have applied within the framework of existing licenses held with the organisations mentioned, not to non-licensees; the newsletter displays no evidence of governmental authority of the kind that might allow general suppression of circulation, such as a national-security Top Secret classification.
Difficulty in keeping pace with the book's popularity meant that by 1978 it was available only from AT&T Bell Labs.
For many years, the Lions book was the only UNIX kernel documentation available outside Bell Labs. Although the license of the 6th Edition allowed classroom use of the source code, the license of the 7th Edition specifically excluded such use, so the book, based on the more liberally licensed version, subsequently spread widely through photocopied reproductions. Copies were arguably made under various justifications, including (but not limited to) the generous educational licensing terms afforded the publishing institution by the source code's owner, as well as various copyright exemptions protecting discussion of mathematical work, though in the shadow of increasing political pressure to erode such rights as the technological means to copy, and even self-publish, works became cheaper, more efficient and more widespread. UNIX itself was one of these means, having been a successful innovation financed at Bell Labs to facilitate in-house publishing of technical manuals.
When AT&T announced UNIX Version 7 at USENIX in June 1979, the academic/research license no longer automatically permitted classroom use.
However, thousands of computer science students around the world spread photocopies. As they were not being taught it in class, they would sometimes meet after hours to discuss the book. Many pioneers of UNIX and open source had a treasured multiple-generation photocopy.
Other follow-on effects of the license change included Andrew S. Tanenbaum creating Minix. As Tanenbaum wrote in Operating Systems (1987):
Various UNIX people, particularly Peter H. Salus, Dennis Ritchie and Berny Goodheart, lobbied Unix's various owners (AT&T, Novell, the Santa Cruz Operation) for many years to allow the book to be published officially. In 1996, the Santa Cruz Operation finally authorised the release of the twenty-year-old 6th Edition source code (along with the source code of other versions of "Ancient UNIX"), and the full code plus the 1977 version of the commentary was published by Peer-To-Peer Communications. The reissue includes commentary from Michael Tilson (SCO), Peter Salus, Dennis Ritchie, Ken Thompson, Peter Collinson, Greg Rose, Mike O'Dell, Berny Goodheart and Peter Reintjes.
"You are not expected to understand this"
The infamous program comment "You are not expected to understand this" occurs on line 2238 of the source code (Lions' Commentary, p. 22) at the end of a comment explaining the process exchange mechanism. It refers to line 325 of the file slp.c. The source code reads:
/*
 * If the new process paused because it was
 * swapped out, set the stack level to the last call
 * to savu(u_ssav). This means that the return
 * which is executed immediately after the call to aretu
 * actually returns from the last routine which did
 * the savu.
 *
 * You are not expected to understand this.
 */
if(rp->p_flag&SSWAP) {
        rp->p_flag =& ~SSWAP;
        aretu(u.u_ssav);
}
A major reason why this piece of code was hard to understand was that it depended on a quirk of the way the C-compiler for the PDP-11 saved registers in procedure calls. This code failed when ported to other machines and had to be redesigned in Version 7 UNIX. Dennis Ritchie later explained the meaning of this remark:
See also
xv6, a modern reimplementation of Sixth Edition UNIX in ANSI C for multiprocessor x86 and RISC-V systems.
References
Further reading
Andrew S. Tanenbaum, Operating Systems: Design and Implementation (Prentice Hall, June 1987)
Code Critic (Rachel Chalmers, Salon, 30 November 1999)
The Daemon, The GNU and the Penguin - Ch. 6 (Peter H. Salus, 1979)
Brian W. Kernighan and Dennis Ritchie, The C Programming Language
1976 non-fiction books
Unix books
Computer programming books
Computer science books
Microsoft ScanDisk
Microsoft ScanDisk (also called ScanDisk) is a diagnostic utility program included in MS-DOS and Windows 9x. It checks and repairs file system errors on a disk drive, and under Windows 9x it can also run automatically when the system starts after an improper shutdown.
Overview
The program was first introduced in MS-DOS 6.2 and succeeded its simpler predecessor, CHKDSK. It included a more user-friendly interface than CHKDSK, more configuration options, and the ability to detect and (if possible) recover from physical errors on the disk. This replaced and improved upon the limited ability offered by the MS-DOS recover utility. Unlike CHKDSK, ScanDisk would also repair crosslinked files.
In Windows 95 onwards, ScanDisk also had a graphical user interface, although the text-based user interface continued to be available for use in single-tasking ("DOS") mode.
However, ScanDisk cannot check NTFS disk drives, and is therefore unavailable on NT-based versions of Windows (including Windows 2000, Windows XP, etc.); for that purpose, a newer version of CHKDSK is provided instead.
On Unix-like systems, there are tools like fsck_msdosfs and dosfsck to do the same task.
See also
fsck
List of DOS commands
References
Further reading
External links
Command-Line Parameters for the Scandisk Tool
External DOS commands
Hard disk software
Minerva (QDOS reimplementation)
Written by Laurence Reeves in England, Minerva was a reimplementation of Sinclair QDOS, the built-in operating system of the Sinclair QL line of personal computers. Minerva incorporated many bug fixes and enhancements to both QDOS and the SuperBASIC programming language. Later versions also provided the ability to multi-task several instances of the SuperBASIC interpreter, something not supported by QDOS.
Minerva was distributed as a ROM chip on a daughterboard which replaced the QL's original ROM chips. A Minerva Mk. II daughterboard was also produced which also incorporated an I²C interface and non-volatile real-time clock. As of version 1.89, the Minerva source code is licensed under the GNU General Public License.
Other reimplementations of QDOS include SMS2 and SMSQ/E.
External links
Laurence Reeves' page, includes complete Minerva source code
TF Services Minerva page
Discontinued operating systems
Free software operating systems
Sinclair Research
Philips Computers
Philips Telecommunicatie en Informatie Systemen (Philips Computers) was a subsidiary of Philips that designed and manufactured personal computers. Philips Computers was active from 1963 through 1992. Before that, Philips produced three computers between 1953 and 1956, all for internal use: PETER, STEVIN, and PASCAL.
Philips Computers was mostly known for its pioneering work in optical devices (through a separate subsidiary, LMSI). Philips computers were also sold under the Magnavox brand in North America. Two instances of Philips Computers products sold under other brands are known to date.
Philips computers were sold together with Philips monitors. Philips had far more success selling its monitors than its computers, and Philips monitors continue to be designed, produced and sold globally today. Philips has also had moderate success selling peripherals such as mice, keyboards and optical devices, and it has sold, and still sells, computer media such as diskettes and optical media (CDs).
Philips also developed the CD-i standard, but it was a commercial failure. Another experimental product, the Philips :YES, based on Intel's 80188, also failed in the market.
Philips PCs were mostly equipped with motherboards designed by Philips Home Electronics in Montreal, Canada.
In the late 1990s, Philips sold Pentium-based PCs built from generic components and cases; these were no longer proprietarily designed and produced.
Philips had a subsidiary that sold PCs under the Vendex HeadStart brand. These systems were actively marketed in certain markets through Vendex, and were on display in the now-defunct Dutch department store chain Vroom & Dreesmann. Some HeadStart PCs were manufactured in South Korea by Samsung, with monitors made by Daewoo.
In the late 1990s, Philips briefly sold a handheld PC, the Velo.
Products
P 20 series Z80
Philips P 2000 C
Philips P 2000 M
Philips P 2000 T
Philips P 2010
Philips P 2012
Philips P 2015
P 21 series 8088
Philips P 2120 (desktop) (1990s green design)
P 22 series 286
Philips P 2230 (desktop) (1990s green design)
P 31 series 8088
Philips P 3100 (desktop)
Philips P 3102 (desktop)
Philips P 3103 (desktop) (1980s red design)
Philips P 3105 (desktop)
Philips P 3120 (desktop)
P 32 series 286
Philips P 3202 (desktop) (1980s red design)
Philips P 3204 (desktop) (1980s red design)
Philips P 3230 (desktop) (1980s red design)
Philips P 3238 (desktop) (1990s green design)
P 33 series 386
Philips P 3302 (not yet identified)
Philips P 3345 (desktop) (1990s green design)
Philips P 3348 (desktop) (1990s green design)
Philips P 3355 (desktop) (1990s green design)
Philips P 3360 (desktop) (1990s green design)
Philips P 3361 (desktop) (1990s green design)
Philips P 3370 (tower) (1990s green design)
Philips P 3371 (not yet identified)
P 34 series 486
Philips P 3460 (midi tower) (1990s green design)
Philips P 3464 (not yet identified)
Philips P 3470 (not yet identified)
Philips P 3471 (not yet identified)
NMS series
Philips NMS 8250
Philips NMS 9100 8088 (desktop) (1980s red design)
Philips NMS 9100 AT 80286 (desktop) (1980s red design)
Philips NMS 9105 8088 (desktop) (1980s red design)
Philips NMS 9110 8088 (desktop) (1980s red design)
Philips NMS 9111 8088 (desktop) (1980s red design)
Philips NMS 9115 8088 (desktop) (1980s red design)
Philips NMS 9116 8088 (desktop) (1980s red design)
Philips NMS TC 100 8088 (desktop) (1990s green design)
PCD series (with CD-ROM)
Philips PCD 102 (8088 desktop) (no CD-ROM)
Philips PCD 103 (8088 desktop) (no CD-ROM)
Philips PCD 200 (286 desktop) (no CD-ROM)
Philips PCD 203 (286 desktop) (no CD-ROM)
Philips PCD 204 (286 desktop)
Philips PCD 304 (386 desktop)
Philips PCD 308 (386 desktop)
Philips PCD 318 (386 desktop)
PCL series (notebook computers)
Philips PCL 101 (8086 grey notebook)
Philips PCL 203 (80286 grey notebook)
Philips PCL 304 (80386 dark gray notebook)
Philips PCL 320 (80386 dark grey notebook)
Magnavox
Magnavox M16034GY System 160
Magnavox M1101GY System 110
Magnavox M38044GY System 380
Magnavox M58044GY System 580SX
Magnavox Headstart 286 (286 1990s grey design desktop)
Magnavox Headstart 300 (286 1980s grey design desktop)
Magnavox Headstart 300 CD (286 1980s grey design desktop)
Magnavox Headstart 500 (386 1980s grey design desktop)
Magnavox Headstart 500 CD (386 1980s grey design desktop)
Magnavox Headstart 386 SX (386 1990s grey design desktop)
Magnavox Headstart 386 SX-16 (386 1990s grey design desktop)
Magnavox Headstart 386 SX/20 (386 1990s grey design desktop)
Magnavox Headstart 386/33 (386) (not yet identified)
Magnavox Headstart 486 SX (486 1990s grey design desktop)
Magnavox Headstart 486 DX (486 1990s grey design desktop)
Magnavox Magnum GL (not yet identified)
Magnavox Magnum SX (386sx/16 turbo)
Magnavox MaxStation 286 (not yet identified)
Magnavox MaxStation 386-SX (not yet identified)
Magnavox MaxStation 386SX/16 (not yet identified)
Magnavox MaxStation 380 (not yet identified)
Magnavox MaxStation 480 (not yet identified)
Magnavox Metalis SX/16 (386) (dark grey notebook)
Magnavox Metalis SX/20 (386) (dark grey notebook)
Magnavox Professional 386 SX-20 (386 1990s grey design desktop)
Magnavox Professional 386 DX-33 (386 1990s grey design desktop)
Philips Internal Reference (not sold)
Avenger (sold as: MaxStation 286, Magnum GL, Headstart 300)
P 3212 (sold as: MaxStation 480, Headstart 380)
P 3239 (sold as: Headstart/MaxStation/Magnum/Professional 1200, 48CD, 1600, 64CD, P160, SR16CD)
P 3349 (sold as: Headstart/MaxStation/Magnum/Professional SX20, 80CD)
P 3345 (sold as: Magnavox/MaxStation 386SX, Magnum SX, Headstart Series 500)
P 3371 (sold as: Headstart/MaxStation/Magnum/Professional 3300)
Third Parties
Wang LE 150 (8088 based on P 3105 desktop)
Argent Technologies 286 (desktop, not yet confirmed)
Argent Technologies 386 SX (desktop, not yet confirmed)
Argent Technologies 386 DX (desktop, not yet confirmed)
Argent Technologies 486 DX 33 (486 based on P 3355 desktop)
Argent Technologies 486 MT (mini tower, not yet confirmed)
Vendex HeadStart
HeadStart Turbo 888 XT, 8088 desktop.
HeadStart Explorer, 8088 Amiga 500 style.
HeadStart Plus, 8088 desktop.
HeadStart LX-40, 8088 desktop.
HeadStart LX-CD, 8088 desktop with CD-ROM.
HeadStart II, 8088 desktop.
HeadStart III, 286 desktop (made in Korea).
HeadStart III CD, 286 desktop with CD-ROM.
HeadStart Pro, 286 & 386.
Monitors – Magnavox / Philips
These monitors were sold / shipped with Magnavox / Philips PCs:
20CM64: 20" VGA color CRT
3CM9809: 14" VGA color CRT
7BM623: 12" TTL CRT
7BM749: 14" VGA monochrome CRT
7CM321: 14" VGA color CRT
8CM643: 12" TTL CRT
9CM053: 14" CGA/EGA color CRT
9CM062: 14" VGA color CRT
9CM073: 14" CGA/EGA color CRT
9CM082: 14" VGA color CRT
BM7622: 12" TTL CRT
CM2080: 14" color CRT for Macintosh
CM2089: 14" VGA color CRT
CM2099: 14" VGA color CRT
CM4015: 15" VGA color CRT
CM4017: 17" VGA color CRT
CM8502: 12" TTL CRT
CM8552: 12" TTL CRT
CM8762: 12" TTL CRT
CM8833: 12" TTL CRT
CM9032: 14" VGA color CRT
CM9043: 12" TTL CRT
Monitors – Vendex
These monitors were sold / shipped with Vendex PCs:
M-888-C: CGA CRT: shipped with the HeadStart LX-40.
M-888-VC: VGA color CRT: shipped with the HeadStart III
M-888-MV: VGA monochrome CRT: shipped with the HeadStart III
M-1031-CVGA: VGA CRT color: shipped with the HeadStart III
Peripherals
Philips P 2813 keyboard: shipped with PCs with 8088 through 386 processors (also shipped with Magnavox logo)
Philips P 2814 keyboard: shipped with PCs with 286 and up processors (also shipped with Magnavox logo)
Philips serial mouse, FCC ID FSU4VVGMZAS: shipped with PCs with 8088 through 386 processors (not clear whether also shipped with Magnavox logo)
Philips NMS 1140 mouse: shipped with MSX systems
Philips NMS 1445 mouse: shipped with PCs with 8088 processors (not clear whether also shipped with Magnavox logo)
References
Philips
Lists of computers
Lists of computers cover computers, or programmable machines, by period, type, vendor and region.
Early computers
List of vacuum tube computers
List of transistorized computers
List of early microcomputers
List of computers with on-board BASIC
List of computers running CP/M
More recent computers
List of home computers
List of home computers by video hardware
List of fastest computers
Lists of microcomputers
Lists of mobile computers
List of fictional computers
Vendor-specific
HP business desktops
List of Macintosh models grouped by CPU type
List of Macintosh models by case type
List of TRS-80 and Tandy-branded computers
List of VAX computers
Regional
List of British computers
List of computer systems from Croatia
List of computer systems from Serbia
List of computer systems from Slovenia
List of computer systems from the Socialist Federal Republic of Yugoslavia
List of Soviet computer systems
See also
Graphical user interface
The graphical user interface (GUI) is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.
The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones and smaller household, office and industrial controls. The term GUI tends not to be applied to other, lower-display-resolution types of interfaces, such as video games (where the head-up display (HUD) is preferred), or to interfaces that do not use flat screens, such as volumetric displays, because the term is restricted to the scope of two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.
User interface and interaction design
Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks.
The visible graphical interface features of an application are sometimes referred to as chrome or GUI (pronounced gooey). Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows flexible structures in which the interface is independent of and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a different skin at will, and eases the designer's work to change the interface as user needs evolve. Good user interface design relates to users more, and to system architecture less.
Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message, or drawing. Smaller ones usually act as a user-input tool.
A GUI may be designed for the requirements of a vertical market as application-specific graphical user interfaces. Examples include automated teller machines (ATM), point of sale (POS) touchscreens at restaurants, self-service checkouts used in a retail store, airline self-ticket and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS).
Cell phones and handheld game systems also employ application specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations.
Examples
Components
A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.
A series of elements conforming to a visual language has evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, text fields, canvases, menus, pointer (WIMP) paradigm, especially in personal computers.
The WIMP style of interaction uses a virtual input device to represent the position of a pointing device's interface, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices, graphics hardware, and positioning of the pointer.
In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment in which the display represents a desktop, on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism.
Entries may appear in a list to make space for text and details, or in a grid for compactness and larger icons with little space underneath for text. Variations in between exist, such as a list with multiple columns of items and a grid of items with rows of text extending sideways from the icon.
Multi-row and multi-column layouts commonly found on the web are "shelf" and "waterfall". The former is found on image search engines, where images appear with a fixed height but variable length, and is typically implemented with the CSS property and parameter display: inline-block;. A waterfall layout found on Imgur and Tweetdeck with fixed width but variable height per item is usually implemented by specifying column-width:.
Post-WIMP interface
Smaller mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP user interfaces.
As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse.
Interaction
Human interface devices, for the efficient interaction with a GUI include a computer keyboard, especially used together with keyboard shortcuts, pointing devices for the cursor (or rather pointer) control: mouse, pointing stick, touchpad, trackball, joystick, virtual keyboards, and head-up displays (translucent information devices at the eye level).
There are also actions performed by programs that affect the GUI. For example, there are components like inotify or D-Bus to facilitate communication between computer programs.
History
Early efforts
Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in realtime with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. (A 1968 demonstration of NLS became known as "The Mother of All Demos.") In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers at Xerox PARC and specifically Alan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for the Smalltalk programming language, which ran on the Xerox Alto computer, released in 1973. Most modern general-purpose GUIs are derived from this system.
The Xerox PARC user interface consisted of graphical elements such as windows, menus, radio buttons, and check boxes. The concept of icons was later introduced by David Canfield Smith, who had written a thesis on the subject under the guidance of Kay. The PARC user interface employs a pointing device along with a keyboard. These aspects can be emphasized by using the alternative term and acronym for windows, icons, menus, pointing device (WIMP). This effort culminated in the 1973 Xerox Alto, the first computer with a GUI, though the system never reached commercial production.
The first commercially available computer with a GUI was the 1979 PERQ workstation, manufactured by Three Rivers Computer Corporation. Its design was heavily influenced by the work at Xerox PARC. In 1981, Xerox eventually commercialized the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as the Xerox Star. These early systems spurred many other GUI efforts, including Lisp machines by Symbolics and other manufacturers, the Apple Lisa (which introduced the concept of the menu bar and window controls) in 1983, the Apple Macintosh 128K in 1984, and the Atari ST (with Digital Research's GEM) and Commodore Amiga in 1985. Visi On was released in 1983 for IBM PC compatible computers, but was never popular due to its high hardware demands. Nevertheless, it was a crucial influence on the contemporary development of Microsoft Windows.
Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the user interfaces used in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in various desktop environments for Unix-like operating systems, such as macOS and Linux. Thus most current GUIs have largely common idioms.
Popularization
GUIs were a hot topic in the early 1980s. The Apple Lisa was released in 1983, and various windowing systems existed for DOS operating systems (including PC GEM and PC/GEOS). Individual applications for many platforms presented their own GUI variants. Despite the GUIs advantages, many reviewers questioned the value of the entire concept, citing hardware limits, and problems in finding compatible software.
In 1984, Apple released a television commercial which introduced the Apple Macintosh during the telecast of Super Bowl XVIII by CBS, with allusions to George Orwell's noted novel Nineteen Eighty-Four. The goal of the commercial was to make people think about computers, identifying the user-friendly interface as a personal computer which departed from prior business-oriented systems, and becoming a signature representation of Apple products.
Windows 95, accompanied by an extensive marketing campaign, was a major success in the marketplace at launch and shortly became the most popular desktop operating system.
In 2007, with the iPhone and later in 2010 with the introduction of the iPad, Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices were considered to be milestones in the development of mobile devices.
The GUIs familiar to most people as of the mid-late 2010s are Microsoft Windows, macOS, and the X Window System interfaces for desktop and laptop computers, and Android, Apple's iOS, Symbian, BlackBerry OS, Windows Phone/Windows 10 Mobile, Tizen, WebOS, and Firefox OS for handheld (smartphone) devices.
Comparison to other interfaces
Command-line interfaces
Since the commands available in command line interfaces can be many, complex operations can be performed using a short sequence of words and symbols. Custom functions may be used to facilitate access to frequent actions.
Command-line interfaces are more lightweight, as they only recall information necessary for a task; for example, no preview thumbnails or graphical rendering of web pages. This allows greater efficiency and productivity once many commands are learned. But reaching this level takes some time because the command words may not be easily discoverable or mnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. However, windows, icons, menus, pointer (WIMP) interfaces present users with many widgets that represent and can trigger some of the system's available commands.
GUIs can become quite hard to use when dialogs are buried deep in a system or moved about to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script.
WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen are redefined all the time. Command-line interfaces use modes only in limited forms, such as for current directory and environment variables.
Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention. The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm, or File System Visualizer.
GUI wrappers
Graphical user interface (GUI) wrappers find a way around the command-line interface versions (CLI) of (typically) Linux and Unix-like software applications and their text-based user interfaces or typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steep learning curve of the command-line, which requires commands to be typed on the keyboard. By starting a GUI wrapper, users can intuitively interact with, start, stop, and change its working parameters, through graphical icons and visual indicators of a desktop environment, for example.
Applications may also provide both interfaces, and when they do the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The latter used to be implemented first because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program in a shell script.
Three-dimensional graphical user interfaces (3D GUIs)
Several attempts have been made to create a multi-user three-dimensional environment or 3D GUI, including Sun's Project Looking Glass, Metisse, which was similar to Project Looking Glass, BumpTop, where users can manipulate documents and windows with realistic movement and physics as if they were physical documents, and the Croquet Project, which moved to the Open Cobalt and Open Croquet efforts.
The zooming user interface (ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. It is a logical advance on the GUI, blending some three-dimensional movement with two-dimensional or 2.5D vector objects. In 2006, Hillcrest Labs introduced the first zooming user interface for television.
For typical computer displays, three-dimensional is a misnomer—their displays are two-dimensional, for example, Metisse characterized itself as a "2.5-dimensional" UI. Semantically, however, most graphical user interfaces use three dimensions. With height and width, they offer a third dimension of layering or stacking screen elements over one another. This may be represented visually on screen through an illusionary transparent effect, which offers the advantage that information in background windows may still be read, if not interacted with. Or the environment may simply hide the background information, possibly making the distinction apparent by drawing a drop shadow effect over it.
Some environments use the methods of 3D graphics to project virtual three-dimensional user interface objects onto the screen. These are often shown in use in science fiction films (see below for examples). As the processing power of computer graphics hardware increases, this becomes less of an obstacle to a smooth user experience.
Three-dimensional graphics are currently mostly used in computer games, art, and computer-aided design (CAD). A three-dimensional computing environment can also be useful in other uses, like molecular graphics, aircraft design and Phase Equilibrium Calculations/Design of unit operations and chemical processes.
Technologies
The use of three-dimensional graphics has become increasingly common in mainstream operating systems, from creating attractive interfaces, termed eye candy, to functional purposes only possible using three dimensions. For example, user switching may be represented by rotating a cube whose faces are each user's workspace, and window management may be represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). In both cases, the operating system transforms windows on-the-fly while continuing to update the content of those windows.
Interfaces for the X Window System have also implemented advanced three-dimensional user interfaces through compositing window managers such as Beryl, Compiz and KWin using the AIGLX or XGL architectures, allowing the use of OpenGL to animate user interactions with the desktop.
In science fiction
Three-dimensional GUIs appeared in science fiction literature and films before they were technically feasible or in common use. For example, the 1993 American film Jurassic Park features Silicon Graphics' three-dimensional file manager File System Navigator, a real-life file manager for Unix operating systems. The film Minority Report has scenes of police officers using specialized 3D data systems. In prose fiction, three-dimensional user interfaces have been portrayed as immersive environments, like William Gibson's Cyberspace or Neal Stephenson's Metaverse. Many futuristic imaginings of user interfaces rely heavily on object-oriented user interface (OOUI) style and especially object-oriented graphical user interface (OOGUI) style.
See also
Apple Computer, Inc. v. Microsoft Corp.
Console user interface
Computer icon
Distinguishable interfaces
General Graphics Interface (software project)
GUI tree
Human factors and ergonomics
Look and feel
Natural user interface
Ncurses
Object-oriented user interface
Organic user interface
Rich web application
Skeuomorph
Skin (computing)
Theme (computing)
Text entry interface
User interface design
Vector-based graphical user interface
Notes
References
External links
Evolution of Graphical User Interface in last 50 years by Raj Lal
The men who really invented the GUI by Clive Akass
Graphical User Interface Gallery, screenshots of various GUIs
Marcin Wichary's GUIdebook, Graphical User Interface gallery: over 5500 screenshots of GUI, application and icon history
The Real History of the GUI by Mike Tuck
In The Beginning Was The Command Line by Neal Stephenson
3D Graphical User Interfaces (PDF) by Farid BenHajji and Erik Dybner, Department of Computer and Systems Sciences, Stockholm University
Rayed
Topological Analysis of the Gibbs Energy Function (Liquid-Liquid Equilibrium Correlation Data). Including a Thermodynamic Review and a Graphical User Interface (GUI) for Surfaces/Tie-lines/Hessian matrix analysis - University of Alicante (Reyes-Labarta et al. 2015-18)
Software architecture
American inventions
3D GUIs
computer information
Incompatible Timesharing System
Incompatible Timesharing System (ITS) is a time-sharing operating system developed principally by the MIT Artificial Intelligence Laboratory, with help from Project MAC. The name is the jocular complement of the MIT Compatible Time-Sharing System (CTSS).
ITS, and the software developed on it, were technically and culturally influential far beyond their core user community. Remote "guest" or "tourist" access was easily available via the early ARPAnet, allowing many interested parties to informally try out features of the operating system and application programs. The wide-open ITS philosophy and collaborative online community were a major influence on the hacker culture, as described in Steven Levy's book Hackers, and were the direct forerunners of the free and open-source software, open-design, and Wiki movements.
History
ITS development was initiated in the late 1960s by those (the majority of the MIT AI Lab staff at that time) who disagreed with the direction taken by Project MAC's Multics project (which had started in the mid-1960s), particularly such decisions as the inclusion of powerful system security. The name was chosen by Tom Knight as a joke on the name of the earliest MIT time-sharing operating system, the Compatible Time-Sharing System, which dated from the early 1960s.
By simplifying their system compared to Multics, ITS's authors were able to quickly produce a functional operating system for their lab. ITS was written in assembly language, originally for the Digital Equipment Corporation PDP-6 computer, but the majority of ITS development and use was on the later, largely compatible, PDP-10.
Although not used as intensively after about 1986, ITS continued to operate on original hardware at MIT until 1990, and then until 1995 at Stacken Computer Club in Sweden. Today, some ITS implementations continue to be remotely accessible, via emulation of PDP-10 hardware running on modern, low-cost computers supported by interested hackers.
Significant technical features
ITS introduced many then-new features:
The first device-independent graphics terminal output; programs generated generic commands to control screen content, which the system automatically translated into the appropriate character sequences for the particular type of terminal operated by the user.
A general mechanism for implementing virtual devices in software running in user processes (which were called "jobs" in ITS).
Using the virtual-device mechanism, ITS provided transparent inter-machine filesystem access. The ITS machines were all connected to the ARPAnet, and a user on one machine could perform the same operations with files on other ITS machines as if they were local files.
Sophisticated process management; user processes were organized in a tree, and a superior process could control a large number of inferior processes. Any inferior process could be frozen at any point in its operation, and its state (including contents of the registers) examined; the process could then be resumed transparently.
An advanced software interrupt facility that allowed user processes to operate asynchronously, using complex interrupt handling mechanisms.
PCLSRing, a mechanism providing what appeared (to user processes) to be quasi-atomic, safely-interruptible system calls. No process could ever observe any process (including itself) in the middle of executing any system call.
In support of the AI Lab's robotics work, ITS also supported simultaneous real-time and time-sharing operation.
User environment
The environment seen by ITS users was philosophically significantly different from that provided by most operating systems at the time.
Initially there were no passwords, and a user could work on ITS without logging on. Logging on was considered polite, though, so people knew when one was connected.
To deal with a rash of incidents where users sought out flaws in the system in order to crash it, a novel approach was taken. A command that caused the system to crash was implemented and could be run by anyone, which took away all the fun and challenge of doing so. It did, however, broadcast a message to say who was doing it.
All files were editable by all users, including online documentation and source code. A series of informal papers and technical notes documented new commands, technical issues, primitive games, mathematical puzzles, and other topics of interest to the ITS hacker community. Some were issued as more formal AI Memos, including the HAKMEM compendium.
All users could talk with instant messaging on another's terminal, or they could use a command (SHOUT) to ask all active users for help.
Users could see what was happening on another's terminal (using a command called OS for "output spy"). A target of OS could detect and kill it using another command called JEDGAR, named after FBI Director J. Edgar Hoover. This facility was later disabled with a placebo command: it appeared as if the remote session was killed, but it was not.
Tourists (guest users either at MIT AI Lab terminals, or over the ARPAnet) were tolerated and occasionally invited to actively join the ITS community. Informal policy on tourist access was later formalized in a written policy. Ease of access, with or without a guest account, allowed interested parties to informally explore and experiment with the operating system, application programs, and "hacker" culture. Working copies of documentation and source code could be freely consulted or updated by anybody on the system.
System security, to the extent that it existed, was mostly based on de facto "security by obscurity". Guest hackers willing to dedicate significant time and effort to learning ITS were expected to behave respectfully, and to avoid interfering with the research projects which funded the hardware and software systems. There was little of exclusive value on the ITS systems except information, much of which would eventually be published for free distribution, and open and free sharing of knowledge was generally encouraged.
The wide-open ITS philosophy and collaborative community were the direct forerunner of the free and open-source software, open-design, and Wiki movements.
Important applications developed on ITS
The EMACS ("Editor MACroS") editor was originally written on ITS. In its ITS instantiation it was a collection of TECO programs (called "macros"). On later operating systems, it was written in the common language of those systems – for example, the C language under Unix, and Zetalisp under the Lisp Machine system.
GNU's info help system was originally an EMACS subsystem, and was later written as a complete standalone system for Unix-like machines.
Several important programming languages and systems were developed on ITS, including MacLisp (the precursor of Zetalisp and Common Lisp), Microplanner (implemented in MacLisp), MDL (which became the basis of Infocom's programming environment), and Scheme.
Among other significant and influential software subsystems developed on ITS, the Macsyma symbolic algebra system, started in 1968, was the first widely-known mathematical computing environment. It was a forerunner of Maxima, MATLAB, Wolfram Mathematica, and many other computer algebra systems.
Terry Winograd's SHRDLU program was developed in ITS. The computer game Zork was also originally written on ITS.
Richard Greenblatt's Mac Hack VI was the top-rated chess program for years and was the first to display a graphical board representation.
Miscellaneous
The default ITS top-level command interpreter was the PDP-10 machine language debugger (DDT). The usual text editor on ITS was TECO and later Emacs, which was written in TECO. Both DDT and TECO were implemented through simple dispatch tables on single-letter commands, and thus had no true syntax. The ITS task manager was called PEEK.
The local spelling "TURIST" is an artifact of six-character filename (and other identifier) limitations, which is traceable to six SIXBIT encoded characters fitting into a single 36-bit PDP-10 word. "TURIST" may also have been a pun on Alan Turing, a pioneer of theoretical computer science. The less-complimentary term "LUSER" was also applied to guest users, especially those who repeatedly engaged in clueless or vandalous behavior.
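The packing itself is simple arithmetic; the following minimal Python sketch (an illustration, not part of the original systems) shows how six SIXBIT characters, each stored as its ASCII code minus 32 in a 6-bit field, fit exactly into one 36-bit word:

def sixbit_pack(name: str) -> int:
    # Pack up to six characters (ASCII 32-95) into one 36-bit word, 6 bits per character.
    assert len(name) <= 6
    word = 0
    for ch in name.upper().ljust(6):    # short names are padded with spaces
        code = ord(ch) - 32             # SIXBIT code, 0..63
        assert 0 <= code <= 63
        word = (word << 6) | code
    return word                         # always fits in 36 bits

print(oct(sixbit_pack("TURIST")))       # one 36-bit PDP-10 word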
The Jargon File started as a combined effort between people on the ITS machines at MIT and at Stanford University SAIL. The document described much of the terminology, puns, and culture of the two AI Labs and related research groups, and is the direct predecessor of the Hacker's Dictionary (1983), the first compendium of hacker jargon to be issued by a major publisher (MIT Press).
Different implementations of ITS supported an odd array of peripherals, including an automatic wire stripper devised by hacker Richard Greenblatt, who needed a supply of pre-stripped jumper wires of various lengths for wire-wrapping computer hardware he and others were prototyping. The device used a stepper motor and a formerly hand-held wire stripper tool and cutter, operated by solenoid, all under computer control from ITS software. The device was accessible by any ITS user, but was disappointingly unreliable in actual use.
The Xerox Graphics Printer (XGP), one of the first laser printers in the world, was supported by ITS by 1974. The MIT AI Lab had one of these prototype continuous roll-fed printers for experimentation and use by its staff. By 1982, the XGP was supplemented by a Xerox Dover printer, an early sheet-fed laser printer. Although any ITS user could access the laser printers, physical access to pick up printouts was limited to staff, to control usage of printer supplies which had to be specially ordered.
Original developers
Richard Greenblatt
Stewart Nelson
Tom Knight
Richard Stallman
See also
Time-sharing system evolution
References
Bibliography
documents a very early version of the system
An Introduction to ITS for the MACSYMA User
External links
ITS System Documentation
SV: An ITS system running online and open for logins
UP: Public ITS system operated by the Update Computer Club at Uppsala University
KLH10: Ken Harrenstien's PDP-10 emulator
instructions allowing ITS to run on the SIMH PDP-10 emulator.
Jargon File Entry
ITS bibliography
Time-sharing operating systems
1967 software
Massachusetts Institute of Technology software
Assembly language software
Hacker culture
Software using the GPL license | Operating System (OS) | 1,108 |
AppleWin
AppleWin (also known as Apple //e Emulator for Windows) is an open-source software emulator for running Apple II programs in Microsoft Windows. AppleWin was originally written by Mike O'Brien in 1994; O'Brien himself announced an early version of the emulator in April 1995, just before the release of Windows 95. Development of AppleWin later passed to Oliver Schmidt, and the emulator is now maintained by Tom Charlesworth. AppleWin is written in C++ and originally required at least an Intel 486 CPU.
AppleWin has support for most programs that could run either on the Apple II+ or the Apple IIe. By default, AppleWin emulates the Extended Keyboard IIe (better known as the Platinum IIe) with built-in 80-column text support, 128 kilobytes of RAM, two 5¼-inch floppy disk drives, a joystick, a serial card, and a 65C02 CPU. AppleWin supports lo-res, hi-res, and double hi-res graphics modes and can emulate both color and monochrome Apple II monitors; later versions of AppleWin can also emulate a television set used as a monitor. Both 40-column and 80-column text is supported.
AppleWin can emulate the Apple II joystick (using the PC's default controller), paddle controllers (using the computer mouse), and can also emulate the Apple II joystick using the PC keyboard. AppleWin can also use the PC speaker to emulate the Apple II's sound if no sound card is available (this does not work under NT-based versions of Windows). Full screen mode is available through the use of DirectX. Features added to the latest versions of AppleWin include Ethernet support using Uthernet, Mockingboard and Phasor sound card support, SSI263 speech synthesis, hard drive disk images, save states, and taking screenshots.
Supported disk images
AppleWin supports ProDOS and DOS 3.3 disk image formats, as well as disk images of copy-protected programs created with "nibble copiers".
Specifically, AppleWin recognizes .BIN, .DO, .DSK, .NIB, .PO and .WOZ filename extensions as Apple II disk image files along with reading disk images from compressed (.zip / .gzip) files. Disk images may also be optionally "write protected" if they are mounted as "Read Only."
References
External links
comp.emulators.apple2 Usenet group (through Google Groups)
Windows emulation software
Free emulation software
Formerly proprietary software | Operating System (OS) | 1,109 |
Out-of-band management
In systems management, out-of-band management involves the use of management interfaces (or serial ports) for managing network equipment. Out-of-band (OOB) is a networking term which refers to a separate communication channel that does not travel over the usual data stream.
Out-of-band management allows the network operator to establish trust boundaries in accessing the management function to apply it to network resources. It also can be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components.
In computing, one form of out-of-band management is sometimes called lights-out management (LOM) and involves the use of a dedicated management channel for device maintenance. It allows a system administrator to monitor and manage servers and other network-attached equipment by remote control regardless of whether the machine is powered on or whether an OS is installed or functional.
By contrast, in-band management through VNC or SSH is based on in-band connectivity (the usual network channel). It typically requires software that must be installed on the remote system being managed and only works after the operating system has been booted and networking is brought up. It does not allow management of remote network components independently of the current status of other network components. A classic example of this limitation is when a sysadmin attempts to reconfigure the network on a remote machine only to find themselves locked out and unable to fix the problem without physically going to the machine. Despite these limitations, in-band solutions are still common because they are simpler and much lower-cost.
Both in-band and out-of-band management are usually done through a network connection, but an out-of-band management card can use a physically separate network connector if preferred. A remote management card usually has at least a partially independent power supply and can switch the main machine on and off through the network. Because a special device is required for each machine, out-of-band management can be much more expensive.
Serial consoles are an in-between case: they are technically OOB as they do not require the primary network to be functioning for remote administration. However, without special hardware, a serial console cannot configure the UEFI (or BIOS) settings, reinstall the operating system remotely, or fix problems that prevent the system from booting.
Purpose
A complete remote management system allows remote reboot, shutdown, powering on; hardware sensor monitoring (fan speed, power voltages, chassis intrusion, etc.); broadcasting of video output to remote terminals and receiving of input from remote keyboard and mouse (KVM over IP). It also can access local media like a DVD drive, or disk images, from the remote machine. If necessary, this allows one to perform remote installation of the operating system. Remote management can be used to adjust BIOS settings that may not be accessible after the operating system has already booted. Settings for hardware RAID or RAM timings can also be adjusted as the management card needs no hard drives or main memory to operate.
As management via serial port has traditionally been important on servers, a complete remote management system also allows interfacing with the server through a serial over LAN cable.
As sending monitor output through the network is bandwidth intensive, cards like AMI's MegaRAC use built-in video compression (versions of VNC are often used in implementing this).
Devices like Dell DRAC also have a slot for a memory card where an administrator may keep server-related information independently from the main hard drive.
The remote system can be accessed either through an SSH command-line interface, specialized client software, or through various web-browser-based solutions. Client software is usually optimized to manage multiple systems easily.
There are also various scaled-down versions, down to devices that only allow remote reboot by power cycling the server. This helps if the operating system hangs but only needs a reboot to recover.
Implementation
Remote management can be enabled on many computers (not necessarily only servers) by adding a remote management card (although some cards support only a limited list of motherboards). Newer server motherboards often have built-in remote management and need no separate management card.
Internally, Ethernet-based out-of-band management can either use a dedicated separate Ethernet connection, or share the system's regular Ethernet connection through some form of traffic multiplexing. In the shared case, a common Ethernet connection is used by both the computer's operating system and the integrated baseboard management controller (BMC), usually by configuring the network interface controller (NIC) to perform Remote Management Control Protocol (RMCP) port filtering, to use a separate MAC address, or to use a virtual LAN (VLAN). Thus, the out-of-band nature of the management traffic is ensured in a shared-connection scenario, as the system configures the NIC to extract the management traffic from the incoming traffic flow at the hardware level and to route it to the BMC before it reaches the host and its operating system.
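As a concrete illustration of probing a BMC's management address, the sketch below sends an RMCP/ASF presence ping over UDP port 623, the port RMCP traffic uses. The 12-byte packet layout is paraphrased from the ASF/RMCP specifications and should be verified against them, and the program omits the receive timeout a real tool would need.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <bmc-ip>\n", argv[0]); return 1; }

        /* RMCP header (version, reserved, sequence, ASF class) followed by an
           ASF presence-ping message (IANA number 4542, type 0x80, tag, 0, len 0). */
        unsigned char ping[12] = { 0x06, 0x00, 0xFF, 0x06,
                                   0x00, 0x00, 0x11, 0xBE,
                                   0x80, 0x01, 0x00, 0x00 };

        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(623) };
        inet_pton(AF_INET, argv[1], &dst.sin_addr);

        sendto(s, ping, sizeof ping, 0, (struct sockaddr *)&dst, sizeof dst);

        unsigned char resp[64];
        ssize_t n = recv(s, resp, sizeof resp, 0);  /* blocks until a pong arrives */
        if (n > 0)
            printf("presence pong received (%zd bytes)\n", n);
        close(s);
        return 0;
    }

Tools such as ipmitool use the same RMCP transport when managing a BMC over the LAN.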
Remote CLI access
An older form of out-of-band management is a separate network that gives network administrators command-line interface access over the console ports of network equipment, even when those devices are not forwarding any payload traffic.
If a location has several network devices, a terminal server can provide access to their console ports for direct CLI access. If there are only one or a few network devices, some of them provide AUX ports, making it possible to connect a dial-in modem for direct CLI access. Such a terminal server can often be reached via a separate network that does not use managed switches and routers for a connection to the central site, or via a modem connected for dial-in access through POTS or ISDN.
See also
Intelligent Platform Management Interface (IPMI; a server out-of-band management standard protocol)
Redfish (specification) (a server out-of-band management standard protocol)
Management Component Transport Protocol (MCTP; a low-level protocol used for controlling hardware components)
Desktop and mobile Architecture for System Hardware (an out-of-band management standard protocol)
Intel Active Management Technology (AMT; Intel's out-of-band management technology)
HP Integrated Lights-Out (iLO; HP's out-of-band management implementation for x86 and newer Integrity servers)
Dell DRAC (iDRAC; DELL's out-of-band management implementation)
IBM Remote Supervisor Adapter or Integrated Management Module (IMM; IBM's out-of-band management implementation)
ZPE Systems serial console
References
External links
System administration | Operating System (OS) | 1,110 |
Zorba (computer)
The Zorba was a portable computer running the CP/M operating system manufactured in 1983 and 1984. It was originally manufactured by Telcon Industries of Fort Lauderdale, Florida, a company specializing in telecommunications equipment manufacturing.
The Zorba was one of the last CP/M computers on the market. By the time it was introduced, the Kaypro and Osborne machines already dominated that market. The introduction of the Compaq Portable, compatible with the IBM PC and running MS-DOS, sealed the fate of the CP/M machines.
History
The Zorba was one of the last 8-bit portable computers running the CP/M operating system. It had features very similar to the Kaypro II. The original sale price was $1,595.
The rights for the Zorba were sold by Telcon for $5 million to MODCOMP (Modular Computer Systems, Inc), a company which specialized in mini-computer manufacturing. Modular Micro Group was created specifically to market the Zorba. The Zorba 7 was sold as the Modular Micros GC-200.
Modular Micro Group sold two different models. The Zorba 7 sold for $1,595 and had a 7" green CRT screen and two 410 KB floppy drives. The Zorba 2000 sold for $2,000 and had a 9" green or yellow screen, two 820 KB floppy drives, and an optional 10 MB hard drive.
Sales were very poor. Within a year, the Zorba computer stock was sold to Gemini Electronics, a company which specialized in selling surplus stocks. The remaining inventory was sold at a price of about $799 per unit.
In all, only about 6,000 Zorba computers were manufactured and sold.
Available software
The Zorba computer came with several terminal emulations, including Heathkit 19/89, Zenith 19/89, and DEC VT52. This allowed it to run virtually any existing CP/M software.
A "Perfect Software Package" was available for $190. This included the Perfect Writer word processor, the "Perfect Speller" spell checker, the "Perfect Filer" database manager, and the "Perfect Calc" program for spreadsheets.
It could also run the MicroPro Software Package (WordStar, Mailmerge, SpellStar, CalcStar, and DataStar).
Features
The Zorba had a detachable 95-key keyboard with 19 function keys and a numeric keypad. It had a Z80A CPU running at 4 MHz. It came with 64 KB of RAM and 4 KB of ROM (expandable to 16 KB).
The text-only screen displayed 80 characters by 25 lines.
It came with two serial ports, a parallel port, and an IEEE-488 port.
See also
Portable computer
Personal computer
Timeline of portable computers
References
External links
1984 Telcon Zorba Gemini - Early Vintage Portable Computer - Luggable Laptop CPM PC
Zorba Equipment Preservation Society
Review of the Zorba, InfoWorld magazine, June 6, 1983
Portable computers | Operating System (OS) | 1,111 |
Desktop environment
In computing, a desktop environment (DE) is an implementation of the desktop metaphor made of a bundle of programs running on top of a computer operating system that share a common graphical user interface (GUI), sometimes described as a graphical shell. The desktop environment was seen mostly on personal computers until the rise of mobile computing. Desktop GUIs help the user to easily access and edit files, while they usually do not provide access to all of the features found in the underlying operating system. Instead, the traditional command-line interface (CLI) is still used when full control over the operating system is required.
A desktop environment typically consists of icons, windows, toolbars, folders, wallpapers and desktop widgets (see Elements of graphical user interfaces and WIMP). A GUI might also provide drag and drop functionality and other features that make the desktop metaphor more complete. A desktop environment aims to be an intuitive way for the user to interact with the computer using concepts which are similar to those used when interacting with the physical world, such as buttons and windows.
While the term desktop environment originally described a style of user interfaces following the desktop metaphor, it has also come to describe the programs that realize the metaphor itself. This usage has been popularized by projects such as the Common Desktop Environment, K Desktop Environment, and GNOME.
Implementation
On a system that offers a desktop environment, a window manager in conjunction with applications written using a widget toolkit are generally responsible for most of what the user sees. The window manager supports the user interactions with the environment, while the toolkit provides developers a software library for applications with a unified look and behavior.
A windowing system of some sort generally interfaces directly with the underlying operating system and libraries. This provides support for graphical hardware, pointing devices, and keyboards. The window manager generally runs on top of this windowing system. While the windowing system may provide some window management functionality, this functionality is still considered to be part of the window manager, which simply happens to have been provided by the windowing system.
Applications that are created with a particular window manager in mind usually make use of a windowing toolkit, generally provided with the operating system or window manager. A windowing toolkit gives applications access to widgets that allow the user to interact graphically with the application in a consistent way.
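As a minimal sketch of the toolkit layer described above, the following GTK 3 program in C creates a single window containing one widget; the application id is arbitrary, and an equivalent program could be written with Qt or any other toolkit.

    #include <gtk/gtk.h>

    /* Build the user interface once the application is activated. */
    static void activate(GtkApplication *app, gpointer user_data)
    {
        GtkWidget *window = gtk_application_window_new(app);
        GtkWidget *button = gtk_button_new_with_label("Hello, desktop");

        gtk_window_set_title(GTK_WINDOW(window), "Toolkit example");
        gtk_container_add(GTK_CONTAINER(window), button);
        gtk_widget_show_all(window);
    }

    int main(int argc, char **argv)
    {
        GtkApplication *app = gtk_application_new("org.example.toolkitdemo",
                                                  G_APPLICATION_FLAGS_NONE);
        g_signal_connect(app, "activate", G_CALLBACK(activate), NULL);
        int status = g_application_run(G_APPLICATION(app), argc, argv);
        g_object_unref(app);
        return status;
    }

When such a program runs under a desktop environment, the window manager, not the application, draws the title bar and window controls around this window, which is the division of labour described above.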
History and common use
The first desktop environment was created by Xerox and was sold with the Xerox Alto in the 1970s. The Alto was generally considered by Xerox to be a personal office computer; it failed in the marketplace because of poor marketing and a very high price tag. With the Lisa, Apple introduced a desktop environment on an affordable personal computer, which also failed in the market.
The desktop metaphor was popularized on commercial personal computers by the original Macintosh from Apple in 1984, and was popularized further by Windows from Microsoft since the 1990s. Today, the most popular desktop environments are descendants of these earlier environments, including the Windows shell used in Microsoft Windows and the Aqua environment used in macOS. When compared with the X-based desktop environments available for Unix-like operating systems such as Linux and FreeBSD, the proprietary desktop environments included with Windows and macOS have relatively fixed layouts and static features, with highly integrated "seamless" designs that aim to provide mostly consistent customer experiences across installations.
Microsoft Windows dominates in market share among personal computers with a desktop environment. Computers using Unix-like operating systems such as macOS, Chrome OS, Linux, BSD or Solaris are much less common; however, there is a growing market for low-cost Linux PCs using the X Window System or Wayland with a broad choice of desktop environments. Among the more popular of these are Google's Chromebooks and Chromeboxes, Intel's NUC, the Raspberry Pi, etc.
On tablets and smartphones, the situation is the opposite, with Unix-like operating systems dominating the market, including the iOS (BSD-derived), Android, Tizen, Sailfish and Ubuntu (all Linux-derived). Microsoft's Windows phone, Windows RT and Windows 10 are used on a much smaller number of tablets and smartphones. However, the majority of Unix-like operating systems dominant on handheld devices do not use the X11 desktop environments used by other Unix-like operating systems, relying instead on interfaces based on other technologies.
Desktop environments for the X Window System
On systems running the X Window System (typically Unix-family systems such as Linux, the BSDs, and formal UNIX distributions), desktop environments are much more dynamic and customizable to meet user needs. In this context, a desktop environment typically consists of several separate components, including a window manager (such as Mutter or KWin), a file manager (such as Files or Dolphin), a set of graphical themes, together with toolkits (such as GTK+ and Qt) and libraries for managing the desktop. All these individual modules can be exchanged and independently configured to suit users, but most desktop environments provide a default configuration that works with minimal user setup.
Some window managers, such as IceWM, Fluxbox, Openbox, ROX Desktop and Window Maker, contain relatively sparse desktop environment elements, such as an integrated spatial file manager, while others like evilwm and wmii do not provide such elements. Not all of the program code that is part of a desktop environment has effects which are directly visible to the user. Some of it may be low-level code. KDE, for example, provides so-called KIO slaves which give the user access to a wide range of virtual devices. These I/O slaves are not available outside the KDE environment.
In 1996 KDE was announced, followed in 1997 by the announcement of GNOME. Xfce is a smaller project that was also founded in 1996 and focuses on speed and modularity, as does LXDE, which was started in 2006. A comparison of X Window System desktop environments demonstrates the differences between environments. GNOME and KDE were usually seen as the dominant solutions, and these are still often installed by default on Linux systems. Each of them offers:
To programmers, a set of standard APIs, a programming environment, and human interface guidelines.
To translators, a collaboration infrastructure. KDE and GNOME are available in many languages.
To artists, a workspace to share their talents.
To ergonomics specialists, the chance to help simplify the working environment.
To developers of third-party applications, a reference environment for integration. OpenOffice.org is one such application.
To users, a complete desktop environment and a suite of essential applications. These include a file manager, web browser, multimedia player, email client, address book, PDF reader, photo manager, and system preferences application.
In the early 2000s, KDE reached maturity. The Appeal and ToPaZ projects focused on bringing new advances to the next major releases of KDE and GNOME, respectively. Although striving for broadly similar goals, GNOME and KDE do differ in their approach to user ergonomics. KDE encourages applications to integrate and interoperate, is highly customizable, and contains many complex features, all whilst trying to establish sensible defaults. GNOME on the other hand is more prescriptive, and focuses on the finer details of essential tasks and overall simplification. Accordingly, each one attracts a different user and developer community. Technically, there are numerous technologies common to all Unix-like desktop environments, most obviously the X Window System. Accordingly, the freedesktop.org project was established as an informal collaboration zone with the goal being to reduce duplication of effort.
As GNOME and KDE focus on high-performance computers, users of less powerful or older computers often prefer alternative desktop environments specifically created for low-performance systems. Most commonly used lightweight desktop environments include LXDE and Xfce; they both use GTK+, which is the same underlying toolkit GNOME uses. The MATE desktop environment, a fork of GNOME 2, is comparable to Xfce in its use of RAM and processor cycles, but is often considered more as an alternative to other lightweight desktop environments.
For a while, GNOME and KDE enjoyed the status of the most popular Linux desktop environments; later, other desktop environments grew in popularity. In April 2011, GNOME introduced a new interface concept with its version 3, while a popular Linux distribution Ubuntu introduced its own new desktop environment, Unity. Some users preferred to keep the traditional interface concept of GNOME 2, resulting in the creation of MATE as a GNOME 2 fork.
Examples of desktop environments
The most common desktop environment on personal computers is Windows Shell in Microsoft Windows. Microsoft has made significant efforts in making Windows shell visually pleasing. As a result, Microsoft has introduced theme support in Windows 98, the various Windows XP visual styles, the Aero brand in Windows Vista, the Microsoft design language (codenamed "Metro") in Windows 8, and the Fluent Design System and Windows Spotlight in Windows 10. Windows shell can be extended via Shell extensions.
Mainstream desktop environments for Unix-like operating systems using the X Window System include KDE, GNOME, Xfce and LXDE, any of which may be selected by users and are not tied exclusively to the operating system in use.
A number of other desktop environments also exist, including (but not limited to) CDE, EDE, GEM, IRIX Interactive Desktop, Sun's Java Desktop System, Jesktop, Mezzo, Project Looking Glass, ROX Desktop, UDE, Xito, XFast. Moreover, there exists FVWM-Crystal, which consists of a powerful configuration for the FVWM window manager, a theme and further adds, altogether forming a "construction kit" for building up a desktop environment.
X window managers that are meant to be usable stand-alone, without another desktop environment, also include elements reminiscent of those found in typical desktop environments, most prominently Enlightenment. Other examples include OpenBox, Fluxbox, WindowLab, Fvwm, as well as Window Maker and AfterStep, which both feature the NeXTSTEP GUI look and feel.
The Amiga approach to the desktop environment was noteworthy: the original Workbench desktop environment in AmigaOS evolved over time into an entire family of descendants and alternative desktop solutions. Some of those descendants are Scalos, the Ambient desktop of MorphOS, and the Wanderer desktop of the AROS open source OS. WindowLab also contains features reminiscent of the Amiga UI. The third-party Directory Opus software, which was originally just a navigational file manager program, evolved to become a complete Amiga desktop replacement called Directory Opus Magellan.
OS/2 (and derivatives such as eComStation and ArcaOS) use the Workplace Shell. Earlier versions of OS/2 used the Presentation Manager.
The BumpTop project was an experimental desktop environment. Its main objective was to replace the 2D paradigm with a "real-world" 3D implementation, in which documents could be freely manipulated across a virtual table.
Gallery
See also
Wayland – an alternative to X Windows which can run several different desktop environments
References | Operating System (OS) | 1,112 |
The Linux Programming Interface
The Linux Programming Interface: A Linux and UNIX System Programming Handbook is a book written by Michael Kerrisk,
which documents the APIs of the Linux kernel and of the GNU C Library (glibc).
It covers a wide array of topics dealing with the Linux operating system and operating systems in general, as well as providing a brief history of Unix and how it led to the creation of Linux. It provides many samples of code written in the C programming language, and provides learning exercises at the end of many chapters. Kerrisk is a former writer for the Linux Weekly News and the current maintainer for the Linux man pages project.
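The following short program is not taken from the book; it is merely an illustration, in a similar spirit, of the kind of Linux/glibc system call usage (open, read, write, and errno-based error reporting) that the book documents.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        if (argc != 3) {
            fprintf(stderr, "Usage: %s source dest\n", argv[0]);
            exit(EXIT_FAILURE);
        }

        int in = open(argv[1], O_RDONLY);
        if (in == -1) { perror("open source"); exit(EXIT_FAILURE); }

        int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (out == -1) { perror("open dest"); exit(EXIT_FAILURE); }

        char buf[4096];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)       /* copy block by block */
            if (write(out, buf, n) != n) { perror("write"); exit(EXIT_FAILURE); }
        if (n == -1) { perror("read"); exit(EXIT_FAILURE); }

        close(in);
        close(out);
        exit(EXIT_SUCCESS);
    }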
The Linux Programming Interface is widely regarded as the definitive work on Linux system programming and has been translated into several languages. Jake Edge, writer for LWN.net, in his review of the book, said "I found it to be extremely useful and expect to return to it frequently. Anyone who has an interest in programming for Linux will likely feel the same way." Federico Lucifredi, the product manager for the SUSE Linux Enterprise and openSUSE distributions, also praised the book, saying that "The Linux Programming Encyclopedia would have been a perfectly adequate title for it in my opinion" and called the book "…a work of encyclopedic breadth and depth, spanning in great detail concepts usually spread in a multitude of medium-sized books…" Lennart Poettering, the software engineer best known for PulseAudio and systemd, advises people to "get yourself a copy of The Linux Programming Interface, ignore everything it says about POSIX compatibility and hack away your amazing Linux software".
At FOSDEM 2016, Michael Kerrisk, the author of The Linux Programming Interface, explained some of the issues that he and others perceive with the Linux kernel's user-space API. In his view, it is littered with design errors: APIs that are non-extensible, unmaintainable, overly complex, limited in purpose, in violation of standards, and inconsistent. Most of those mistakes cannot be fixed because doing so would break the ABI that the kernel presents to user-space binaries.
See also
Linux kernel interfaces
Programming Linux Games
References
External links
The Linux Programming Interface at the publisher's (No Starch Press) Website
The Linux Programming Interface Description at Kerrisk's Website
API changes
The Linux Programming Interface Traditional Chinese Translation
Computer programming books
Books about Linux
2010 non-fiction books
No Starch Press books
Interfaces of the Linux kernel | Operating System (OS) | 1,113 |
RMX (operating system)
iRMX is a real-time operating system designed specifically for use with the Intel 8080 and 8086 family of processors. It is an acronym for Real-time Multitasking eXecutive.
Overview
Intel developed iRMX in the 1970s and originally released RMX/80 in 1976 and RMX/86 in 1980 to support and create demand for their processors and Multibus system platforms.
The functional specification for RMX/86 was authored by Bruce Schafer and Miles Lewitt and was completed in the summer of 1978, soon after Intel relocated the entire Multibus business from Santa Clara, California to Aloha, Oregon. Schafer and Lewitt each went on to manage one of the two teams that developed the RMX/86 product for its on-schedule release in 1980.
Since 2000, iRMX has been supported, maintained, and licensed worldwide by TenAsys Corporation, under an exclusive licensing arrangement with Intel.
iRMX is a layered design: containing a kernel, nucleus, basic i/o system, extended i/o system and human interface. An installation need include only the components required: intertask synchronization, communication subsystems, a filesystem, extended memory management, command shell, etc. The native filesystem is specific to iRMX, but has many similarities to the original Unix (V6) filesystem, such as 14 character path name components, file nodes, sector lists, application readable directories, etc.
iRMX supports multiple processes (known as jobs in RMX parlance), and multiple threads (tasks) are supported within each process. In addition, interrupt handlers and threads exist to run in response to hardware interrupts. Thus, iRMX is a multi-processing, multi-threaded, pre-emptive, real-time operating system (RTOS).
Commands
The following list of commands are supported by iRMX 86.
ATTACHDEVICE
ATTACHFILE
BACKUP
COPY
CREATEDIR
DATE
DEBUG
DELETE
DETACHDEVICE
DETACHFILE
DIR
DISKVERIFY
DOWNCOPY
FORMAT
INITSTATUS
JOBDELETE
LOCDATA
LOCK
LOGICALNAMES
MEMORY
PATH
PERMIT
RENAME
RESTORE
SUBMIT
SUPER
TIME
UPCOPY
VERSION
WHOAMI
Historical uses
iRMX III on Intel Multibus hardware is used in the majority of the core systems of CLSCS, the London Underground Central line signals control system, which was supplied by Westinghouse (now Invensys) and commissioned in the late 1990s. The Central line is an automatic train operation line. Automatic train protection is by trackside and train-borne equipment that does not use iRMX. It is the automatic train supervision elements that use a mix of iRMX on Multibus and Solaris on SPARC computers. Sixteen iRMX local site computers are distributed along the Central line, together with six central iRMX computers at the control centre. All 22 iRMX computers are dual redundant. iRMX CLSCS continues in full operation.
Oslo Metro uses a similar, although less complex, Westinghouse-supplied iRMX control system through the central Common Tunnel tracks. This was expected to be decommissioned in 2011.
Variants
Several variations of iRMX have been developed since its original introduction on the Intel 8080: iRMX I, II and III, iRMX-86, iRMX-286, DOS-RMX, iRMX for Windows, and, most recently, INtime. While many of the original variants of iRMX are still in use, only iRMX III, iRMX for Windows, and INtime are currently supported for the development of new real-time applications. Each of these three supported variants of iRMX require an Intel 80386 equivalent or higher processor to run.
A significant architectural difference between the INtime RTOS and all other iRMX variants is the support for address segments (see x86 memory segmentation). The original 8086 family of processors relied heavily on segment registers to overcome limitations associated with addressing large amounts of memory via 16-bit registers. The iRMX operating system and the compilers developed for iRMX include features to exploit the segmented addressing features of the original x86 architecture. The INtime variant of iRMX does not include explicit support for segmentation, opting instead to support only the simpler and more common 32-bit flat addressing scheme.
Despite the fact that native processes written for INtime can only operate using unsegmented flat-mode addressing, it is possible to port and run some older iRMX applications that use segmented addressing to the INtime kernel.
When Intel introduced the Intel 80386 processor, in addition to expanding the iRMX RTOS to support 32-bit registers, iRMX III also included support for the four distinct protection rings (named rings 0 through 3) which describe the protected-mode mechanism of the Intel 32-bit architecture. In practice very few systems have ever used more than rings 0 and 3 to implement protection schemes.
iRMX
The I, II, III, -286 and -86 variants are intended as standalone real-time operating systems. A number of development utilities and applications were made for iRMX, such as compilers (PL/M, Fortran, C), an editor (Aedit), process and data acquisition applications and so on. Cross compilers hosted on the VAX/VMS system were also made available by Intel. iRMX III is still supported today and has been used as the core technology for newer real-time virtualization RTOS products including iRMX for Windows and INtime.
DOS-RMX
DOS-RMX is a variant of the standalone iRMX operating system designed to allow two operating systems to share a single hardware platform. In simplest terms, DOS and iRMX operate concurrently on a single IBM PC compatible computer, where iRMX tasks (processes) have scheduling priority over the DOS kernel, interrupts, and applications. iRMX events (e.g., hardware interrupts) pre-empt the DOS kernel to ensure that tasks can respond to real-time events in a time-deterministic manner. In a functional sense, DOS-RMX is the predecessor to iRMX for Windows and INtime.
In practice, DOS-RMX appears as a TSR to the DOS kernel. Once loaded as a TSR, iRMX takes over the CPU, changing to protected mode and running DOS in a virtual machine within an RMX task. This combination provides RMX real-time functionality as well as full DOS services.
iRMX for Windows
Like DOS-RMX, this system provides a hybrid mixture of services and capabilities defined by DOS, Windows, and iRMX. Inter-application communication via an enhanced Windows DDE capability allows RMX tasks to communicate with Windows processes.
iRMX for Windows was originally intended for use in combination with the 16-bit version of Windows. In 2002 iRMX for Windows was reintroduced by adding these RMX personalities to the INtime RTOS for Windows, allowing it to be used in conjunction with the 32-bit protected-mode versions of Windows (Windows NT, Windows 2000, etc.).
INtime
Like its iRMX predecessors, INtime is a real-time operating system. And, like DOS-RMX and iRMX for Windows, it runs concurrently with a general-purpose operating system on a single hardware platform. INtime 1.0 was originally introduced in 1997 in conjunction with the Windows NT operating system. Since then it has been upgraded to include support for all subsequent protected-mode Microsoft Windows platforms, including Windows Vista and Windows 7.
INtime can also be used as a stand-alone RTOS. INtime binaries are able to run unchanged when running on a stand-alone node of the INtime RTOS. Unlike Windows, INtime can run on an Intel 80386 or equivalent processor. Current versions of the Windows operating system generally require at least a Pentium level processor in order to boot and execute.
The introduction of INtime 3.0 included several important enhancements. Among them, support for multi-core processors and the ability to debug real-time processes on the INtime kernel using Microsoft Visual Studio. INtime is not an SMP operating system, thus support for multi-core processors is restricted to a special form of asymmetric multiprocessing. When used on a multi-core processor INtime can be configured to run on one CPU core while Windows runs on the remaining processor core(s).
BOS
The operating system was cloned in the 1980s by the East German VEB Robotron-Projekt in Dresden under the name BOS (BOS1810, BOS1820).
Uses
Use cases can be viewed on the TenAsys website.
See also
Radisys
References
Further reading
, originally published in Embedded Systems Programming in 1989
Christopher Vickery, Real-Time and Systems Programming for PCs: Using the iRMX for Windows Operating System, McGraw-Hill (1993)
External links
iRMX information page
Richard Carver's iRMXStuff.com
Intel software
Real-time operating systems | Operating System (OS) | 1,114 |
ISO 9660
ISO 9660 (also known as ECMA-119) is a file system for optical disc media. It is an international technical standard published and sold by the International Organization for Standardization (ISO). Since the specification is available for anybody to purchase, implementations have been written for many operating systems.
ISO 9660 traces its roots to the High Sierra Format, which arranged file information in a dense, sequential layout to minimize nonsequential access by using a hierarchical (eight levels of directories deep) tree file system arrangement, similar to UNIX and FAT. To facilitate cross platform compatibility, it defined a minimal set of common file attributes (directory or ordinary file and time of recording) and name attributes (name, extension, and version), and used a separate system use area where future optional extensions for each file may be specified. High Sierra was adopted in December 1986 (with changes) as an international standard by Ecma International as ECMA-119 and submitted for fast tracking to the ISO, where it was eventually accepted as ISO 9660:1988. Subsequent amendments to the standard were published in 2013 and 2020.
The first 16 sectors of the file system are empty and reserved for other uses. The rest begins with a volume descriptor set (a header block which describes the subsequent layout) and then the path tables, directories and files on the disc. An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator which is a volume descriptor that marks the end of the descriptor set. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain metadata such as the volume's name and creator, along with the size and number of logical blocks used by the file system. Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry.
There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (macOS-specific file characteristics such as resource forks, file backup date and more).
History
Compact discs were originally developed for recording musical data, but they were soon used for storing other types of digital data because they were equally effective for archival mass data storage. Such discs were called CD-ROMs, and their lowest-level format was defined in the Yellow Book specification in 1983. However, this book did not define any format for organizing data on CD-ROMs into logical units such as files, which led to every CD-ROM maker creating its own format. In order to develop a CD-ROM file system standard (Z39.60 - Volume and File Structure of CDROM for Information Interchange), the National Information Standards Organization (NISO) set up Standards Committee SC EE (Compact Disc Data Format) in July 1985. In September/October 1985, several companies invited experts to participate in the development of a working paper for such a standard.
In November 1985, representatives of computer hardware manufacturers gathered at the High Sierra Hotel and Casino (currently called the Hard Rock Hotel and Casino) near Lake Tahoe, California. This group became known as the High Sierra Group (HSG). Present at the meeting were representatives from Apple Computer, AT&T, Digital Equipment Corporation (DEC), Hitachi, LaserData, Microware, Microsoft, 3M, Philips, Reference Technology Inc., Sony Corporation, TMS Inc., VideoTools (later Meridian), Xebec, and Yelick. The meeting report evolved from the Yellow Book CD-ROM standard, which was so open ended it was leading to diversification and creation of many incompatible data storage methods. The High Sierra Group Proposal (HSGP) was released in May 1986, defining a file system for CD-ROMs commonly known as the High Sierra Format.
A draft version of this proposal was submitted to the European Computer Manufacturers Association (ECMA) for standardization. With some changes, this led to the issue of the initial edition of the ECMA-119 standard in December 1986. ECMA submitted its standard to the International Organization for Standardization (ISO) for fast tracking, where it was further refined into the ISO 9660 standard. For compatibility, the second edition of ECMA-119 was revised to be equivalent to ISO 9660 in December 1987. ISO 9660:1988 was published in 1988. The main changes from the High Sierra Format in the ECMA-119 and ISO 9660 standards were international extensions to allow the format to work better in non-US markets.
In order not to create incompatibilities, NISO suspended further work on Z39.60, which had been adopted by NISO members on 28 May 1987. It was withdrawn before final approval, in favour of ISO 9660.
In 2013, ISO published Amendment 1 to the ISO 9660 standard, introducing new data structures and relaxed file name rules intended to "bring harmonization between ISO 9660 and widely used 'Joliet Specification'." In December 2017, a 3rd Edition of ECMA-119 was published that is technically identical with ISO 9660, Amendment 1.
In 2020, ISO published Amendment 2, which adds some minor clarifying matter, but does not add or correct any technical information of the standard.
Specifications
The following subsections describe the rough overall structure of the ISO 9660 file system.
Multi-byte values can be stored in three different formats: little-endian, big-endian, and in a concatenation of both types in what the specification calls "both-byte" order. Both-byte order is required in several fields in the volume descriptors and directory records, while path tables can be either little-endian or big-endian.
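As a minimal sketch of how a reader might decode such a field (the helper name, and the choice to read only the little-endian copy, are ours rather than part of the standard):

    #include <stdint.h>

    /* An ISO 9660 "both-byte" 32-bit field stores the value twice: a
       little-endian copy in bytes 0-3 followed by a big-endian copy in
       bytes 4-7.  Decoding the little-endian copy works on any host. */
    static uint32_t iso_read_both32(const uint8_t *p)
    {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }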
Top level
The system area, the first 32,768 data bytes of the disc (16 sectors of 2,048 bytes each), is unused by ISO 9660 and therefore available for other uses. While it is suggested that they are reserved for use by bootable media, a CD-ROM may contain an alternative file system descriptor in this area, and it is often used by hybrid CDs to offer classic Mac OS-specific and macOS-specific content.
Volume descriptor set
The data area begins with the volume descriptor set, a set of one or more volume descriptors terminated with a volume descriptor set terminator. These collectively act as a header for the data area, describing its content (similar to the BIOS parameter block used by FAT, HPFS and NTFS formatted disks).
Each volume descriptor is 2048 bytes in size, fitting perfectly into a single Mode 1 or Mode 2 Form 1 sector. They all share the same basic structure: a one-byte type code, the five-byte standard identifier "CD001", a one-byte descriptor version (currently 1), and 2,041 bytes of type-dependent data.
The data field of a volume descriptor may be subdivided into several fields, with the exact content depending on the type. Redundant copies of each volume descriptor can also be included in case the first copy of the descriptor becomes corrupt.
Standard volume descriptor types are the following: 0 for a boot record, 1 for a primary volume descriptor, 2 for a supplementary (or enhanced) volume descriptor, 3 for a volume partition descriptor, and 255 for the volume descriptor set terminator.
An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator for indicating the end of the descriptor sequence. The volume descriptor set terminator is simply a particular type of volume descriptor with the purpose of marking the end of this set of structures. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain the description or name of the volume, and information about who created it and with which application. The size of the logical blocks which the file system uses to segment the volume is also stored in a field inside the primary volume descriptor, as well as the amount of space occupied by the volume (measured in number of logical blocks).
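A minimal sketch of walking the descriptor set of an image file follows. The offsets used (descriptors from sector 16, "CD001" at byte 1, type byte at offset 0, volume identifier at offset 40) match the layout described here, but the program is illustrative only and performs no real validation.

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2) { fprintf(stderr, "usage: %s image.iso\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        unsigned char sec[2048];
        for (long lba = 16; ; lba++) {               /* volume descriptor set      */
            if (fseek(f, lba * 2048L, SEEK_SET) != 0 ||
                fread(sec, 1, sizeof sec, f) != sizeof sec)
                break;
            if (memcmp(sec + 1, "CD001", 5) != 0)    /* standard identifier        */
                break;
            if (sec[0] == 255)                       /* set terminator             */
                break;
            if (sec[0] == 1) {                       /* primary volume descriptor  */
                printf("Volume identifier: %.32s\n", (const char *)(sec + 40));
                break;
            }
        }
        fclose(f);
        return 0;
    }

Run against a typical image, this prints the 32-character, space-padded volume identifier.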
In addition to the primary volume descriptor(s), supplementary volume descriptors or enhanced volume descriptors may be present. Supplementary volume descriptors describe the same volume as the primary volume descriptor does, and are normally used for providing additional code page support when the standard code tables are insufficient. The standard specifies that ISO 2022 is used for managing code sets that are wider than 8 bits, and that ISO 2375 escape sequences are used to identify each particular code page used. Consequently, ISO 9660 supports international single-byte and multi-byte character sets, provided they fit into the framework of the referenced standards. However, ISO 9660 does not specify any code pages that are guaranteed to be supported: all use of code tables other than those defined in the standard itself is subject to agreement between the originator and the recipient of the volume. Enhanced volume descriptors were introduced in ISO 9660, Amendment 1. They relax some of the requirements of the other volume descriptors and the directory records referenced by them: for example, the directory depth can exceed eight, file identifiers need not contain a '.' or a file version number, and the maximum length of a file or directory identifier is raised to 207.
Path tables
Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry. The parent directory number is a 16-bit number, limiting its range from 1 to 65,535.
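Sketched below is the layout of a single path table record as described above, using byte arrays to sidestep alignment and byte-order issues; the struct name and comments are ours.

    #include <stdint.h>

    /* One record of the little-endian ("Type L") path table. */
    struct iso_path_record {
        uint8_t id_len;          /* length of the directory identifier           */
        uint8_t ext_attr_len;    /* length of the extended attribute record      */
        uint8_t extent_lba[4];   /* location of the directory's extent           */
        uint8_t parent[2];       /* path table index of the parent directory     */
        /* id_len bytes of directory identifier follow, padded to an even length */
    };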
Directories and files
Directory entries are stored following the location of the root directory entry, where evaluation of filenames is begun. Both directories and files are stored as extents, which are sequential series of sectors. Files and directories are differentiated only by a file attribute that indicates its nature (similar to Unix). The attributes of a file are stored in the directory entry that describes the file, and optionally in the extended attribute record. To locate a file, the directory names in the file's path can be checked sequentially, going to the location of each directory to obtain the location of the subsequent subdirectory. However, a file can also be located through the path table provided by the file system. This path table stores information about each directory, its parent, and its location on disc. Since the path table is stored in a contiguous region, it can be searched much faster than jumping to the particular locations of each directory in the file's path, thus reducing seek time.
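The sketch below decodes the main fields of one directory record from a raw buffer, using the commonly documented offsets (extent location at byte 2, data length at byte 10, file flags at byte 25, identifier length at byte 32); it is illustrative only, and the helper name is ours.

    #include <stdint.h>
    #include <stdio.h>

    /* Print the identifier, extent location and size of one directory record. */
    static void print_dir_record(const uint8_t *rec)
    {
        uint32_t lba  = (uint32_t)rec[2]  | (uint32_t)rec[3]  << 8 |
                        (uint32_t)rec[4]  << 16 | (uint32_t)rec[5]  << 24;
        uint32_t size = (uint32_t)rec[10] | (uint32_t)rec[11] << 8 |
                        (uint32_t)rec[12] << 16 | (uint32_t)rec[13] << 24;
        int is_dir = rec[25] & 0x02;          /* bit 1 of the file flags byte */

        printf("%.*s  lba=%u  size=%u%s\n",
               rec[32], (const char *)(rec + 33), lba, size,
               is_dir ? "  (directory)" : "");
    }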
The standard specifies three nested levels of interchange (paraphrased from section 10):
Level 1: File names are limited to eight characters with a three-character extension. Directory names are limited to eight characters. Files may contain one single file section.
Level 2: Directory and file names may use the longer identifiers permitted by the main body of the standard (see Level 3), but files may still contain only one single file section.
Level 3: No additional restrictions than those stipulated in the main body of the standard. That is, directory identifiers may not exceed 31 characters in length, and file name + '.' + file name extension may not exceed 30 characters in length (sections 7.5 and 7.6). Files are also allowed to consist of multiple non-contiguous sections (with some restrictions as to order).
Additional restrictions in the body of the standard: The depth of the directory hierarchy must not exceed 8 (root directory being at level 1), and the path length of any file must not exceed 255. (section 6.8.2.1).
The standard also specifies the following name restrictions (sections 7.5 and 7.6):
All levels restrict file names in the mandatory file hierarchy to upper case letters, digits, underscores ("_"), and a dot. (See also section 7.4.4 and Annex A.)
If no characters are specified for the File Name then the File Name Extension shall consist of at least one character.
If no characters are specified for the File Name Extension then the File Name shall consist of at least one character.
File names shall not have more than one dot.
Directory names shall not use dots at all.
A CD-ROM producer may choose one of the lower Levels of Interchange specified in chapter 10 of the standard, and further restrict file name length from 30 characters to only 8+3 in file identifiers, and 8 in directory identifiers in order to promote interchangeability with implementations that do not implement the full standard.
All numbers in ISO 9660 file systems except the single byte value used for the GMT offset are unsigned numbers. As the length of a file's extent on disc is stored in a 32-bit value, it allows for a maximum length of just over 4.2 GB (more precisely, one byte less than 4 GiB). It is possible to circumvent this limitation by using the multi-extent (fragmentation) feature of ISO 9660 Level 3 to create ISO 9660 file systems and single files up to 8 TB. With this, files larger than 4 GiB can be split up into multiple extents (sequential series of sectors), each not exceeding the 4 GiB limit. For example, free software such as InfraRecorder, ImgBurn and mkisofs, as well as Roxio Toast, is able to create ISO 9660 file systems that use multi-extent files to store files larger than 4 GiB on appropriate media such as recordable DVDs. Linux supports multiple extents.
Extensions and improvements
SUSP
System Use Sharing Protocol (SUSP, IEEE P1281) provides a generic way of including additional properties for any directory entry reachable from the primary volume descriptor (PVD). In an ISO 9660 volume, every directory entry has an optional system use area whose contents are undefined and left to be interpreted by the system. SUSP defines a method to subdivide that area into multiple system use fields, each identified by a two-character signature tag. The idea behind SUSP was that it would enable any number of independent extensions to ISO 9660 to be created and included on a volume without conflicting. It also allows for the inclusion of property data that would otherwise be too large to fit within the limits of the system use area.
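A sketch of the header that every system use field shares follows; the struct name is ours, and the layout (two-character signature, one-byte length, one-byte version, then payload) reflects the SUSP description above. The signature tags themselves are listed next.

    #include <stdint.h>

    /* Common header of a SUSP system use field; tag-specific payload follows. */
    struct susp_field_header {
        uint8_t signature[2];   /* e.g. "SP", "CE", "NM" (Rock Ridge), "AS"     */
        uint8_t length;         /* total length of the field, header included   */
        uint8_t version;        /* field version number                         */
        /* (length - 4) bytes of payload follow                                 */
    };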
SUSP defines several common tags and system use fields:
CE: Continuation area
PD: Padding field
SP: System use sharing protocol indicator
ST: System use sharing protocol terminator
ER: Extensions reference
ES: Extension selector
Other known SUSP fields include:
AA: Apple extension, preferred
BA: Apple extension, old (length attribute is missing)
AS: Amiga file properties
ZF: zisofs compressed file, usually produced by program mkzftree or by libisofs. Transparently decompressed by Linux kernel if built with CONFIG_ZISOFS.
AL: records Extended File Attributes, including ACLs. Proposed by libburnia, supported by libisofs.
The Apple extensions do not technically follow the SUSP standard; however the basic structure of the AA and AB fields defined by Apple are forward compatible with SUSP; so that, with care, a volume can use both Apple extensions as well as RRIP extensions.
Rock Ridge
The Rock Ridge Interchange Protocol (RRIP, IEEE P1282) is an extension which adds POSIX file system semantics. The availability of these extension properties allows for better integration with Unix and Unix-like operating systems. The standard takes its name from the fictional town Rock Ridge in Mel Brooks' film Blazing Saddles. The RRIP extensions are, briefly:
Longer file names (up to 255 bytes) and fewer restrictions on allowed characters (support for lowercase, etc.)
UNIX-style file modes, user ids and group ids, and file timestamps
Support for Symbolic links and device files
Deeper directory hierarchy (more than 8 levels)
Efficient storage of sparse files
The RRIP extensions are built upon SUSP, defining additional tags for support of POSIX semantics, along with the format and meaning of the corresponding system use fields:
RR: Rock Ridge extensions in-use indicator (note: dropped from standard after version 1.09)
PX: POSIX file attributes
PN: POSIX device numbers
SL: symbolic link
NM: alternate name
CL: child link
PL: parent link
RE: relocated directory
TF: time stamp
SF: sparse file data
Amiga Rock Ridge is similar to RRIP, except it provides additional properties used by AmigaOS. It too is built on the SUSP standard by defining an "AS"-tagged system use field. Thus both Amiga Rock Ridge and the POSIX RRIP may be used simultaneously on the same volume. Some of the specific properties supported by this extension are the additional Amiga-bits for files. There is support for attribute "P" that stands for "pure" bit (indicating re-entrant command) and attribute "S" for script bit (indicating batch file). This includes the protection flags plus an optional comment field. These extensions were introduced by Angela Schmidt with the help of Andrew Young, the primary author of the Rock Ridge Interchange Protocol and System Use Sharing Protocol. The first publicly available software to master a CD-ROM with Amiga extensions was MakeCD, an Amiga software which Angela Schmidt developed together with Patrick Ohly.
El Torito
El Torito is an extension designed to allow booting a computer from a CD-ROM. It was announced in November 1994 and first issued in January 1995 as a joint proposal by IBM and BIOS manufacturer Phoenix Technologies. According to legend, the El Torito CD/DVD extension to ISO 9660 got its name because its design originated in an El Torito restaurant in Irvine, California (). The initial two authors were Curtis Stevens, of Phoenix Technologies, and Stan Merkin, of IBM.
A 32-bit PC BIOS will search for boot code on an ISO 9660 CD-ROM. The standard allows for booting in two different modes. Either in hard disk emulation when the boot information can be accessed directly from the CD media, or in floppy emulation mode where the boot information is stored in an image file of a floppy disk, which is loaded from the CD and then behaves as a virtual floppy disk. This is useful for computers that were designed to boot only from a floppy drive. For modern computers the "no emulation" mode is generally the more reliable method. The BIOS will assign a BIOS drive number to the CD drive. The drive number (for INT 13H) assigned is any of 80hex (hard disk emulation), 00hex (floppy disk emulation) or an arbitrary number if the BIOS should not provide emulation. Emulation is useful for booting older operating systems from a CD, by making it appear to them as if they were booted from a hard or floppy disk.
El Torito can also be used to produce CDs which can boot up Linux operating systems, by including the GRUB bootloader on the CD and following the Multiboot Specification. While the El Torito spec alludes to a "Mac" platform ID, PowerPC-based Apple Macintosh computers don't use it.
Joliet
Joliet is an extension specified and endorsed by Microsoft and has been supported by all versions of its Windows operating system since Windows 95 and Windows NT 4.0. Its primary focus is the relaxation of the filename restrictions inherent with full ISO 9660 compliance. Joliet accomplishes this by supplying an additional set of filenames that are encoded in UCS-2BE (UTF-16BE in practice since Windows 2000). These filenames are stored in a special supplementary volume descriptor, that is safely ignored by ISO 9660-compliant software, thus preserving backward compatibility. The specification only allows filenames to be up to 64 Unicode characters in length. However, the documentation for mkisofs states filenames up to 103 characters in length do not appear to cause problems. Microsoft has documented it "can use up to 110 characters."
Joliet allows Unicode characters to be used for all text fields, which includes file names and the volume name. A "Secondary" volume descriptor with type 2 contains the same information as the Primary one (sector 16 offset 40 bytes), but in UCS-2BE in sector 17, offset 40 bytes. As a result of this, the volume name is limited to 16 characters.
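As a sketch of how software might detect the Joliet extension, the function below inspects a supplementary volume descriptor's escape sequences field (commonly documented at byte offset 88) for the three UCS-2 level indicators; the offset and values are stated here as assumptions to be checked against the Joliet specification.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Return true if a 2048-byte supplementary volume descriptor is Joliet. */
    static bool is_joliet_svd(const uint8_t *vd)
    {
        if (vd[0] != 2 || memcmp(vd + 1, "CD001", 5) != 0)
            return false;                       /* not a supplementary descriptor */
        const uint8_t *esc = vd + 88;           /* escape sequences field         */
        return memcmp(esc, "%/@", 3) == 0 ||    /* UCS-2 Level 1                  */
               memcmp(esc, "%/C", 3) == 0 ||    /* UCS-2 Level 2                  */
               memcmp(esc, "%/E", 3) == 0;      /* UCS-2 Level 3                  */
    }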
Many current PC operating systems are able to read Joliet-formatted media, thus allowing exchange of files between those operating systems even if non-Roman characters are involved (such as Arabic, Japanese or Cyrillic), which was formerly not possible with plain ISO 9660-formatted media. Operating systems which can read Joliet media include:
Microsoft Windows; Microsoft recommends the use of the Joliet extension for developers targeting Windows.
Linux
macOS
FreeBSD
OpenSolaris
Haiku
AmigaOS
RISC OS
Romeo
Romeo was developed by Adaptec and allows the use of long filenames of up to 128 characters. However, Romeo is not backwards compatible with ISO 9660, and discs authored with it can only be read reliably under the Windows 9x and Windows NT platforms; on other systems, filenames containing non-Roman characters (such as Arabic, Japanese or Cyrillic) may be displayed incorrectly, for example ü becoming ³.
Apple extensions
Apple Computer authored a set of extensions that add ProDOS or HFS/HFS+ (the primary contemporary file system for Mac OS) properties to the filesystem. Some of the additional metadata properties include:
Date of last backup
File type
Creator code
Flags and data for display
Reference to a resource fork
In order to allow non-Macintosh systems to access Macintosh files on CD-ROMs, Apple chose to use an extension of the standard ISO 9660 format. Most of the data, other than the Apple specific metadata, remains visible to operating systems that are able to read ISO 9660.
Other extensions
For operating systems which do not support any extensions, a name translation file TRANS.TBL must be used. The TRANS.TBL file is a plain ASCII text file. Each line contains three fields, separated by an arbitrary amount of whitespace:
The file type ("F" for file or "D" for directory);
The ISO 9660 filename (including the usually hidden ";1" for files); and
The extended filename, which may contain spaces.
Most implementations that create TRANS.TBL files put a single space between the file type and ISO 9660 name and some arbitrary number of tabs between the ISO 9660 filename and the extended filename.
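A minimal C sketch of a TRANS.TBL reader, following the three-field layout just described, might look like this. It assumes the file is named TRANS.TBL in the current directory, simply prints the mapping from ISO 9660 names to extended names, and does no recovery for malformed lines.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[1024];
    FILE *f = fopen("TRANS.TBL", "r");

    if (!f)
        return 1;

    while (fgets(line, sizeof line, f)) {
        char type, isoname[256], extname[512];

        /* Field 1: "F" or "D"; field 2: the ISO 9660 name (may end in ";1");
           field 3: the rest of the line, which may contain spaces. */
        if (sscanf(line, " %c %255s %511[^\n]", &type, isoname, extname) == 3)
            printf("%c  %-20s -> %s\n", type, isoname, extname);
    }
    fclose(f);
    return 0;
}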
Native support for using TRANS.TBL still exists in many ISO 9660 implementations, particularly those related to Unix. However, it has long since been superseded by other extensions, and modern utilities that create ISO 9660 images either cannot create TRANS.TBL files at all, or no longer create them unless explicitly requested by the user. Since a TRANS.TBL file has no special identification other than its name, it can also be created separately and included in the directory before filesystem creation.
The ISO 13490 standard is an extension to the ISO 9660 format that adds support for multiple sessions on a disc. Since ISO 9660 is by design a read-only, pre-mastered file system, all the data has to be written in one go or "session" to the medium. Once written, there is no provision for altering the stored content. ISO 13490 was created to allow adding more files to a writeable disc such as CD-R in multiple sessions.
JIS X 0606:1998, also known as ISO 9660:1999, is a Japanese Industrial Standard draft created by the Japanese National Body (JTC1 N4222) in order to make some improvements and remove some limitations from the original ISO 9660 standard. This draft was submitted in 1998, but it has not been ratified as an ISO standard yet. Some of its changes include the removal of some restrictions imposed by the original standard by extending the maximum file name length to 207 characters, removing the eight-level maximum directory nesting limit, and removing the special meaning of the dot character in filenames. Some operating systems allow these relaxations as well when reading optical discs. Several disc authoring tools (such as Nero Burning ROM, mkisofs and ImgBurn) support a so-called "ISO 9660:1999" mode (sometimes called "ISO 9660 v2" or "ISO 9660 Level 4" mode) that removes restrictions following the guidelines in the ISO 9660:1999 draft.
The ISO 13346/ECMA-167 standard was designed in conjunction with the ISO 13490 standard. This new format addresses most of the shortcomings of ISO 9660, and a subset of it evolved into the Universal Disk Format (UDF), which was adopted for DVDs. The volume descriptor table retains the ISO 9660 layout, but the identifier has been updated.
Disc images
Optical disc images are a common way to electronically transfer the contents of CD-ROMs. They often have the filename extension .iso (.iso9660 is less common, but also in use) and are commonly referred to as "ISOs".
Platforms
Most operating systems support reading of ISO 9660 formatted discs, and most new versions support the extensions such as Rock Ridge and Joliet. Operating systems that do not support the extensions usually show the basic (non-extended) features of a plain ISO 9660 disc.
Operating systems that support ISO 9660 and its extensions include the following:
DOS: access with extensions, such as MSCDEX.EXE (Microsoft CDROM Extension), NWCDEX.EXE or CORELCDX.EXE
Microsoft Windows 95, Windows 98, Windows ME: can read ISO 9660 Level 1, 2, 3, and Joliet
Microsoft Windows NT 4.0, Windows 2000, Windows XP, and newer Windows versions can read ISO 9660 Level 1, 2, 3, Joliet, and ISO 9660:1999. Windows 7 may also mistake UDF format for CDFS; for more information, see UDF.
Linux and BSD: ISO 9660 Level 1, 2, 3, Joliet, Rock Ridge, and ISO 9660:1999
Apple GS/OS: ISO Level 1 and 2 support via the HS.FST File System Translator.
Classic Mac OS 7 to 9: ISO Level 1, 2. Optional free software supports Rock Ridge and Joliet (including ISO Level 3): Joke Ridge and Joliet Volume Access.
macOS (all versions): ISO Level 1, 2, Joliet and Rock Ridge Extensions. Level 3 is not currently supported, although users have been able to mount these discs
AmigaOS supports the "AS" extensions (which preserve the Amiga protection bits and file comments)
QNX
ULTRIX
OS/2, eComStation and ArcaOS
BeOS, Zeta and Haiku
OpenVMS supports only ISO 9660 Interchange levels 1–3, with no extensions
RISC OS support for optical media written on a PC is patchy. Most CD-Rs/RWs work perfectly; however, DVD±Rs/RWs/RAMs are entirely hit and miss under RISC OS 4.02, RISC OS 4.39 and RISC OS 6.20
See also
Comparison of disc image software
Disk image emulator
List of International Organization for Standardization standards
Hybrid CD
References
Further reading
External links
This is the ECMA release of the ISO 9660:1988 standard, available as a free download
ISOLINUX source code (see isolinux.asm line 294 onward)
(see int 13h in interrupt.b, esp. functions 4a to 4d)
Discusses shortcomings of the standard
US Patent 5758352 - Common name space for long and short filenames
Amiga APIs
Apple Inc. file systems
Compact disc
Disk file systems
Ecma standards
09660
Optical computer storage
Optical disc authoring
Windows disk file systems
Uniprocessor system
A uniprocessor system is a computer system with a single central processing unit that executes all computing tasks. As more and more modern software is able to make use of multiprocessing architectures, such as SMP and MPP, the term uniprocessor is used to distinguish the class of computers where all processing tasks share a single CPU; since the 2010s, most desktop computers have shipped with multiprocessing architectures. A uniprocessor system is thus built around a single computing unit, and all operations (additions, multiplications, etc.) are performed sequentially on that unit.
Further reading
Parallel computing
What (ITS utility)
What (typed as :what at the prompt) was a small information utility available in the Incompatible Timesharing System. It could provide information about incoming email, the bus schedule on the MIT campus, or executable source files, or answer the user in a humorous manner.
Implementation
What was written in the MIDAS assembly language. It can still be used on some of the ITS instances maintained across the web. The last traceable edit of the source code was by Ken L. Harrenstien on 16 May 1988.
Usage examples
Without arguments, what would print information about inbox status:
*:what
You don't seem to have any recent messages.
:KILL E$J
*
With the argument bus it would print out information about the next few buses leaving from the MIT campus:
*:what bus
It is now 12:50
Bus 83 leaves Central Sq 13:10, 13:30, 13:45, 14:00, ...
Bus 83 leaves Ringworld/Alewife 13:00, 13:20, 13:40, 13:55, ...
:KILL E$J
*
Asked about the source for NAME, what responded with paths to source files corresponding to NAME:
*:what source for what
UP:SYSENG;
0 WHAT 201 3 +487 11/30/1987 17:33:23 (5/2/2015) KLH
0 WHAT 204 3 +493 5/16/1988 19:13:03 (5/4/2015) KLH
*
Not knowing the answer, it would often resort to humor:
*:what is life
You tell me.
:KILL E$J
*
Finally, what displayed some amount of introspection:
*:what is this
It's an all purpose utility program, dummy!
:KILL E$J
*:what are you
I am an omniscient utility program, idiot!
:KILL E$J
*
See also
Incompatible Timesharing System
Dynamic debugging technique
Maclisp
External links
ITS System Documentation
UP: Public ITS system operated by the Update Computer Club at Uppsala University
Website for ITS system hobbyists with much information and documentation
Massachusetts Institute of Technology
Time-sharing operating systems
Utility software
Adaptive Domain Environment for Operating Systems
Adeos (Adaptive Domain Environment for Operating Systems) is a nanokernel hardware abstraction layer (HAL), or hypervisor, that operates between computer hardware and the operating system (OS) that runs on it. It is distinct from other nanokernels in that it is not only a low level layer for an outer kernel. Instead, it is intended to run several kernels together, which makes it similar to full virtualization technologies. It is free and open-source software released under a GNU General Public License (GPL).
Adeos provides a flexible environment for sharing hardware resources among multiple operating systems, or among multiple instances of one OS, thereby enabling multiple prioritized domains to exist simultaneously on the same hardware.
Adeos has been successfully inserted beneath the Linux kernel, opening a range of possibilities, such as symmetric multiprocessing (SMP) clustering, more efficient virtualization, patchless kernel debugging, and real-time computing (RT) systems for Linux.
Unusually among HALs, Adeos can be loaded as a Linux loadable kernel module to allow another OS to run along with it. Adeos was developed in the context of real-time application interface (RTAI) to modularize it and separate the HAL from the real-time kernel.
Prior work
Two categories of methods exist to enable multiple operating systems to run on the same system. The first is simulation-based and provides a virtual environment for which to run additional operating systems. The second suggests the use of a nanokernel layer to enable hardware sharing.
In the simulation category, there are tools such as Xen, VMware, Plex86, VirtualPC and SimOS. There is also the Kernel-based Virtual Machine (KVM), which is more similar to Adeos, but is not real-time and requires specific virtualization hardware support. These methods are aimed at users who desire to run applications foreign to their base OS; they provide the user no control over the base OS. Simulation was never meant to be used in a production environment. In the nanokernel category there are tools such as SPACE, the cache kernel and Exokernel. All of these suggest building miniature hardware management facilities which can thereafter be used to build production operating systems. The problem of this approach is that it does not address the issue of extant operating systems and their user base.
Adeos addresses the requirements of both categories of application by providing a simple layer that is inserted under an unmodified running OS and thereafter provides the required primitives and mechanisms to allow multiple OSes to share the same hardware environment. Adeos does not attempt to impose any restrictions on the hardware's use by the different OSes beyond what is necessary for Adeos' own operation. Instead, such restriction is to be imposed by the system administrator or the system programmer. This exposes the system to mismanagement, but the idea behind Adeos is to give back control to system administrators and programmers.
Architecture
Adeos implements a queue of signals. Each time that a peripheral sends a signal, the different operating systems that are running in the machine are awakened, in turn, and must decide if they will accept (handle), ignore, discard, or terminate the signal. Signals not handled (or discarded) by an OS are passed to the next OS in the chain. Signals that are terminated are not propagated to latter stages.
As Adeos has to ensure equal and trusted access to the hardware, it takes control of some hardware commands issued by the different OSes; but, it also must not intrude too much on the different OSes’ normal behavior. Each OS is encompassed in a domain over which it has total control. This domain may include a private address space and software abstractions such as process, virtual memory, file-systems, etc. Adeos does not attempt to impose any policy of use of the hardware except as needed for its operation. The task of determining policy is left to the system architect.
Adeos interrupt pipe
Adeos uses an interrupt pipe to propagate interrupts through the different domains running on the hardware. As some domains may prefer to be the first to receive hardware interrupts, Adeos provides a mechanism for domains to have access to priority interrupt dispatching. In effect, Adeos places the requesting domain's interrupt handler and accompanying tables, which may be called as an interrupt mechanism in SPACE terminology, at the first stages of the interrupt pipeline. Domains can control whether they accept, ignore, discard or terminate interrupts. Each of these has a different effect and is controlled differently.
Accepting interrupts is the normal state of a domain's interrupt mechanism. When Adeos encounters a domain that is accepting interrupts it summons its interrupt handler after having set the required CPU environment and stack content for the interrupt handler to operate correctly. The OS then may decide to operate any number of operations including task scheduling. Once the OS is done, the pipeline proceeds as planned by propagating interrupts down the pipeline.
When an OS in a domain does not want to be interrupted, for any reason, it asks Adeos to stall the stage its domain occupies in the interrupt pipeline. By doing so, interrupts go no further in the pipeline and are stalled at the stage occupied by the domain. When the OS is again willing to be interrupted, it asks Adeos to unstall the stage, and thereafter all the interrupts that were stalled at the corresponding stage follow their route to the other stages of the pipeline.
When a domain is discarding interrupts, the interrupt passes over the stage occupied by the domain and continues onto the other stages. When a domain terminates interrupts then the interrupts that are terminated by it are not propagated to latter stages. Interrupt discarding and termination is only possible when the OS in a domain recognizes Adeos.
Since some OSes do not recognize Adeos, it is possible to create a domain which only serves as a handler for that OS. Hence, in the interrupt pipeline, this stage always precedes the handled domain's stage and may take actions for that domain with Adeos in order to provide the handled domain's OS with the illusion of normal system operation.
Once Adeos is done traversing the pipeline it checks whether all domains are dormant. If that is the case, it then calls on its idle task. This task remains active until the occurrence of the next interrupt. If not all domains are dormant, it restores the processor to the state it had prior to the interrupt entering the pipeline and execution continues where it left off. Since Adeos is very much hardware dependent, many details are specific to one of its particular implementations.
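The propagation rules described above can be summarised with a short C sketch. This is only a conceptual model of the interrupt pipeline, not the real Adeos API; the type and function names are invented purely for illustration.

#define MAX_IRQ 16

enum action { ACCEPT, STALL, DISCARD, TERMINATE };

struct stage {
    enum action policy;           /* what this domain currently does with interrupts */
    void (*handler)(int irq);     /* invoked when the domain accepts or terminates an IRQ */
    int pending[MAX_IRQ];         /* IRQs held here while the stage is stalled */
};

/* Walk the pipeline from the highest-priority domain to the lowest. */
void pipeline_propagate(struct stage *stages, int nstages, int irq)
{
    for (int i = 0; i < nstages; i++) {
        struct stage *s = &stages[i];
        switch (s->policy) {
        case ACCEPT:              /* run the domain's handler, then pass the IRQ on */
            s->handler(irq);
            break;
        case STALL:               /* hold the IRQ at this stage until it is unstalled */
            s->pending[irq]++;
            return;
        case DISCARD:             /* skip this stage; the IRQ continues down the pipe */
            break;
        case TERMINATE:           /* consume the IRQ; later stages never see it */
            s->handler(irq);
            return;
        }
    }
    /* If every domain were dormant, the nanokernel would run its idle task here. */
}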
Applicability
General-purpose operating system resource sharing
General-purpose operating system resource sharing is one of the main objectives of Adeos, to provide an environment which enables multiple general purpose OSes to share the same hardware.
Operating system development
Developing OSes is usually a complicated process which sometimes requires extra hardware such as in-circuit emulators to probe the hardware on which an OS is running. Using Adeos, OS development is eased since any undesired behavior may be controlled by an appropriate domain handler. It can also provide a default domain handler for OS development under which developers may have controlled direct access to the hardware they are meant to control. As Adeos is itself a kernel-module, such development domain handlers may be developed independently from Adeos.
Patchless kernel debuggers and probers
Adeos provides a way for kernel debuggers and probers to take control of Linux without modifying Linux. As with other Adeos domains, these facilities would load as normal kernel modules and would thereafter request a ring-zero domain from Adeos. Once that is done, they may request priority interrupt dispatching in the interrupt pipeline. Hence, before Linux gets to handle any interrupts, they will be able to intercept those interrupts and carry out the requested debugging tasks. This can also be extended to performance profilers and other such development tools.
See also
Xenomai
Nanokernel
Hardware abstraction layer
HAL (software)
References
External links
Adeos Workspace
Nanokernels
Virtualization software
AppWare
AppWare was a rapid application development system for Microsoft Windows and the classic Mac OS based on a simple graphical programming language. Applications were constructed by connecting together icons representing objects in the program and their commands. The resulting logic could be compiled on either platform and typically only required minor changes to the GUI layout to complete the port.
Originally introduced in 1989 as Serius89 by Serius Corporation, and eventually becoming Serius Developer, it is best known as AppWare, the name under which it was owned and marketed by Novell starting in 1993. Novell sold the product off in 1996; it was renamed MicroBrew, and development eventually ceased during 1997.
History
Serius
Joe Firmage started development of what would become AppWare circa June 1987, originally in order to help develop an accounting system for his parents' greeting card company. In 1989, when he was 18 years old, he and his brother Ed formed Serius Corp. to market the product, now known as Serius89. The company was based in Salt Lake City, Utah.
The 1.0 version shipped for the Mac in August 1989, with two versions, Serius Programmer that allowed the creation of new applications using the existing object library, and Serius Developer that allowed new objects to be written in external computer languages. This release was followed by 1.1 in October, which added a new Database object, and the 1.2 update in December. Serius was one of several visual programming tools that were available on the Mac in the late 1980s, such as TGS Systems' Prograph. The Serius89 Programmer product sold for $295 and the Serius89 Developer for $495. A review of Serius89 1.2 by MacWEEK concluded that it was "a novel, fascinating approach to 'desktop programming' and, despite its shortcomings, we believe it's an investment that will pay dividends in the future."
A major update followed in April 1990, the 2.0 version. This included a greatly increased set of objects, including support for the Communications ToolBox and an associated Terminal object. This release also included a suite of multimedia objects that allowed for the creation of interactive kiosk apps and similar. A 2.1 release followed in October, and an enormous performance upgrade in 2.2 in October 1991. By the end of 1991, Serius Corp. had attracted several outside investors and had 21 employees.
In January 1992, version 3.0 was released, including significant changes. The largest change followed in November 1992, however, with the introduction of Windows support and a renaming to Serius Workshop and Serius Developer Pro (mapping to Programmer and Developer from previous versions).
AppWare
Novell had invested in Serius on a number of occasions. In June 1993, during Ray Noorda's period of intense empire building, Novell purchased Serius outright. The company also purchased Software Transformations Inc., who made a cross-platform object code library that could be used to port conventional programs to a number of platforms, including the Mac, Windows, SunOS, UnixWare, HP-UX, with plans to add many more.
Together, Serius and Software Transformations were bundled under the new name AppWare, although they were unrelated products. Immediately after the acquisitions, AppWare was positioned as one of the "three pillars" of Novell's long-term strategy, the others being NetWare and UnixWare. The plan, according to statements from Novell, was to make it easier for 3rd party developers to write network-aware programs.
Under the new AppWare branding, Serius became the AppWare Visual AppBuilder, or VAB for short. The name of the internal Objects also changed, becoming "AppWare Loadable Modules" (ALMs), in keeping with the naming for their NetWare Loadable Modules (NLMs) under their core Novell NetWare product. The newly renamed version was released as a 1.0 version in October 1993. Software Transformations' code base became the AppWare Foundation.
It was not long before the AppWare plans started to fall apart. By early 1994, Novell's support for AppWare Foundation was waning, and in September 1994 they announced they would be selling the product to a third party. They did state that development of Visual AppBuilder would continue, and that a Unix port would follow. They also continued to release a number of new ALMs. The Unix versions never appeared; instead, the Mac and Windows versions were renamed AppWare and updated in a 1.1 release in 1994.
MicroBrew
Noorda was forced from Novell in April 1994, and many of the companies and products he had purchased were subsequently sold off. Joe Firmage became disillusioned with Novell in mid-1995, following its decision to sell UnixWare and abandon the "SuperNOS" project that would have combined UnixWare and Netware, and left Novell later that year. Novell then publicly stated in November 1995 that it was looking for a buyer for AppWare.
In March 1996, it was announced (based on an agreement that had been signed the month before) that Novell had sold all rights to the AppWare technology to a new company called Network Multimedia Inc. (NMI), which was headed by Ed Firmage, who had been director of AppWare marketing at Novell. Ed Firmage said that the new firm had plans to enhance and expand the capabilities of AppWare on several different platforms and in combination with several object and document technologies. (Joe Firmage did not move to Network Multimedia, instead co-founding USWeb after leaving Novell.)
Then in July 1996, Network Multimedia renamed AppWare as MicroBrew and relaunched it as a visual development tool for Internet applications. Network Multimedia was still making announcements regarding MicroBrew in February 1997.
The company continued development for a time, but folded in 1997.
Users of the system attempted to negotiate a release of the source code under some sort of open source license in early 2000, and started The Serious Project on SourceForge to coordinate development. However, this release does not appear to have taken place; the project page contains no code.
Description
Applications in AppWare were constructed by dropping icons representing pre-rolled objects onto a worksheet, and then connecting them together to represent message flows between them. Communication was mediated by a protocol known as the Object Interaction Protocol. Some of the "objects" represented basic logic statements, while others represented GUI widgets such as text editors. The overall logic for any particular object, say a text editor in a window, was constructed as a series of chains of these object connections, fired up in response to an event. At a high level the system is similar in concept to HyperCard or Visual Basic, in that the program's logic is strongly associated with the object that sends some initial event.
AppWare built true "double clickable" applications that ran natively on either Windows or the Mac. Unlike most systems of the sort, like HyperCard, the applications did not end up looking generic, and generally behaved as first-class citizens of the host system. However the applications were also similar to HyperCard in that they generally did not support multi-window operation or the creation of new documents. AppWare applications consisted of a fixed number of forms and windows, a side effect of its lack of a NEW-type operator for creating new objects at runtime.
References
Citations
Bibliography
Further reading
Joe Firmage, "Visual AppBuilder Architectural Overview", Novell AppNotes, May 1994
Mark Gibbs, "Novell's AppWare shows early promise", Network World, 27 June 1994, pp. 55–57
Ronald Nutter, "AppWare decodes program development", Network World, 27 February 1995, pp. 51–51
Visual programming languages
Novell software
1989 software
PUPS P3
PUPS/P3 is an organic computing environment for Linux that provides support for the implementation of low-level persistent software agents.
Introduction
PUPS/P3 is a cluster computing environment derived from the MSPS operating environment implemented on the BBC Microcomputer.
The PUPS P3 environment has been used in the infrastructure of a number of scientific computing projects, including the Daisy automated species identification system and a number of computational neuroscience projects.
Features of the P3 process
PUPS/P3 processes are homeostatic agents. These agents are able to save their state and migrate between machines running compatible Linux kernels (via CRIU). The PUPS/P3 API also gives them significant access to the state of their environment: like biological organisms they are animate. That is, they are able to sense changes in their environment and respond appropriately. For example, a P3 process may elect to save its state or migrate if some resource, for example processor cycles, becomes scarce. Effectively, this is the machine equivalent of an animal electing to hibernate or migrate when its food resources become scarce. PUPS/P3 processes can also share data resources via a low-level persistent object, the shared heap. The semantics of using this are similar to those of the free()/malloc() API supplied by standard C libraries.
Computations can be jointly executed by a cluster of co-operating P3 processes. This cluster is in many ways analogous to a multicellular organism: like cells within an organism, individual P3 processes can specialise. For example, in the case of the Daisy pattern recognition system, the cluster consists of (ipm) processes which pre-process pattern data, (floret) processes which run the PSOM neural nets used to classify those patterns, and (vhtml) processes which communicate the identity of patterns Daisy has discovered to the user. In addition, the Daisy cluster also has specialist (maggot and kepher) processes to clear and recycle file and memory space and (lyosome) processes which destroy and replace other processes within the cluster which have become corrupted and therefore non-functional.
In conjunction with virtualisation systems, for example the Oracle VirtualBox system, it is possible to use PUPS/P3 to build homeostatic virtual (Linux) machines which can carry computational payloads while living in a dynamic cloud environment. The latest release of PUPS/P3 also supports container-based operating-system-level virtualization (via Docker) and checkpointing with subsequent migration and/or restoration via CRIU.
P3 process network
The P3 system facilitates dynamic asynchronous peer to peer communication between processes and also dynamic asynchronous communication between processes and the user. In the example process network shown, several of the communications methods implemented in PUPS/P3 are illustrated. These include:
User to PSRP server via PSRP client (using PSRP protocol). This communication mode establishes an asynchronous pseudotty connection between the psrp client (and hence the user) and the PSRP server process.
Peer to peer (between PSRP servers) via SIC channel. A PSRP server wishing to communicate directly with another server slaves an instance of the psrp client via a Slaved Interaction Client Channel (SIC). It then instructs this slaved psrp client to open a PSRP channel to the peer it wishes to talk to.
Peer to peer (between PSRP servers) via sensitive file. In this mode a PSRP server sends data to another server via a file. To prevent an arbitrary server from reading the file, it is tagged with a key which has a matching lock on the recipient server. This lock-and-key system was inspired by enzyme-substrate and biological signaling systems; a conceptual sketch of the idea is shown below.
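The following C sketch illustrates the lock-and-key idea in isolation, using only standard I/O; it is not the PUPS/P3 API, and the file name, key and payload strings are invented for the example. The sending server prefixes the payload with a key line, and a receiving server reads the payload only if the key fits the lock it holds.

#include <stdio.h>
#include <string.h>

/* Sender: tag the file with a key line before the payload. */
static int send_tagged(const char *path, const char *key, const char *payload)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%s\n%s\n", key, payload);
    return fclose(f);
}

/* Receiver: read the payload only if the key fits this server's lock. */
static int recv_tagged(const char *path, const char *lock, char *buf, int len)
{
    char key[128];
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(key, sizeof key, f) ||
        strncmp(key, lock, strlen(lock)) != 0 ||   /* key does not match the lock */
        !fgets(buf, len, f)) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return 0;
}

int main(void)
{
    char msg[256];
    send_tagged("signal.dat", "floret-key", "classify pattern 42");
    if (recv_tagged("signal.dat", "floret-key", msg, sizeof msg) == 0)
        printf("accepted: %s", msg);
    return 0;
}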
References
External links
https://www.tumblingdice.co.uk/pupsp3/ PUPS/P3 site at Tumbling Dice
Agent-based programming languages
Mount (Unix)
In computing, mount is a command in various operating systems. Before a user can access a file on a Unix-like machine, the file system on the device which contains the file needs to be mounted with the mount command. Frequently, mount is used for SD cards, USB storage, DVDs and other removable storage devices. The command is also available in the EFI shell.
Overview
The mount command instructs the operating system that a file system is ready to use, and associates it with a particular point in the overall file system hierarchy (its mount point) and sets options relating to its access. Mounting makes file systems, files, directories, devices and special files available for use and available to the user.
Its counterpart umount instructs the operating system that the file system should be disassociated from its mount point, making it no longer accessible; the device may then be removed from the computer. It is important to unmount a device before removing it, since changes to files may have been only partially written and are completed as part of the unmount.
The mount and umount commands require root user privilege to effect changes. Alternately, specific privileges to perform the corresponding action may have been previously granted by the root user. A file system can also be defined as user-mountable in the /etc/fstab file by the root user, as in the example below.
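For instance, the following /etc/fstab entry (the device name and mount point are only examples) uses the user option so that an ordinary user can mount and unmount the device without further privileges:

/dev/sdb1   /media/usb   vfat   noauto,user   0   0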
Examples
To display all mounted partitions:
$ mount
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
/tmp on /var/tmp type none (rw,noexec,nosuid,bind)
10.4.0.4:/srv/export/setup_server on /nfs/setup_server type nfs (ro,addr=10.4.0.4)
To mount the second partition of a hard disk drive to the existing directory /media/PHOTOS (mount point):
$ mount /dev/hda2 /media/PHOTOS
To unmount by referring to the physical disk partition:
$ umount /dev/hda2
To unmount by referring to the mount point:
$ umount /media/PHOTOS
To remount a partition with specific options:
$ mount -o remount,rw /dev/hda2
Derivatives and wrappers
pmount is a wrapper around the standard mount program which permits normal users to mount removable devices without a matching /etc/fstab entry. This provides a robust basis for automounting frameworks like GNOME's Utopia project and keeps the usage of root to a minimum.
This package also contains a wrapper pmount-hal, which reads information such as device labels and mount options from HAL and passes it to pmount.
The gnome-mount package contains programs for mounting, unmounting and ejecting storage devices. The goal for gnome-mount is for GNOME software such as gnome-volume-manager and GNOME-VFS to use this instead of invoking mount/umount/eject/pmount or direct HAL invoking methods. GNOME previously used pmount. Note, gnome-mount is not intended for direct use by users.
All the gnome-mount programs utilize HAL methods and as such run unprivileged. The rationale for gnome-mount is to have a centralized place (in GConf) where settings such as mount options and mount locations are maintained.
As with other Unix commands, the available options are specific to the particular version of mount and are precisely detailed in its man page.
In addition to the system call mount, the function mount_root() mounts the first, or root filesystem. In this context mount is called by the system call setup.
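On Linux, for example, the mount command is ultimately a thin wrapper around the mount(2) system call. A minimal C sketch is shown below; the device, directory and file system type are only examples, and root privileges are required to run it.

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Roughly equivalent to: mount -t ext4 -o ro /dev/sdb1 /mnt/photos */
    if (mount("/dev/sdb1", "/mnt/photos", "ext4", MS_RDONLY, "") != 0) {
        perror("mount");
        return 1;
    }

    /* ... work with files under /mnt/photos ... */

    /* Roughly equivalent to: umount /mnt/photos */
    if (umount("/mnt/photos") != 0) {
        perror("umount");
        return 1;
    }
    return 0;
}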
See also
Mount (computing)
mtab
util-linux
References
External links
Unix file system-related software
SX-Window
SX-Window is a graphic user interface (GUI) operating system for the Sharp X68000 series of computers, which were popular in Japan. It was first released in 1989 and had its last update in 1993. It runs on top of the Human68k disk operating system, similarly to how Windows 3.1 runs on top of MS-DOS.
History
SX-Window was introduced for X68000 in 1989, and came preinstalled on the X68000 EXPERT model. It was developed by Hudson. The final release was 3.1 in 1993. In 2000, Sharp released the system software for the X68000 into the public domain, including SX-Window.
Technical details
The look and feel of the GUI is like that of the NeXTSTEP operating system, and its API is similar to the Macintosh Toolbox. It uses non-preemptive multitasking with the event-driven paradigm. It has a garbage-collection system that works without an MMU in the MPU, but it was difficult to program for because all pointers derived from handles become invalid once any API is called. The X68000 was very powerful for game software, but this GUI could be slow, as no hardware acceleration card was supported. Only a few applications and games were developed for this system.
References
Windowing systems
One Per Desk
The One Per Desk, or OPD, was an innovative hybrid personal computer/telecommunications terminal based on the hardware of the Sinclair QL. The One Per Desk was built by International Computers Limited (ICL) and launched in the UK in 1984. It was the result of a collaborative project between ICL, Sinclair Research and British Telecom begun in 1983, originally intended to incorporate Sinclair's flat-screen CRT technology.
Rebadged versions of the OPD were sold in the United Kingdom as the Merlin Tonto and as the Computerphone by Telecom Australia and the New Zealand Post Office. The initial orders placed for the One Per Desk were worth £4.5 million (for 1500 units) from British Telecom and £8 million from Telecom Australia, with ICL focusing on telecommunications providers as the means to reach small- and medium-sized businesses.
Hardware
From the QL, the OPD borrowed the 68008 CPU, ZX8301/8302 ULAs, 128 KB of RAM and dual Microdrives (re-engineered by ICL for greater reliability) but not the 8049 Intelligent Peripheral Controller. Unique to the OPD was a "telephony module" incorporating an Intel 8051 microcontroller (which also controlled the keyboard), two PSTN lines and a V.21/V.23 modem, plus a built-in telephone handset and a TI TMS5220 speech synthesiser (for automatic answering of incoming calls).
The OPD was supplied with either a 9-inch monochrome (white) monitor, priced at £1,195 plus VAT, or with a 14-inch colour monitor, priced at £1,625 plus VAT. Both monitors also housed the power supply for the OPD itself.
Later, 3.5" floppy disk drives were also available from third-party vendors.
Software
The system firmware (BFS or "Basic Functional Software") was unrelated to the QL's Qdos operating system, although a subset of SuperBASIC was provided on Microdrive cartridge. The BFS provided application-switching, voice/data call management, call answering, phone number directories, viewdata terminal emulation and a simple calculator.
The Psion applications suite bundled with the QL was also ported to the OPD as Xchange and was available as an optional ROM pack, priced at £130.
Other optional application software available on ROM included various terminal emulators such as Satellite Computing's ICL7561 emulator, plus their Action Diary and Presentation Software, address book, and inter-OPD communications utilities.
An ICL supplied application was used to synchronise a national bingo game across hundreds of bingo halls in the UK. The integral V.23 dialup modem was used to provide remote communications to the central server.
Several UK ICL Mainframe (Series 39) customers, in Local Government and Ministry of Defence sectors, used statistics applications on OPD systems to view graphical representations of mainframe reports. Once again, the integral V.23 modem was used to download from the mainframe.
Merlin Tonto
British Telecom Business Systems sold the OPD as the Merlin M1800 Tonto. BT intended the Tonto to be a centralised desktop information system able to access online services, mainframes and other similar systems through the BT telephone network. The Tonto retailed at £1,500 at launch. OPD peripherals and software ROM cartridges were also badged under the Merlin brand. BT withdrew support for the Tonto in February 1993.
The name Tonto was derived from "The Outstanding New Telecoms Opportunity".
A data communications adapter was introduced for the Tonto as a plug-in option or fitted on new units, providing a standard RS423 interface for use with mainframe computers or data communications networks, permitting the use of the Tonto as a VT100 terminal. A separate VT Link product provided support for VT52 and VT100 emulation for mainframe access over dial-up connections.
Work on the Tonto influenced the design of a follow-on product by BT's Communications Terminal Products Group and Rathdown Industries known as the QWERTYphone, this aiming to provide the telephony features of the Tonto at "a much lower cost and in a more user-friendly manner".
ComputerPhone
Aimed at the "office automation" market and seeking to integrate computing and telecommunications technology, combining support for both voice and data, the One Per Desk product was perceived as the first of its kind designed to meet the needs of managers, who would be relying on old-fashioned paper-based practices to perform their "complex and heavy workloads" involving a variety of ongoing activities including meetings, telephone calls, research, administration and numerous other tasks. Such potential users of information technology had apparently been ignored by office automation efforts, and personal computers were perceived as "exceeding most managers' requirements". The ComputerPhone attempted to sit between more specialised telephony devices and more advanced workstations, being marketed as an "executive" workstation in Australia, somewhat more towards middle management in New Zealand. Advertisements emphasised the telephony, office suite, desktop calculator, videotex, terminal and electronic messaging capabilities.
MegaOPD
An enhanced version of the OPD was produced in small numbers for the United States market. This had a 68008FN CPU, 256 KB of RAM as standard, an RS-232 port and enhanced firmware.
The telephone answering function had a female voice, with a slight New Jersey accent.
Legacy
ICL were the preferred supplier for UK local government, and OPDs found their way onto desks of council officers. Due to the cost, they tended to be issued only to the most senior, who were often elderly, had no interest in computers, and had secretaries to handle their administrative work, so many devices were simply used as telephones.
References
External links
Description of Merlin Tonto from BT Engineering
ICL One Per Desk page at rwapsoftware.co.uk including a floppy disk project
Computer-related introductions in 1984
Personal computers
Sinclair Research
ICL workstations
BT Group
68k architecture
Wired for Management
Wired for Management (WfM) was a primarily hardware-based system allowing a newly built computer without any software to be managed by a master computer, which could access the hard disk of the new PC to load the installation program. It could also be used to update software and monitor system status remotely. Intel developed the system in the 1990s; it is now considered obsolete.
WfM included the Preboot Execution Environment (PXE) and Wake-on-LAN (WOL) standards.
WfM has been replaced by the Intelligent Platform Management Interface standard for servers and Intel Active Management Technology for PCs.
See also
Provisioning (telecommunications)
References
Networking hardware
System administration
Smolt (Linux)
Smolt was a computer program used to gather hardware information from computers running Linux and submit it to a central server for statistical purposes, quality assurance and support. It was initiated by Fedora with the release of Fedora 7, and soon afterwards became a combined effort of various Linux projects. Information collection was voluntary (opt-in) and anonymous. Smolt did not run automatically; it requested permission before uploading new data to the Smolt server. On October 10, 2012, it was announced that Smolt would be discontinued on November 1, 2013. The shutdown has since taken effect, and the Smolt webpage is no longer available.
The project is superseded by Hardware probe.
General
Before Smolt there was no widely accepted system for assembling Linux statistics in one place. Smolt was neither the first nor the only attempt, but it was the first accepted by major Linux distributions.
Collecting this kind of data across distributions can:
aid developers in detecting hardware that is poorly supported
focus efforts on popular hardware
provide workaround and fix tips
help users to choose the best distribution for their hardware
convince hardware vendors to support Linux
Use
Smolt was included in:
Fedora
openSUSE, releases from 11.1 to 12.2;
RHEL and CentOS see https://web.archive.org/web/20090109010205/http://download.fedora.redhat.com/pub/epel/ (retired link)
Gentoo see https://web.archive.org/web/20090207100254/http://packages.gentoo.org/package/app-admin/smolt
MythTV see http://smolt.mythtv.org/
Smolt server
The Smolt server stored all collected data.
See also
Linux Counter
References
External links
Smolt wiki
Smolt retirement
openSUSE about Smolt
Linux
Discontinued software
Internet properties disestablished in 2013
Classic Mac OS memory management
Historically, the classic Mac OS used a form of memory management that has fallen out of favor in modern systems. Criticism of this approach was one of the key areas addressed by the change to Mac OS X.
The original problem for the engineers of the Macintosh was how to make optimum use of the 128 KB of RAM with which the machine was equipped, on Motorola 68000-based computer hardware that did not support virtual memory. Since at that time the machine could only run one application program at a time, and there was no fixed secondary storage, the engineers implemented a simple scheme which worked well with those particular constraints. That design choice did not scale well with the development of the machine, creating various difficulties for both programmers and users.
Fragmentation
The primary concern of the original engineers appears to have been fragmentation – that is, the repeated allocation and deallocation of memory through pointers leading to many small isolated areas of memory which cannot be used because they are too small, even though the total free memory may be sufficient to satisfy a particular request for memory. To solve this, Apple engineers used the concept of a relocatable handle, a reference to memory which allowed the actual data referred to be moved without invalidating the handle. Apple's scheme was simple – a handle was simply a pointer into a (non-relocatable) table of further pointers, which in turn pointed to the data. If a memory request required compaction of memory, this was done and the table, called the master pointer block, was updated. The machine itself implemented two areas in memory available for this scheme – the system heap (used for the OS), and the application heap. As long as only one application at a time was run, the system worked well. Since the entire application heap was dissolved when the application quit, fragmentation was minimized.
The memory management system had weaknesses; the system heap was not protected from errant applications, as would have been possible if the system architecture had supported memory protection, and this was frequently the cause of system problems and crashes. In addition, the handle-based approach also opened up a source of programming errors, where pointers to data within such relocatable blocks could not be guaranteed to remain valid across calls that might cause memory to move. This was a real problem for almost every system API that existed. Because of the transparency of system-owned data structures at the time, the APIs could do little to solve this. Thus the onus was on the programmer not to create such pointers, or at least manage them very carefully by dereferencing all handles after every such API call. Since many programmers were not generally familiar with this approach, early Mac programs suffered frequently from faults arising from this.
Palm OS and 16-bit Windows use a similar scheme for memory management, but the Palm and Windows versions make programmer error more difficult. For instance, in Mac OS, to convert a handle to a pointer, a program just de-references the handle directly, but if the handle is not locked, the pointer can become invalid quickly. Calls to lock and unlock handles are not balanced; ten calls to HLock are undone by a single call to HUnlock. In Palm OS and Windows, handles are an opaque type and must be de-referenced with MemHandleLock on Palm OS or GlobalLock on Windows. When a Palm or Windows application is finished with a handle, it calls MemHandleUnlock or GlobalUnlock. Palm OS and Windows keep a lock count for blocks; after three calls to MemHandleLock, a block will only become unlocked after three calls to MemHandleUnlock.
Addressing the problem of nested locks and unlocks can be straightforward (although tedious) by employing various methods, but these intrude upon the readability of the associated code block and require awareness and discipline on the part of the coder.
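The discipline described above is easiest to see in code. The sketch below uses the classic Memory Manager calls (NewHandle, HLock, HUnlock, DisposeHandle) to show the pattern programmers had to follow; UseBuffer() is a hypothetical routine, and the code only compiles against the old Toolbox headers.

static void Example(void)
{
    Handle h = NewHandle(1024);   /* relocatable block referenced through a handle */
    if (h == NULL)
        return;

    HLock(h);                     /* pin the block so the dereferenced pointer      */
    char *p = *h;                 /* stays valid across calls that may move memory  */
    UseBuffer(p, 1024);           /* any call that allocates could otherwise compact the heap */
    HUnlock(h);                   /* let the Memory Manager relocate the block again */

    /* Without the HLock/HUnlock pair, p could silently point at stale memory
       after heap compaction. */
    DisposeHandle(h);
}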
Memory leaks and stale references
Awareness and discipline are also necessary to avoid memory "leaks" (failure to deallocate within the scope of the allocation) and to avoid references to stale handles after release (which usually resulted in a hard crash—annoying on a single-tasking system, potentially disastrous if other programs are running).
Switcher
The situation worsened with the advent of Switcher, which was a way for a Mac with 512 KB or more of memory to run multiple applications at once. This was a necessary step forward for users, who found the one-app-at-a-time approach very limiting. Because Apple was now committed to its memory management model, as well as compatibility with existing applications, it was forced to adopt a scheme where each application was allocated its own heap from the available RAM.
The amount of actual RAM allocated to each heap was set by a value coded into the metadata of each application, set by the programmer. Sometimes this value wasn't enough for particular kinds of work, so the value setting had to be exposed to the user to allow them to tweak the heap size to suit their own requirements. While popular among "power users", this exposure of a technical implementation detail was against the grain of the Mac user philosophy. Apart from exposing users to esoteric technicalities, it was inefficient, since an application would be made to grab all of its allotted RAM, even if it left most of it subsequently unused. Another application might be memory starved, but would be unable to utilize the free memory "owned" by another application.
While an application could not beneficially utilize a sister application's heap, it could certainly destroy it, typically by inadvertently writing to a nonsense address. An application accidentally treating a fragment of text or image, or an unassigned location as a pointer could easily overwrite the code or data of other applications or even the OS, leaving "lurkers" even after the program was exited. Such problems could be extremely difficult to analyze and correct.
Switcher evolved into MultiFinder in System 4.2, which became the Process Manager in System 7, and by then the scheme was long entrenched. Apple made some attempts to work around the obvious limitations – temporary memory was one, where an application could "borrow" free RAM that lay outside of its heap for short periods, but this was unpopular with programmers so it largely failed to solve the problems. Apple's System 7 Tune-up addon added a "minimum" memory size and a "preferred" size—if the preferred amount of memory was not available, the program could launch in the minimum space, possibly with reduced functionality. This was incorporated into the standard OS starting with System 7.1, but still did not address the root problem.
Virtual memory schemes, which made more memory available by paging unused portions of memory to disk, were made available by third-party utilities like Connectix Virtual, and then by Apple in System 7. This increased Macintosh memory capacity at a performance cost, but did not add protected memory or prevent the memory manager's heap compaction that would invalidate some pointers.
32-bit clean
Originally the Macintosh had 128 KB of RAM, with a limit of 512 KB. This was increased to 4 MB upon the introduction of the Macintosh Plus. These Macintosh computers used the 68000 CPU, a 32-bit processor, but only had 24 physical address lines. The 24 lines allowed the processor to address up to 16 MB of memory (2^24 bytes), which was seen as a sufficient amount at the time. The RAM limit in the Macintosh design was 4 MB of RAM and 4 MB of ROM, because of the structure of the memory map. This was fixed by changing the memory map with the Macintosh II and the Macintosh Portable, allowing up to 8 MB of RAM.
Because memory was a scarce resource, the authors of the Mac OS decided to take advantage of the unused byte in each address. The original Memory Manager (up until the advent of System 7) placed flags in the high 8 bits of each 32-bit pointer and handle. Each address contained flags such as "locked", "purgeable", or "resource", which were stored in the master pointer table. When used as an actual address, these flags were masked off and ignored by the CPU.
While a good use of very limited RAM space, this design caused problems when Apple introduced the Macintosh II, which used the 32-bit Motorola 68020 CPU. The 68020 had 32 physical address lines which could address up to 4 GB (2^32 bytes) of memory. The flags that the Memory Manager stored in the high byte of each pointer and handle were significant now, and could lead to addressing errors.
In theory, the architects of the Macintosh system software were free to change the "flags in the high byte" scheme to avoid this problem, and they did. For example, on the Macintosh IIci and later machines, HLock() and other APIs were rewritten to implement handle locking in a way other than flagging the high bits of handles. But many Macintosh application programmers and a great deal of the Macintosh system software code itself accessed the flags directly rather than using the APIs, such as HLock(), which had been provided to manipulate them. By doing this they rendered their applications incompatible with true 32-bit addressing, and this became known as not being "32-bit clean".
In order to stop continual system crashes caused by this issue, System 6 and earlier running on a 68020 or a 68030 would force the machine into 24-bit mode, and would only recognize and address the first 8 megabytes of RAM, an obvious flaw in machines whose hardware was wired to accept up to 128 MB RAM – and whose product literature advertised this capability. With System 7, the Mac system software was finally made 32-bit clean, but there was still the problem of dirty ROMs. The problem was that the decision to use 24-bit or 32-bit addressing has to be made very early in the boot process, when the ROM routines initialize the Memory Manager to set up a basic Mac environment where NuBus ROMs and disk drivers are loaded and executed. Older ROMs did not have any 32-bit Memory Manager support, so it was not possible to boot into 32-bit mode. Surprisingly, the first solution to this flaw was published by software utility company Connectix, whose 1991 product MODE32 reinitialized the Memory Manager and repeated early parts of the Mac boot process, allowing the system to boot into 32-bit mode and enabling the use of all the RAM in the machine. Apple licensed the software from Connectix later in 1991 and distributed it for free. The Macintosh IIci and later Motorola-based Macintosh computers had 32-bit clean ROMs.
It was quite a while before applications were updated to remove all 24-bit dependencies, and System 7 provided a way to switch back to 24-bit mode if application incompatibilities were found. By the time of migration to the PowerPC and System 7.1.2, 32-bit cleanliness was mandatory for creating native applications and even later Motorola 68040 based Macs could not support 24-bit mode.
Object orientation
The rise of object-oriented languages for programming the Mac – first Object Pascal, then later C++ – also caused problems for the memory model adopted. At first, it would seem natural that objects would be implemented via handles, to gain the advantage of being relocatable. These languages, as they were originally designed, used pointers for objects, which would lead to fragmentation issues. A solution, implemented by the THINK (later Symantec) compilers, was to use Handles internally for objects, but use a pointer syntax to access them. This seemed a good idea at first, but soon deep problems emerged, since programmers could not tell whether they were dealing with a relocatable or fixed block, and so had no way to know whether to take on the task of locking objects or not. Needless to say this led to huge numbers of bugs and problems with these early object implementations. Later compilers did not attempt to do this, but used real pointers, often implementing their own memory allocation schemes to work around the Mac OS memory model.
While the Mac OS memory model, with all its inherent problems, remained this way right through to Mac OS 9, due to severe application compatibility constraints, the increasing availability of cheap RAM meant that by and large most users could upgrade their way out of a corner. The memory was not used efficiently, but it was abundant enough that the issue never became critical. This is ironic given that the purpose of the original design was to maximise the use of very limited amounts of memory. Mac OS X finally did away with the whole scheme, implementing a modern sparse virtual memory scheme. A subset of the older memory model APIs still exists for compatibility as part of Carbon, but maps to the modern memory manager (a thread-safe malloc implementation) underneath. Apple recommends that Mac OS X code use malloc and free "almost exclusively".
References
External links
Classic Mac OS
Memory management
OpenCL
OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators. OpenCL specifies programming languages (based on C99, C++14 and C++17) for programming these devices and application programming interfaces (APIs) to control the platform and execute programs on the compute devices. OpenCL provides a standard interface for parallel computing using task- and data-based parallelism.
OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group. Conformant implementations are available from Altera, AMD, ARM, Creative, IBM, Imagination, Intel, Nvidia, Qualcomm, Samsung, Vivante, Xilinx, and ZiiLABS.
Overview
OpenCL views a computing system as consisting of a number of compute devices, which might be central processing units (CPUs) or "accelerators" such as graphics processing units (GPUs), attached to a host processor (a CPU). It defines a C-like language for writing programs. Functions executed on an OpenCL device are called "kernels". A single compute device typically consists of several compute units, which in turn comprise multiple processing elements (PEs). A single kernel execution can run on all or many of the PEs in parallel. How a compute device is subdivided into compute units and PEs is up to the vendor; a compute unit can be thought of as a "core", but the notion of core is hard to define across all the types of devices supported by OpenCL (or even within the category of "CPUs"), and the number of compute units may not correspond to the number of cores claimed in vendors' marketing literature (which may actually be counting SIMD lanes).
In addition to its C-like programming language, OpenCL defines an application programming interface (API) that allows programs running on the host to launch kernels on the compute devices and manage device memory, which is (at least conceptually) separate from host memory. Programs in the OpenCL language are intended to be compiled at run-time, so that OpenCL-using applications are portable between implementations for various host devices. The OpenCL standard defines host APIs for C and C++; third-party APIs exist for other programming languages and platforms such as Python, Java, Perl, D and .NET. An implementation of the OpenCL standard consists of a library that implements the API for C and C++, and an OpenCL C compiler for the compute device(s) targeted.
In order to open the OpenCL programming model to other languages or to protect the kernel source from inspection, the Standard Portable Intermediate Representation (SPIR) can be used as a target-independent way to ship kernels between a front-end compiler and the OpenCL back-end.
More recently Khronos Group has ratified SYCL, a higher-level programming model for OpenCL as a single-source DSEL based on pure C++17 to improve programming productivity. In addition to that C++ features can also be used when implementing compute kernel sources in C++ for OpenCL language.
Memory hierarchy
OpenCL defines a four-level memory hierarchy for the compute device:
global memory: shared by all processing elements, but has high access latency (__global);
read-only memory: smaller, low latency, writable by the host CPU but not the compute devices (__constant);
local memory: shared by a group of processing elements (__local);
per-element private memory (registers; __private).
Not every device needs to implement each level of this hierarchy in hardware. Consistency between the various levels in the hierarchy is relaxed, and only enforced by explicit synchronization constructs, notably barriers.
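As a minimal sketch (the kernel and argument names here are illustrative, not part of the standard), the following OpenCL C kernel stages data from global memory into local memory shared by a work-group and synchronizes with a barrier before using it:
// Each work-group copies its part of the input into fast __local memory,
// waits at a barrier, then computes from the shared copy.
__kernel void scale_with_tile(__global const float *in,
                              __global float *out,
                              __local float *tile)   // one element per work-item in the group
{
    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);
    tile[lid] = in[gid];            // stage into local memory
    barrier(CLK_LOCAL_MEM_FENCE);   // wait until the whole work-group has written
    out[gid] = 2.0f * tile[lid];    // every work-item now sees a consistent tile
}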
Devices may or may not share memory with the host CPU. The host API provides handles on device memory buffers and functions to transfer data back and forth between host and devices.
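A hedged host-side sketch of this buffer traffic (error handling is omitted, and the names context, queue, host_in, host_out and N are assumptions for the example):
// Copy N floats to the device, let kernels work on them, then read the result back.
cl_mem d_in  = clCreateBuffer(context, CL_MEM_READ_ONLY,  N * sizeof(float), NULL, NULL);
cl_mem d_out = clCreateBuffer(context, CL_MEM_WRITE_ONLY, N * sizeof(float), NULL, NULL);
clEnqueueWriteBuffer(queue, d_in, CL_TRUE, 0, N * sizeof(float), host_in, 0, NULL, NULL);
// ... enqueue kernels operating on d_in / d_out here ...
clEnqueueReadBuffer(queue, d_out, CL_TRUE, 0, N * sizeof(float), host_out, 0, NULL, NULL);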
OpenCL kernel language
The programming language used to write compute kernels is called the kernel language. OpenCL adopts C/C++-based languages to specify the kernel computations performed on the device, with some restrictions and additions to facilitate efficient mapping to the heterogeneous hardware resources of accelerators. Traditionally OpenCL C was used to program the accelerators in the OpenCL standard; later the C++ for OpenCL kernel language was developed, which inherited all functionality from OpenCL C but allowed the use of C++ features in the kernel sources.
OpenCL C language
OpenCL C is a C99-based language dialect adapted to fit the device model in OpenCL. Memory buffers reside in specific levels of the memory hierarchy, and pointers are annotated with the region qualifiers __global, __local, __constant, and __private, reflecting this. Instead of a device program having a main function, OpenCL C functions are marked __kernel to signal that they are entry points into the program to be called from the host program. Function pointers, bit fields and variable-length arrays are omitted, and recursion is forbidden. The C standard library is replaced by a custom set of standard functions, geared toward math programming.
OpenCL C is extended to facilitate use of parallelism with vector types and operations, synchronization, and functions to work with work-items and work-groups. In particular, besides scalar types such as float and double, which behave similarly to the corresponding types in C, OpenCL provides fixed-length vector types such as float4 (4-vector of single-precision floats); such vector types are available in lengths two, three, four, eight and sixteen for various base types. Vectorized operations on these types are intended to map onto SIMD instructions sets, e.g., SSE or VMX, when running OpenCL programs on CPUs. Other specialized types include 2-d and 3-d image types.
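For example, a short sketch of a kernel using the float4 vector type (the kernel name and arguments are illustrative):
// Adds two arrays four floats at a time; each work-item handles one float4.
__kernel void vec_add4(__global const float4 *a,
                       __global const float4 *b,
                       __global float4 *c)
{
    size_t i = get_global_id(0);
    c[i] = a[i] + b[i];   // component-wise addition, intended to map onto SIMD instructions
}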
Example: matrix-vector multiplication
The following is a matrix-vector multiplication algorithm in OpenCL C.
// Multiplies A*x, leaving the result in y.
// A is a row-major matrix, meaning the (i,j) element is at A[i*ncols+j].
__kernel void matvec(__global const float *A, __global const float *x,
uint ncols, __global float *y)
{
size_t i = get_global_id(0); // Global id, used as the row index
__global float const *a = &A[i*ncols]; // Pointer to the i'th row
float sum = 0.f; // Accumulator for dot product
for (size_t j = 0; j < ncols; j++) {
sum += a[j] * x[j];
}
y[i] = sum;
}
The kernel function matvec computes, in each invocation, the dot product of a single row of a matrix A and a vector x:
y_i = a_i · x = Σ_j a_{i,j} x_j.
To extend this into a full matrix-vector multiplication, the OpenCL runtime maps the kernel over the rows of the matrix. On the host side, the clEnqueueNDRangeKernel function does this; it takes as arguments the kernel to execute, its arguments, and a number of work-items, corresponding to the number of rows in the matrix A.
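A hedged host-side sketch of such a launch for the matvec kernel above (it assumes the kernel object, command queue and device buffers d_A, d_x and d_y have already been created, and it omits error checking; the sizes are illustrative):
// Launch one work-item per row of the matrix.
cl_uint ncols = 1024;
size_t  nrows = 2048;
clSetKernelArg(kernel, 0, sizeof(cl_mem),  &d_A);
clSetKernelArg(kernel, 1, sizeof(cl_mem),  &d_x);
clSetKernelArg(kernel, 2, sizeof(cl_uint), &ncols);
clSetKernelArg(kernel, 3, sizeof(cl_mem),  &d_y);
size_t global_work_size = nrows;   // number of work-items = number of rows
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_work_size, NULL, 0, NULL, NULL);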
Example: computing the FFT
This example will load a fast Fourier transform (FFT) implementation and execute it. The implementation is shown below. The code asks the OpenCL library for the first available graphics card, creates memory buffers for reading and writing (from the perspective of the graphics card), JIT-compiles the FFT-kernel and then finally asynchronously runs the kernel. The result from the transform is not read in this example.
#include <stdio.h>
#include <time.h>
#include "CL/opencl.h"
#define NUM_ENTRIES 1024
int main() // (int argc, const char* argv[])
{
// CONSTANTS
// The source code of the kernel is represented as a string
// located inside file: "fft1D_1024_kernel_src.cl". For the details see the next listing.
const char *KernelSource =
#include "fft1D_1024_kernel_src.cl"
;
// Looking up the available GPUs
cl_uint num = 0;
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 0, NULL, &num); // query how many GPU devices are available
cl_device_id devices[1];
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, devices, NULL); // retrieve the first GPU device
// create a compute context with GPU device
cl_context context = clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU, NULL, NULL, NULL);
// create a command queue
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_DEFAULT, 1, devices, NULL);
cl_command_queue queue = clCreateCommandQueue(context, devices[0], 0, NULL);
// allocate the buffer memory objects
cl_mem memobjs[] = { clCreateBuffer(context, CL_MEM_READ_ONLY, sizeof(float) * 2 * NUM_ENTRIES, NULL, NULL), // input buffer; host data would be written to it separately
clCreateBuffer(context, CL_MEM_READ_WRITE, sizeof(float) * 2 * NUM_ENTRIES, NULL, NULL) };
// create the compute program
// const char* fft1D_1024_kernel_src[1] = { };
cl_program program = clCreateProgramWithSource(context, 1, (const char **)& KernelSource, NULL, NULL);
// build the compute program executable
clBuildProgram(program, 0, NULL, NULL, NULL, NULL);
// create the compute kernel
cl_kernel kernel = clCreateKernel(program, "fft1D_1024", NULL);
// set the args values
size_t local_work_size[1] = { 256 };
clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&memobjs[0]);
clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&memobjs[1]);
clSetKernelArg(kernel, 2, sizeof(float)*(local_work_size[0] + 1) * 16, NULL);
clSetKernelArg(kernel, 3, sizeof(float)*(local_work_size[0] + 1) * 16, NULL);
// create N-D range object with work-item dimensions and execute kernel
size_t global_work_size[1] = { 256 };
global_work_size[0] = NUM_ENTRIES;
local_work_size[0] = 64; //Nvidia: 192 or 256
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, global_work_size, local_work_size, 0, NULL, NULL);
}
The actual calculation inside file "fft1D_1024_kernel_src.cl" (based on Fitting FFT onto the G80 Architecture):
R"(
// This kernel computes FFT of length 1024. The 1024 length FFT is decomposed into
// calls to a radix 16 function, another radix 16 function and then a radix 4 function
__kernel void fft1D_1024 (__global float2 *in, __global float2 *out,
__local float *sMemx, __local float *sMemy) {
int tid = get_local_id(0);
int blockIdx = get_group_id(0) * 1024 + tid;
float2 data[16];
// starting index of data to/from global memory
in = in + blockIdx; out = out + blockIdx;
globalLoads(data, in, 64); // coalesced global reads
fftRadix16Pass(data); // in-place radix-16 pass
twiddleFactorMul(data, tid, 1024, 0);
// local shuffle using local memory
localShuffle(data, sMemx, sMemy, tid, (((tid & 15) * 65) + (tid >> 4)));
fftRadix16Pass(data); // in-place radix-16 pass
twiddleFactorMul(data, tid, 64, 4); // twiddle factor multiplication
localShuffle(data, sMemx, sMemy, tid, (((tid >> 4) * 64) + (tid & 15)));
// four radix-4 function calls
fftRadix4Pass(data); // radix-4 function number 1
fftRadix4Pass(data + 4); // radix-4 function number 2
fftRadix4Pass(data + 8); // radix-4 function number 3
fftRadix4Pass(data + 12); // radix-4 function number 4
// coalesced global writes
globalStores(data, out, 64);
}
)"
A full, open source implementation of an OpenCL FFT can be found on Apple's website.
C++ for OpenCL language
In 2020, Khronos announced the transition to the community-driven C++ for OpenCL programming language, which provides features from C++17 in combination with the traditional OpenCL C features. This language allows developers to leverage a rich variety of language features from standard C++ while preserving backward compatibility with OpenCL C. It opens up a smooth transition path to C++ functionality for OpenCL kernel code developers, as they can continue using a familiar programming flow and even tools, as well as leverage existing extensions and libraries available for OpenCL C.
The language semantics are described in the documentation published in the releases of the OpenCL-Docs repository hosted by the Khronos Group, but the language is currently not ratified by the Khronos Group. The C++ for OpenCL language is not documented in a stand-alone document; it is based on the specifications of C++ and OpenCL C. The open source Clang compiler has supported C++ for OpenCL since release 9.
C++ for OpenCL was originally developed as a Clang compiler extension and appeared in release 9. As it was tightly coupled with OpenCL C and did not contain any Clang-specific functionality, its documentation was re-hosted to the OpenCL-Docs repository of the Khronos Group along with the sources of other specifications and reference cards. The first official release of this document, describing C++ for OpenCL version 1.0, was published in December 2020. C++ for OpenCL 1.0 contains features from C++17 and is backward compatible with OpenCL C 2.0. A work-in-progress draft of its documentation can be found on the Khronos website.
Features
C++ for OpenCL supports most of the features (syntactically and semantically) from OpenCL C except for nested parallelism and blocks. However, there are minor differences in some supported features, mainly related to differences in semantics between C++ and C. For example, C++ is stricter with implicit type conversions and it does not support the restrict type qualifier. The following C++ features are not supported by C++ for OpenCL: virtual functions, the dynamic_cast operator, non-placement new/delete operators, exceptions, pointers to member functions, references to functions, and the C++ standard libraries. C++ for OpenCL extends the concept of separate memory regions (address spaces) from OpenCL C to C++ features - functional casts, templates, class members, references, lambda functions and operators. Most of these C++ features are not available for kernel functions, e.g. overloading or templating, or an arbitrary class layout in parameter types.
Example: complex number arithmetic
The following code snippet illustrates how kernels with complex number arithmetic can be implemented in the C++ for OpenCL language with convenient use of C++ features.
// Define a class complex_t that can perform complex number computations with
// various precision when different types for T are used - double, float, half.
template<typename T>
class complex_t {
T m_re; // Real component.
T m_im; // Imaginary component.
public:
complex_t(T re, T im): m_re{re}, m_im{im} {};
// Define operator for complex number multiplication.
complex_t operator*(const complex_t &other) const
{
return {m_re * other.m_re - m_im * other.m_im,
m_re * other.m_im + m_im * other.m_re};
}
  T get_re() const { return m_re; }
  T get_im() const { return m_im; }
};
// A helper function to compute multiplication over complex numbers read from
// the input buffer and to store the computed result into the output buffer.
template<typename T>
void compute_helper(__global T *in, __global T *out) {
auto idx = get_global_id(0);
// Every work-item uses 4 consecutive items from the input buffer
// - two for each complex number.
auto offset = idx * 4;
auto num1 = complex_t{in[offset], in[offset + 1]};
auto num2 = complex_t{in[offset + 2], in[offset + 3]};
// Perform complex number multiplication.
auto res = num1 * num2;
// Every work-item writes 2 consecutive items to the output buffer.
out[idx * 2] = res.get_re();
out[idx * 2 + 1] = res.get_im();
}
// This kernel is used for complex number multiplication in single precision.
__kernel void compute_sp(__global float *in, __global float *out) {
compute_helper(in, out);
}
#ifdef cl_khr_fp16
// This kernel is used for complex number multiplication in half precision when
// it is supported by the device.
#pragma OPENCL EXTENSION cl_khr_fp16: enable
__kernel void compute_hp(__global half *in, __global half *out) {
compute_helper(in, out);
}
#endif
Tooling and Execution Environment
The C++ for OpenCL language can be used for the same applications or libraries, and in the same way, as the OpenCL C language. Due to the rich variety of C++ language features, applications written in C++ for OpenCL can express complex functionality more conveniently than applications written in OpenCL C; in particular, the generic programming paradigm of C++ is very attractive to library developers.
C++ for OpenCL sources can be compiled by OpenCL drivers that support the cl_ext_cxx_for_opencl extension. Arm announced support for this extension in December 2020. However, due to the increasing complexity of the algorithms accelerated on OpenCL devices, it is expected that more applications will compile C++ for OpenCL kernels offline, using stand-alone compilers such as Clang, into an executable binary format or a portable binary format such as SPIR-V. Such an executable can be loaded during OpenCL application execution using a dedicated OpenCL API.
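A hedged sketch of that loading step through the OpenCL 2.1+ clCreateProgramWithIL call (spirv_data and spirv_size stand for the bytes of an offline-compiled SPIR-V module; reading them from disk is omitted, and the kernel name refers to the example above):
// Create a program from a SPIR-V module produced by an offline compiler such as Clang.
cl_int err;
cl_program program = clCreateProgramWithIL(context, spirv_data, spirv_size, &err);
clBuildProgram(program, 1, &device, NULL, NULL, NULL);   // finish compilation for the target device
cl_kernel kernel = clCreateKernel(program, "compute_sp", &err);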
Binaries compiled from sources in C++ for OpenCL 1.0 can be executed on OpenCL 2.0 conformant devices. Depending on the language features used in such kernel sources it can also be executed on devices supporting earlier OpenCL versions or OpenCL 3.0.
Aside from OpenCL drivers, kernels written in C++ for OpenCL can be compiled for execution on Vulkan devices using the clspv compiler and the clvk runtime layer, in just the same way as OpenCL C kernels.
Contributions
C++ for OpenCL is an open language developed by the community of contributors listed in its documentation. New contributions to the language's semantic definition or to open source tooling support are accepted from anyone interested, as long as they are aligned with the main design philosophy and are reviewed and approved by the experienced contributors.
History
OpenCL was initially developed by Apple Inc., which holds trademark rights, and refined into an initial proposal in collaboration with technical teams at AMD, IBM, Qualcomm, Intel, and Nvidia. Apple submitted this initial proposal to the Khronos Group. On June 16, 2008, the Khronos Compute Working Group was formed with representatives from CPU, GPU, embedded-processor, and software companies. This group worked for five months to finish the technical details of the specification for OpenCL 1.0 by November 18, 2008. This technical specification was reviewed by the Khronos members and approved for public release on December 8, 2008.
OpenCL 1.0
OpenCL 1.0 was released with Mac OS X Snow Leopard on August 28, 2009. According to an Apple press release:
Snow Leopard further extends support for modern hardware with Open Computing Language (OpenCL), which lets any application tap into the vast gigaflops of GPU computing power previously available only to graphics applications. OpenCL is based on the C programming language and has been proposed as an open standard.
AMD decided to support OpenCL instead of the now deprecated Close to Metal in its Stream framework. RapidMind announced their adoption of OpenCL underneath their development platform to support GPUs from multiple vendors with one interface. On December 9, 2008, Nvidia announced its intention to add full support for the OpenCL 1.0 specification to its GPU Computing Toolkit. On October 30, 2009, IBM released its first OpenCL implementation as a part of the XL compilers.
Accelerations of calculations by a factor of up to 1000 compared with an ordinary CPU are possible with OpenCL on graphics cards.
Some features that became important in later versions of OpenCL, such as double-precision or half-precision operations, are optional in 1.0.
OpenCL 1.1
OpenCL 1.1 was ratified by the Khronos Group on June 14, 2010 and adds significant functionality for enhanced parallel programming flexibility, functionality, and performance including:
New data types including 3-component vectors and additional image formats;
Handling commands from multiple host threads and processing buffers across multiple devices;
Operations on regions of a buffer including read, write and copy of 1D, 2D, or 3D rectangular regions;
Enhanced use of events to drive and control command execution;
Additional OpenCL built-in C functions such as integer clamp, shuffle, and asynchronous strided copies;
Improved OpenGL interoperability through efficient sharing of images and buffers by linking OpenCL and OpenGL events.
OpenCL 1.2
On November 15, 2011, the Khronos Group announced the OpenCL 1.2 specification, which added significant functionality over the previous versions in terms of performance and features for parallel programming. Most notable features include:
Device partitioning: the ability to partition a device into sub-devices so that work assignments can be allocated to individual compute units. This is useful for reserving areas of the device to reduce latency for time-critical tasks.
Separate compilation and linking of objects: the functionality to compile OpenCL into external libraries for inclusion into other programs.
Enhanced image support (optional): 1.2 adds support for 1D images and 1D/2D image arrays. Furthermore, the OpenGL sharing extensions now allow for OpenGL 1D textures and 1D/2D texture arrays to be used to create OpenCL images.
Built-in kernels: custom devices that contain specific unique functionality are now integrated more closely into the OpenCL framework. Kernels can be called to use specialised or non-programmable aspects of underlying hardware. Examples include video encoding/decoding and digital signal processors.
DirectX functionality: DX9 media surface sharing allows for efficient sharing between OpenCL and DX9 or DXVA media surfaces. Equally, for DX11, seamless sharing between OpenCL and DX11 surfaces is enabled.
The ability to force IEEE 754 compliance for single-precision floating-point math: OpenCL by default allows the single-precision versions of the division, reciprocal, and square root operations to be less accurate than the correctly rounded values that IEEE 754 requires. If the programmer passes the "-cl-fp32-correctly-rounded-divide-sqrt" command line argument to the compiler, these three operations will be computed to IEEE 754 requirements if the OpenCL implementation supports this, and will fail to compile if the OpenCL implementation does not support computing these operations to their correctly rounded values as defined by the IEEE 754 specification. This ability is supplemented by the ability to query the OpenCL implementation to determine if it can perform these operations to IEEE 754 accuracy, as sketched below.
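For instance, a host program could request the stricter behaviour at build time (a small sketch; the program and device objects are assumed to exist already):
// Succeeds with IEEE 754 correctly rounded divide/sqrt, or fails to build if the
// implementation cannot provide them.
cl_int err = clBuildProgram(program, 1, &device,
                            "-cl-fp32-correctly-rounded-divide-sqrt", NULL, NULL);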
OpenCL 2.0
On November 18, 2013, the Khronos Group announced the ratification and public release of the finalized OpenCL 2.0 specification. Updates and additions to OpenCL 2.0 include:
Shared virtual memory
Nested parallelism
Generic address space
Images (optional, include 3D-Image)
C11 atomics
Pipes
Android installable client driver extension
half precision extended with optional cl_khr_fp16 extension
cl_double: double precision IEEE 754 (optional)
OpenCL 2.1
The ratification and release of the OpenCL 2.1 provisional specification was announced on March 3, 2015 at the Game Developer Conference in San Francisco. It was released on November 16, 2015. It introduced the OpenCL C++ kernel language, based on a subset of C++14, while maintaining support for the preexisting OpenCL C kernel language. Vulkan and OpenCL 2.1 share SPIR-V as an intermediate representation allowing high-level language front-ends to share a common compilation target. Updates to the OpenCL API include:
Additional subgroup functionality
Copying of kernel objects and states
Low-latency device timer queries
Ingestion of SPIR-V code by runtime
Execution priority hints for queues
Zero-sized dispatches from host
AMD, ARM, Intel, HPC, and YetiWare have declared support for OpenCL 2.1.
OpenCL 2.2
OpenCL 2.2 brings the OpenCL C++ kernel language into the core specification for significantly enhanced parallel programming productivity. It was released on May 16, 2017. A maintenance update with bug fixes was released in May 2018.
The OpenCL C++ kernel language is a static subset of the C++14 standard and includes classes, templates, lambda expressions, function overloads and many other constructs for generic and meta-programming.
Uses the new Khronos SPIR-V 1.1 intermediate language which fully supports the OpenCL C++ kernel language.
OpenCL library functions can now use the C++ language to provide increased safety and reduced undefined behavior while accessing features such as atomics, iterators, images, samplers, pipes, and device queue built-in types and address spaces.
Pipe storage is a new device-side type in OpenCL 2.2 that is useful for FPGA implementations by making connectivity size and type known at compile time, enabling efficient device-scope communication between kernels.
OpenCL 2.2 also includes features for enhanced optimization of generated code: applications can provide the value of specialization constant at SPIR-V compilation time, a new query can detect non-trivial constructors and destructors of program scope global objects, and user callbacks can be set at program release time.
Runs on any OpenCL 2.0-capable hardware (only a driver update is required).
OpenCL 3.0
The OpenCL 3.0 specification was released on September 30, 2020 after being in preview since April 2020. OpenCL 1.2 functionality has become a mandatory baseline, while all OpenCL 2.x and OpenCL 3.0 features were made optional. The specification retains the OpenCL C language and deprecates the OpenCL C++ Kernel Language, replacing it with the C++ for OpenCL language based on a Clang/LLVM compiler which implements a subset of C++17 and SPIR-V intermediate code.
Version 3.0.7 of C++ for OpenCL with some Khronos OpenCL extensions was presented at IWOCL 21.
Nvidia improved its Khronos Vulkan interop with semaphores and memory sharing.
Roadmap
When releasing OpenCL 2.2, the Khronos Group announced that OpenCL would converge where possible with Vulkan to enable OpenCL software deployment flexibility over both APIs. This has now been demonstrated by Adobe's Premiere Rush, which uses the clspv open source compiler to compile significant amounts of OpenCL C kernel code to run on a Vulkan runtime for deployment on Android. OpenCL has a forward-looking roadmap independent of Vulkan, with 'OpenCL Next' under development and targeting release in 2020. OpenCL Next may integrate extensions such as Vulkan / OpenCL interop, scratch-pad memory management, extended subgroups, SPIR-V 1.4 ingestion and SPIR-V extended debug info. OpenCL is also considering a Vulkan-like loader and layers and a 'Flexible Profile' for deployment flexibility on multiple accelerator types.
Open source implementations
OpenCL consists of a set of headers and a shared object that is loaded at runtime. An installable client driver (ICD) must be installed on the platform for every vendor class that the runtime needs to support. For example, in order to support Nvidia devices on a Linux platform, the Nvidia ICD would need to be installed such that the OpenCL runtime (the ICD loader) would be able to locate the ICD for the vendor and redirect the calls appropriately. The standard OpenCL header is used by the consumer application; calls to each function are then proxied by the OpenCL runtime to the appropriate driver using the ICD. Each vendor must implement each OpenCL call in their driver.
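A hedged sketch of how this looks from the application side: the program enumerates every installed platform through the ICD loader and asks each one for its devices (error handling omitted):
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint nplatforms = 0;
    clGetPlatformIDs(0, NULL, &nplatforms);        // ask the ICD loader how many vendor ICDs are installed
    cl_platform_id platforms[8];
    if (nplatforms > 8) nplatforms = 8;
    clGetPlatformIDs(nplatforms, platforms, NULL);
    for (cl_uint i = 0; i < nplatforms; ++i) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        cl_uint ndevices = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &ndevices);
        printf("%s: %u device(s)\n", name, ndevices); // calls are routed to that vendor's ICD
    }
    return 0;
}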
The Apple, Nvidia, ROCm, RapidMind and Gallium3D implementations of OpenCL are all based on the LLVM Compiler technology and use the Clang compiler as their frontend.
MESA Gallium Compute An implementation of OpenCL (currently incomplete OpenCL 1.1, mostly working on AMD Radeon GCN) for a number of platforms is maintained as part of the Gallium Compute Project, which builds on the work of the Mesa project to support multiple platforms. Formerly known as Clover, current development consists mostly of getting the still-incomplete framework running with current LLVM and Clang, plus some new features such as fp16 in Mesa 17.3; the target is complete OpenCL 1.0, 1.1 and 1.2 support for AMD and Nvidia. New basic development is being done by Red Hat with SPIR-V, also for Clover. The new target is modular OpenCL 3.0 with full support of OpenCL 1.2. The current state is available at Mesamatrix. Image support is currently a focus of development.
BEIGNET An implementation by Intel for its Ivy Bridge and newer hardware, released in 2013. This software, from Intel's China team, has attracted criticism from developers at AMD and Red Hat, as well as Michael Larabel of Phoronix. The current version 1.3.2 supports OpenCL 1.2 completely (Ivy Bridge and higher) and OpenCL 2.0 optionally for Skylake and newer. Support for Android has been added to Beignet. Current development targets only support for 1.2 and 2.0; the road to OpenCL 2.1, 2.2 and 3.0 has passed to NEO.
NEO An implementation by Intel for Gen 8 (Broadwell) and Gen 9 hardware, released in 2018. This driver replaces the Beignet implementation for supported platforms (not the older Gen 6 to Haswell). NEO provides OpenCL 2.1 support on Core platforms and OpenCL 1.2 on Atom platforms. As of 2020, graphics Gen 11 (Ice Lake) and Gen 12 (Tiger Lake) are also supported. OpenCL 3.0 is available for Alder Lake and Tiger Lake down to Broadwell with version 20.41+; it now includes the optional OpenCL 2.0 and 2.1 features completely and some of 2.2.
ROCm
Created as part of AMD's GPUOpen, ROCm (Radeon Open Compute) is an open source Linux project built on OpenCL 1.2 with language support for 2.0. The system is compatible with all modern AMD GPUs and APUs (currently in part GFX 7, GFX 8 and 9), as well as Intel Gen 7.5+ CPUs (only with PCIe 3.0). With version 1.9, support is in some respects experimentally extended to hardware with PCIe 2.0 and without atomics. An overview of current work was given at XDC2018. ROCm version 2.0 supports full OpenCL 2.0, but some errors and limitations are on the to-do list. Version 3.3 improved details. Version 3.5 supports OpenCL 2.2. Version 3.10 brought improvements and new APIs. ROCm 4.0, with support for the AMD Instinct MI100 compute card, was announced at SC20. Current documentation for 4.3.1 is available on GitHub. OpenCL 3.0 is work in progress.
POCL A portable implementation supporting CPUs and some GPUs (via CUDA and HSA), built on Clang and LLVM. With version 1.0, OpenCL 1.2 was nearly fully implemented along with some 2.x features. Version 1.2 adds LLVM/Clang 6.0 and 7.0 support and full OpenCL 1.2 support, with all tickets in Milestone 1.2 closed. OpenCL 2.0 is nearly fully implemented. Version 1.3 supports Mac OS X. Version 1.4 includes support for LLVM 8.0 and 9.0. Version 1.5 implements LLVM/Clang 10 support. Version 1.6 implements LLVM/Clang 11 support and CUDA acceleration. Current targets are complete OpenCL 2.x, OpenCL 3.0 and improved performance. With manual optimization, POCL 1.6 reaches the same level of performance as the Intel compute runtime. Version 1.7 implements LLVM/Clang 12 support and some new OpenCL 3.0 features.
Shamrock A port of Mesa Clover for ARM with full support of OpenCL 1.2; no current development for 2.0.
FreeOCL A CPU-focused implementation of OpenCL 1.2 that uses an external compiler to create a more reliable platform; no current development.
MOCL An OpenCL implementation based on POCL by the NUDT researchers for Matrix-2000 was released in 2018. The Matrix-2000 architecture is designed to replace the Intel Xeon Phi accelerators of the TianHe-2 supercomputer. This programming framework is built on top of LLVM v5.0 and reuses some code pieces from POCL as well. To unlock the hardware potential, the device runtime uses a push-based task dispatching strategy and the performance of the kernel atomics is improved significantly. This framework has been deployed on the TH-2A system and is readily available to the public. Some of the software will next be ported to improve POCL.
VC4CL An OpenCL 1.2 implementation for the VideoCore IV (BCM2763) processor used in the Raspberry Pi before its model 4.
Vendor implementations
Timeline of vendor implementations
June 2008: During Apple's WWDC conference, an early beta of Mac OS X Snow Leopard was made available to the participants; it included the first beta implementation of OpenCL, about 6 months before the final version 1.0 specification was ratified in late 2008. Apple also showed two demos. One was a grid of 8x8 screens rendered, each displaying the screen of an emulated Apple II machine (64 independent instances in total, each running a famous karate game). This showed task parallelism on the CPU. The other demo was an N-body simulation running on the GPU of a Mac Pro, a data-parallel task.
December 10, 2008: AMD and Nvidia held the first public OpenCL demonstration, a 75-minute presentation at SIGGRAPH Asia 2008. AMD showed a CPU-accelerated OpenCL demo explaining the scalability of OpenCL on one or more cores while Nvidia showed a GPU-accelerated demo.
March 16, 2009: at the 4th Multicore Expo, Imagination Technologies announced the PowerVR SGX543MP, the first GPU of this company to feature OpenCL support.
March 26, 2009: at GDC 2009, AMD and Havok demonstrated the first working implementation for OpenCL accelerating Havok Cloth on AMD Radeon HD 4000 series GPU.
April 20, 2009: Nvidia announced the release of its OpenCL driver and SDK to developers participating in its OpenCL Early Access Program.
August 5, 2009: AMD unveiled the first development tools for its OpenCL platform as part of its ATI Stream SDK v2.0 Beta Program.
August 28, 2009: Apple released Mac OS X Snow Leopard, which contains a full implementation of OpenCL.
September 28, 2009: Nvidia released its own OpenCL drivers and SDK implementation.
October 13, 2009: AMD released the fourth beta of the ATI Stream SDK 2.0, which provides a complete OpenCL implementation on both R700/R800 GPUs and SSE3 capable CPUs. The SDK is available for both Linux and Windows.
October 27, 2009: S3 released their first product supporting native OpenCL 1.0 – the Chrome 5400E embedded graphics processor.
November 26, 2009: Nvidia released drivers for OpenCL 1.0 (rev 48).
December 10, 2009: VIA released their first product supporting OpenCL 1.0 – ChromotionHD 2.0 video processor included in VN1000 chipset.
December 21, 2009: AMD released the production version of the ATI Stream SDK 2.0, which provides OpenCL 1.0 support for R800 GPUs and beta support for R700 GPUs.
June 1, 2010: ZiiLABS released details of their first OpenCL implementation for the ZMS processor for handheld, embedded and digital home products.
June 30, 2010: IBM released a fully conformant version of OpenCL 1.0.
September 13, 2010: Intel released details of their first OpenCL implementation for the Sandy Bridge chip architecture. Sandy Bridge will integrate Intel's newest graphics chip technology directly onto the central processing unit.
November 15, 2010: Wolfram Research released Mathematica 8 with OpenCLLink package.
March 3, 2011: Khronos Group announces the formation of the WebCL working group to explore defining a JavaScript binding to OpenCL. This creates the potential to harness GPU and multi-core CPU parallel processing from a Web browser.
March 31, 2011: IBM released a fully conformant version of OpenCL 1.1.
April 25, 2011: IBM released OpenCL Common Runtime v0.1 for Linux on x86 Architecture.
May 4, 2011: Nokia Research releases an open source WebCL extension for the Firefox web browser, providing a JavaScript binding to OpenCL.
July 1, 2011: Samsung Electronics releases an open source prototype implementation of WebCL for WebKit, providing a JavaScript binding to OpenCL.
August 8, 2011: AMD released the OpenCL-driven AMD Accelerated Parallel Processing (APP) Software Development Kit (SDK) v2.5, replacing the ATI Stream SDK as technology and concept.
December 12, 2011: AMD released AMD APP SDK v2.6 which contains a preview of OpenCL 1.2.
February 27, 2012: The Portland Group released the PGI OpenCL compiler for multi-core ARM CPUs.
April 17, 2012: Khronos released a WebCL working draft.
May 6, 2013: Altera released the Altera SDK for OpenCL, version 13.0. It is conformant to OpenCL 1.0.
November 18, 2013: Khronos announced that the specification for OpenCL 2.0 had been finalized.
March 19, 2014: Khronos releases the WebCL 1.0 specification
August 29, 2014: Intel releases HD Graphics 5300 driver that supports OpenCL 2.0.
September 25, 2014: AMD releases Catalyst 14.41 RC1, which includes an OpenCL 2.0 driver.
January 14, 2015: Xilinx Inc. announces SDAccel development environment for OpenCL, C, and C++, achieves Khronos Conformance
April 13, 2015: Nvidia releases WHQL driver v350.12, which includes OpenCL 1.2 support for GPUs based on Kepler or later architectures. Driver 340+ support OpenCL 1.1 for Tesla and Fermi.
August 26, 2015: AMD released AMD APP SDK v3.0 which contains full support of OpenCL 2.0 and sample coding.
November 16, 2015: Khronos announced that the specification for OpenCL 2.1 had been finalized.
April 18, 2016: Khronos announced that the specification for OpenCL 2.2 had been provisionally finalized.
November 3, 2016: Intel added support for OpenCL 2.1 on Gen7+ in SDK 2016 r3.
February 17, 2017: Nvidia begins evaluation support of OpenCL 2.0 with driver 378.66.
May 16, 2017: Khronos announced that the specification for OpenCL 2.2 had been finalized with SPIR-V 1.2.
May 14, 2018: Khronos announced Maintenance Update for OpenCL 2.2 with Bugfix and unified headers.
April 27, 2020: Khronos announced the provisional version of OpenCL 3.0
June 1, 2020: Intel NEO runtime with OpenCL 3.0 for the new Tiger Lake
June 3, 2020: AMD announced ROCm 3.5 with OpenCL 2.2 support
September 30, 2020: Khronos announced that the specifications for OpenCL 3.0 had been finalized (CTS also available).
October 16, 2020: Intel announced support for OpenCL 3.0 with NEO 20.41 (includes most of the optional OpenCL 2.x features)
April 6, 2021: Nvidia supports OpenCL 3.0 for Ampere. Maxwell and later GPUs also support OpenCL 3.0 with Nvidia driver 465+.
Devices
As of 2016, OpenCL runs on Graphics processing units, CPUs with SIMD instructions, FPGAs, Movidius Myriad 2, Adapteva epiphany and DSPs.
Khronos Conformance Test Suite
To be officially conformant, an implementation must pass the Khronos Conformance Test Suite (CTS), with results being submitted to the Khronos Adopters Program. The Khronos CTS code for all OpenCL versions has been available in open source since 2017.
Conformant products
The Khronos Group maintains an extended list of OpenCL-conformant products.
All standard-conformant implementations can be queried using one of the clinfo tools (there are multiple tools with the same name and similar feature set).
Version support
Products and their version of OpenCL support include:
OpenCL 3.0 support
All hardware with OpenCL 1.2+ is possible; OpenCL 2.x features are only optional. The Khronos test suite has been available since October 2020.
(2020) Intel NEO Compute: 20.41+ for Gen 12 Tiger Lake to Broadwell (includes full 2.0 and 2.1 support and parts of 2.2)
(2020) Intel 6th, 7th, 8th, 9th, 10th, 11th gen processors (Skylake, Kaby Lake, Coffee Lake, Comet Lake, Ice Lake, Tiger Lake) with latest Intel Windows graphics driver
(2021) Intel 11th, 12th gen processors (Rocket Lake, Alder Lake) with latest Intel Windows graphics driver
(2022) Intel 13th gen processors (Raptor Lake) with latest Intel Windows graphics driver
(2021) Nvidia Maxwell, Pascal, Volta, Turing and Ampere with Nvidia graphics driver 465+
OpenCL 2.2 support
None yet: the Khronos test suite is ready; with a driver update, all hardware with 2.0 and 2.1 support is possible
Intel NEO Compute: work in progress for current products
ROCm: Version 3.5+ mostly
OpenCL 2.1 support
(2018+) Support backported to Intel 5th and 6th gen processors (Broadwell, Skylake)
(2017+) Intel 7th, 8th, 9th, 10th gen processors (Kaby Lake, Coffee Lake, Comet Lake, Ice Lake)
Khronos: with a driver update, all hardware with 2.0 support is possible
OpenCL 2.0 support
(2011+) AMD GCN GPUs (HD 7700+/HD 8000/Rx 200/Rx 300/Rx 400/Rx 500/Rx 5000 series); some GCN 1st gen only 1.2 with some extensions
(2013+) AMD GCN APUs (Jaguar, Steamroller, Puma, Excavator & Zen-based)
(2014+) Intel 5th & 6th gen processors (Broadwell, Skylake)
(2015+) Qualcomm Adreno 5xx series
(2018+) Qualcomm Adreno 6xx series
(2017+) ARM Mali (Bifrost) G51 and G71 in Android 7.1 and Linux
(2018+) ARM Mali (Bifrost) G31, G52, G72 and G76
(2017+) incomplete evaluation support: Nvidia Kepler, Maxwell, Pascal, Volta and Turing GPUs (GeForce 600, 700, 800, 900 & 10-series, Quadro K-, M- & P-series, Tesla K-, M- & P-series) with driver version 378.66+
OpenCL 1.2 support
(2011+) For some AMD GCN 1st-gen cards some OpenCL 2.0 features are not possible today, but there are many more extensions than on TeraScale
(2009+) AMD TeraScale 2 & 3 GPUs (RV8xx, RV9xx in HD 5000, 6000 & 7000 series)
(2011+) AMD TeraScale APUs (K10, Bobcat & Piledriver-based)
(2012+) Nvidia Kepler, Maxwell, Pascal, Volta and Turing GPUs (GeForce 600, 700, 800, 900, 10, 16, 20 series, Quadro K-, M- & P-series, Tesla K-, M- & P-series)
(2012+) Intel 3rd & 4th gen processors (Ivy Bridge, Haswell)
(2013+) Qualcomm Adreno 4xx series
(2013+) ARM Mali Midgard 3rd gen (T760)
(2015+) ARM Mali Midgard 4th gen (T8xx)
OpenCL 1.1 support
(2008+) Some AMD TeraScale 1 GPUs (RV7xx in HD 4000 series)
(2008+) Nvidia Tesla, Fermi GPUs (GeForce 8, 9, 100, 200, 300, 400, 500 series, Quadro series or Tesla series with a Tesla or Fermi GPU)
(2011+) Qualcomm Adreno 3xx series
(2012+) ARM Mali Midgard 1st and 2nd gen (T-6xx, T720)
OpenCL 1.0 support
Mostly updated to 1.1 and 1.2 after the first driver, which supported 1.0 only
Portability, performance and alternatives
A key feature of OpenCL is portability, via its abstracted memory and execution model, and the programmer is not able to directly use hardware-specific technologies such as inline Parallel Thread Execution (PTX) for Nvidia GPUs unless they are willing to give up direct portability on other platforms. It is possible to run any OpenCL kernel on any conformant implementation.
However, performance of the kernel is not necessarily portable across platforms. Existing implementations have been shown to be competitive when kernel code is properly tuned, though, and auto-tuning has been suggested as a solution to the performance portability problem, yielding "acceptable levels of performance" in experimental linear algebra kernels. Portability of an entire application containing multiple kernels with differing behaviors was also studied, and shows that portability only required limited tradeoffs.
A study at Delft University from 2011 that compared CUDA programs and their straightforward translation into OpenCL C found CUDA to outperform OpenCL by at most 30% on the Nvidia implementation. The researchers noted that their comparison could be made fairer by applying manual optimizations to the OpenCL programs, in which case there was "no reason for OpenCL to obtain worse performance than CUDA". The performance differences could mostly be attributed to differences in the programming model (especially the memory model) and to NVIDIA's compiler optimizations for CUDA compared to those for OpenCL.
Another study at D-Wave Systems Inc. found that "The OpenCL kernel’s performance is between about 13% and 63% slower, and the end-to-end time is between about 16% and 67% slower" than CUDA's performance.
The fact that OpenCL allows workloads to be shared by CPU and GPU, executing the same programs, means that programmers can exploit both by dividing work among the devices. This leads to the problem of deciding how to partition the work, because the relative speeds of operations differ among the devices. Machine learning has been suggested to solve this problem: Grewe and O'Boyle describe a system of support-vector machines trained on compile-time features of program that can decide the device partitioning problem statically, without actually running the programs to measure their performance.
In a comparison of current graphics cards from the AMD RDNA 2 and Nvidia RTX series, OpenCL tests produced mixed results. Possible performance increases from the use of Nvidia CUDA or OptiX were not tested.
See also
Advanced Simulation Library
AMD FireStream
BrookGPU
C++ AMP
Close to Metal
CUDA
DirectCompute
GPGPU
HIP
Larrabee
Lib Sh
List of OpenCL applications
OpenACC
OpenGL
OpenHMPP
OpenMP
Metal
RenderScript
SequenceL
SIMD
SYCL
Vulkan
WebCL
References
External links
for WebCL
International Workshop on OpenCL (IWOCL) sponsored by The Khronos Group
2009 software
Application programming interfaces
Cross-platform software
GPGPU
OpenCL
Parallel computing
Microsoft Store
Microsoft Store (formerly known as Windows Store) is a digital distribution platform owned by Microsoft. It started as an app store for Windows 8 and Windows Server 2012 as the primary means of distributing Universal Windows Platform apps.
With Windows 10, Microsoft merged its other distribution platforms (Windows Marketplace, Windows Phone Store, Xbox Music, Xbox Video, Xbox Store, and a web storefront also known as "Microsoft Store") into Microsoft Store, making it a unified distribution point for apps, console games, and digital videos. Digital music was included until the end of 2017, and E-books were included until 2019.
In 2021, 669,000 apps were available in the store. Categories containing the largest number of apps are "Books and Reference", "Education", "Entertainment", and "Games". The majority of the app developers have one app.
As with other similar platforms, such as the Google Play and Mac App Store, Microsoft Store is curated, and apps must be certified for compatibility and content. In addition to the user-facing Microsoft Store client, the store has a developer portal with which developers can interact. Microsoft takes 5–15% of the sale price for apps and 30% on Xbox games. Prior to January 1, 2015, this cut was reduced to 20% after the developer's profits reached $25,000.
History
The Web-based storefront
Microsoft previously maintained a similar digital distribution system for software known as Windows Marketplace, which allowed customers to purchase software online. The marketplace tracked product keys and licenses, allowing users to retrieve their purchases when switching computers. Windows Marketplace was discontinued in November 2008. At this point, Microsoft opened a Web-based storefront called "Microsoft Store".
Windows 8
Microsoft first announced Windows Store, a digital distribution service for Windows at its presentation during the Build developer conference on September 13, 2011. Further details announced during the conference revealed that the store would be able to hold listings for both certified traditional Windows apps, as well as what were called "Metro-style apps" at the time: tightly-sandboxed software based on Microsoft design guidelines that are constantly monitored for quality and compliance. For consumers, Windows Store is intended to be the only way to obtain Metro-style apps. While announced alongside the "Developer Preview" release of Windows 8, Windows Store itself did not become available until the "Consumer Preview", released in February 2012.
Updates to apps published on the store after July 1, 2019, are not available to Windows 8 RTM users. Per Microsoft lifecycle policies, Windows 8 has been unsupported since 2016.
Windows 8.1
An updated version of Windows Store was introduced in Windows 8.1. Its home page was remodeled to display apps in focused categories (such as popular, recommended, top free and paid, and special offers) with expanded details, while the ability for apps to automatically update was also added. Windows 8.1 Update also introduced other notable presentation changes, including increasing the top app lists to return 1000 apps instead of 100 apps, a "picks for you" section, and changing the default sorting for reviews to be by "most popular".
Updates to apps published on the Store after July 1, 2023, will not be available to Windows 8.1.
Windows 10
Windows 10 was released with an updated version of the Windows Store, which merged Microsoft's other distribution platforms (Windows Marketplace, Windows Phone Store, Xbox Video and Xbox Music) into a unified store front for Windows 10 on all platforms, offering apps, games, music, film, TV series, themes, and ebooks. In June 2017, Spotify became available in the Windows Store.
In September 2017, Microsoft began to re-brand Windows Store as Microsoft Store, with a new icon carrying the Microsoft logo. Xbox Store was merged into this new version of the platform. This is in line with Microsoft's platform convergence strategy on all Windows 10-based operating systems.
Web apps and traditional desktop software can be packaged for distribution on Windows Store. Desktop software distributed through Windows Store are packaged using the App-V system to allow sandboxing.
In February 2018, Microsoft announced that Progressive Web Apps would begin to be available in the Microsoft Store, and Microsoft would automatically add selected quality progressive web apps through the Bing crawler or allow developers to submit Progressive Web Apps to the Microsoft Store.
Starting from Windows 10 version 1803, fonts can be downloaded and installed from the Microsoft Store.
Updates to apps published on the Store after October 14, 2025 will not be available to Windows 10.
Windows 11
In Windows 11, Microsoft Store received an updated user interface, and a new pop-up designed to handle installation links from websites. Microsoft also announced a number of changes to its policies for application submissions to improve flexibility and make the store more "open", including supporting "any kind of app, regardless of app framework and packaging technology", and the ability for developers to freely use first- or third-party payment platforms (in non-game software only) rather than those provided by Microsoft.
Windows Server
Windows Store is available in Windows Server 2012 but is not installed by default. It is unavailable in Windows Server 2016. However, UWP apps can be acquired from Microsoft Store for Business (formerly Windows Store for Business) and installed through sideloading.
Details
Microsoft Store is the primary means of distributing Windows Store apps to users. Although sideloading apps from outside the store are supported, out-of-box sideloading support on Windows 8 is only available on the Enterprise edition of Windows 8 running on computers that have joined a Windows domain. Sideloading on Windows RT and Windows 8 Pro, and on Windows 8 Enterprise computers without a domain affiliation, requires the purchase of additional licenses through volume licensing. Windows 10 removes this requirement, allowing users to freely enable or disable sideloading.
Initially, Microsoft took a 30% cut of app sales until it reached US$25,000 in revenue, after which the cut dropped to 20%. On January 1, 2015, the reduction in cut at $25,000 was removed, and Microsoft takes a 30% cut of all app purchases, regardless of overall sales. Third-party transactions are also allowed, of which Microsoft does not take a cut. In early 2019, Microsoft began letting app developers keep 95% of app revenue, taking only 5%, but only if the user downloads the app through a direct URL; Microsoft discontinued that option in 2020. Individual developers are able to register for US$19 and companies for US$99. As of August 1, 2021, Microsoft reduced its cut to 12% for app sales.
Windows apps
In 2015, over 669,000 apps were available on the store, including apps for Windows NT, Windows Phone, and UWP apps, which work on both platforms. Categories containing the largest number of apps are "Games", "Entertainment", "Books and Reference", and "Education". The majority of the app developers have one app. Both free and paid apps can be distributed through Microsoft Store, with paid apps ranging in cost from US$0.99 to $999.99. Developers from 120 countries can submit apps to Microsoft Store. Apps may support any of 109 languages, as long as they support one of 12 app certification languages.
Movies and TV shows
Movies and television shows are available for purchase or rental, depending on availability.
Content can be played on the Microsoft Movies & TV app (available for Windows 10, Xbox One, Xbox 360 and Xbox Series X/S), or the Xbox Video app (available for Windows 8/RT PCs and tablets, and Windows Phone 8). In the United States, a Microsoft account can be linked to the Movies Anywhere digital locker service (separate registration required), which allows purchased content to be played on other platforms (e.g. macOS, Android, iOS).
Microsoft Movies & TV is currently available in the following 21 countries: Australia, Austria, Belgium, Brazil, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, Mexico, Netherlands, New Zealand, Norway, Spain, Sweden, Switzerland, the United States, and the United Kingdom. The purchase of TV shows is not currently supported in Belgium.
Music (closed)
On October 2, 2017, Microsoft announced that the sale of digital music on the Microsoft Store would cease on December 31 after the discontinuation of Groove Music Pass. Users were able to transfer their music to Spotify until January 31, 2018.
Books (closed)
Books bought from the Microsoft Store were formerly accessible on the EdgeHTML-based Microsoft Edge. The ability to open ePub e-books was removed during the shift to the Chromium-based Microsoft Edge.
On April 2, 2019, Microsoft announced that the sale of e-books on the Microsoft Store had ceased. Due to DRM licenses that would not be renewed, all books became inaccessible by July 2019, and Microsoft automatically refunded all users that had purchased books via the service.
Guidelines
Similar to Windows Phone Store, Microsoft Store is regulated by Microsoft. Applicants must obtain Microsoft's approval before their app becomes available on the store. These apps may not contain, support or approve of gratuitous profanity, obscenity, pornography, discrimination, defamation, or politically offensive content. They may also not contain content that is forbidden by or offensive to the jurisdiction, religion or norms of the target market. They may also not encourage, facilitate or glamorize violence, drugs, tobacco, alcohol or weapons. Video game console emulators that are "primarily gaming experiences or target Xbox One" and third-party web browsers that use their own layout engines are prohibited on Microsoft Store.
Microsoft has indicated that it can remotely disable or remove apps from end-user systems for security or legal reasons; in the case of paid apps, refunds may be issued when this is done.
Microsoft initially banned PEGI "18"-rated content from the store in Europe. However, critics noted that this made the content policies stricter than intended, as some PEGI 18-rated games are rated "Mature" on the U.S. ESRB system, which is the next lowest before its highest rating, "Adults Only". The guidelines were amended in December 2012 to remove the discrepancy.
On October 8, 2020, Microsoft announced a commitment to ten "principles" of fairness to developers in the operation of the Microsoft Store. These include transparency over its rules, practices, and Windows' "interoperability interfaces", not preventing competing application storefronts from running on Windows, charging developers "reasonable fees" and not "forc[ing]" them to include in-app purchases, allowing access to the store by any developer as long as their software meets "objective standards and requirements", not blocking apps based on their business model, how they deliver their services, or how they process payments, not impeding developers from "communicating directly with their users through their apps for legitimate business purposes", not using private data from the store to influence the development of competing software by Microsoft, and holding its own software to the same standards as others on the store. The announcement came in the wake of lawsuits against Apple, Inc. and Google LLC by Epic Games over alleged anticompetitive practices conducted by their own application stores.
With the release of Windows 11, Microsoft announced that it would not require software (excluding games) distributed via Microsoft Store to use its own payment platforms, and that it will also allow third-party storefronts (such as Amazon Appstore—which will be used for its upcoming Android app support, and Epic Games Store) to offer their clients for download via Microsoft Store.
Developer portal
In addition to the user facing Microsoft Store client, the store also has a developer portal with which developers can interact. The Windows developer portal has the following sections for each app:
App Summary - An overview page of a given app, including a downloads chart, quality chart, financial summary, and a sales chart.
App Adoption - A page that shows adoption of the app, including conversions, referrers, and downloads.
App Ratings - A ratings breakdown, as well as the ability to filter reviews by region.
App Quality - An overview page showcasing exceptions that have occurred in the app.
App Finance - A page where a developer can download all transactions related to their app.
Developer tools
Microsoft Store provides developer tools for tracking apps in the store.
The dashboard also presents a detailed breakdown of users by market, age, and region, as well as charts on the number of downloads, purchases, and average time spent in an app.
See also
List of Microsoft software
Mac App Store, equivalent platform on macOS
References
External links
Windows components
Software distribution platforms
Universal Windows Platform apps
Windows 8
Windows 10
Windows 11
Xbox One software
Online content distribution
Online-only retailers of video games
Video on demand
Mobile software distribution platforms
Online retailers of the United States
Xbox One
Zenith Z-100
The Z-100 computer is a personal computer made by Zenith Data Systems (ZDS). It was a competitor to the IBM PC.
Design
The Zenith Data Systems Z-100 is a pre-assembled version of the Heathkit H100 electronic kit. In the same family, the Z-120 is an all-in-one model with self-contained monitor, and the Z-110 (called the low profile model) is similar in size to the cabinet of an IBM PC. Both models have a built-in keyboard that was modeled after the IBM Selectric typewriter.
Dual processors: 8085 and 8088.
Available with CP/M and Z-DOS (non-IBM compatible MS-DOS variant).
Five S-100 expansion slots.
Two 320 KB 40-track double-sided 5.25-inch floppy disk drives. Socket enabled direct plug-in of external 8-inch floppies.
2× serial ports (2661 UART), one Centronics printer port (discrete TTL chips), light pen port.
640×225 bitmap display. 8 colors (low-profile model), or monochrome upgradable to 8 greyscales (all-in-one).
Base 128 KB RAM, expandable to 192 KB on board, to 768 KB with S-100 cards. (Video RAM was paged into the 64 KB block above 768 KB).
The Z-100 is partially compatible with the IBM PC, using standard floppy drives. It runs a non-IBM version of MS-DOS, so generic MS-DOS programs run, but most commercial PC software use IBM BIOS extensions and do not run, including Lotus 1-2-3. Several companies offered software or hardware solutions to permit unmodified PC programs to work on the Z-100.
The Z-100 has unusually good graphics for its era, superior to the contemporary CGA (640×200 monochrome bitmap or 320×200 4-color), IBM Monochrome Display Adapter (MDA) (80×25 monochrome text-only), and with 8 colors or grayscales available at a lower resolution than the Hercules Graphics Card (720×348 monochrome). Early versions of AutoCAD were released for the Z-100 because of these advanced graphics.
Aftermarket vendors also released modifications to upgrade mainboard memory and permit installation of an Intel 8087 math coprocessor.
Uses
In 1983, Clarkson College of Technology (now Clarkson University) became the first college in the nation to give each incoming freshman a personal computer. The model issued to them was the Z-100.
In 1986, the US Air Force awarded Zenith Data Systems a $242 million contract for 90,000 Z-100 desktop computers.
Reception
Jerry Pournelle in 1983 praised the Z-100's keyboard, and wrote that it "had the best color graphics I've seen on a small machine". Although forced to buy a real IBM PC because of the Z-100 and other computers' incomplete PC compatibility, he reported in December 1983 that a friend who was inexperienced with electronic kits was able to assemble a H100 in a day, with only the disk controller needing soldering. Ken Skier praised the computer's reliability in the magazine in January 1984 after using the computer for more than 40 hours a week for eight months. While criticizing its inability to read other disk formats, he approved of Zenith's technical support, documentation, and keyboard and graphics. Skier concluded that those who "want a well-designed, well-built, well-documented system that runs the best of 8-bit and 16-bit worlds" should "consider the Zenith Z-100".
References
External links
Z-100 information and pictures from the DigiBarn Computer Museum
Heathkit / Zenith Z100/110/120 at old-computers.com
Z-100 Software and Manual archive from Antediluvian Designs
Z80-based home computers
8086-based home computers
Heathkit computers
Cygwin/X
Cygwin/X is an implementation of the X Window System that runs under Microsoft Windows. It is part of the Cygwin project, and is installed using Cygwin's standard setup system. Cygwin/X is free software, licensed under the X11 License.
Cygwin/X was originally based on XFree86, but switched to the X.Org Server during the XFree86 licensing controversy, owing to concerns that XFree86's new software license was not compatible with the GPL.
After a long hiatus following an 8 July 2005 release, the project was revitalised and the developers released a version based on the X.org modular 7.4 release on 12 November 2008 and continue to maintain it.
Features
There are two ways to run Cygwin/X:
In one, an X server runs in a single Microsoft Windows window that serves as the X display, which holds the X root window and all the other X windows in the X session. You use an X window manager to manage the X windows within the display. You can run multiple X servers, each in its own Microsoft Windows window.
The other method is to run Cygwin/X rootless. In this method, each X window corresponds to its own Microsoft Windows window and there is no root window. There is no X window manager; the Microsoft Windows window manager handles moving, resizing, hiding, and otherwise managing the X windows.
Uses
One use for Cygwin/X is to provide a graphical interface for applications designed for the X Window System that run on the same computer as Cygwin/X. Such applications typically run under Cygwin.
Another use for Cygwin/X is as an X terminal: applications running on another computer access the Cygwin/X X server via the X protocol over an IP network. One can run XDM on the remote system so that a user can log into the remote computer via a window on the Cygwin/X system and then the remote system puts up web browsers, terminal windows, and the like on the Cygwin/X display.
Another common way for an application on a remote system to operate through a window on a local Cygwin/X display is SSH tunneling. An application on the local system creates an SSH session on the remote system (perhaps the application is xterm and the user types an 'ssh' command). The SSH server on the remote system sets things up so that any X client program the shell starts (on the remote system) uses the local Cygwin/X server.
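A minimal sketch of driving such a session from a script, assuming OpenSSH is installed, the remote host permits X11 forwarding, and a Cygwin/X server is already running locally; the host name and the remote client program are placeholders:

```python
import subprocess

# Start an SSH session with X11 forwarding (-X) and run a remote X client.
# sshd on the remote system sets DISPLAY inside the session, so the xterm
# window appears on the local Cygwin/X display.
subprocess.run(["ssh", "-X", "user@remotehost", "xterm"], check=True)
```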
See also
X-Win32 – Closed-source alternative
Xming – MinGW version
XWinLogon – Remote Access Client GUI based on Cygwin/X
References
External links
Cygwin: Changing the Face of Windows
Cygwin wikibook
Windows-only software
X servers
Atari 8-bit family software
Many pieces of software were available for the Atari 8-bit family of home computers (the 400/800, XL, and XE series). Software was sold both by Atari, Inc. (then Atari Corporation starting in mid-1984) and third parties. Atari also distributed software through the Atari Program Exchange from 1981 to 1984. After APX folded, many titles were picked up by Antic Software.
Programming languages
Assembly language
Atari, Inc. published two assemblers. The Atari Assembler Editor cartridge is a friendlier, integrated development environment using line numbers for editing source code similar to Atari BASIC. The professionally targeted Atari Macro Assembler shipped at a higher price on a copy protected disk without editor or debugger. Third-party assemblers include SynAssembler from Synapse Software and MAE (Macro Assembler Editor) from Eastern House.
Optimized Systems Software published an enhanced disk-based assembler mimicking the structure of Atari's Assembler Editor as EASMD (Editor/Assembler/Debug). It followed that with MAC/65 first on disk with BUG/65 as a companion product, then as a 16KB bank-switched cartridge. MAC/65 tokenizes lines of code as they are entered and has much faster assembly times than Atari's products.
Dunion's Debugging Tool (or DDT) by Jim Dunion is a machine language debugger originally sold through the Atari Program Exchange. A reduced version is included in the cartridge version of MAC/65. Atari magazine ANALOG Computing published the machine language monitor H:BUG as a type-in listing, followed by BBK Monitor.
BASIC
Atari shipped Atari BASIC with all their machines either as a cartridge or in ROM. It also sold Atari Microsoft BASIC on disk. Optimized Systems Software created a series of enhanced BASICs: BASIC A+, BASIC XL, BASIC XE. BASIC compilers were also available, from 1982's ABC (Monarch Data Systems) to two releases from 1985: Advan BASIC and Turbo BASIC XL.
Pascal
Atari's own Atari Pascal requires two disk drives and was relegated to the Atari Program Exchange instead of the official product line. Later options were Draper Pascal and Kyan Pascal.
Forth
Atari 8-bit Forths include fig-Forth, Extended fig-Forth (Atari Program Exchange), ES-Forth, QS Forth, and ValFORTH.
Other
Action! is an ALGOL 68-like procedural programming language that shipped on cartridge with an integrated compiler and full-screen text editor. The language is designed for quick compile times and to generate efficient 6502 machine code.
Deep Blue C is a port of Ron Cain's Small-C compiler. It was sold through the Atari Program Exchange.
Atari, Inc. published the highly regarded Atari Logo as well as Atari PILOT, both on cartridge.
Other Atari 8-bit family languages include Extended WSFN and Inter-LISP/65.
Applications
See :Category:Atari 8-bit family software.
Word processors
Atari, Inc. published the Atari Word Processor in 1981, followed by the more popular AtariWriter cartridge in 1983. Third party options include PaperClip, Letter Perfect, Word Magic, Superscript, Bank Street Writer, COMPUTE! magazine's type-in SpeedScript, The Writer's Tool, Muse Software's Super-Text, and relative latecomer The First XLEnt Word Processor in 1986. Cut & Paste from Electronic Arts and Homeword from Sierra On-Line were designed to be simpler to use than other programs.
Two integrated software packages that include word processing are HomePak and Mini Office II. Antic compared seven word processors in the February 1987 issue of the magazine.
Graphics
Movie Maker, originally from Reston Publishing then later Electronic Arts, allows creating full-screen animations with synchronized audio that can be saved in a standalone playback format.
Music
Atari's Music Composer cartridge (1979), the first music composition software for the Atari 8-bit family, was later joined by Advanced MusicSystem from the Atari Program Exchange (1982), Music Construction Set (1983), and Bank Street Music Writer (1985). Antic published the Antic Music Processor in 1988 as a disk bonus.
Games
See :Category:Atari 8-bit family games.
Because of graphics superior to those of the Apple II and Atari's home-oriented marketing, the Atari 8-bit computers gained a good reputation for games. BYTE in 1981 stated that "for sound and video graphics [they] are hard to beat". Jerry Pournelle wrote in the magazine in 1982, when trying to decide what computer to buy his sons, that "if you're only interested in games, that's the machine to get. It's not all that expensive, either".
Star Raiders was Atari's killer app, akin to VisiCalc for the Apple II in its ability to persuade customers to buy the computer. Antic in 1986 stated that "it was the first program that showed all of the Atari computer's audio and visual capabilities. It was just a game, yes, but it revolutionized the idea of what a personal computer could be made to do."
A 1984 compendium of reviews used 198 pages for games compared to 167 for all others. It noted the existence of a distinct "graphics look" to native Atari software: "Multiple graphics modes, four directional fine scrolling, colorful modified character-set backgrounds, and, of course, player missile graphics".
References
List of live CDs
This is a list of live CDs. A live CD or live DVD is a CD-ROM or DVD-ROM containing a bootable computer operating system. Live CDs are unique in that they have the ability to run a complete, modern operating system on a computer lacking mutable secondary storage, such as a hard disk drive.
Rescue and repair
Billix – A multiboot distribution and system administration toolkit with the ability to install any of the included Linux distributions
Inquisitor – Linux kernel-based hardware diagnostics, stress testing and benchmarking live CD
Parted Magic – Entirely based on the 2.6 or newer Linux kernels
System Folder of classic Mac OS on a CD or on a floppy disk – Works on any media readable by 68k or PowerPC Macintosh computers
SystemRescueCD – A Linux kernel-based CD with tools for Windows and Linux repairs
BSD-based
FreeBSD based
DesktopBSD – as of 1.6RC1 FreeBSD and FreeSBIE based
FreeBSD – has supported use of a "fixit" CD for diagnostics since 1996
FreeNAS – m0n0wall-based
FreeSBIE (discontinued) – FreeBSD-based
GhostBSD – FreeBSD based with gnome GUI, installable to HDD
Ging – Debian GNU/kFreeBSD-based
m0n0wall (discontinued) – FreeBSD-based
TrueOS – FreeBSD-based
pfSense – m0n0wall-based
Other BSDs
DragonFly BSD
Linux kernel-based
Arch Linux based
Artix – LXQt preconfigured and OpenRC-oriented live CD and distribution
Archie – live CD version of Arch Linux.
Antergos
Chakra
Manjaro – primarily free software operating system for personal computers aimed at ease of use.
Parabola GNU/Linux-libre - distro endorsed by the Free Software Foundation
SystemRescueCD
Debian-based
These are directly based on Debian:
antiX – A light-weight edition based on Debian
Debian Live – Official live CD version of Debian
Finnix – A small system administration live CD, based on Debian testing, and available for x86 and PowerPC architectures
grml – Installable live CD for sysadmins and text tool users
HandyLinux – A French/English Linux distribution derived from Debian designed for inexperienced computer users
Instant WebKiosk – Live, browser only operating system for use in web kiosks and digital signage deployments
Kali Linux – The most advanced penetration testing distribution
Knoppix – The "original" Debian-based live CD
MX Linux – Live based on Debian stable
Tails – An Amnesic OS based on anonymity and Tor
Slax – (formerly based on Slackware) modular and very easy to remaster
Webconverger – Kiosk software that boots live in order to turn PC into temporary Web kiosk
Knoppix-based
A large number of live CDs are based on Knoppix. The list of those is in the derivatives section of the Knoppix article.
Ubuntu-based
These are based at least partially on Ubuntu, which is based on Debian:
CGAL LiveCD – Live CD containing CGAL with all demos compiled. This enables the user to get an impression of CGAL and create CGAL software without the need to install CGAL.
Emmabuntüs is a Linux distribution derived from Ubuntu and designed to facilitate the repacking of computers donated to Emmaüs Communities.
gNewSense – Supported by the Free Software Foundation, includes GNOME
gOS – A series of lightweight operating systems based on Ubuntu with Ajax-based applications and other Web 2.0 applications, geared to beginning users, installable live CD
Linux Mint – Installable live CD
Mythbuntu – A self-contained media center suite based on Ubuntu and MythTV
OpenGEU – Installable live CD
PC/OS – An Ubuntu derivative whose interface was made to look like BeOS. A 64-bit version was released in May 2009. In 2010 PC/OS moved to a more unified look with its parent distribution and a GNOME version was released on March 3, 2010.
Pinguy – An Ubuntu-based distribution designed to look and feel simple. Pinguy is designed with the intent of integrating new users to Linux.
Puredyne – Live CD/DVD/USB for media artists and designers, based on Ubuntu and Debian Live
Qimo 4 Kids – A fun distro for kids that comes with educational games
Trisquel – Supported by the Free Software Foundation, includes GNOME
TurnKey Linux Virtual Appliance Library – Family of installable live CD appliances optimized for ease of use in server-type usage scenarios
Ubuntu and Lubuntu – Bootable live CDs
Other Debian-based
AVLinux – AVLinux is a Linux for multimedia content creators.
CrunchBang Linux – Installable live CD, using Openbox as window manager
Damn Small Linux – Very light and small with JWM and Fluxbox, installable live CD
DemoLinux (versions 2 and 3) – One of the first live CDs
Dreamlinux – Installable live CD to hard drives or flash media (this distribution is no longer supported)
gnuLinEx – Includes GNOME
Kanotix – Installable live CD
MEPIS – Installable live CD
Gentoo-based
Bitdefender Rescue CD
Calculate Linux
FireballISO – VMware virtual machine that generates a customized security-hardened IPv4 and IPv6 firewall live CD.
Incognito – includes anonymity and security tools such as Tor by default
Kaspersky Rescue Disk
Pentoo
SabayonLinux
Ututo
VidaLinux
Mandriva-based
DemoLinux (version 1)
Mageia – installable live CD
Mandriva Linux – installable live CD; GNOME and KDE editions available
openSUSE-based
openSuSE – official Novell/SuSE-GmbH version – installable live CD; GNOME and KDE versions available
Red Hat Linux/Fedora-based
Berry Linux
CentOS – installable live CD
Fedora – installable live CD, with GNOME or KDE
Korora – installable live USB (recommended over DVD), with Cinnamon, GNOME, KDE, MATE, or Xfce
Network Security Toolkit – installable live disc, with GNOME or Fluxbox
Slackware-based
AUSTRUMI – 50 MB Mini distro
BioSLAX – a bioinformatics live CD with over 300 bioinformatics applications
NimbleX – under 200 MB
Porteus – under 300 MB
Salix
Slackware-live (CD/USB images with the latest updates from slackware-current)
Vector Linux (Standard and SOHO Editions)
Zenwalk
Other
Acronis Rescue Media – to make disk images from hard disk drives
CHAOS – small (6 MB) and designed for creating ad hoc computer clusters
EnGarde Secure Linux – a highly secure Linux based on SE Linux
GeeXboX – a self-contained media center suite based on Linux and MPlayer
GoboLinux – an alternative Linux distribution. Its most salient feature is its reorganization of the filesystem hierarchy. Under GoboLinux, each program has its own subdirectory tree.
Granular – installable live CD based on PCLinuxOS, featuring KDE and Enlightenment
Lightweight Portable Security – developed and publicly distributed by the United States Department of Defense’s Software Protection Initiative to serve as a secure end node
Linux From Scratch Live CD (live CD inactive) – used as a starting point for a Linux From Scratch installation
Nanolinux – 14 MB distro on an installable live CD with BusyBox and Fltk, for desktop computing
paldo – independently developed, rolling release distribution on installable live CD
PCLinuxOS – installable live CD for desktop computing use
Puppy Linux – installable live CD, very small
SliTaz – installable live CD, one of the smallest available with good feature set
Tiny Core Linux – based on Linux 2.6 kernel, BusyBox, Tiny X, Fltk, and Flwm, begins at 10 MB
XBMC Live – a self-contained media center suite based on Embedded Linux and XBMC Media Center
OS X-based
DasBoot by SubRosaSoft.com
OSx86 (x86 only)
Windows-based
Microsoft representatives have described third-party efforts at producing Windows-based live CDs as "improperly licensed" uses of Windows, unless used solely to rescue a properly licensed installation. However, Nu2 Productions believes the use of BartPE is legal provided that one Windows license is purchased for each BartPE CD, and the Windows license is used for nothing else.
BartPE – allows creation of a bootable CD from Windows XP and Windows Server 2003 installation files
WinBuilder – allows the creation of a bootable CD from Windows 2000 and later
Windows Preinstallation Environment
OpenSolaris-based
Systems based on the former open source "OS/net Nevada" or ONNV open source project by Sun Microsystems.
BeleniX – full live CD and live USB distribution (moving to Illumos?)
OpenSolaris – the former official distribution supported by Sun Microsystems based on ONNV and some closed source parts
Illumos-based
Illumos is a fork of the former OpenSolaris ONNV aiming to further develop the ONNV and replacing the closed source parts while remaining binary compatible. The following products are based upon Illumos:
Nexenta OS – combines the GNU userland with the OpenSolaris kernel.
OpenIndiana – since OpenIndiana 151a based on Illumos
Other operating systems
AmigaOS 4 – Installable live CD
Arch Hurd – A live CD of Arch Linux with the GNU Hurd as its kernel
AROS – Offers live CD for download on the project page
BeOS – All BeOS discs can be run in live CD mode, although PowerPC versions need to be kickstarted from Mac OS 8 when run on Apple or clone hardware
FreeDOS – the official "Full CD" 1.0 release includes a live CD portion
Haiku – Haiku is a free and open source operating system compatible with BeOS running on Intel x86 platforms instead of PowerPC.
Hiren's BootCD
Minix
MorphOS – Installable live CD
OpenVMS – Installable live CD
OS/2 Ecomstation Demo
Plan 9 from Bell Labs – Has a live CD, which is also its install CD (and the installer is a shell script).
QNX
ReactOS
SkyOS
Syllable Desktop
See also
List of Linux distributions that run from RAM
List of tools to create Live USB systems
Windows To Go
References
External links
The LiveCD List
Live CDs
Live CD
Ingo Molnár
Ingo Molnár, employed by Red Hat as of May 2013, is a Hungarian Linux hacker. He is known for his contributions to the Linux kernel in the areas of security and performance.
Life and career
Molnár studied at Eötvös Loránd University.
Work
Some of his additions to the Linux kernel include the O(1) scheduler of Linux-2.6.0 and the Completely Fair Scheduler of Linux-2.6.23, the in-kernel TUX HTTP / FTP server, as well as his work to enhance thread handling. He also wrote a kernel security feature called "Exec Shield", which prevents stack-based buffer overflow exploits in the x86 architecture by disabling the execute permission for the stack.
Together with Thomas Gleixner, he worked on the real-time preemption (PREEMPT_RT) patch set, which aims to reduce the maximum thread switching latency of the Linux kernel from an unbounded number of milliseconds down to bounded values on the order of tens of microseconds (depending on the system). As of 2011, Thomas Gleixner was working on further improving the patch set and getting important infrastructure patches from it merged into the mainline Linux kernel.
Between Linux 2.6.21 and Linux 2.6.24, he worked on the Completely Fair Scheduler (CFS) which was inspired by the scheduler work of Con Kolivas.
CFS replaced the previous process scheduler of the Linux kernel with Linux-2.6.23.
In 2012 Molnár criticized the Linux desktop as "not free enough" for users with respect to applications. He argued that the system of software distribution and deployment typically used by centrally organized Linux distributions is not fast and flexible enough to satisfy the requirements of users and application producers alike. Molnár suggested a decentralized deployment method (similar to Autopackage, Zero Install, or the Klik successor AppImage) which allows a more flexible application infrastructure formed by a stable platform and independent software providers.
In early 2022, he submitted an RFC for a set of about 2,300 patches, called "Fast Kernel Headers", intended to improve kernel compile times by 50-80% and at the same time significantly reduce the problems created by the hierarchy and dependencies of include files, the so-called "dependency hell".
Quotes
On the question, why the Linux desktop has not been adopted by the mainstream users yet:
References
External links
Ingo Molnár's homepage at Red Hat
Ingo Molnár's RT-kernel homepage
The RT-kernel Wiki
Ingo LKML activity
Linux kernel programmers
Living people
Hungarian computer scientists
Year of birth missing (living people)
Free software programmers
Hungarian computer programmers
Red Hat employees
Banyan VINES
Banyan VINES is a network operating system developed by Banyan Systems for computers running AT&T's UNIX System V.
VINES is an acronym for Virtual Integrated NEtwork Service. Like Novell NetWare, VINES's network services were based on the Xerox XNS stack.
James Allchin, who later worked as Group Vice President for Platforms at Microsoft until his retirement on January 30, 2007, was the chief architect of Banyan VINES.
VINES technology
VINES ran on a low-level protocol known as VIP—the VINES Internetwork Protocol—that was essentially identical to the lower layers of the Xerox Network Systems (XNS) protocols. Addresses consisted of a 32-bit address and a 16-bit subnet that mapped to the 48-bit Ethernet address to route to machines. This meant that, like other XNS-based systems, VINES could only support a two-level internet.
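As a rough illustration of the address layout described above (a 32-bit network number followed by a 16-bit subnetwork/host number), the sketch below packs the two fields into a 48-bit value; the field names and byte order are illustrative assumptions, not taken from the VINES specification:

```python
import struct

def pack_vip_address(network: int, subnet: int) -> bytes:
    """Pack a 32-bit network/server number and a 16-bit subnet/host
    number into the 48-bit address form described above."""
    return struct.pack(">IH", network, subnet)

addr = pack_vip_address(0x00012345, 0x8001)
print(addr.hex())  # six bytes, the same width as an Ethernet MAC address
```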
A set of routing algorithms, however, set VINES apart from other XNS systems at this level. The key differentiator, ARP (Address Resolution Protocol), allowed VINES clients to automatically set up their own network addresses. When a client first booted up, it broadcast a request on the subnet asking for servers, which would respond with suggested addresses. The client would use the first to respond, although the servers could hand off "better" routing instructions to the client if the network changed. The overall concept very much resembled AppleTalk's AARP system, with the exception that VINES required at least one server, whereas AARP functioned completely "headlessly". Like AARP, VINES required an inherently "chatty" network, sending updates about the status of clients to other servers on the internetwork.
Rounding out its lower-level system, VINES used RTP (the Routing Table Protocol), a low-overhead message system for passing around information about changes to the routing, and ARP to determine the address of other nodes on the system. These closely resembled the similar systems used in other XNS-based protocols. VINES also included ICP (the Internet Control Protocol), which it used to pass error-messages and metrics.
At the middle layer level, VINES used fairly standard software. The unreliable datagram service and data-stream service operated essentially identically to UDP and TCP on top of IP. However, VINES also added a reliable message service as well, a sort of hybrid of the two that offered guaranteed delivery of a single packet.
Banyan offered TCP/IP as an extra-cost option to customers of standard VINES servers. This extra charge for TCP/IP on VINES servers continued long after TCP/IP server availability had become commoditized.
At the topmost layer, VINES provided the standard file and print services, as well as the unique StreetTalk, likely the first truly practical globally consistent name-service for an entire internetwork. Using a globally distributed, partially replicated database, StreetTalk could meld multiple widely separated networks into a single network that allowed seamless resource-sharing. It accomplished this through its rigidly hierarchical naming-scheme; entries in the directory always had the form item@group@organization (similar to the naming format used in the XNS Clearinghouse directory service: item:group:organization). This applied to user accounts as well as to resources like printers and file servers.
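A small sketch of the three-level naming scheme described above; it only illustrates the item@group@organization form and is not based on Banyan's actual tooling:

```python
def parse_streettalk(name: str) -> dict:
    """Split a StreetTalk-style name into its item, group and organization parts."""
    parts = name.split("@")
    if len(parts) != 3 or not all(parts):
        raise ValueError("expected a name of the form item@group@organization")
    item, group, organization = parts
    return {"item": item, "group": group, "organization": organization}

# Hypothetical printer resource in an accounting group of an 'acme' organization.
print(parse_streettalk("laser1@accounting@acme"))
```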
Protocol stack
VINES client software
VINES client software ran on most PC-based operating systems, including MS-DOS and earlier versions of Microsoft Windows. It was fairly light-weight on the client, and hence remained in use during the later half of the 1990s on many older machines that could not run other networking stacks then in widespread use. This occurred on the server side as well, as VINES generally offered good performance, even from mediocre hardware.
Initial market release
With StreetTalk's inherently low bandwidth requirements, global companies and governments that grasped the advantages of worldwide directory services seamlessly spanning multiple time zones recognized VINES's technological edge. Users included gas and oil companies, power companies, public utilities—and U.S. Government agencies including the State Department, Treasury Department, Department of Agriculture, Department of Health and Human Services and Department of Defense.
The U.S. State Department, for example, was an early adopter of the VINES technology. Able to take advantage of the then high-speed 56k modems for telephonic connectivity of the developed world to the limited telephone modem speeds of 300 baud over bad analog telephone systems in the Third World, VINES was able to link embassies around the world. VINES also came with built-in point-to-point and group chat capability that was useful for basic communication over secure lines.
Defense Department adoption
By the late 1980s, the US Marine Corps was searching for simple, off-the-shelf worldwide network connectivity with rich built-in email, file, and print features. By 1988, the Marine Corps had standardized on VINES as both its garrison (base) and forward-deployed ground-based battlefield email-centric network operating system.
Using both ground-based secure radio channels and satellite and military tactical phone switches, the Marine Corps was ready for its first big test of VINES: the 1990-1991 Gulf War. Units were able to seamlessly coordinate ground, naval, and air strikes across military boundaries by using the chat function to pass target lists and adjust naval gun fire on the fly. Ground fire support coordination agencies used VINES up and down command channels—from Battalion-to-Regiment through Division-to-Corps and Squadron-to-Group to Aircraft Wing-to-Corps, as well as in peer-to-peer unit communication.
VINES competitors
For a decade, Banyan's OS competitors, Novell and Microsoft, dismissed the utility of directory services. Consequently, VINES dominated what came to be called the "directory services" space from 1985 to 1995. While seeming to ignore VINES, Novell and eventually Microsoft—companies with a flat server or domain-based network model—came to realize the strategic value of directory services. With little warning, Novell went from playing down the value of directory services to announcing its own: NetWare Directory Services (NDS). Eventually, Novell changed NDS to mean Novell Directory Services, and then renamed that to eDirectory.
Microsoft had gone through its own round of operating system development. Initially, it partnered with IBM to develop an Intel-based disk operating system called PC DOS, and its Microsoft twin, MS-DOS. Eventually, Microsoft shared true network operating system development with IBM LAN Manager and its Microsoft twin, Microsoft LAN Manager. Microsoft parted company with IBM and continued developing LAN Manager into what became Windows NT, essentially its OS 4.0. NT was originally a flat server or domain-based operating system with none of the advantages of VINES or NDS.
For Windows 2000 however, Microsoft included Active Directory, an LDAP directory service based on the directory from its Exchange mail server. Active Directory was as robust as and, in several key ways, superior to VINES. While VINES was limited to a three-part name, user.company.org, like Novell's NDS structure, Active Directory was not bound by such a naming convention. Active Directory had developed an additional capability that both NDS and VINES lacked, its "forest and trees" organizational model. The combination of better architecture and a marketing company the size of Microsoft doomed StreetTalk, VINES as an OS, and finally Banyan itself.
Decline
By the late 1990s, VINES's once-touted StreetTalk Services' non-flat, non-domain model with its built-in messaging, efficiency and one-time performance edge had lost ground to newer technology. Banyan was unable to market its product far beyond its initial base of multi-national and government entities.
Because Banyan could not quickly develop an OS to take advantage of newer hardware, and apparently did not understand that the StreetTalk directory services, not the shrink-wrapped OS, was the prime value added—the company lost ground in the networking market. VINES sales rapidly dried up, both because of these problems and because of the rapid rise of Windows NT. Banyan increasingly turned to StreetTalk as a differentiator, eventually porting it to NT as a stand-alone product and offering it as an interface to LDAP systems.
Furthermore, Banyan continued to operate a closed OS. This required hardware manufacturers to submit hardware and driver requirements so that Banyan could write drivers for each peripheral. When more open systems with published APIs began to appear, Banyan did not alter its model. This made it difficult for client-side support to handle the explosive growth in, for example, printers. As competitors began to adopt some of VINES's outstanding wide area networking protocols and services, manufacturers were less inclined to send a unit to Banyan for VINES-specific drivers when competitors let them write their own.
Dropping the Banyan brand for ePresence in 1999, as a general Internet services company, the firm sold its services division to Unisys in late 2003 and liquidated its remaining holdings in its Switchboard.com subsidiary.
Version history
1984: Banyan VINES 1.0
1989: Banyan VINES 2.1
1990: Banyan VINES 3.0
1991: Banyan VINES 4.11
1992: Banyan VINES 5.0
1994: Banyan VINES 5.50
1997: Banyan VINES 7.0
References
Resource
Banyan VINES at Bamertal Publishing
Banyan VINES at Coldtail.com
1984 software
Directory services
Discontinued operating systems
Network operating systems
UNIX System V
X86 operating systems
XNS based protocols
Criticism of Windows XP
Criticism of Windows XP deals with issues with security, performance and the presence of product activation errors that are specific to the Microsoft operating system Windows XP.
Security issues
Windows XP has been criticized by many users for its vulnerabilities due to buffer overflows and its susceptibility to malware such as viruses, trojan horses, and worms. Nicholas Petreley for The Register notes that "Windows XP was the first version of Windows to reflect a serious effort to isolate users from the system, so that users each have their own private files and limited system privileges." However, users by default receive an administrator account that provides unrestricted access to the underpinnings of the system. If the administrator's account is compromised, there is no limit to the control that can be asserted over the PC. Windows XP Home Edition also lacks the ability to administer security policies and denies access to the Local Users and Groups utility.
Microsoft stated that the release of security patches is often what causes the spread of exploits against those very same flaws, as crackers figure out what problems the patches fix and then launch attacks against unpatched systems. For example, in August 2003 the Blaster worm exploited a vulnerability present in every unpatched installation of Windows XP, and was capable of compromising a system even without user action. In May 2004 the Sasser worm spread by using a buffer overflow in a remote service present on every installation. Patches to prevent both of these well-known worms had already been released by Microsoft. Increasingly widespread use of Service Pack 2 and greater use of personal firewalls may also contribute to making worms like these less common.
Many attacks against Windows XP systems come in the form of trojan horse e-mail attachments which contain worms. A user who opens the attachment can unknowingly infect his or her own computer, which may then e-mail the worm to more people. Notable worms of this sort that have infected Windows XP systems include Mydoom, Netsky and Bagle. To discourage users from running such programs, Service Pack 2 includes the Attachment Execution Service which records the origin of files downloaded with Internet Explorer or received as an attachment in Outlook Express. If a user tries to run a program downloaded from an untrusted security zone, Windows XP with Service Pack 2 will prompt the user with a warning.
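On NTFS volumes this origin information is recorded in a Zone.Identifier alternate data stream attached to the downloaded file. A minimal sketch for inspecting it on Windows; the path is a placeholder, and the stream only exists for files saved by zone-aware programs:

```python
# Windows/NTFS only: read the zone-of-origin marker checked by the
# Attachment Execution Service before a downloaded file is run.
path = r"C:\Users\example\Downloads\setup.exe"  # placeholder path

try:
    with open(path + ":Zone.Identifier") as stream:
        print(stream.read())  # typically "[ZoneTransfer]" with "ZoneId=3" (Internet)
except OSError:
    print("no zone information recorded for this file")
```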
Spyware and adware are a continuing problem on Windows XP and other versions of Windows. Spyware is also a concern for Microsoft with regard to service pack updates; Barry Goff, a group product manager at Microsoft, said some spyware could cause computers to freeze up upon installation of Service Pack 2. In January 2005, Microsoft released a free beta version of Windows Defender which removes some spyware and adware from computers.
Windows XP offers some useful security benefits, such as Windows Update, which can be set to install security patches automatically, and a built-in firewall. If a user doesn't install the updates for a long time after the Windows Update icon is displayed in the toolbar, Windows will automatically install them and restart the computer on its own. This can lead to the loss of unsaved data if the user is away from the computer when the updates are installed. Service Pack 2 enables the firewall by default. It also adds increased memory protection to let the operating system take advantage of new No eXecute technology built into CPUs such as the AMD64. This allows Windows XP to prevent some buffer overflow exploits.
On April 8, 2014, extended support of Windows XP ended. As this means that security vulnerabilities are no longer patched, the general advice given by both Microsoft and security specialists is to no longer use Windows XP.
Antitrust concerns
In light of the United States v. Microsoft Corp. case which resulted in Microsoft being convicted for illegally abusing its operating system monopoly to overwhelm competition in other markets, Windows XP has drawn fire for integrating user applications such as Windows Media Player and Windows Messenger into the operating system, as well as for its close ties to the Windows Live ID (now Microsoft account) service.
In 2001, ProComp – a group including several of Microsoft's rivals, including Oracle, Sun, and Netscape – claimed that the bundling and distribution of Windows Media Player in Windows XP was a continuance of Microsoft's anticompetitive behavior and that the integration of Windows Live ID (at the time Microsoft Passport) into Windows XP was a further example of Microsoft attempting to gain a monopoly in web services. Both of these claims were rebutted by the Association for Competitive Technology (ACT) and the Computing Technology Industry Association (CompTIA), both partially funded by Microsoft. The battle being fought by fronts for each side was the subject of a heated exchange between Oracle's Larry Ellison and Microsoft's Bill Gates.
Microsoft responded on its "Freedom to Innovate" web site, pointing out that in earlier versions of Windows, Microsoft had integrated tools such as disk defragmenters, graphical file managers, and TCP/IP stacks, and there had been no protest that Microsoft was being anti-competitive. Microsoft asserted that these tools had moved from special to general usage and therefore belonged in its operating system.
To avoid the possibility of an injunction, which might have delayed the release of Windows XP, Microsoft changed its licensing terms to allow PC manufacturers to hide access to Internet Explorer (but not remove it). Competitors dismissed this as a trivial gesture. Later, Microsoft released a utility as part of Service Pack 1 (SP1) which allows icons and other links to bundled software such as Internet Explorer, Windows Media Player, and Windows Messenger (not to be confused with the similar-named Windows Live Messenger, formerly MSN Messenger) to be removed. The components themselves remain in the system; Microsoft maintains that they are necessary for key Windows functionality (such as the HTML Help system and Windows desktop), and that removing them completely may result in unwanted consequences. One critic, Shane Brooks, has argued that Internet Explorer could be removed without adverse effects, as demonstrated with his product XPLite. Dino Nuhagic created his nLite software to remove many components from XP prior to installation of the product.
In addition, in the first release of Windows XP, the "Buy Music Online" feature always used Microsoft's Internet Explorer rather than any other web browser that the user may have set as their default. Under pressure from the United States Department of Justice, Microsoft released a patch in early 2004, which corrected the problem.
Backward compatibility
Migrating from Windows 9x to XP can be an issue for users dependent upon MS-DOS. Although XP comes with the ability to run DOS programs in a virtual DOS machine, it still has trouble running many old DOS programs. This is largely because it is a Windows NT system and does not use DOS as a base OS, and because the Windows NT architecture is different from Windows 9x. Some DOS programs that cannot run natively on XP, notably programs that rely on direct access to hardware, can be run in emulators, such as DOSBox or virtual machines, like VMware, Virtual PC, or VirtualBox. This also applies to programs that only require direct access to certain common emulated hardware components, like memory, keyboard, graphics cards, and serial ports. With DOS emulators, 32-bit versions of Windows XP can run almost any program designed for any previous Microsoft operating system. Only 64-bit versions of XP have major backward-compatibility issues. This is because old 16-bit Windows programs require a tool called NTVDM, which is only present in the 32-bit version of the OS. However, this is true of every version of Windows that comes in both 32-bit and 64-bit versions, and it is not specific to XP; additionally, virtual machine software such as VirtualBox can run 16-bit DOS and Windows programs even on 64-bit versions of Windows.
Product activation and verification
Product activation
In an attempt to reduce piracy, Microsoft introduced product activation in Windows XP. Activation required the computer or the user to activate with Microsoft (either online or over the phone) within a certain amount of time in order to continue using the operating system. If the user's computer system ever changes — for example, if two or more relevant components of the computer itself are upgraded — Windows will return to the unactivated state and will need to be activated again within a defined grace period. If a user tries to reactivate too frequently, the system will refuse to activate online. The user must then contact Microsoft by telephone to obtain a new activation code.
However, activation only applied to retail and "system builder" (intended for use by small local PC builders) copies of Windows. "Royalty OEM" (used by large PC vendors) copies are instead locked to a special signature in the machine's BIOS (and will demand activation if moved to a system whose motherboard does not have the signature) and volume license copies do not require activation at all. This led to pirates simply using volume license copies with volume license keys that were widely distributed on the Internet.
Product key testing
In addition to activation, Windows XP service packs will refuse to install on Windows XP systems with product keys known to be widely used in unauthorized installations. These product keys are either intended for use with one copy (for retail and system builder), for one OEM (for BIOS locked copies) or to one company (for volume license copies) and are included with the product. However a number of volume license product keys (which as mentioned above avoid the need for activation) were posted on the Internet and were then used for a large number of unauthorized installations. The service packs contain a list of these keys and will not update copies of Windows XP that use them.
Microsoft developed a new key verification engine for Windows XP Service Pack 2 that could detect illicit keys, even those that had never been used before. After an outcry from security consultants who feared that denying security updates to illegal installations of Windows XP would have wide-ranging consequences even for legal owners, Microsoft elected to disable the new key verification engine. Service Pack 2 only checks for the same small list of commonly used keys as Service Pack 1. This means that while Service Pack 2 will not install on copies of Windows XP which use the older set of copied keys, those who use keys which have been posted more recently may be able to update their systems.
Windows Genuine Advantage
To try to curb piracy based on leaked or generated volume license keys, Microsoft introduced Windows Genuine Advantage (WGA). WGA comprises two parts, a verification tool which must be used to get certain downloads from Microsoft and a user notification system. WGA for Windows was followed by verification systems for Internet Explorer 7, Windows Media Player 11, Windows Defender, Microsoft Office 2007 and certain updates. In late 2007, Microsoft removed the WGA verification from the installer for Internet Explorer 7 saying that the purpose of the change was to make IE7 available to all Windows users.
If the license key is judged not genuine, it displays a nag screen at regular intervals asking the user to buy a license from Microsoft. In addition, the user's access to Microsoft Update is restricted to critical security updates, and as such, new versions of enhancements and other Microsoft products will no longer be able to be downloaded or installed.
On August 26, 2008, Microsoft released a new WGA activation program that displays a plain black desktop background for computers failing validation. The background can be changed, but reverts after 1 hour.
Common criticisms of WGA have included its description as a "Critical Security Update", causing Automatic Updates to download it without user intervention on default settings, its behavior compared to spyware of "phoning home" to Microsoft every time the computer is connected to the Internet, the failure to inform end users what exactly WGA would do once installed (rectified by a 2006 update), the failure to provide a proper uninstallation method during beta testing (users were given manual removal instructions that did not work with the final build), and its sensitivity to hardware changes which cause repeated need for reactivation in the hands of some developers. Also if the user has no connection to the Internet or a phone, it will be difficult to activate it normally.
Strictly speaking, neither the download nor the install of the Notifications is mandatory; the user can change their Automatic Update settings to allow them to choose what updates may be downloaded for installation. If the update is already downloaded, the user can choose not to accept the supplemental EULA provided for the Notifications. In both cases, the user can also request that the update not be presented again. Newer Critical Security Updates may still be installed with the update hidden. However this setting will only have effect on the existing version of Notifications, so it can appear again as a new version. In 2006, California resident Brian Johnson attempted to bring a class action lawsuit against Microsoft, on grounds that Windows Genuine Advantage Notifications violated the spyware laws in the state; the lawsuit was dismissed in 2010.
Default theme
Windows XP's default theme, Luna, was criticized by some users for its childish look.
See also
Criticism of Microsoft
Criticism of Internet Explorer
Criticism of Windows Vista
Criticism of Windows 10
Free Software Foundation anti-Windows campaigns
Windows refund
References
External links
Microsoft criticisms and controversies
Windows XP
EFI system partition
The EFI (Extensible Firmware Interface) system partition or ESP is a partition on a data storage device (usually a hard disk drive or solid-state drive) that is used by computers having the Unified Extensible Firmware Interface (UEFI). When a computer is booted, UEFI firmware loads files stored on the ESP to start installed operating systems and various utilities.
An ESP contains the boot loaders or kernel images for all installed operating systems (which are contained in other partitions), device driver files for hardware devices present in a computer and used by the firmware at boot time, system utility programs that are intended to be run before an operating system is booted, and data files such as error logs.
Overview
The EFI system partition is formatted with a file system whose specification is based on the FAT file system and maintained as part of the UEFI specification; therefore, the file system specification is independent from the original FAT specification. The actual extent of divergence is unknown: Apple maintains a separate tool that should be used, while other systems get by with ordinary FAT utilities. The globally unique identifier (GUID) for the EFI system partition in the GUID Partition Table (GPT) scheme is C12A7328-F81F-11D2-BA4B-00A0C93EC93B, while its ID in the master boot record (MBR) partition-table scheme is 0xEF. Both GPT- and MBR-partitioned disks can contain an EFI system partition, as UEFI firmware is required to support both partitioning schemes. Also, the El Torito bootable format for CD-ROMs and DVDs is supported.
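When reading a GPT directly, the partition-type GUID in each entry is stored in the mixed-endian byte order used by UEFI; a short sketch (standard library only) that matches an entry's type field against the ESP GUID quoted above:

```python
import uuid

ESP_TYPE_GUID = uuid.UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B")

def is_esp(type_field: bytes) -> bool:
    """Return True if the 16-byte partition-type field of a GPT entry
    identifies an EFI system partition (bytes_le gives the on-disk order)."""
    return type_field == ESP_TYPE_GUID.bytes_le

print(ESP_TYPE_GUID.bytes_le.hex())  # on-disk byte sequence of the type GUID
```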
UEFI provides backward compatibility with legacy systems by reserving the first block (sector) of the partition for compatibility code, effectively creating a legacy boot sector. On legacy BIOS-based systems, the first sector of a partition is loaded into memory and execution is transferred to this code. UEFI firmware does not execute the code in the MBR, except when booting in legacy BIOS mode through the Compatibility Support Module (CSM).
The UEFI specification requires MBR partition tables to be fully supported. However, some UEFI implementations immediately switch to the BIOS-based CSM booting upon detecting certain types of partition table on the boot disk, effectively preventing UEFI booting to be performed from EFI system partitions contained on MBR-partitioned disks.
UEFI firmware supports booting from removable storage devices such as USB flash drives. For that purpose, a removable device is formatted with a FAT12, FAT16 or FAT32 file system, while a boot loader needs to be stored according to the standard ESP file hierarchy, or by providing a complete path of a boot loader to the system's boot manager. On the other hand, FAT32 is always expected on fixed drives.
Usage
Linux
GRUB 2 and elilo serve as conventional, full-fledged standalone UEFI boot loaders for Linux. Once loaded by a UEFI firmware, they both can access and boot kernel images from all devices, partitions and file systems they support, without being limited to the EFI system partition.
EFI Boot Stub makes it possible to boot a Linux kernel image without the use of a conventional UEFI boot loader. By masquerading itself as a PE/COFF image and appearing to the firmware as a UEFI application, an x86 kernel image with EFI Boot Stub enabled can be directly loaded and executed by a UEFI firmware. Such kernel images can still be loaded and run by BIOS-based boot loaders; thus, EFI Boot Stub allows a single kernel image to work in any boot environment.
Linux kernel's support for the EFI Boot Stub is enabled by turning on option CONFIG_EFI_STUB (EFI stub support) during the kernel configuration. It was merged into version 3.3 of the Linux kernel mainline, released on March 18, 2012.
Gummiboot (a.k.a. systemd-boot) is a simple UEFI boot manager that loads and runs configured UEFI images, accessing only the EFI system partition. Configuration file fragments, kernel images and initrd images are required to reside on the EFI system partition, as Gummiboot does not provide support for accessing files on other partitions or file systems. Linux kernels need to be built with CONFIG_EFI_STUB enabled so they can be directly executed as UEFI images.
The mount point for the EFI system partition is usually /boot/efi, where its content is accessible after Linux is booted.
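A small sketch for checking, on a running Linux system, whether the machine was booted through UEFI and where the ESP is mounted; it assumes the conventional /sys and /proc interfaces and the usual /boot/efi (or /efi) mount point:

```python
import os

# /sys/firmware/efi is only present when the kernel was started via UEFI.
print("UEFI boot:", os.path.isdir("/sys/firmware/efi"))

# Scan the mount table for the EFI system partition.
with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype = line.split()[:3]
        if mountpoint in ("/boot/efi", "/efi"):
            print(f"ESP: {device} on {mountpoint} ({fstype})")
```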
macOS
On macOS computers based on the x64 hardware architecture, the EFI system partition is initially left blank and unused for booting. However, the EFI system partition is used as a staging area for firmware updates. The logic usually goes as follows: the EFI firmware first looks for a boot loader in the ESP, and if there is none it continues to the macOS file system.
The pre-UEFI Apple–Intel architecture (mactel) EFI subsystem used to require the EFI system partition to be formatted in HFS+. Any third-party boot loader also needs to be "blessed" by a special IOCTL command before becoming bootable by the firmware, a relic of the system folder blessing from classic Mac OS. There are otherwise no limitations on what kind of EFI operating system or boot loader a mactel machine can run.
Windows
On Windows XP 64-Bit Edition and later, access to the EFI system partition is obtained by running the mountvol /S command.
The Windows boot manager is located in the \EFI\Microsoft\Boot subfolder of the EFI system partition.
See also
BIOS boot partition
EFI variable
System partition and boot partition
Windows To Go
References
External links
EFI System Partition Subdirectory Registry – A registry of the subdirectories that lie below the /EFI directory on an EFI system partition
Booting
Computer file systems
Disk partitions
Unified Extensible Firmware Interface
Dell Networking Operating System
DNOS or Dell Networking Operating System is a network operating system running on switches from Dell Networking. It is derived from either the PowerConnect OS (DNOS 6.x) or the Force10 OS/FTOS (DNOS 9.x). DNOS 9.x is available for the 10G and faster Dell Networking S-series switches and the Z-series 40G core switches, while DNOS 6.x is available for the N-series switches.
Two version families
The DNOS network operating system family comes in a few main versions:
DNOS3
DNOS 3.x: This is a family of firmware for the campus access switches that can only be managed using a web-based GUI or run as an unmanaged device.
DNOS6
DNOS 6.x: This is the operating system running on the Dell Networking N-series (campus) networking switches. It is the latest version of the 'PowerConnect' operating system, running on a Linux Kernel. It is available as upgrade for the PowerConnect 8100 series switches (which then become a Dell Networking N40xx switch) and it also is installed on all DN N1000, N2000 and N3000 series switches. It has a full web-based GUI together with a full CLI (command line interface) and the CLI will be very similar to the original PowerConnect CLI, though with a range of new features like PVSTP (per VLAN spanning tree), Policy Based Routing and MLAG.
DNOS9
DNOS 9.x: This is the operating system running on the Dell Networking S-series and Z-series switches. It is derived from Force10's FTOS and runs on NetBSD.
Only the PowerConnect 8100 will be able to run DNOS 6.x: all other PowerConnect Ethernet switches will continue to run their own PowerConnect OS (on top of VxWorks), while the PowerConnect W-series run on a Dell-specific version of ArubaOS.
The Dell Networking S-series and Z9x00 series will run on DNOS, while the other Dell Networking switches will continue to run FTOS 8.x firmware.
OS10
OS10 is a Linux-based open networking OS that can run on all Open Network Install Environment (ONIE) switches. As it runs directly in a Linux environment, network admins can highly automate the network platform and manage the switches in a similar way to their (Linux) servers.
Hardware Abstraction Layer
Three of the four product families from Dell Networking use the Broadcom Trident+ ASICs, but the company doesn't use the APIs from Broadcom: the developers at Dell Networking have written their own hardware abstraction layer so that DNOS 9.x can run on different hardware platforms with minimal impact on the firmware. Currently three of the four DN switch families are based on the Broadcom Trident family (while the fourth, the E-series, runs on self-developed ASICs), and two of them run DNOS 9.x (the S- and Z-series). If the product developers want or need to use different hardware for new products, they only need to develop a HAL for that new hardware and the same firmware can run on it. This keeps the company flexible and not dependent on a specific hardware vendor, and it can use both third-party and self-designed ASICs and chipsets.
The underlying OS on which DNOS 9.x runs is based on NetBSD (while DNOS 6.x runs on a Linux kernel), an implementation which is often used in embedded networking systems. NetBSD is a very stable, open-source OS running on many different hardware platforms. Choosing a proven technology with extended TCP functionality built into the core of the OS reduces development time for new products and for extending DNOS with new features.
Modular setup
DNOS 9.x is also modular: different parts of the OS run independently from each other within one switch, so if one process fails, the impact on other processes on the switch is limited. This modular setup is also taken to the hardware level in some product lines, where a routing module has three separate CPUs: one for management, one for L2 and one for L3 processing. The same approach is used in the newer firmware families from Cisco, such as NX-OS for the Nexus product line or IOS XR for Cisco's high-end routers (the Carrier Routing Systems), unlike the original IOS, where processes aren't isolated from each other. This approach is regarded not only as a way to make the firmware more resilient but also as one that increases the security of the switches.
Capabilities
All DNOS 9.x based switches offer a wide range of layer-2 and layer-3 protocols. All features are available on all switches: some switch models (in the S-series) offer an additional license for layer-3 routing. This additional license is not required to use those protocols, but only to get support from the Dell Networking support department on using these features. All interfaces on switches running DNOS 9.x are configured as layer-3 interfaces and are shut down by default. To use such an interface as an Ethernet switchport you need to configure it as such (with the command "switchport") and then enable the port using "no shutdown".
Unlike DNOS 6.x (which provides both a web GUI and a CLI, with extensive API control via the undocumented "debug console" and "dev help" commands), DNOS 9.x only offers a documented command line interface (CLI) to configure and monitor the switch directly, though it is possible with the "Automation Tools" to create your own web GUI on DNOS 9.x switches.
Layer2 capabilities
All common Ethernet standards are supported by switches running FTOS/DNOS 9.x, including: Spanning Tree Protocol and RSTP, VLANs and the IEEE 802.1Q standard, QinQ or IEEE 802.1ad, Link Layer Discovery Protocol and LLDP-MED.
The S-series switches whose model names end with a V and some of the E-series line cards support Power over Ethernet (PoE) according to the relevant standards.
Layer3 capabilities
As mentioned above, by default an interface on a switch running DNOS 9.x is configured as a layer-3 port. All these switches are thus routers with many interfaces that can be (and most often are) reconfigured into a layer-2 Ethernet switch.
All DNOS 9 switches run at least the following routing protocols: Routing Information Protocol and RIP version 2, OSPF, IS-IS and Border Gateway Protocol version 4.
Open Automation
Under the name Open Automation 2.0, Dell Networking switches running DNOS 9.x offer a number of features. These include:
Smart Scripting
Dell Networking switches support so-called smart scripting: it is possible to develop scripts that run on switches running DNOS 9. Both Perl and Python are supported as scripting languages to automate environment-specific repetitive tasks or to build in custom behavior. Users who write such scripts are encouraged to share them with the user community and make them available to other Force10/DNOS users. Force10 introduced smart scripting in FTOS in 2010, following other vendors such as Cisco with its Nexus product range.
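As a rough illustration of the kind of repetitive task such a script could automate, the sketch below generates the per-interface commands mentioned earlier (ports default to layer-3 and shut down); the interface names are hypothetical and no DNOS-specific API is used:

```python
def switchport_config(interfaces):
    """Build the CLI snippet that turns default layer-3, shut-down ports
    into active layer-2 switchports."""
    lines = []
    for name in interfaces:
        lines += [f"interface {name}", " switchport", " no shutdown", "!"]
    return "\n".join(lines)

# Hypothetical port names; a real smart script would apply this on the switch.
print(switchport_config(f"TenGigabitEthernet 0/{n}" for n in range(4)))
```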
Bare metal provisioning
Dell Networking switches support a bare metal provisioning (BMP) option: when a number of similar switches need to be deployed, both the desired firmware release and a standard user-specific configuration can be put on a USB key; when deploying the switches, the USB key is inserted, the switch is powered up, and it automatically loads the correct firmware and configuration. In combination with smart scripting, these features allow a fully automated installation and configuration of new switches. It is also possible to run BMP via the network: unless reconfigured to start in 'normal' mode, all DNOS 9.x switches (and the earlier FTOS switches) check at boot whether there is a BMP server on the network by sending out a DHCP/BOOTP request. If a switch gets the correct response from the DHCP server (an IP address, the address of a TFTP server and a script/configuration file name), it contacts the TFTP server to download the correct firmware and configuration files and runs them. This feature can be disabled during initial configuration so that the switch boots from the firmware and configuration saved in its NVRAM.
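The on-switch implementation of this boot sequence is not public; the following Python sketch only models the decision flow described above (DHCP/BOOTP request, then TFTP download, with a fallback to the locally saved image). Every function and field name in it is a hypothetical placeholder, not part of DNOS.

from collections import namedtuple

# Hypothetical placeholder types and helpers; real DNOS internals are not public.
DhcpOffer = namedtuple("DhcpOffer", "ip_address tftp_server boot_filename")

def send_dhcp_bootp_request():
    # Placeholder: a real switch broadcasts a DHCP/BOOTP request and parses the reply.
    return DhcpOffer("192.0.2.50", "192.0.2.1", "switch-config.txt")

def tftp_download(server, filename):
    # Placeholder: a real switch fetches the firmware and configuration over TFTP.
    return {"server": server, "filename": filename}

def apply_firmware_and_config(files):
    print("applying", files["filename"], "downloaded from", files["server"])

def load_from_nvram():
    print("booting from locally saved firmware and configuration")

def boot_switch(bmp_enabled=True):
    # Boot-time decision flow for network-based bare metal provisioning.
    if bmp_enabled:
        offer = send_dhcp_bootp_request()
        if offer and offer.tftp_server and offer.boot_filename:
            apply_firmware_and_config(tftp_download(offer.tftp_server, offer.boot_filename))
            return
    load_from_nvram()

boot_switch()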
Virtual server networking
Part of the Open Automation platform is a set of special features for the use of virtualisation in the datacenter. Virtualisation allows complete (virtual) server systems to be created on a standard hypervisor farm. This creates new challenges for networking in such a datacenter, such as the need for automated configuration of the datacenter switches that connect newly created virtual servers. The Open Automation platform has several features to support this.
Network Automation
According to Dell, the move to (server and datacenter) virtualisation is one of the most important developments in the IT industry. According to the vendor, the industry must prevent this path from leading to lock-in to specific vendors through the use of proprietary technologies. The Open Automation framework is an open framework that does not rely on proprietary solutions.
Alternative OS
On some Dell Networking switch models (currently the S3048-ON, S4048-ON, S4810-ON, S6000-ON and Z9100) it is possible to run an alternative network OS: Cumulus Linux. This runs instead of DNOS on top of NetBSD. Cumulus Linux is a complete Linux distribution which uses the full TCP/IP stack of Linux.
References
Computer networking
Dell
Embedded operating systems
Internet Protocol based network software
Network operating systems
AOpen
AOpen (stylized AOPEN) is a major electronics manufacturer from Taiwan that makes computers and computer parts. AOPEN was originally the Open System Business Unit of Acer Computer Inc., which designed, manufactured and sold computer components.
It was incorporated in December 1996 as a subsidiary of the Acer Group, with an initial public offering (IPO) on the Taiwan Stock Exchange in August 2002. It was also the first subsidiary to establish the entrepreneurship paradigm in the pan-Acer Group. At that time, AOPEN's major shareholder was the Wistron Group. In 2018 AOPEN again became a partner of the pan-Acer Group, as the group's business-to-business computing branch.
AOPEN is perhaps best known for "Mobile on Desktop" (MoDT), which implements Intel's Pentium M platform on desktop motherboards. Because the Pentium 4 and other NetBurst CPUs proved less energy efficient than the Pentium M, many manufacturers introduced desktop motherboards for the mobile Pentium M in late 2004 and early 2005, AOPEN being one of the first.
AOPEN currently specializes in ultra small form factor (uSFF) platform applications; digital signage; and product development and designs characterized by miniaturization, standardization and modularization.
Product position and strategies
Since 2005 AOPEN has been developing energy-saving products. Depending on the type of customer, application and context, AOPEN splits its product platforms into two major categories: the media player platform and the panel PC platform, both of which include Windows, Linux, Chrome OS and Android devices.
Digital Signage Platform
There are six major parts in AOPEN's digital signage platform: media player, management, deployment, display, extension and software. AOPEN manufactures the digital signage media players with an operating system and pre-imaging, and also offers a remote management option.
See also
List of companies of Taiwan
References
Taiwanese companies established in 1996
Companies based in Taipei
Electronics companies established in 1996
Companies listed on the Taiwan Stock Exchange
Electronics companies of Taiwan
Motherboard companies
Taiwanese brands
History of Linux
Linux began in 1991 as a personal project by Finnish student Linus Torvalds: to create a new free operating system kernel. The resulting Linux kernel has been marked by constant growth throughout its history. Since the initial release of its source code in 1991, it has grown from a small number of C files under a license prohibiting commercial distribution to the 4.15 version in 2018 with more than 23.3 million lines of source code, not counting comments, under the GNU General Public License v2.
Events leading to creation
After AT&T had dropped out of the Multics project, the Unix operating system was conceived and implemented by Ken Thompson and Dennis Ritchie (both of AT&T Bell Laboratories) in 1969 and first released in 1970. Later they rewrote it in a new programming language, C, to make it portable. The availability and portability of Unix caused it to be widely adopted, copied and modified by academic institutions and businesses.
In 1977, the Berkeley Software Distribution (BSD) was developed by the Computer Systems Research Group (CSRG) from UC Berkeley, based on the 6th edition of Unix from AT&T. Since BSD contained Unix code that AT&T owned, AT&T filed a lawsuit (USL v. BSDi) in the early 1990s against the University of California. This strongly limited the development and adoption of BSD.
In 1983, Richard Stallman started the GNU project with the goal of creating a free UNIX-like operating system. As part of this work, he wrote the GNU General Public License (GPL). By the early 1990s, there was almost enough available software to create a full operating system. However, the GNU kernel, called Hurd, failed to attract enough development effort, leaving GNU incomplete.
In 1985, Intel released the 80386, the first x86 microprocessor with a 32-bit instruction set and a memory management unit with paging.
In 1986, Maurice J. Bach, of AT&T Bell Labs, published The Design of the UNIX Operating System. This definitive description principally covered the System V Release 2 kernel, with some new features from Release 3 and BSD.
In 1987, MINIX, a Unix-like system intended for academic use, was released by Andrew S. Tanenbaum to exemplify the principles conveyed in his textbook, Operating Systems: Design and Implementation. While source code for the system was available, modification and redistribution were restricted. In addition, MINIX's 16-bit design was not well adapted to the 32-bit features of the increasingly cheap and popular Intel 386 architecture for personal computers. In the early nineties a commercial UNIX operating system for Intel 386 PCs was too expensive for private users.
These factors and the lack of a widely adopted, free kernel provided the impetus for Torvalds' starting his project. He has stated that if either the GNU Hurd or 386BSD kernels had been available at the time, he likely would not have written his own.
The creation of Linux
In 1991, while studying computer science at University of Helsinki, Linus Torvalds began a project that later became the Linux kernel. He wrote the program specifically for the hardware he was using and independent of an operating system because he wanted to use the functions of his new PC with an 80386 processor. Development was done on MINIX using the GNU C Compiler.
As Torvalds wrote in his book Just for Fun, he eventually ended up writing an operating system kernel. On 25 August 1991, he (at age 21) announced this system in a Usenet posting to the newsgroup "comp.os.minix".
According to Torvalds, Linux began to gain importance in 1992 after the X Window System was ported to Linux by Orest Zborowski, which allowed Linux to support a GUI for the first time.
Naming
Linus Torvalds had wanted to call his invention Freax, a portmanteau of "free", "freak", and "x" (as an allusion to Unix). During the start of his work on the system, he stored the files under the name "Freax" for about half of a year. Torvalds had already considered the name "Linux", but initially dismissed it as too egotistical.
In order to facilitate development, the files were uploaded to the FTP server (ftp.funet.fi) of FUNET in September 1991. Ari Lemmke at Helsinki University of Technology (HUT), who was one of the volunteer administrators for the FTP server at the time, did not think that "Freax" was a good name. So, he named the project "Linux" on the server without consulting Torvalds. Later, however, Torvalds consented to "Linux".
To demonstrate how the word "Linux" should be pronounced, Torvalds included an audio guide with the kernel source code.
Linux under the GNU GPL
Torvalds first published the Linux kernel under its own licence, which had a restriction on commercial activity.
The software to use with the kernel was software developed as part of the GNU project licensed under the GNU General Public License, a free software license. The first release of the Linux kernel, Linux 0.01, included a binary of GNU's Bash shell.
In the "Notes for linux release 0.01", Torvalds lists the GNU software that is required to run Linux:
In 1992, he suggested releasing the kernel under the GNU General Public License. He first announced this decision in the release notes of version 0.12. In the middle of December 1992 he published version 0.99 using the GNU GPL. Linux and GNU developers worked to integrate GNU components with Linux to make a fully functional and free operating system. Torvalds has stated, "making Linux GPLed was definitely the best thing I ever did."
Around 2000, Torvalds clarified that the Linux kernel uses the GPLv2 license, without the common "or later clause".
After years of draft discussions, the GPLv3 was released in 2007; however, Torvalds and the majority of kernel developers decided against adopting the new license.
GNU/Linux naming controversy
The designation "Linux" was initially used by Torvalds only for the Linux kernel. The kernel was, however, frequently used together with other software, especially that of the GNU project. This quickly became the most popular adoption of GNU software. In June 1994 in GNU's bulletin, Linux was referred to as a "free UNIX clone", and the Debian project began calling its product Debian GNU/Linux. In May 1996, Richard Stallman published the editor Emacs 19.31, in which the type of system was renamed from Linux to Lignux. This spelling was intended to refer specifically to the combination of GNU and Linux, but this was soon abandoned in favor of "GNU/Linux".
This name garnered varying reactions. The GNU and Debian projects use the name, although most people simply use the term "Linux" to refer to the combination.
Official mascot
Torvalds announced in 1996 that there would be a mascot for Linux, a penguin. This was because when they were about to select the mascot, Torvalds mentioned he was bitten by a little penguin (Eudyptula minor) on a visit to the National Zoo & Aquarium in Canberra, Australia. Larry Ewing provided the original draft of today's well known mascot based on this description. The name Tux was suggested by James Hughes as derivative of Torvalds' UniX, along with being short for tuxedo, a type of suit with color similar to that of a penguin.
New development
Linux Community
The largest part of the work on Linux is performed by the community: the thousands of programmers around the world that use Linux and send their suggested improvements to the maintainers. Various companies have also helped not only with the development of the kernels, but also with the writing of the body of auxiliary software, which is distributed with Linux. As of February 2015, over 80% of Linux kernel developers are paid.
It is released both by organized projects such as Debian, and by projects connected directly with companies such as Fedora and openSUSE. The members of the respective projects meet at various conferences and fairs, in order to exchange ideas. One of the largest of these fairs is the LinuxTag in Germany, where about 10,000 people assemble annually to discuss Linux and the projects associated with it.
Open Source Development Lab and Linux Foundation
The Open Source Development Lab (OSDL) was created in the year 2000, and is an independent nonprofit organization which pursues the goal of optimizing Linux for employment in data centers and in the carrier range. It served as sponsored working premises for Linus Torvalds and also for Andrew Morton (until the middle of 2006 when Morton transferred to Google). Torvalds worked full-time on behalf of OSDL, developing the Linux kernels.
On 22 January 2007, OSDL and the Free Standards Group merged to form The Linux Foundation, narrowing their respective focuses to that of promoting Linux in competition with Microsoft Windows. As of 2015, Torvalds remains with the Linux Foundation as a Fellow.
Companies
Despite being freely available, companies profit from Linux. These companies, many of which are also members of the Linux Foundation, invest substantial resources into the advancement and development of Linux, in order to make it suited for various application areas. This includes hardware donations for driver developers, cash donations for people who develop Linux software, and the employment of Linux programmers at the company. Some examples are Dell, IBM and Hewlett-Packard, which validate, use and sell Linux on their own servers, and Red Hat (now part of IBM) and SUSE, which maintain their own enterprise distributions. Likewise, Digia supports Linux by the development and LGPL licensing of Qt, which makes the development of KDE possible, and by employing some of the X and KDE developers.
Desktop environments
KDE was the first advanced desktop environment (version 1.0 released in July 1998), but it was controversial due to the then-proprietary Qt toolkit used. GNOME was developed as an alternative due to licensing questions. The two use a different underlying toolkit and thus involve different programming, and are sponsored by two different groups, German nonprofit KDE e.V. and the United States nonprofit GNOME Foundation.
As of April 2007, one journalist estimated that KDE had 65% of market share versus 26% for GNOME. In January 2008, KDE 4 was released prematurely with bugs, driving some users to GNOME. GNOME 3, released in April 2011, was called an "unholy mess" by Linus Torvalds due to its controversial design changes.
Dissatisfaction with GNOME 3 led to a fork, Cinnamon, which is developed primarily by Linux Mint developer Clement LeFebvre. This restores the more traditional desktop environment with marginal improvements.
The relatively well-funded distribution, Ubuntu, designed (and released in June 2011) another user interface called Unity which is radically different from the conventional desktop environment and has been criticized as having various flaws and lacking configurability. The motivation was a single desktop environment for desktops and tablets, although as of November 2012 Unity has yet to be used widely in tablets. However, the smartphone and tablet version of Ubuntu and its Unity interface was unveiled by Canonical Ltd in January 2013. In April 2017, Canonical canceled the phone-based Ubuntu Touch project entirely in order to focus on IoT projects such as Ubuntu Core. In April 2017, Canonical dropped Unity and began to use GNOME for the Ubuntu releases from 17.10 onward.
"Linux is obsolete"
In 1992, Andrew S. Tanenbaum, recognized computer scientist and author of the Minix microkernel system, wrote a Usenet article on the newsgroup comp.os.minix with the title "Linux is obsolete", which marked the beginning of a famous debate about the structure of the then-recent Linux kernel. Among the most significant criticisms were that:
The kernel was monolithic and thus old-fashioned.
The lack of portability, due to the use of exclusive features of the Intel 386 processor. "Writing a new operating system that is closely tied to any particular piece of hardware, especially a weird one like the Intel line, is basically wrong."
There was no strict control of the source code by any individual person.
Linux employed a set of features which were useless (Tanenbaum believed that multithreaded file systems were simply a "performance hack").
Tanenbaum's prediction that Linux would become outdated within a few years and replaced by GNU Hurd (which he considered to be more modern) proved incorrect. Linux has been ported to all major platforms and its open development model has led to an exemplary pace of development. In contrast, GNU Hurd has not yet reached the level of stability that would allow it to be used on a production server. His dismissal of the Intel line of 386 processors as 'weird' has also proven short-sighted, as the x86 series of processors and the Intel Corporation would later become near ubiquitous in personal computers and servers.
In his unpublished book Samizdat, Kenneth Brown claims that Torvalds illegally copied code from MINIX. In May 2004, these claims were refuted by Tanenbaum, the author of MINIX.
The book's claims, methodology and references were seriously questioned and in the end it was never released and was delisted from the distributor's site.
Microsoft competition and collaboration
Although Torvalds has said that Microsoft's feeling threatened by Linux in the past was of no consequence to him, the Microsoft and Linux camps had a number of antagonistic interactions between 1997 and 2001. This became quite clear for the first time in 1998, when the first Halloween document was brought to light by Eric S. Raymond. This was a short essay by a Microsoft developer that sought to lay out the threats posed to Microsoft by free software and identified strategies to counter these perceived threats.
Competition entered a new phase in the beginning of 2004, when Microsoft published results from customer case studies evaluating the use of Windows vs. Linux under the name “Get the Facts” on its own web page. Based on inquiries, research analysts, and some Microsoft sponsored investigations, the case studies claimed that enterprise use of Linux on servers compared unfavorably to the use of Windows in terms of reliability, security, and total cost of ownership.
In response, commercial Linux distributors produced their own studies, surveys and testimonials to counter Microsoft's campaign. Novell's web-based campaign at the end of 2004 was entitled “Unbending the truth” and sought to outline the advantages as well as dispelling the widely publicized legal liabilities of Linux deployment (particularly in light of the SCO v IBM case). Novell particularly referenced the Microsoft studies in many points. IBM also published a series of studies under the title “The Linux at IBM competitive advantage” to again parry Microsoft's campaign. Red Hat had a campaign called “Truth Happens” aimed at letting the performance of the product speak for itself, rather than advertising the product by studies.
In the autumn of 2006, Novell and Microsoft announced an agreement to co-operate on software interoperability and patent protection. This included an agreement that customers of either Novell or Microsoft may not be sued by the other company for patent infringement. This patent protection was also expanded to non-commercial free software developers. The last part was criticized because it only included non-commercial free software developers.
In July 2009, Microsoft submitted 22,000 lines of source code to the Linux kernel under the GPLV2 license, which were subsequently accepted. Although this has been referred to as "a historic move" and as a possible bellwether of an improvement in Microsoft's corporate attitudes toward Linux and open-source software, the decision was not altogether altruistic, as it promised to lead to significant competitive advantages for Microsoft and avoided legal action against Microsoft. Microsoft was actually compelled to make the code contribution when Vyatta principal engineer and Linux contributor Stephen Hemminger discovered that Microsoft had incorporated a Hyper-V network driver, with GPL-licensed open source components, statically linked to closed-source binaries in contravention of the GPL licence. Microsoft contributed the drivers to rectify the licence violation, although the company attempted to portray it as a charitable act, rather than one to avoid legal action against it. In the past Microsoft had termed Linux a "cancer" and "communist".
By 2011, Microsoft had become the 17th largest contributor to the Linux kernel. As of February 2015, Microsoft was no longer among the top 30 contributing sponsor companies.
The Windows Azure project was announced in 2008 and renamed to Microsoft Azure. It incorporates Linux as part of its suite of server-based software applications. In August 2018, SUSE created a Linux kernel specifically tailored to the cloud computing applications under the Microsoft Azure project umbrella. Speaking about the kernel port, a Microsoft representative said "The new Azure-tuned kernel allows those customers to quickly take advantage of new Azure services such as Accelerated Networking with SR-IOV."
In recent years, Torvalds has expressed a neutral to friendly attitude towards Microsoft following the company's new embrace of open source software and collaboration with the Linux community. "The whole anti-Microsoft thing was sometimes funny as a joke, but not really." said Torvalds in an interview with ZDNet. "Today, they're actually much friendlier. I talk to Microsoft engineers at various conferences, and I feel like, yes, they have changed, and the engineers are happy. And they're like really happy working on Linux. So I completely dismissed all the anti-Microsoft stuff."
SCO
In March 2003, the SCO Group accused IBM of violating its copyright on UNIX by transferring code from UNIX to Linux. SCO claimed ownership of the copyrights on UNIX and filed a lawsuit against IBM. Red Hat counter-sued, and SCO has since filed other related lawsuits. At the same time as its lawsuit, SCO began selling Linux licenses to users who did not want to risk a possible complaint from SCO. Since Novell also claims the copyrights to UNIX, it filed suit against SCO.
In early 2007, SCO filed the specific details of a purported copyright infringement. Despite previous claims that SCO was the rightful copyright holder of 1 million lines of code, they specified only 326 lines of code, most of which were uncopyrightable. In August 2007, the court in the Novell case ruled that SCO did not actually hold the Unix copyrights, to begin with, though the Tenth Circuit Court of Appeals ruled in August 2009 that the question of who held the copyright properly remained for a jury to answer. The jury case was decided on 30 March 2010 in Novell's favour.
SCO has since filed for bankruptcy.
Trademark rights
In 1994 and 1995, several people from different countries attempted to register the name "Linux" as a trademark. Thereupon requests for royalty payments were issued to several Linux companies, a step with which many developers and users of Linux did not agree. Linus Torvalds clamped down on these companies with help from Linux International and was granted the trademark to the name, which he transferred to Linux International. Protection of the trademark was later administered by a dedicated foundation, the non-profit Linux Mark Institute. In 2000, Linus Torvalds specified the basic rules for the assignment of the licenses. This means that anyone who offers a product or a service with the name Linux must possess a license for it, which can be obtained through a unique purchase.
In June 2005, a new controversy developed over the use of royalties generated from the use of the Linux trademark. The Linux Mark Institute, which represents Linus Torvalds' rights, announced a price increase from 500 to 5,000 dollars for the use of the name. This step was justified as being needed to cover the rising costs of trademark protection.
In response to this increase, the community became displeased, so Linus Torvalds made an announcement on 21 August 2005 to dispel the misunderstandings. In an e-mail he described the situation and its background in detail, and also dealt with the question of who had to pay license costs.
The Linux Mark Institute has since begun to offer a free, perpetual worldwide sublicense.
Chronology
1991: The Linux kernel is publicly announced on 25 August by the 21-year-old Finnish student Linus Benedict Torvalds. Version 0.01 is released publicly on 17 September.
1992: The Linux kernel is relicensed under the GNU GPL. The first Linux distributions are created.
1993: Over 100 developers work on the Linux kernel. With their assistance the kernel is adapted to the GNU environment, which creates a large spectrum of application types for Linux. The oldest currently existing Linux distribution, Slackware, is released for the first time. Later in the same year, the Debian project is established. Today it is the largest community distribution.
1994: Torvalds judges all components of the kernel to be fully matured: he releases version 1.0 of Linux. The XFree86 project contributes a graphical user interface (GUI). Commercial Linux distribution makers Red Hat and SUSE publish version 1.0 of their Linux distributions.
1995: Linux is ported to the DEC Alpha and to the Sun SPARC. Over the following years it is ported to an ever-greater number of platforms.
1996: Version 2.0 of the Linux kernel is released. The kernel can now serve several processors at the same time using symmetric multiprocessing (SMP), and thereby becomes a serious alternative for many companies.
1998: Many major companies such as IBM, Compaq and Oracle announce their support for Linux. The Cathedral and the Bazaar is first published as an essay (later as a book), resulting in Netscape publicly releasing the source code to its Netscape Communicator web browser suite. Netscape's actions and crediting of the essay brings Linux's open source development model to the attention of the popular technical press. In addition a group of programmers begins developing the graphical user interface KDE.
1999: A group of developers begin work on the graphical environment GNOME, destined to become a free replacement for KDE, which at the time, depended on the then proprietary Qt toolkit. During the year IBM announces an extensive project for the support of Linux. Version 2.2 of the Linux kernel is released.
2000: Dell announces that it is now the No. 2 provider of Linux-based systems worldwide and the first major manufacturer to offer Linux across its full product line.
2001: Version 2.4 of the Linux kernel is released.
2002: The media reports that "Microsoft killed Dell Linux"
2003: Version 2.6 of the Linux kernel is released.
2004: The XFree86 team splits up and joins with the existing X standards body to form the X.Org Foundation, which results in a substantially faster development of the X server for Linux.
2005: The project openSUSE begins a free distribution from Novell's community. Also the project OpenOffice.org introduces version 2.0 that then started supporting OASIS OpenDocument standards.
2006: Oracle releases its own distribution of Red Hat Enterprise Linux. Novell and Microsoft announce cooperation for a better interoperability and mutual patent protection.
2007: Dell starts distributing laptops with Ubuntu pre-installed on them.
2009: Red Hat's market capitalization equals Sun's, interpreted as a symbolic moment for the "Linux-based economy".
2011: Version 3.0 of the Linux kernel is released.
2012: The aggregate Linux server market revenue exceeds that of the rest of the Unix market.
2013: Google's Linux-based Android claims 75% of the smartphone market share, in terms of the number of phones shipped.
2014: Ubuntu claims 22,000,000 users.
2015: Version 4.0 of the Linux kernel is released.
2019: Version 5.0 of the Linux kernel is released.
See also
History of free software
Linux kernel version history
References
External links
LINUX's History by Linus Torvalds
History of Linux by Ragib Hasan
Changes done in each Linux kernel release (since version 2.5.1)
Linux
Linux kernel
Linux
Linus Torvalds
EKA2
EKA2 (EPOC Kernel Architecture 2) is the second-generation Symbian platform real-time operating system kernel, which originated in the earlier operating system EPOC.
EKA2 began with a proprietary software license. In October 2009, it was released as free and open-source software under an Eclipse Public License. In April 2011, it was reverted to a proprietary license.
Like its predecessor, EKA1, it has preemptive multithreading and full memory protection. The main differences are:
Real-time guarantees: each application programming interface (API) call is fast, but more importantly, time-bound
Multiple threads inside the kernel, and outside
Pluggable memory models, allowing better support for later generations of ARM instruction set architecture.
A nanokernel which provides the most basic OS facilities upon which other personality layers can be built
The user interface of EKA2 is almost fully compatible with EKA1. EKA1 was not used after Symbian OS version 8.1, and was superseded in 2005.
The main advantage of EKA2 was its ability to run full telephone signalling protocol stacks. Previously, on Symbian phones, these had to run on a separate central processing unit (CPU). Such signalling stacks are very complex and rewriting them to work natively on Symbian OS is typically not an option. EKA2 thus allows personality layers to emulate the basic primitives of other operating systems, thus allowing existing signalling stacks to run largely unchanged.
Real-time guarantees are a prerequisite of signalling stacks, and also help with multimedia tasks. However, as with any RTOS, a full analysis of all threads is needed before any real-time guarantees can be offered to anything except the highest-priority thread; because higher priority threads may prevent lower-priority threads from running. Any multimedia task is likely to involve graphics, storage and/or networking activity, all of which are more likely to disrupt the stream than the kernel is.
Inside the kernel, EKA1 only allowed one thread (plus a null idle thread). EKA2 allows many threads. This makes it much easier to write device drivers that involve complex finite-state machines, such as those for SD card memory sticks or USB flash drives. Interrupts are handled with an interrupt service routine, which may request an immediate deferred function call (called as soon as the interrupts are processed), or a deferred function call, which is queued to run on a kernel thread. Either may in turn communicate with user-side threads.
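The real EKA2 kernel implements this mechanism in C++; the following Python snippet is only a conceptual model of the flow just described, in which an interrupt service routine queues a deferred function call that a kernel thread later executes. The names used here are illustrative and are not part of the Symbian kernel API.

import queue
import threading

# Conceptual model only: a queue of deferred function calls drained by a "kernel" thread.
dfc_queue = queue.Queue()

def kernel_thread():
    # Runs queued deferred function calls, one at a time.
    while True:
        dfc = dfc_queue.get()
        if dfc is None:          # sentinel used to stop the model
            break
        dfc()                    # execute the deferred work

def interrupt_service_routine(device_data):
    # Model of an ISR: do minimal work, then defer the rest to a kernel thread.
    dfc_queue.put(lambda: handle_device_event(device_data))

def handle_device_event(device_data):
    print("deferred handling of", device_data)

worker = threading.Thread(target=kernel_thread)
worker.start()
interrupt_service_routine("SD card block ready")
dfc_queue.put(None)              # stop the model
worker.join()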
Power management in EKA2 was largely unchanged from EKA1. The exact scheme varies between phones, but generally the null thread puts the CPU and peripherals to sleep, after having requested a wake-up whenever the next timer is due to expire.
EKA2 runs on ARM architecture CPUs and the WINS emulator. Unofficial ports exist for other CPUs. On the emulator, EKA2 provides somewhat better emulation than EKA1, more so for the APIs which Symbian OS uses to represent processes. In EKA1 they didn't work at all on the emulator, which runs as a single Windows process.
Much of the credit for EKA2 goes to a single Symbian kernel engineer, who began the project as an experiment many years before it became an official part of Symbian OS.
See also
Nanokernel
References
External links
https://web.archive.org/web/20091025051019/http://developer.symbian.org/wiki/index.php/Category%3AKernel_%26_Hardware_Services
https://web.archive.org/web/20090717151501/http://wiki.forum.nokia.com/index.php/EPOC_Kernel_Architecture_2
http://media.wiley.com/product_data/excerpt/47/04700252/0470025247.pdf
Symbian OS
Operating system kernels
Nanokernels
Computer-related introductions in 2005
Oric
Oric was the name used by UK-based Tangerine Computer Systems for a series of 6502-based home computers sold in the 1980s, primarily in Europe.
With the success of the ZX Spectrum from Sinclair Research, Tangerine's backers suggested a home computer and Tangerine formed Oric Products International Ltd to develop the Oric-1. The computer was introduced in 1982. During 1983, approximately 160,000 Oric-1 computers were sold in the UK, plus another 50,000 in France (where it was the year's top-selling machine). This resulted in Oric being acquired and given funding for a successor model, the 1984 Oric Atmos.
Oric was bought by Eureka, which produced the less successful Oric Telestrat (1986). Oric was dissolved the year the Telestrat was released. Eastern European clones of Oric machines were produced into the 1990s.
Models
Oric-1
Based on a 1 MHz MOS Technology 6502 CPU, the Oric-1 came in 16 KB or 48 KB RAM variants for £129 and £169 respectively, matching the models available for the popular ZX Spectrum and undercutting the price of the 48 KB version of the Spectrum by a few pounds. The circuit design requires 8 memory chips, one chip per data line of the CPU. Due to the sizing of readily available memory chips, the 48 KB model has 8 × 8 KB (64 Kbit) chips, making a total of 64 KB. As released, only 48 KB is available to the user, with the top 16 KB of memory overlaid by the BASIC ROM.
The optional disc drive unit contains some additional hardware that allows it to enable or disable the ROM, effectively adding 16 KB of RAM to the machine. This additional memory is used by the system to store the Oric DOS software. Both Oric-1 versions have a 16 KB ROM containing the operating system and a modified BASIC interpreter.
The Oric-1 has a sound chip, the programmable General Instrument AY-3-8912.
Two graphics modes are handled by a semi-custom ASIC (ULA) which also manages the interface between the processor and memory. The two modes are a "LORES" (low resolution) text mode (though the character set can be redefined to produce graphics) with 28 rows of 40 characters, and a "HIRES" (high resolution) mode with 200 rows of 240 pixels above three lines of text. Like the Spectrum, the Oric-1 suffers from attribute clash, albeit to a much lesser degree in HIRES mode, since two different colours can be defined for each 6×1 block of pixels.
The system has a built-in television RF modulator as well as RGB output. A standard audio tape recorder can be used for external storage. There is a Centronics compatible printer interface.
Technical details
CPU: MOS 6502 @ 1 MHz
Operating system: Tangerine/Microsoft Extended Basic v1.0
ROM: 16 KB
RAM: 16 KB / 48 KB
Sound: AY-3-8912
Graphics: 40×28 text characters/ 240×200 pixels, 8 colours
Storage: tape recorder, 300 and 2400 baud
Input: integrated keyboard
Connectivity: Tape recorder I/O, Centronics compatible printer port, RGB video out, RF out, expansion port
Voltage: 9 V
Power consumption: Max 600 milliamps
Oric Atmos
In late 1983, the cost of continued development forced Oric to seek external funding, which eventually led to a sale to Edenspring Investments PLC. The Edenspring money enabled Oric International to release the Oric Atmos, which added an improved keyboard and an updated V1.1 ROM to the Oric-1. The faulty tape error-checking routine, however, was still there.
Soon after the Atmos was released, the modem, printer and 3-inch floppy disk drive originally promised for the Oric-1 were announced and released by the end of 1984. A short time after the release of the Atmos machine, a modification for the Oric-1 was issued and advertised in magazines and on bulletin boards. This modification enabled the Oric-1 user to add a second ROM (containing the Oric Atmos system) to a spare ROM socket on the Oric-1 circuit board; using a switch, the user could then switch between the new Oric Atmos ROM and the original Oric-1 ROM. This was desirable since the updated Atmos ROM contained breaking changes for some games which relied on certain behaviours or memory addresses within the ROM. Tape-based software therefore often contained a 1.1 ROM/Atmos version of the software on one side of the cassette and a 1.0 ROM/Oric-1 version on the other. Earlier titles from publishers that no longer existed, or that had stopped producing software for the Oric, were unlikely to be updated.
Oric Stratos and Oric Telestrat
Although the Oric Atmos had not turned around Oric International's fortunes, the company announced several models in February 1985, including the Oric Stratos/IQ164. Despite its backers putting it into receivership the following day, Oric was bought by the French company Eureka, which continued to produce the Stratos, followed by the Oric Telestrat in late 1986.
The Stratos and Telestrat increased the RAM to 64 KB and added more ports, but kept the same processor and graphics and sound hardware as the Oric-1 and Atmos.
The Telestrat is a telecommunications-oriented machine. It comes with a disk drive as standard, and only connects to an RGB monitor / TV. The machine is backward compatible with the Oric-1 and Oric Atmos by using a cartridge. Most of the software is in French, including Hyper-BASIC's error messages. Up to 6000 units were sold in France.
In December 1987, after announcing the Telestrat 2, Oric International went into receivership for the second and final time.
Technical specification
Keyboard
The keyboard has 57 moving keys with tactile feedback. It is capable of full upper and lower case with a correctly positioned space bar. It has a full typewriter pitch. The key layout is a standard QWERTY with ESC, CTRL, RETURN and additional cursor control keys. All keys have auto repeat.
Display
The display adapter will drive a PAL UHF colour or black and white television receiver on approximately Channel 36. RGB output is also provided on a 5 pin DIN 41524 socket.
Character mode
In character mode the Oric displays 28 lines of 40 characters, producing a display very similar to Teletext. The character set is standard ASCII, enhanced by the addition of 80 user-definable characters. The ASCII characters may also be redefined, as they are downloaded into RAM on power-up. Serial attributes are used to control display features, as in Teletext, and take up one character position. All remaining characters on that line are affected by the serial attribute until either the line ends or another serial attribute is encountered.
Display features are:
Select background colour (paper) from one of eight.
Select foreground colour (ink) from one of eight.
Flash characters on and off approximately twice a second.
Produce double height characters (even line top, odd line bottom).
Switch over to user-definable character set. This feature is used to produce Teletext-style colour graphics without the need for additional RAM.
Available colours are black, blue, red, magenta, green, cyan, yellow, and white.
Each character position also has a parallel attribute, which may be operated on a character by character basis, to produce video inversion. The display has a fixed black border.
Screen graphics mode
The graphics mode consists of 200 pixels vertically by 240 pixels horizontally, plus 3 lines of 40 characters (the same as character mode) at the bottom of the screen to display system information and to act as a window on the user program while still viewing the graphics display. It can also be used to input direct commands for graphics and see the effect instantly without having to switch modes. The graphics display operates with serial attributes in the same way as characters, except that the display is now considered as 200 lines of 40 graphics cells. Each graphics cell is therefore very flexible, offering 8 foreground and 8 background colours and flashing patterns. The video invert parallel attribute is also usable in this mode. ASCII characters may be painted over the graphics area, thus enabling the free mixing of graphics and text.
Sound
The Oric has an internal loudspeaker and amplifier and can also be connected to external amplifiers via the 7-pin DIN 45329 socket shared with the cassette interface. A General Instrument AY-3-8912 provides 3-channel sound.
For BASIC programs, four keywords generate pre-made sounds: PING, SHOOT, EXPLODE, and ZAP. The commands SOUND, MUSIC, and PLAY produce a broader range of sounds.
Cassette interface
The cassette recorder connects via a 7 Pin DIN 45329 socket shared with the external sound output. The interface includes support for tape motor control. Recording speeds offered as standard are 300 baud or 2400 baud. A tone leader allows tape recorders' automatic level control to stabilise before the filename, followed by the actual data with parity; finally, checksums are recorded to allow overall verification of the recording.
The circuit was designed using a Schmitt trigger to remove noise and make input more reliable. The system allows stored information to be verified against the tape copy, to ensure integrity before the information is flushed from memory. There was, however, a bug in the error-checking of recorded programs, often causing user-created programs to fail when loaded back in; this bug persisted in the updated ROMs of the Oric Atmos.
The available BASIC commands are CLOAD and CSAVE (for programs and memory dumps), plus STORE and RECALL (for arrays of string, integer or real values, added with the Oric Atmos ROMs). Filenames of up to 16 characters can be specified. Options on the commands exist for slow speed, verification, autorunning of programs, or specifying start and end addresses for dumping memory.
Expansion port
The expansion port allows full access to the CPU's data, address and control lines. This allows connection of add-ons specifically designed for the Oric, including user-designed hardware. The range of lines exposed allows external ROM and RAM expansion, which makes ROM cartridges possible and lets expansion devices include their required operating software on internal ROM.
Printer port
The printer port is compatible with the then-standard Centronics parallel interface and allows connection of many different types of printers, from low-quality printers (e.g. low-resolution thermal printers) to high-quality printers such as fixed-font daisy wheel printers or laser printers, though the latter were uncommon and expensive during the period of the Oric range's commercial availability. Most contemporary computer printers could produce text output without requiring specific drivers, and often followed de facto standards for simple graphics. More advanced use of a printer would have required a specific driver which, given the proliferation of different home computers and standards of the time, may or may not have been available.
Peripherals
Colour plotter
Tangerine's MCP-40 is a plotter with mechanics by Alps Electric. The same mechanism was also used as the basis for similar low-cost plotters produced by various home computer manufacturers around that time. These included the Atari 1020, the Commodore 1520, the Tandy/Radio Shack CGP-115, the Texas Instruments HX-1000, the Mattel Aquarius 4615, and probably also the Sharp MZ-1P16 (for MZ-800 series).
Prestel adaptor
The Prestel adaptor produced by Eureka (Informatika) was the first adaptor produced for the Oric-1 and Oric Atmos computers. However this adaptor was only furnished with very limited software.
Clones
The Atmos was licensed in Yugoslavia and sold as the Nova 64. The clones were Atmos-based, the only difference being the logo reading ORIC NOVA 64 instead of Oric Atmos 48K. The name refers to the installed 64 KB of RAM, which was also true of the Atmos; in both machines, 16 KB of it is masked by the ROM at startup, leaving 48 KB available to BASIC.
In Bulgaria, the Atmos clone was named Pravetz 8D and produced between 1985 and 1991.
The Pravetz is entirely hardware- and software-compatible with the Oric Atmos. The biggest change on the hardware side is the larger white case, which hosts a comfortable mechanical keyboard and an integrated power supply. The BASIC ROM has been patched to host both a Western European and a Cyrillic alphabet: the upper-case character set produces Western European characters, while lower case gives Cyrillic letters. To ease the use of the two alphabets, the Pravetz 8D is fitted with a Caps Lock key. A Disk II compatible interface and a custom DOS, called DOS-8D, were created in 1987–88 by Borislav Zahariev.
See also
:Category:Oric games
References
External links
Oric FAQ
Oric: The Story so Far
Oric Atmos review March 1984 Your Computer
Microtan 65 Oric-1 Oric Atmos at the Old Computers Museum
Oric.org community portal (French)
Early microcomputers
6502-based home computers
Home computers
Tangerine Computer Systems
Genera (operating system)
Genera is a commercial operating system and integrated development environment for Lisp machines created by Symbolics. It is essentially a fork of an earlier operating system originating on the Massachusetts Institute of Technology (MIT) AI Lab's Lisp machines which Symbolics had used in common with Lisp Machines, Inc. (LMI), and Texas Instruments (TI). Genera is also sold by Symbolics as Open Genera, which runs Genera on computers based on a Digital Equipment Corporation (DEC) Alpha processor using Tru64 UNIX, on x86_64 and Arm64 GNU/Linux and on Apple M1 MacOS. It is released and licensed as proprietary software.
Genera is an example of an object-oriented operating system based on the programming language Lisp.
Genera supports incremental and interactive development of complex software using a mix of programming styles with extensive support for object-oriented programming.
MIT's Lisp machine operating system
The Lisp Machine operating system was written in Lisp Machine Lisp. The Lisp machine itself was a one-user workstation initially targeted at software developers working on artificial intelligence (AI) projects. The system had a large bitmap screen, a mouse, a keyboard, a network interface, a disk drive, and slots for expansion. The operating system supported this hardware and provided (among others):
code for a frontend processor
means to boot the operating system
virtual memory management
garbage collection
interface to various hardware: mouse, keyboard, bitmap frame buffer, disk, printer, network interface
an interpreter and a native code compiler for Lisp Machine Lisp
an object system: Flavors
a graphical user interface (GUI) window system and window manager
a local file system
support for the Chaosnet (CHAOS) network
an Emacs-like Editor named Zmacs
a mail program named Zmail
a Lisp listener
a debugger
This was already a complete one-user Lisp-based operating system and development environment.
The MIT Lisp machine operating system was developed from the middle 1970s to the early 1980s.
In 2006, the source code for this Lisp machine operating system from MIT was released as free and open-source software.
Genera operating system
Symbolics developed new Lisp machines and published the operating system under the name Genera. The latest version is 8.5. Symbolics Genera was developed from the early 1980s to the early 1990s. In the final years, development consisted mostly of patches, with very little new functionality.
Symbolics developed Genera on this foundation of the MIT Lisp machine operating system. It sold the operating system and layered software; some of the layered software was integrated into Genera in later releases. Symbolics improved the operating system software of the original MIT Lisp machine and expanded it. The Genera operating system was only available for Symbolics Lisp machines and the Open Genera virtual machine.
Symbolics Genera has many features and supports all the versions of various hardware that Symbolics built over its life. Its source code is more than a million lines; the number depends on the release and what amount of software is installed. Symbolics Genera was published on magnetic tape and CD-ROM. The release of the operating system also provided most of the source code of the operating system and its applications. The user has free access to all parts of the running operating system and can write changes and extensions. The source code of the operating system is divided into systems. These systems bundle sources, binaries and other files. The system construction toolkit (SCT) maintains the dependencies, the components and the versions of all the systems. A system has two numbers: a major and a minor version number. The major version number counts the number of full constructions of a system. The minor version counts the number of patches to that system. A patch is a file that can be loaded to fix problems or provide extensions to a particular version of a system.
Symbolics developed a version named Open Genera, which included a virtual machine that enabled Genera to run on DEC Alpha-based workstations, plus several Genera extensions and applications that were sold separately (like the Symbolics S-Graphics suite). Symbolics also wrote a new operating system named Minima, in Common Lisp, for embedded uses.
The original Lisp machine operating system was developed in Lisp Machine Lisp, using the Flavors object-oriented extension to that Lisp. Symbolics provided a successor to Flavors named New Flavors. Later Symbolics also supported Common Lisp and the Common Lisp Object System (CLOS), and Symbolics Common Lisp became the default Lisp dialect for writing software with Genera. The software of the operating system was written mostly in Lisp Machine Lisp (named ZetaLisp) and Symbolics Common Lisp; both of these Lisp dialects are provided by Genera. Parts of the software used Flavors, New Flavors or the Common Lisp Object System. Some of the older parts of the Genera operating system have been rewritten in Symbolics Common Lisp and the Common Lisp Object System, but many parts remain written in ZetaLisp and Flavors (or New Flavors).
User interface
The early versions of Symbolics Genera were built with the original graphical user interface (GUI) windowing system of the Lisp machine operating system. Symbolics then developed a radically new windowing system named Dynamic Windows with a presentation-based user interface. This window system was introduced with Genera 7 in 1986. Many of the applications of Genera have then been using Dynamic Windows for their user interface. Eventually there was a move to port parts of the window system to run on other Common Lisp implementations by other vendors as the Common Lisp Interface Manager (CLIM). Versions of CLIM have been available (among others) for Allegro Common Lisp, LispWorks, and Macintosh Common Lisp. An open source version is available (McCLIM).
Dynamic Windows uses typed objects for all output to the screen. All displayed information keeps its connection to the objects displayed (output recording). This works for both textual and graphical output. At runtime the applicable operations to these objects are computed based on the class hierarchy and the available operations (commands). Commands are organized in hierarchical command tables with typed parameters. Commands can be entered with the mouse (making extensive use of mouse chording), keystrokes, and with a command line interface. All applications share one command line interpreter implementation, which adapts to various types of usage. The graphical abilities of the window system are based on the PostScript graphics model.
The user interface is mostly in monochrome (black and white) since that was what the hardware console typically provided. But extensive support exists for color, using color frame buffers or X Window System (X11) servers with color support. The activities (applications) use the whole screen with several panes, though windows can also be smaller. The layout of these activity windows adapts to different screen sizes. Activities can also switch between different pane layouts.
Genera provides a system menu to control windows, switch applications, and operate the window system. Many features of the user interface (switching between activities, creating activities, stopping and starting processes, and much more) can also be controlled with keyboard commands.
The Dynamic Lisp Listener is an example of a command line interface with full graphics abilities and support for mouse-based interaction. It accepts Lisp expressions and commands as input. The output is mouse sensitive. The Lisp listener can display forms to input data for the various built-in commands.
The user interface provides extensive online help and context sensitive help, completion of choices in various contexts.
Documentation
Genera supports fully hyperlinked online documentation. The documentation is read with the Document Examiner, an early hypertext browser. The documentation is based on small reusable documentation records that can also be displayed in various contexts with the Editor and the Lisp Listener. The documentation is organized in books and sections. The books were also provided in printed versions with the same contents as the online documentation. The documentation database information is delivered with Genera and can be modified with incremental patches.
The documentation was created with a separate application that was not shipped with Genera: Symbolics Concordia. Concordia provides an extension to the Zmacs editor for editing documentation records, a graphics editor and a page previewer.
The documentation provides user guides, installation guidelines and references of the various Lisp constructs and libraries.
The markup language is based on the Scribe markup language and also usable by the developer.
Genera supports printing to PostScript printers, provides a printing queue and also includes a PostScript interpreter (written in Lisp).
Features
Genera also has support for various network protocols and applications using those. It has extensive support for TCP/IP.
Genera supports one-processor machines with several threads (called processes).
Genera supports several different types of garbage collection (GC): full GC, in-place GC, incremental GC, and ephemeral GC. The ephemeral collector uses only physical memory and uses the memory management unit to get information about changed pages in physical memory. The collector uses generations and the virtual memory is divided into areas. Areas can contain objects of certain types (strings, bitmaps, pathnames, ...), and each area can use different memory management mechanisms.
Genera implements two file systems: the FEP file system for large files and the Lisp Machine File System (LMFS), optimized for many small files. These file systems also maintain different versions of files: if a file is modified, Genera still keeps the old versions. Genera also provides access to (and can read from and write to) other local and remote file systems, including NFS, FTP, HFS, CD-ROMs and tape drives.
Genera supports netbooting.
Genera provides a client for the Statice object database from Symbolics.
Genera makes extensive use of the condition system (exception handling) to handle all kinds of runtime errors and is able to recover from many of these errors. For example, it allows retrying network operations if a network connection has a failure; the application code will keep running. When errors occur, users are presented a menu of restarts (abort, retry, continue options) that are specific to the error signalled.
Genera has extensive debugging tools.
Genera can save versions of the running system to worlds. These worlds can be booted and then will contain all the saved data and code.
Programming languages
Symbolics provided several programming languages for use with Genera:
ZetaLisp, the Symbolics version of Lisp Machine Lisp
Common Lisp in several versions: Symbolics Common Lisp, Future Common Lisp (ANSI Common Lisp), CLtL1
Symbolics Pascal, a version of Pascal written in Lisp (Lisp source is included in Genera distribution)
Symbolics C, a version of C written in Lisp (Lisp source is included in Genera distribution)
Symbolics Fortran, a version of Fortran written in Lisp (Lisp source is included in Genera distribution)
Symbolics Common Lisp provides most of the Common Lisp standard with very many extensions, many of them coming from ZetaLisp.
Other languages from Symbolics
Symbolics Prolog, a version of Prolog written and integrated in Lisp
Symbolics Ada, a version of Ada written in Lisp
It is remarkable that these programming language implementations inherited some of the dynamic features of the Lisp system (like garbage collection and checked access to data) and supported incremental software development.
Third-party developers provided more programming languages, such as OPS5, and development tools, such as the Knowledge Engineering Environment (KEE) from IntelliCorp).
Applications
Symbolics Genera comes with several applications. Applications are called activities. Some of the activities:
Zmacs, an Emacs-like text editor
Zmail, a mail reader also providing a calendar
File system browser with tools for file system maintenance
Lisp Listener with command line interface
Document Examiner for browsing documentation
Restore Distribution to install software.
Distribute Systems, to create software distributions
Peek to examine system information (processes, windows, network connections, ...)
Debugger
Namespace Editor to access information about objects in the network (users, computers, file systems, ...)
Converse, a chat client
Terminal
Inspector, for browsing Lisp data structures
Notifications
Frame-Up, for designing user interfaces
Flavor Examiner, to examine the classes and methods of the Flavor object-oriented extension to Lisp
Other applications from Symbolics
Symbolics sold several applications that run on Symbolics Genera.
Symbolics Concordia, a document production suite
Symbolics Joshua, an expert system shell
Symbolics Macsyma, a computer algebra system
Symbolics NS, a chip design tool
Symbolics Plexi, a neural network development tool
Symbolics S-Graphics, a suite of tools: S-Paint, S-Geometry, S-Dynamics, S-Render
Symbolics S-Utilities: S-Record, S-Compositor, S-Colorize, S-Convert
Symbolics Scope, digital image processing with a Pixar Image Computer
Symbolics Statice, an object database
Third-party applications
Several companies developed and sold applications for Symbolics Genera. Some examples:
Automated Reasoning Tool (ART), an expert system shell from Inference Corporation
ICAD, 3d parametric CAD system
Illustrate, graphics editor
Knowledge Engineering Environment (KEE), an expert system shell, from IntelliCorp
Knowledge Craft, an expert system shell, from Carnegie Group
Metal, machine translation system from Siemens
Highlights
Genera is written fully in Lisp, using ZetaLisp and Symbolics Common Lisp, including all low-level system code, such as device drivers, garbage collection, process scheduler, network stacks, etc.
The source code is more than a million lines of Lisp, yet relatively compact, compared to the provided functions, due to extensive reuse. It is also available for users to inspect and change.
The operating system is mostly written in an object-oriented style using Flavors, New Flavors, and CLOS
It has extensive online documentation readable with the Document Examiner
Dynamic Windows provides a presentation-based user interface
The user interface can be used locally (on Lisp Machines and MacIvories) and remotely (using X11)
Groups of developers can work together in a networked environment
A central namespace server provides a directory of machines, users, services, networks, file systems, databases, and more
There is little protection against changing the operating system. The whole system is fully accessible and changeable.
Limits
Genera's limits include:
Only runs on Symbolics Lisp Machines or the Open Genera emulator.
Only one user can be logged in at once.
Only one Lisp system can run at once. Data and code is shared by applications and the operating system. However, multiple instances of Open Genera can run on one DEC Alpha.
Development effectively stopped in the middle 1990s.
Releases
1982 – Release 78
1982 – Release 210
1983 – Release 4.0
1984 – Release 5.0
1985 – Release 6.0, introduce Symbolics Common Lisp, the Ephemeral Object Garbage Collector, and Document Examiner
1986 – Genera 7.0, introduce Dynamic Windows
1990 – Genera 8.0, introduce CLOS
1991 – Genera 8.1, introduce CLIM
1992 – Genera 8.2
1993 – Genera 8.3
1993 – Open Genera 1.0
1998 – Open Genera 2.0
2021 – Portable Genera 2.0
A stable version of Open Genera that can run on x86-64 or arm64 GNU/Linux, and Apple M1 MacOS has been released.
A hacked version of Open Genera that can run on x86-64 GNU/Linux exists.
References
External links
Symbolics Genera Integrated Development Environment
"Symbolics Technical Summary"
"Genera Concepts" web copy of Symbolics' introduction to Genera
Symbolics software documents at bitsavers.org
A page of screenshots of Genera
Screenshots of the award-winning Symbolics Document Examiner
"The Symbolics Virtual Lisp Machine, Or, Using The Dec Alpha As A Programmable Micro-engine"
"2013 Video Demonstration by Symbolics programmer Kalman Reti"
Common Lisp implementations
Common Lisp (programming language) software
Computing platforms
Integrated development environments
Lisp (programming language)-based operating systems
Lisp (programming language) software
Object-oriented operating systems | Operating System (OS) | 1,142 |
Windows 10 Mobile
Windows 10 Mobile is a discontinued mobile operating system developed by Microsoft. First released in 2015, it is a successor to Windows Phone 8.1, but was marketed by Microsoft as being an edition of its PC operating system Windows 10.
Windows 10 Mobile aimed to provide greater consistency with its counterpart for PCs, including more extensive synchronization of content, Universal Windows Platform apps, as well as the capability, on supported hardware, to connect devices to an external display and use a desktop interface with mouse and keyboard input support (reminiscent of Windows on PCs). Microsoft built tools for developers to port iOS Objective-C apps with minimal modifications. Windows Phone 8.1 smartphones are eligible for upgrade to Windows 10 Mobile, pursuant to manufacturer and carrier support. Some features vary depending on hardware compatibility.
Windows 10 Mobile was designed for use on smartphones and phablets running on 32-bit ARM processor architectures. Microsoft also intended for the platform to be used on ARM tablets with screens 9 inches or smaller in size, but such devices were never commercially released. Windows 10 Mobile entered public beta for selected Lumia smartphones on February 12, 2015. The first Lumia smartphones powered by Windows 10 Mobile were released on November 20, 2015, while eligible Windows Phone devices began receiving updates to Windows 10 Mobile on March 17, 2016, pursuant to manufacturer and carrier support.
The platform never achieved any significant degree of popularity or market share in comparison to Android or iOS. By 2017, Microsoft had already begun to downplay Windows 10 Mobile, having discontinued active development (beyond maintenance releases) due to a lack of user and developer interest in the platform, and focusing more on serving incumbent mobile operating systems as part of its software and services strategy. Support for Windows 10 Mobile ended on January 14, 2020. Windows 10 Mobile now holds a 0.01% share of the mobile operating system market.
Development
Microsoft had already begun the process of unifying the Windows platform across device classes in 2012; Windows Phone 8 dropped the Windows CE-based architecture of its predecessor, Windows Phone 7, for a platform built upon the NT kernel that shared much of the same architecture with its PC counterpart Windows 8 including file system (NTFS), networking stack, security elements, graphics engine (DirectX), device driver framework and hardware abstraction layer. At Build 2014, Microsoft also unveiled the concept of Universal Windows Apps. With the addition of Windows Runtime support to these platforms, apps created for Windows 8.1 could now be ported to Windows Phone 8.1 and Xbox One while sharing a common codebase with their PC counterparts. User data and licenses for an app could also be shared between multiple platforms.
In July 2014, Microsoft's then-new CEO Satya Nadella explained that the company was planning to "streamline the next version of Windows from three operating systems into one single converged operating system for screens of all sizes", unifying Windows, Windows Phone, and Windows Embedded around a common architecture and a unified application ecosystem. However, Nadella stated that these internal changes would not have any effect on how the operating systems are marketed and sold.
On September 30, 2014, Microsoft unveiled Windows 10; Terry Myerson explained that Windows 10 would be Microsoft's "most comprehensive platform ever", promoting plans to provide a "unified" platform for desktop computers, laptops, tablets, smartphones, and all-in-one devices. Windows 10 on phones was publicly unveiled during the Windows 10: The Next Chapter press event on January 21, 2015; unlike previous Windows Phone versions, it would also expand the platform's focus to small, ARM-based tablets. Microsoft's previous attempt at an operating system for ARM-based tablets, Windows RT (which was based upon the PC version of Windows 8), was commercially unsuccessful.
During the 2015 Build keynote, Microsoft announced the middleware toolkit "Islandwood", later known as Windows Bridge for iOS, which provides a toolchain that can assist developers in porting Objective-C software (primarily iOS projects) to build as Universal Windows Apps. An early build of Windows Bridge for iOS was released as open source software under the MIT License on August 6, 2015. Visual Studio 2015 can also convert Xcode projects into Visual Studio projects. Microsoft also announced plans for a toolkit codenamed "Centennial", which would allow desktop Windows software using Win32 APIs to be ported to Windows 10 Mobile.
Project Astoria
At Build, Microsoft had also announced an Android runtime environment for Windows 10 Mobile known as "Astoria", which would allow Android apps to run in an emulated environment with minimal changes, and have access to Microsoft platform APIs such as Bing Maps and Xbox Live as nearly drop-in replacements for equivalent Google Mobile Services. Google Mobile Services and certain core APIs would not be available, and apps with "deep integration into background tasks" were said to be poorly supported by the environment.
On February 25, 2016, after already having delayed it in November 2015, Microsoft announced that "Astoria" would be shelved, arguing that it was redundant to the native Windows Bridge toolkit since iOS is already a primary target for mobile app development. The company also encouraged use of products from Xamarin (which they had acquired the previous day) for multi-platform app development using C# programming language instead. Portions of Astoria were used as a basis for the Windows Subsystem for Linux (WSL) platform on the PC version of Windows 10.
Naming
To promote it as being unified with its desktop equivalent, Microsoft promoted the operating system as being an edition of Windows 10. Microsoft had begun to phase out specific references to the Windows Phone brand in its advertising in mid-2014, but critics have still considered the operating system to be an iteration and continuation of Windows Phone due to its lineage and similar overall functionality. Microsoft referred to the OS as "Windows 10 for phones and small tablets" during its unveiling, and leaked screenshots from a Technical Preview build identified the operating system as "Windows 10 Mobile". The technical preview was officially called the "Windows 10 Technical Preview for phones", while the user agent of Microsoft Edge contained a reference to "Windows Phone 10".
On May 13, 2015, Microsoft officially confirmed the platform would be known as Windows 10 Mobile.
Features
A major aspect of Windows 10 Mobile is a focus on harmonizing user experiences and functionality between different classes of devices—specifically, devices running the PC-oriented version of Windows 10. Under the Universal Windows Platform concept, Windows Runtime apps for Windows 10 on PC can be ported to other platforms in the Windows 10 family with nearly the same codebase, but with adaptations for specific device classes. Windows 10 Mobile also shares user interface elements with its PC counterpart, such as the updated Action Center and settings menu. During its initial unveiling, Microsoft presented several examples of Windows apps that would have similar functionality and user interfaces between Windows 10 on desktops and mobile devices, including updated Photos and Maps apps, and new Microsoft Office apps. Although marketed as a converged platform, and despite using a Windows NT-based kernel as Windows Phone 8 did, Windows 10 Mobile still cannot run Win32 desktop applications; it is, however, compatible with software designed for Windows Phone 8.
Notifications can be synced between devices; dismissing a notification on, for example, a laptop, will also dismiss it from a phone. Certain types of notifications now allow inline replies. The start screen now has the option to display wallpapers as a background of the screen behind translucent tiles, rather than within the tiles. The messaging app adds support for internet-based Skype messaging alongside SMS, similarly to Apple's iMessage, and can synchronize these conversations with other devices. The camera app has been updated to match the "Lumia Camera" app previously exclusive to Lumia products, and a new Photos app aggregates content from local storage and OneDrive, and can perform automatic enhancements to photos. The on-screen keyboard now contains a virtual pointing stick for manipulating the text editing cursor, a dedicated voice input button, and can be shifted towards the left or right of the screen to improve one-handed usability on larger devices.
Windows 10 Mobile supports "Continuum", a feature that allows supported devices to connect to an external display, and scale its user interface and apps into a "PC-like" desktop interface with support for mouse and keyboard input over USB or Bluetooth. Devices can connect directly to external displays wirelessly using Miracast, via USB-C, or via docking station accessories with USB ports, as well as HDMI and DisplayPort outputs.
A new iteration of the Office Mobile suite, Office for Windows 10, is also bundled. Based upon the Android and iOS versions of Office Mobile, they introduce a new user interface with a variation of the ribbon toolbar used by the desktop version, and a new mobile version of Outlook. Outlook utilizes the same rendering engine as the Windows desktop version of Microsoft Word. Microsoft Edge replaces Internet Explorer Mobile as the default web browser.
Release
Windows 10 Mobile's first-party launch devices—the Lumia 950, Lumia 950 XL, and Lumia 550—were released in November 2015, becoming the first phones to ship with Windows 10 Mobile. Monthly updates to the OS software were released to address bugs and security issues. These updates were distributed to all Windows 10 Mobile devices and did not require the intervention of a user's wireless carrier in order to authorize their distribution. Firmware upgrades, however, still required authorization by the user's carrier.
The Windows Insider program, adopted to provide a public beta for the PC version of Windows 10, is used to provide a public beta version of Windows 10 Mobile for selected devices. A build released on April 10, 2015, was to support most second and third generation Lumia products, but the Lumia 930, Lumia Icon, and Lumia 640 XL did not receive the update due to scaling bugs, and delivery was suspended as a whole due to backup and restore issues on some models. An update to the Windows Phone Recovery Tool resolved these concerns, and delivery of Windows 10 updates was restored to the 520 with build 10052, and to the 640 with build 10080.
Build number 10136 was released on June 16, 2015, with a "migration bug" that required that existing devices on build 10080 be reverted to Windows Phone 8.1 using the Recovery Tool before the installation of 10136 could proceed. This migration bug was fixed a week later with the release of build 10149. Mobile builds of the Redstone branch up to build 14322 were withheld from the Lumia 635 (1 GB RAM) due to bugs.
Upgrade release
Some Windows Phone 8.1 smartphones can be upgraded to Windows 10, pursuant to hardware compatibility, manufacturer support, and carrier support. Not all phones can receive the update, nor do all of them support all of its features. Microsoft originally stated that stable upgrades for Windows Phone 8.1 devices would be released in December 2015; however, the release was ultimately delayed to March 17, 2016. Among first-party devices, only the Lumia 430, 435, 532, 535, 540, 635 (1 GB RAM), 640, 640 XL, 735, 830, 929, 930 and 1520 are supported. The only third-party devices supported are the BLU Products Win HD w510u and Win HD LTE x150q, and the MCJ Madosma Q501. Windows 10 Mobile does not officially support any HTC devices (HTC One M8 for Windows, HTC Windows Phone 8X, HTC Windows Phone 8S), although the HTC One M8 for Windows could be upgraded to the public release version of Windows 10 Mobile through the Windows Insider program. While Microsoft stated that the Nokia Lumia Icon might be upgraded at a later date, the company said that there would not be a second wave of officially supported devices. Microsoft also removed statements which promoted the BLU Win JR LTE as being compatible with Windows 10.
Microsoft originally stated that all Lumia smartphones running Windows Phone 8 and 8.1 would receive updates to 10, but Microsoft later reiterated that only devices with the "Lumia Denim" firmware revision and at least 8 GB of internal storage would receive the upgrade. In February 2015, Joe Belfiore stated that Microsoft was working on support for devices with 512 MB of RAM (such as the popular Nokia Lumia 520), but these plans have since been dropped. Upon the official upgrade release, some Lumia models, particularly the Lumia 1020 and 1320, were excluded despite meeting the previously announced criteria; Microsoft cited poor user feedback on the performance of preview builds on these models as the reason. On October 17, 2017, nearly two years after the Windows 10 release, Microsoft released an Over-The-Cable (OTC) Updater tool to bring all Lumias up to date with the latest supported Windows 10 build; even older unlocked devices with 512 MB or 1 GB of RAM, such as the 520, 620, 720, 920, and 925, could be updated with the tool to build 10586 (the November Update).
Devices
As with Windows Phone, Windows 10 Mobile supports ARM system-on-chips from Qualcomm's Snapdragon line. In March 2015, Ars Technica reported that the operating system would also introduce support for IA-32 system-on-chips from Intel and AMD, including Intel's Atom x3 and Cherry Trail Atom x5 and x7, and AMD's Carrizo. These plans never materialized.
Minimum specifications for Windows 10 Mobile devices are similar to those of Windows Phone 8, with a minimum screen resolution of 800×480 (854×480 if software buttons are in use), 1 GB of RAM and 8 GB of internal storage. Owing to hardware advancements and the operating system's support for tablets, screen resolutions can now reach as high as QSXGA resolution (2560×2048) and further, as opposed to the 1080p cap of Windows Phone 8. The minimum amount of RAM required is dictated by the screen's resolution; screens with a resolution 800×480 or 960×540 and higher require 1 GB, 1920×1080 (FHD) or 1440×900 and higher require 2 GB, and 2560×1440 and higher require 3 GB.
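The resolution-to-RAM rule above can be summarized programmatically. The following Python sketch is purely illustrative; the function name and the use of total pixel counts to compare against the listed resolution tiers are simplifications made here, not part of any Microsoft specification:

    # Illustrative only: encodes the minimum-RAM tiers described above.
    def minimum_ram_gb(width: int, height: int) -> int:
        pixels = width * height
        if pixels >= 2560 * 1440:      # 2560x1440 and higher
            return 3
        if pixels >= 1440 * 900:       # 1440x900 / 1920x1080 and higher
            return 2
        return 1                       # 800x480 / 960x540 class screens

    assert minimum_ram_gb(854, 480) == 1
    assert minimum_ram_gb(1920, 1080) == 2
    assert minimum_ram_gb(2560, 2048) == 3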
Microsoft unveiled flagship Lumia smartphones bundled with Windows 10 during a media event on October 6, 2015, including Lumia 950, Lumia 950 XL, and the low-end Lumia 550.
Version history
First release (version 1511)
Microsoft announced Windows 10 Mobile during their January 21, 2015 event "The Next Chapter". The first Windows 10 Mobile build was rolled out on February 12, 2015, as part of the Windows Insider Program to a subset of mobile devices running Windows Phone 8 and 8.1. As with the desktop editions of Windows 10, this initial release was codenamed "Threshold"; it was part of both the "Threshold 1" and "Threshold 2" development cycles. Windows 10 Mobile launched with the Microsoft Lumia 550, 950 and 950 XL. The rollout for Windows Phone 8.1 devices started March 17, 2016.
Anniversary Update (version 1607)
On February 19, 2016, Microsoft restarted the rollout of full builds for the first feature update, officially known as the "Anniversary Update" or "Version 1607", codenamed "Redstone 1". Like the start of the previous wave, the first builds were not available to all devices that were included in the Windows Insider Program.
Creators Update (version 1703) and Fall Creators Update (version 1709)
The Creators Update (named after the equivalent update to Windows 10 for PC), also known as Redstone 2, was first previewed on the Insider branch on August 17, 2016, and began deployment on April 25, 2017. It consists mainly of minor feature additions, including an e-book reader within Edge, the ability to turn off the phone screen when using Continuum mode on an external display, SMS support in Skype, SD card encryption, and other changes. Despite the platform's synergy with Windows 10 for PCs, some of its features (such as Night Light and Paint 3D) were excluded. Around the time that the Creators Update was finalized, Windows Insider users began to be issued updates on a branch known as feature2. Microsoft stated that there were no plans to move Windows 10 Mobile to be in sync with the other Windows 10 platforms just yet; media outlets considered this decision to be a sign that Microsoft was beginning to wind down active development of Windows 10 Mobile beyond maintenance releases, as development was no longer directly in sync with the PC version.
The Creators Update was only offered to eleven existing Windows 10 Mobile devices, of which nine would later receive the Fall Creators Update:
Alcatel Idol 4S and 4S Pro
Alcatel OneTouch Fierce XL
HP Elite x3
Lenovo Softbank 503LV
MCJ Madosma Q601 †
Microsoft Lumia 550
Microsoft Lumia 640 and 640 XL †
Microsoft Lumia 650
Microsoft Lumia 950 and 950 XL
Trinity NuAns Neo
VAIO Phone Biz (VPB051)
† indicates a phone that is incompatible with the Fall Creators Update.
In early June 2017, a private build, briefly deployed by accident by Microsoft, revealed work on an updated interface for Windows 10 Mobile known as "CShell" ("composable shell"), an implementation of the Windows shell across device classes using a modular system. The build featured a Start screen, Action Center, and Continuum desktop interface that were nearly identical in functionality and appearance to their equivalents on Windows 10 for PC. However, this iteration of the operating system was no longer backwards compatible with Windows Phone Silverlight apps.
Reception
Reception of Windows 10 Mobile was mixed. In its review of the Lumia 950 XL, The Verge felt that the platform was "buggy and unfinished", and that its user interface was inconsistent in operation and felt more like Android mixed with few of the distinct design elements that were hallmarks of Windows Phone. It was noted that the OS still retained much of the performance of Windows Phone 8, and that Microsoft had made efforts to create synergies with the PC version of Windows 10 via its universal apps concept. Continuum was regarded as potentially being a signature feature over time, but that it was merely a "parlor trick" in its launch state due to a lack of support for desktop-oriented interfaces among third-party software. TechRadar felt that the lack of apps was the "biggest let-down on Windows Phone and Windows 10 Mobile alike." After many user complaints, Microsoft started allowing users to downgrade from Windows 10 to Windows Phone 8.1.
Financial results
According to the Form 10-K for the fiscal year ended June 30, 2016, revenue from the phone business was $3,358 million, compared to $7,702 million in 2015 and $3,073 million in 2014.
In the previous year's filing, Microsoft Corporation had separately disclosed information on sales of phone hardware.
In addition, in the Form 10-K for the fiscal year ended June 30, 2015, the corporation disclosed that management had decided to incur "$2.5 billion of integration and restructuring expenses, primarily costs associated with restructuring plans", which included the cost of mass staff dismissals.
Discontinuation
On October 8, 2017, Microsoft executive Joe Belfiore revealed that the company would no longer actively develop new features or hardware for Windows phones, citing its low market share and the resultant lack of third-party software for the platform. Microsoft had largely abandoned its mobile business: it had laid off the majority of Microsoft Mobile employees in 2016, sold a number of intellectual property and manufacturing assets (including, in particular, the Nokia feature phone business) to HMD Global and Foxconn (which began producing Android-based smartphones under the Nokia brand), and focused its software efforts on providing apps and services compatible with Android and iOS instead; it has since announced the Surface Duo, a folding Android smartphone. Development of Windows 10 Mobile would be limited to maintenance releases and patches. By December 2018, Statcounter had reported Windows 10 Mobile's market share to be 0.33%.
In January 2019, Microsoft announced that Windows 10 Mobile would reach end of life on December 10, 2019, after which no further security updates would be released and online services tied to the OS (such as device backup) would begin to be phased out. However, Microsoft quietly moved the end-of-life date back to January 14, 2020 (aligned with the EOL date for Windows 7), with one additional security update released.
See also
References
ARM operating systems
Mobile operating systems
Smartphones
Windows Phone
Tablet operating systems | Operating System (OS) | 1,143 |
Owen Mock
Owen R. Mock was a computer software designer and programmer who pioneered computer operating systems in the 1950s. In 1954, Mock was part of a group of programmers at the Los Angeles division of North American Aviation (NAA) who developed the PACT series of compilers for the IBM 701 computer. In December 1955, Mock's group installed the "North American 701 Monitor" on the IBM 701; it was the first operating system to be put into operation.
General Motors Research (GMR) also had an IBM 701 and used the compilers developed by Mock's group. When Robert L. Patrick at GMR designed a non-stop multi-user batch processing operating system for use on the next generation computer (IBM 704), Mock's group at NAA and George Ryckman's group at GMR joined forces to develop Robert Patrick's design for the IBM 704. This GM-NAA I/O software was the first operating system for the 704 and began production in 1956.
Publications
Owen R. Mock, Logical Organization of the PACT I Compiler, J. ACM, vol. 3, No. 4, pages 279-287 (October, 1956).
Owen R. Mock, The Share 709 System: Input-Output Buffering, J. ACM, vol. 6, No. 2, pages 145-151, (April, 1959).
References
North American 701 Monitor by Owen R. Mock
American computer programmers
Living people
Year of birth missing (living people) | Operating System (OS) | 1,144 |
Xerox Star
The Xerox Star workstation, officially named Xerox 8010 Information System, was the first commercial personal computer to incorporate technologies that have since become standard in personal computers, including a bitmapped display, a window-based graphical user interface, icons, folders, mouse (two-button), Ethernet networking, file servers, print servers, and e-mail.
Introduced by Xerox Corporation on April 27, 1981, the name Star technically refers only to the software sold with the system for the office automation market. The 8010 workstations were also sold with software based on the programming languages Lisp and Smalltalk for the smaller research and software development market.
History
The Xerox Alto
The Xerox Star systems concept owes much to the Xerox Alto, an experimental workstation designed by the Xerox Palo Alto Research Center (PARC). The first Alto became operational in 1972. The Alto had been strongly influenced by what its designers had seen previously with NLS (at SRI) and PLATO (at University of Illinois). At first, only a few Altos had been built. Although by 1979 nearly 1,000 Ethernet-linked Altos had been put into operation at Xerox and another 500 at collaborating universities and government offices, it was never intended to be a commercial product. Then in 1977, Xerox started a development project which worked to incorporate the Alto innovations into a commercial product; their concept was an integrated document preparation system, centered around the (then expensive) laser printing technology and oriented towards large corporations and their trading partners. When the resulting Xerox Star system was announced in 1981, the cost was about $75,000 for a basic system, and $16,000 for each added workstation. A base system would have an 8010 Star workstation, a second 8010 dedicated as a server (with RS-232 I/O), and a floor-standing laser printer. The server software included a File Server, a Print Server, and distributed services (Mail Server, Clearinghouse Name Server / Directory, and Authentication Server). Customers could connect Xerox Memorywriter typewriters to this system over Ethernet and send email, using the Memorywriter as a teletype.
The Xerox Star development process
The Star was developed at Xerox's Systems Development Department (SDD) in El Segundo, California, which had been established in 1977 under the direction of Don Massaro. A section of SDD, SDD North, was located in Palo Alto, California, and included some people borrowed from PARC. SDD's mission was to design the "Office of the future", a new system that would incorporate the best features of the Alto, was easy to use, and could automate many office tasks.
The development team was headed by David Liddle, and would eventually grow to more than 200 developers. A good part of the first year was taken up by meetings and planning, the result of which was an extensive and detailed functional specification, internally termed the "Red Book". This became the bible for all development tasks. It defined the interface and enforced consistency in all modules and tasks. All changes to the functional specification had to be approved by a review team which maintained standards rigorously.
One group in Palo Alto worked on the underlying operating system interface to the hardware and programming tools. Teams in El Segundo and Palo Alto collaborated on developing the user interface and user applications.
The staff relied heavily on the technologies they were working on: file sharing, print servers, and e-mail. They were even connected to the ARPANET, the precursor of the Internet, which helped them communicate between El Segundo and Palo Alto.
The Star was implemented in the programming language Mesa, a direct precursor to Modula-2 and Modula-3. Mesa was not object-oriented, but included processes (threads) and monitors (mutexes) in the language.
Mesa required creating two files for every module: a definition module specified data structures and procedures for each object, and one or more implementation modules contained the code for the procedures. Traits were a programming convention used to implement object-oriented capabilities and multiple inheritance in the Star/Viewpoint customer environment.
The Star team used a sophisticated integrated development environment (IDE), named internally Tajo and externally Xerox Development Environment (XDE). Tajo had many similarities with the Smalltalk-80 environment, but it had many added tools. One example was the version control system DF, which required programmers to check out modules before they could be changed. Any change in a module that would force changes in dependent modules was closely tracked and documented. Changes to lower-level modules required various levels of approval.
The software development process was intense. It involved much prototyping and user testing. The software engineers had to develop new network communications protocols and data-encoding schemes when those used in PARC's research environment proved inadequate.
Initially, all development was done on Alto workstations. These were not well suited to the extreme burdens placed by the software. Even the processor intended for the product proved inadequate and involved a last minute hardware redesign. Many software redesigns, rewrites, and late additions had to be made, variously based on results from user testing, and marketing and systems considerations.
A Japanese-language version of the system, code-named J-Star, was produced in conjunction with Fuji Xerox, along with full support for international customers.
In the end, many features from the Star Functional Specification were not implemented. The product had to get to market, and the last several months before release focused on reliability and performance.
System features
User interface
The key philosophy of the user interface was to mimic the office paradigm as much as possible in order to make it intuitive for users. The concept of what you see is what you get (WYSIWYG) was considered paramount. Text would be displayed as black on a white background, just like paper, and the printer would replicate the screen using Interpress, a page description language developed at PARC.
One of the main designers of the Star, Dr. David Canfield Smith, invented the concept of computer icons and the desktop metaphor, in which the user would see a desktop that contained documents and folders, with different icons representing different types of documents. Clicking any icon would open a window. Users would not start programs first (e.g., a text editor, graphics program or spreadsheet software), they would simply open the file and the appropriate application would appear.
The Star user interface was based on the concept of objects. For example, a word processing document would hold page objects, paragraph objects, sentence objects, word objects, and character objects. The user could select objects by clicking on them with the mouse, and press dedicated special keys on the keyboard to invoke standard object functions (open, delete, copy, move) in a uniform way. There was also a "Show Properties" key used to display settings, called property sheets, for the particular object (e.g., font size for a character object). These general conventions greatly simplified the menu structure of all the programs.
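As a conceptual illustration of this uniform command model, the following Python sketch (not Star code, and not the Mesa types Star actually used) shows how heterogeneous on-screen objects can expose the same small set of generic operations plus a property sheet:

    # Conceptual sketch only: every selectable object answers the same verbs,
    # and "Show Properties" simply displays whatever properties() returns.
    class StarObject:
        def open(self): raise NotImplementedError
        def delete(self): raise NotImplementedError
        def copy(self): raise NotImplementedError
        def move(self, destination): raise NotImplementedError
        def properties(self) -> dict:
            return {}

    class CharacterObject(StarObject):
        def __init__(self, glyph, font_size=12):
            self.glyph = glyph
            self.font_size = font_size
        def properties(self):
            return {"font size": self.font_size}   # shown on the property sheet

Because every object type answers the same verbs, a single keyboard command can act on a document, a folder, or a single character without the user first choosing an application, which is the design choice the paragraph above describes.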
Object integration was designed into the system from the start. For example, a chart object created in the graphing module could be inserted into any type of document. This type of ability eventually became available as part of the operating system on the Apple Lisa and was featured in Mac OS System 7 as Publish and Subscribe. It became available on Microsoft Windows with the introduction of Object Linking and Embedding (OLE) in 1990. This approach was also later used on the OpenDoc software platform in the mid-to-late 1990s, and in the AppleWorks (originally ClarisWorks) package available for the Apple Mac (1991) and Microsoft Windows (1993).
Hardware
Initially, the Star software was to run on a new series of virtual-memory processors, described in a PARC technical report called "Wildflower: An Architecture for a Personal Computer", by Butler Lampson. The machines had names that always began with the letter D. They were all microprogrammed processors; for the Star software, microcode was loaded that implemented an instruction set designed for Mesa. It was possible to load microcode for the Interlisp or Smalltalk environments, but these three environments could not run at the same time.
The next generation of these machines, the Dorado, used an emitter coupled logic (ECL) processor. It was four times faster than Dandelion on standard benchmarks, and thus competitive with the fastest super minicomputers of the day. It was used for research but was a rack-mounted CPU that was never intended to be an office product.
A network router called Dicentra was also based on this design.
The Dolphin was built with transistor-transistor logic (TTL) technology, including 74S181 ALUs. It was intended to be the Star workstation, but its cost was deemed too high to meet the project goals. The complexity of the software eventually overwhelmed its limited configuration. At one time in Star's development, it took more than half an hour to reboot the system.
The Star workstation hardware that was actually released was known as the Dandelion (often shortened to "Dlion"). It was based on the AMD Am2900 bit-slice microprocessor technology. An enhanced version of the Dandelion, with more microcode space, was dubbed the "Dandetiger".
The base Dandelion system had 384 kB memory (expandable to 1.5 MB), a 10 MB, 29 MB or 40 MB 8" hard drive, an 8" floppy drive, mouse and an Ethernet connection. The performance of this machine, which sold for $20,000, was about 850 in the Dhrystone benchmark — comparable to that of a VAX-11/750, which cost five times more. The cathode ray tube (CRT) display (black and white, 1024×809 pixels with 38.7 Hz refresh) was large by the time's standards. It was meant to be able to display two 8.5×11 in pages side by side in true size. An interesting feature of the display was that the overscan area (borders) could be programmed with a 16×16 repeating pattern. This was used to extend the root window pattern to all the edges of the monitor, a feature that is unavailable even today on most video cards.
Marketing and commercial reception
The Xerox Star was not originally meant to be a stand-alone computer, but to be part of an integrated Xerox "personal office system" that also connected to other workstations and network services via Ethernet. Although a single unit sold for $16,000, a typical office would need to buy at least 2 or 3 machines along with a file server and a name server/print server. Spending $50,000 to $100,000 for a complete installation was not an easy sell, when a secretary's annual salary was about $12,000 and a Commodore VIC-20 cost around $300.
Later incarnations of the Star would allow users to buy one unit with a laser printer, but even so, only about 25,000 units were sold, leading many to consider the Xerox Star a commercial failure.
The workstation was originally designed to run the Star software for performing office tasks, but it was also sold with different software for other markets. These other configurations included a workstation for Interlisp or Smalltalk, and a server.
Some have said that the Star was ahead of its time, that few outside of a small circle of developers really understood the potential of the system, considering that IBM introduced their 8088-based IBM PC running the comparatively primitive PC DOS the same year that the Star was brought to market. However, comparison with the IBM PC may be irrelevant: well before it was introduced, buyers in the Word Processing industry were aware of the 8086-based IBM Displaywriter, the full-page portrait black-on-white Xerox 860 page display system and the 120 page-per-minute Xerox 9700 laser printer. Furthermore, the design principles of Smalltalk and modeless working had been extensively discussed in the August 1981 issue of Byte magazine, so Xerox PARC's standing and the potential of the Star can scarcely have been lost on its target (office systems) market, who would never have expected IBM to position a mass-market PC to threaten far more profitable dedicated WP systems. Unfortunately, the influential niche market of pioneering players in electronic publishing such as Longman were already aligning their production processes towards generic markup languages such as SGML (forerunner of HTML and XML) whereby authors using inexpensive offline systems could describe document structure, making their manuscripts ready for transfer to computer to film systems that offered far higher resolution than the then maximum of 360 dpi laser printing technologies.
Another possible reason given for the lack of success of the Star was Xerox's corporate structure. A longtime copier company, Xerox played to their strengths. They already had one significant failure in making their acquisition of Scientific Data Systems pay off. It is said that there were internal jealousies between the old line copier systems divisions that were responsible for bulk of Xerox's revenues and the new upstart division. Their marketing efforts were seen by some as half-hearted or unfocused. Furthermore, the most technically savvy sales representatives that might have sold office automation equipment were paid large commissions on leases of laser printer equipment costing up to a half-million dollars. No commission structure for decentralized systems could compete. The multi-lingual technical documentation market was also a major opportunity, but this needed cross-border collaboration for which few sales organisations were ready at the time.
Even within Xerox Corporation, in the mid-1980s, there was little understanding of the system. Few corporate executives ever saw or used the system, and the sales teams, if they requested a computer to assist with their planning, would instead receive older, CP/M-based Xerox 820 or 820-II systems. There was no effort to seed the 8010/8012 Star systems within Xerox Corporation.
Probably most significantly, strategic planners at the Xerox Systems Group (XSG) felt that they could not compete against other workstation makers such as Apollo Computer or Symbolics. The Xerox name alone was considered their greatest asset, but it did not produce customers.
Finally, by today's standards, the system would be considered very slow, due partly to the limited hardware of the time, and partly to a poorly implemented file system; saving a large file could take minutes. Crashes could be followed by an hours-long process called file scavenging, signaled by the appearance of the diagnostic code 7511 in the top left corner of the screen.
In the end, the Star's weak commercial reception probably came down to price, performance in demonstrations, and weakness of sales channels. Even then, Apple Computer's Lisa, inspired by the Star and introduced two years later, was a market failure, for many of the same reasons as the Star. To Xerox's credit, the company did try many things to improve sales. The next release of Star was on a different, more efficient hardware platform, Daybreak, using a new, faster processor, and accompanied by significant rewriting of the Star software, renamed ViewPoint, to improve performance. The new system, dubbed the Xerox 6085 PCS, was released in 1985. The new hardware provided 1 MB to 4 MB of memory, a 10 MB to 80 MB hard disk, a 15" or 19" display, a 5.25" floppy drive, a mouse, an Ethernet connection and a price of a little over $6,000.
The Xerox 6085 could be sold along with an attached laser printer as a standalone system. Also offered was a PC compatibility mode via an 80186-based expansion board. Users could transfer files between the ViewPoint system and PC-based software, albeit with some difficulty because the file formats were incompatible with any on the PC. But even with a significantly lower price, it was still a Rolls Royce in the world of lower cost $2,000 personal computers.
In 1989, Viewpoint 2.0 introduced many new applications related to desktop publishing. Eventually, Xerox jettisoned the integrated hardware/software workstation offered by Viewpoint and offered a software-only product called GlobalView, providing the Star interface and technology on an IBM PC compatible platform. The initial release required installing a Mesa CPU add-on board. The final release of GlobalView 2.1 ran as an emulator on Sun Solaris, Microsoft Windows 3.1, Windows 95, or Windows 98, IBM OS/2 and was released in 1996.
In the end, Xerox PARC, which prided itself upon building hardware ten years ahead of its time and equipping each researcher with that hardware so they could get started on the software, led Xerox to bring the product to market five years too early, and to keep it there throughout the 1980s and into the early 1990s. The custom-hardware platform was always too expensive for the mission for which Star/Viewpoint was intended. Apple, having copied the Xerox Star in the early 1980s with the Lisa, struggled and had the same poor results. Apple's second, cost-reduced effort, the Macintosh, barely succeeded (by ditching the virtual memory, implementing it in software, and using commodity microprocessors), and was not their most profitable product in the late 1980s. Apple also struggled to make profits on office system software in the same time period. L Peter Deutsch, one of the pioneers of the PostScript language, finally found a way to achieve Xerox Star-like efficiency for bitmap operations using just-in-time compilation in the early 1990s, making the last piece of Xerox Star custom hardware, the BitBLT, obsolete by 1990.
Legacy
Even though the Star product failed in the market, it raised expectations and laid important groundwork for later computers. Many of the innovations behind the Star, such as WYSIWYG editing, Ethernet, and network services such as directory, print, file, and internetwork routing have become commonplace in computers of today.
Members of the Apple Lisa engineering team saw Star at its introduction at the National Computer Conference (NCC '81) and returned to Cupertino where they converted their desktop manager to an icon-based interface modeled on the Star. Among the developers of Xerox's Gypsy WYSIWYG editor, Larry Tesler left Xerox to join Apple in 1980 where he also developed the MacApp framework.
Charles Simonyi left Xerox to join Microsoft in 1981 where he developed first WYSIWYG version of Microsoft Word (3.0). In 1983, Simonyi recommended Scott A. McGregor, who was recruited by Bill Gates to lead the development of Windows 1.0, in part for McGregor's experience in windowing systems at PARC. Later that year, several others left PARC to join Microsoft.
Star, Viewpoint and GlobalView were the first commercial computing environments to offer support for most natural languages, including full-featured word processing, leading to their adoption by the Voice of America, other United States foreign affairs agencies, and several multinational corporations.
The list of products that were inspired or influenced by the user interface of the Star, and to a lesser extent the Alto, include the Apple Lisa and Macintosh, Graphics Environment Manager (GEM) from Digital Research (the CP/M company), VisiCorp's Visi On, Microsoft Windows, Atari ST, BTRON from TRON Project, Commodore's Amiga, Elixir Desktop, Metaphor Computer Systems, Interleaf, IBM OS/2, OPEN LOOK (co-developed by Xerox), SunView, KDE, Ventura Publisher and NEXTSTEP. Adobe Systems PostScript was based on Interpress. Ethernet was further refined by 3Com, and has become a de facto standard networking protocol.
Some people feel that Apple, Microsoft, and others plagiarized the GUI and other innovations from the Xerox Star, and believe that Xerox didn't properly protect its intellectual property. The truth is more complex, perhaps. Many patent disclosures were submitted for the innovations in the Star. However, at the time, the 1975 Xerox Consent Decree, a Federal Trade Commission (FTC) antitrust action, placed restrictions on what the firm was able to patent. Also, when the Star disclosures were being prepared, the Xerox patent attorneys were busy with several other new technologies such as laser printing. Finally, patents on software, particularly those relating to user interfaces, were then an untested legal area.
Xerox did go to trial to protect the Star user interface. In 1989, after Apple sued Microsoft for copyright infringement of its Macintosh user interface in Windows, Xerox filed a similar lawsuit against Apple. However, this suit was thrown out on procedural grounds, not substantive, because a three-year statute of limitations had passed. In 1994, Apple lost its suit against Microsoft, not only the issues originally contested, but all claims to the user interface.
On January 15, 2019, a work-in-progress Xerox Star emulator created by LCM+L known as Darkstar was released for Windows and Linux.
See also
Lisp machine
Pilot (operating system)
References
External links
The first GUIs - Chapter 2. History: A Brief History of User Interfaces
Star graphics: An object-oriented implementation
Traits: An approach to multiple-inheritance subclassing
The design of Star's records processing: data processing for the noncomputer professional
The Xerox "Star": A Retrospective.
The Xerox "Star": A Retrospective. (with full-size screenshots)
Dave Curbow's Xerox Star Historical Documents (at the Digibarn)
The Digibarn's pages on the Xerox Star 8010 Information System
Xerox Star 1981
HCI Review of the Xerox Star
GUI of Xerox Star
Video: Xerox Star User Interface (1982)
Video: Xerox Star User Interface compared to Apple Lisa (2020)
Star
History of human–computer interaction
Computer workstations
Products introduced in 1981
Computers using bit-slice designs | Operating System (OS) | 1,145 |
Open Semantic Framework
The Open Semantic Framework (OSF) is an integrated software stack using semantic technologies for knowledge management. It has a layered architecture that combines existing open source software with additional open source components developed specifically to provide a complete Web application framework. OSF is made available under the Apache 2 license.
OSF is a platform-independent Web services framework for accessing and exposing structured data, semi-structured data, and unstructured data using ontologies to reconcile semantic heterogeneities within the contributing data and schema. Internal to OSF, all data is converted to RDF to provide a common data model. The OWL 2 ontology language is used to describe the data schema overlaying all of the constituent data sources.
The architecture of OSF is built around a central layer of RESTful web services, designed to enable most constituent modules within the software stack to be substituted without major adverse impacts on the entire stack. A central organizing perspective of OSF is that of the dataset. These datasets contain the records in any given OSF instance. One or more domain ontologies are used by a given OSF instance to define the structural relationships amongst the data and their attributes and concepts.
Some of the use applications for OSF include local government, health information systems, community indicator systems, eLearning, citizen engagement, or any domain that may be modeled by ontologies.
Documentation and training videos are provided with the open-source OSF application.
History
Early components of OSF were provided under the names of structWSF and conStruct starting in June 2009. The first version 1.x of OSF was announced in August 2010. The first automated OSF installer was released in March 2012. OSF was expanded with an ontology manager, structOntology, in August 2012. The version 2.x developments of OSF occurred for enterprise sponsors from early 2012 until the end of 2013. None of these interim 2.x versions were released to the public. Then, at the conclusion of this period, Structured Dynamics, the main developer of OSF, refactored these specific enterprise developments to leapfrog to a new version 3.0 of OSF, announced in early 2014. These public releases were last updated to OSF version 3.4.0 in August 2016.
Architecture and technologies
The Open Semantic Framework has a basic three-layer architecture. User interactions and content management are provided by an external content management system, which is currently Drupal (though the framework does not depend on it). This layer accesses the pivotal OSF Web Services; there are now more than 20, providing OSF's distributed computing functionality. Full CRUD access and user permissions and security are provided to all digital objects in the stack. This middleware layer then provides a means to access the third layer, the engines and indexers that drive the entire stack. Both the top CMS layer and the engines layer are provided by existing off-the-shelf software. What makes OSF a complete stack are the connecting scripts and the intermediate Web services layer.
The premise of the OSF stack is based on the RDF data model. RDF provides the means for integrating existing structured data assets in any format, with semi-structured data like XML and HTML, and unstructured documents or text. The OSF framework is made operational via ontologies that capture the domain or knowledge space, matched with internal ontologies that guide OSF operations and data display. This design approach is known as ODapps, for ontology-driven applications.
Content management layer
OSF delegates all direct user interactions and standard content management to an external CMS. In the case of Drupal, this integration is tighter, and supports connectors and modules that can replace standard Drupal storage and databases with OSF triplestores.
Web services layer
This intermediate OSF Web Services layer may also be accessed directly via API or command line or utilities like cURL, suitable for interfacing with standard content management systems (CMSs), or via a dedicated suite of connectors and modules that leverage the open source Drupal CMS. These connectors and modules, also part of the standard OSF stack and called OSF for Drupal, natively enable Drupal's existing thousands of modules and ecosystem of developers and capabilities to access OSF using familiar Drupal methods.
The OSF middleware framework is generally RESTful in design and is based on HTTP and Web protocols and W3C open standards. The initial OSF framework comes packaged with a baseline set of more than 20 Web services in CRUD, browse, search, tagging, ontology management, and export and import. All Web services are exposed via APIs and SPARQL endpoints. Each request to an individual Web service returns an HTTP status and optionally a document of resultsets. Each results document can be serialized in many ways, and may be expressed as either RDF, pure XML, JSON, or other formats.
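As an illustration of this request/response pattern, the following Python sketch issues an HTTP request to a hypothetical OSF-style search web service. The host name, endpoint path, and parameter names are placeholders rather than the documented OSF API; only the general pattern (a RESTful call whose Accept header selects the serialization of the returned resultset) reflects the description above.

    # Hypothetical example; the endpoint and parameter names are placeholders.
    import requests

    response = requests.get(
        "http://osf.example.org/ws/search/",
        params={"query": "municipal budget", "datasets": "all"},
        headers={"Accept": "application/json"},  # could instead request an RDF serialization
        timeout=30,
    )
    response.raise_for_status()   # each request returns an HTTP status
    resultset = response.json()   # the optional resultset document, here JSON-serialized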
Engines layer
The engines layer represents the major workflow requirements and data management and indexing of the system. The premise of the Open Semantic Framework is based on the RDF data model. Using a common data model means that all Web services and actions against the data only need to be programmed via a single, canonical form. Simple converters convert external, native data formats to the RDF form at time of ingest; similar converters can translate the internal RDF form back into native forms for export (or use by external applications). This use of a canonical form leads to a simpler design at the core of the stack and a uniform basis to which tools or other work activities can be written.
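A minimal sketch of this ingest-time conversion idea, using the Python rdflib library, is shown below. The vocabulary and record are invented for illustration and are not OSF's actual ontologies or converters; the point is only that a native record becomes RDF triples in the single canonical form the rest of the stack works against.

    # Illustrative only: a native record converted to RDF triples at ingest.
    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    EX = Namespace("http://example.org/ontology/")          # invented vocabulary
    record = {"id": "doc-42", "title": "Annual Report", "year": 2013}

    g = Graph()
    subject = URIRef("http://example.org/records/" + record["id"])
    g.add((subject, RDF.type, EX.Document))
    g.add((subject, EX.title, Literal(record["title"])))
    g.add((subject, EX.year, Literal(record["year"])))

    print(g.serialize(format="turtle"))   # the canonical form shared by all Web services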
The OSF engines are all open source and work to support this premise. The OSF engines layer governs the index and management of all OSF content. Documents are indexed by the Solr engine for full-text search, while information about their structural characteristics and metadata are stored in an RDF triplestore database provided by OpenLink's Virtuoso software. The schema aspects of the information (the "ontologies") are separately managed and manipulated with their own W3C standard application, the OWL API. At ingest time, the system automatically routes and indexes the content into its appropriate stores. Another engine, GATE (General Architecture for Text Engineering), provides semi-automatic assistance in tagging input information and other natural language processing (NLP) tasks.
Alternatives
OSF is sometimes referred to as a linked data application. Alternative applications in this space include:
Callimachus
CubicWeb
LOD2 Stack
Apache Marmotta
The Open Semantic Framework also has alternatives in the semantic publishing and semantic computing arenas.
See also
Data integration
Data management
Drupal
Enterprise information integration
Knowledge organization
Linked data
Middleware
Ontology-based data integration
Resource Description Framework
Resource-oriented architecture
Semantic computing
Semantic integration
Semantic publishing
Semantic search
Semantic service-oriented architecture
Semantic technology
Software framework
Web Ontology Language
References
External links
Official website
Drupal
GATE
Open Semantic Framework code repository at GitHub
OSF interest group
OWL API
Virtuoso
Further information
Technical documentation library
Video training series
Free content management systems
Free software programmed in PHP
Knowledge management
Ontology (information science)
Semantic Web
Web frameworks | Operating System (OS) | 1,146 |
ZSNES
ZSNES is a free software Super Nintendo Entertainment System emulator written mostly in x86 assembly with official ports for Linux, DOS, Windows, and unofficial ports for Xbox and macOS.
Background
Development of ZSNES began on 3 July 1997 and the first version was released on 14 October 1997, for DOS. Since then, official ports have been made for Windows and Linux. The emulator became free software under the GPL-2.0-or-later license on 2 April 2001. Despite an announcement by adventure_of_link stating that "ZSNES is NOT dead, it's still in development" made on the ZSNES board after the departure of its original developers zsKnight and _Demo_, development has slowed dramatically since its last version (1.51, released on 24 January 2007). Much of the development effort has concentrated on increasing the emulator's portability, by rewriting assembly code in C and C++, including a new GUI using Qt.
ZSNES is notable in that it was among the first to emulate most SNES enhancement chips at some level. Until version 1.50, ZSNES featured netplay via TCP/IP or UDP.
Because ZSNES is largely written in low-level assembly language for x86 processors, porting ZSNES to devices using RISC architectures such as ARM is considered highly impractical. Commercial gaming consoles did not typically use x86 processors (with the original Xbox being the most well-known exception) prior to the eighth generation, which began with the 2013 releases of the Xbox One and PlayStation 4.
Reception
ZSNES was generally well-regarded in its heyday, with British game magazine Retro Gamer in 2005 calling the emulator "very impressive" and praising the "incredible toaster mode".
However, with the more recent development of more accurate SNES emulators such as Snes9x and higan as computers have gradually become more powerful, retrospective reviews have criticized ZSNES not only for its relatively low accuracy, but also because its former popularity has led several fan-made translations and modifications to be designed with specific workarounds for the emulator's inaccuracies, which often makes them unplayable both on real hardware and in the newer emulators that have superseded ZSNES. Some of these other emulators even include a mode which is explicitly designed to replicate the quirks of ZSNES, allowing the ZSNES-focused mods to become playable again.
In 2015 an exploit that allowed a specially crafted SNES ROM to gain control of the host system, and thus be able to execute malicious code, was discovered in version 1.51; a partially fixed preview build was released shortly afterwards.
See also
List of video game console emulators
References
External links
ZSNES Documentation
Interview with zsKnight
Super Nintendo Entertainment System emulators
Linux emulation software
Windows emulation software
DOS emulation software
Free emulation software
Free software that uses SDL
Cross-platform software
Free software programmed in C
Free software programmed in C++
Assembly language software
Portable software | Operating System (OS) | 1,147 |
Blue screen of death
A blue screen of death (BSoD), officially known as a stop error or blue screen error, is an error screen that the Windows operating system displays in the event of a fatal system error. It indicates a system crash, in which the operating system has reached a critical condition where it can no longer operate safely, e.g., hardware failure or an unexpected termination of a crucial process.
History
Blue error screens have been around since the beta version of Windows 1.0; if Windows found a newer DOS version than it expected, it would generate a blue screen with white text saying "Incorrect DOS version", before starting normally. In the final release (version 1.01), however, this screen prints random characters as a result of bugs in the Windows logo code. It is not a crash screen, either; upon crashing, Windows 1.0 either locks up or exits to DOS.
Windows 3.0 uses a text-mode screen for displaying important system messages, usually from virtual device drivers in 386 Enhanced Mode or other situations where a program could not run. Windows 3.1 changed the color of this screen from black to blue. Windows 3.1 also displays a blue screen when the user presses the Ctrl+Alt+Delete key combination while no program is unresponsive. As with its predecessors, Windows 3.x exits to DOS if an error condition is severe enough.
The first blue screen of death appeared in Windows NT 3.1 (the first version of the Windows NT family, released in 1993) and all Windows operating systems released afterwards. In its earliest version, the error started with ***STOP:. Hence, it became known as a "stop error."
BSoDs can be caused by poorly written device drivers or malfunctioning hardware, such as faulty memory, power supply issues, overheating of components, or hardware running beyond its specification limits. In the Windows 9x era, incompatible DLLs or bugs in the operating system kernel could also cause BSoDs. Because of the instability and lack of memory protection in Windows 9x, BSoDs were much more common.
Incorrect attribution
On September 4, 2014, several online journals, including Business Insider, DailyTech, Engadget, Gizmodo, Lifehacker, Neowin, Softpedia, TechSpot, The Register, and The Verge incorrectly attributed the creation of the Blue Screen of Death to Steve Ballmer, Microsoft's former CEO, citing an article by Microsoft employee Raymond Chen, entitled "Who wrote the text for the Ctrl+Alt+Del dialog in Windows 3.1?". The article focused on the creation of the first rudimentary task manager in Windows 3.x, which shared visual similarities with a BSoD. In a follow-up on September 9, 2014, Raymond Chen complained about this widespread mistake, claimed responsibility for revising the BSoD in Windows 95 and panned BGR.com for having "entirely fabricated a scenario and posited it as real". Engadget later updated its article to correct the mistake.
Formats
BSoDs originally showed silver text on a royal blue background with information about current memory values and register values. Starting with Windows Server 2012 (released in September 2012), Windows adopted a cerulean background. Windows 11 initially used a black background, but starting from build number 22000.348, switched to a dark blue background. Preview builds of Windows 10, Windows 11, and Windows Server (available from the Windows Insider program) feature a dark green background instead of a blue one. Windows 3.1, 95, and 98 support customizing the color of the screen. In the Windows NT family, however, the color is hard-coded.
Windows 95, 98 and ME render their BSoDs in the 80×25 text mode. BSoDs in the Windows NT family initially used the 80×50 text mode on a 720×400 screen. Windows XP, Vista and 7 BSoDs use the 640×480 screen resolution and the Lucida Console font. Windows 8 and Windows Server 2012 use Segoe UI. On UEFI machines, their BSoDs use the highest screen resolution available. On legacy BIOS machines, by default, they use 1024×768, but they can be configured to use the highest resolution available (via the 'highestmode' parameter in Boot Configuration Data). Windows 10, versions 1607 and later, uses the same format as Windows 8, but has a QR code which leads to a Microsoft Support web page that tries to troubleshoot the issue step-by-step.
Windows NT
In the Windows NT family of operating systems, the blue screen of death (referred to as "bug check" in the Windows software development kit and driver development kit documentation) occurs when the kernel or a driver running in kernel mode encounters an error from which it cannot recover. This is usually caused by an illegal operation being performed. The only safe action the operating system can take in this situation is to restart the computer. As a result, data may be lost, as users are not given an opportunity to save it.
The text on the error screen contains the code of the error and its symbolic name (e.g. "0x0000001E, KMODE_EXCEPTION_NOT_HANDLED") along with four error-dependent values in parentheses that are there to help software engineers fix the problem that occurred. Depending on the error code, it may display the address where the problem occurred, along with the driver which is loaded at that address. Under Windows NT, the second and third sections of the screen may contain information on all loaded drivers and a stack dump, respectively. The driver information is in three columns; the first lists the base address of the driver, the second lists the driver's creation date (as a Unix timestamp), and the third lists the name of the driver.

By default, Windows will create a memory dump file when a stop error occurs. Depending on the OS version, there may be several formats this can be saved in, ranging from a 64kB "minidump" (introduced in Windows 2000) to a "complete dump", which is effectively a copy of the entire contents of physical memory (RAM). The resulting memory dump file may be debugged later, using a kernel debugger. For Windows, WinDBG or KD debuggers from Debugging Tools for Windows are used. A debugger is necessary to obtain a stack trace, and may be required to ascertain the true cause of the problem; as the information on-screen is limited and thus possibly misleading, it may hide the true source of the error. By default, Windows XP is configured to save only a 64kB minidump when it encounters a stop error, and to then automatically reboot the computer. Because this process happens very quickly, the blue screen may be seen only for an instant or not at all. Users have sometimes noted this as a random reboot rather than a traditional stop error, and are only aware of an issue after Windows reboots and displays a notification that it has recovered from a serious error. This happens only when the computer has a function called "Auto Restart" enabled; disabling it in the Control Panel causes the stop error screen to remain displayed so that it can be read.
Microsoft Windows can also be configured to send live debugging information to a kernel debugger running on a separate computer. If a stop error is encountered while a live kernel debugger is attached to the system, Windows will halt execution and cause the debugger to break in, rather than displaying the BSoD. The debugger can then be used to examine the contents of memory and determine the source of the problem.
A BSoD can also be caused by a critical boot loader error, where the operating system is unable to access the boot partition due to incorrect storage drivers, a damaged file system or similar problems. The error code in this situation is STOP 0x0000007B (INACCESSIBLE_BOOT_DEVICE). In such cases, there is no memory dump saved. Since the system is unable to boot from the hard drive in this situation, correction of the problem often requires using the repair tools found on the Windows installation disc.
Details
Before Windows Server 2012, each BSoD displayed an error name in uppercase (e.g. APC_INDEX_MISMATCH), a hexadecimal error number (e.g. 0x00000001) and four parameters. The last two are shown together in the following format:
Depending on the error number and its nature, all, some, or even none of the parameters contain data pertaining to what went wrong, and/or where it happened. In addition, the error screens showed four paragraphs of general explanation and advice and may have included other technical data such as the file name of the culprit and memory addresses.
With the release of Windows Server 2012, the BSoD was changed, removing all of the above in favor of the error name, and a concise description. Windows 8 added a sad emoticon (except on its Japanese version). The hexadecimal error code and parameters can still be found in the Windows Event Log or in memory dumps. Since Windows 10 Build 14393, the screen features a QR code for quick troubleshooting. Windows 10 Build 19041 slightly changed the text from "Your PC ran into a problem" to "Your device ran into a problem".
Windows 9x
Windows 9x is a community nickname for Windows 95, 98, and ME, even though the latter's name does not match the nickname's pattern. These operating systems frequently experience BSoDs, which are the main way for virtual device drivers to report errors to the user. Windows 9x's version of the BSoD, internally referred to as "_VWIN32_FaultPopup", gives the user the option either to restart the computer or to continue using Windows. This behavior is in contrast with the Windows NT BSoD, which prevents the user from using the computer until it has been powered off.
The most common BSoD is an 80×25 screen which is the operating system's way of reporting an interrupt caused by a processor exception; it is a more serious form of the general protection fault dialog boxes. The memory address of the error is given and the error type is a hexadecimal number from 00 to 11 (0 to 17 decimal). The error codes are as follows:
00: Division fault
01: Startup Error
02: Non-Maskable Interrupt
03: Shutdown Error
04: Overflow Trap
05: Bounds Check Fault
06: Invalid Opcode Fault
07: "Coprocessor Not Available" Fault
08: Double Fault
09: Coprocessor Segment Overrun
0A: Invalid Task State Segment Fault
0B: Not Present Fault
0C: Stack Fault
0D: General Protection Fault
0E: Page Fault
0F: Error Message Limit Exceed
10: Coprocessor Error Fault
11: Alignment Check Fault
Reasons for BSoDs include:
Problems that occur with incompatible versions of DLLs: Windows loads these DLLs into memory when they are needed by application programs; if versions are changed, the next time an application loads the DLL it may be different from what the application expects. These incompatibilities increase over time as more new software is installed, and are one of the main reasons why a freshly installed copy of Windows is more stable than an "old" one.
Faulty or poorly written device drivers
Hardware incompatibilities
Damaged hardware may also cause a BSoD.
In Windows 95 and 98, a BSoD occurs when the system attempts to access the file "c:\con\con", "c:\aux\aux", or "c:\prn\prn" on the hard drive. This could be inserted on a website to crash visitors' machines. On 16 March 2000, Microsoft released a security update to resolve this issue.
One famous instance of a Windows 9x BSoD occurred during a presentation of a Windows 98 beta given by Bill Gates at COMDEX on April 20, 1998: The demo PC crashed with a BSoD when his assistant, Chris Capossela, connected a scanner to the PC to demonstrate Windows 98's support for Plug and Play devices. This event brought thunderous applause from the crowd and Gates replied (after a nervous pause): "That must be why we're not shipping Windows 98 yet."
Windows CE
The simplest version of the blue screen occurs in Windows CE (except in Pocket PC 2000 and Pocket PC 2002). The blue screen in Windows CE 3.0 is similar to the one in Windows NT.
Similar screens
Stop errors are comparable to kernel panics in macOS, Linux, and other Unix-like systems, and to bugchecks in OpenVMS. Windows 3.1, like some versions of macOS, displays a Black Screen of Death instead of a blue one. Windows 98 displays a red error screen raised by the Advanced Configuration and Power Interface (ACPI) when the host computer's BIOS encounters a problem. The bootloader of the first beta version of Windows Vista also displays a red error screen in the event of a boot failure. The Xbox One has a Green Screen of Death instead of the blue one.
As mentioned earlier, the insider builds of Windows Server 2016 and later, Windows 10, and Windows 11 display a green screen.
See also
Screens of death
Guru Meditation
Kernel panic
Purple Screen of Death
Sad Mac
Black screen of death
Red Ring of Death
Game over
References
External links
Bug Check Code Reference
SysInternals BlueScreen Screen Saver v3.2
Blue Screen of Death on MalWiki
Computer errors
Windows administration
Screens of death | Operating System (OS) | 1,148 |
Computer performance
In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency and speed of executing computer program instructions. When it comes to high computer performance, one or more of the following factors might be involved:
Short response time for a given piece of work.
High throughput (rate of processing work).
Low utilization of computing resource(s).
Fast (or highly compact) data compression and decompression.
High availability of the computing system or application.
High bandwidth.
Short data transmission time.
Technical and non-technical definitions
The performance of any computer system can be evaluated in measurable, technical terms, using one or more of the metrics listed above. This way the performance can be
Compared relative to other systems or the same system before/after changes
In absolute terms, e.g. for fulfilling a contractual obligation
Whilst the above definition relates to a scientific, technical approach, the following definition given by Arnold Allen would be useful for a non-technical audience:
The word performance in computer performance means the same thing that performance means in other contexts, that is, it means "How well is the computer doing the work it is supposed to do?"
As an aspect of software quality
Computer software performance, particularly software application response time, is an aspect of software quality that is important in human–computer interactions.
Performance engineering
Performance engineering within systems engineering encompasses the set of roles, skills, activities, practices, tools, and deliverables applied at every phase of the systems development life cycle which ensures that a solution will be designed, implemented, and operationally supported to meet the performance requirements defined for the solution.
Performance engineering continuously deals with trade-offs between types of performance. Occasionally a CPU designer can find a way to make a CPU with better overall performance by improving one of the aspects of performance, presented below, without sacrificing the CPU's performance in other areas. For example, building the CPU out of better, faster transistors.
However, sometimes pushing one type of performance to an extreme leads to a CPU with worse overall performance, because other important aspects were sacrificed to get one impressive-looking number, for example, the chip's clock rate (see the megahertz myth).
Application performance engineering
Application Performance Engineering (APE) is a specific methodology within performance engineering designed to meet the challenges associated with application performance in increasingly distributed mobile, cloud and terrestrial IT environments. It includes the roles, skills, activities, practices, tools and deliverables applied at every phase of the application lifecycle that ensure an application will be designed, implemented and operationally supported to meet non-functional performance requirements.
Aspects of performance
Computer performance metrics (things to measure) include availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length and speed up. CPU benchmarks are available.
Availability
Availability of a system is typically measured as a factor of its reliability - as reliability increases, so does availability (that is, less downtime). Availability of a system may also be increased by the strategy of focusing on increasing testability and maintainability and not on reliability. Improving maintainability is generally easier than improving reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, they are likely to dominate the availability (prediction uncertainty) problem, even when maintainability levels are very high.
Response time
Response time is the total amount of time it takes to respond to a request for service. In computing, that service can be any unit of work from a simple disk IO to loading a complex web page. The response time is the sum of three numbers:
Service time - How long it takes to do the work requested.
Wait time - How long the request has to wait for requests queued ahead of it before it gets to run.
Transmission time – How long it takes to move the request to the computer doing the work and the response back to the requestor.
Processing speed
Most consumers pick a computer architecture (normally Intel IA-32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see megahertz myth).
Some system designers building parallel computers pick CPUs based on the speed per dollar.
Channel capacity
Channel capacity is the tightest upper bound on the rate of information that can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.
Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel capacity and provides a mathematical model by which one can compute it. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.
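Stated symbolically (a standard formulation rather than anything specific to a given system), the capacity is

C = \max_{p_X(x)} I(X;Y)

and, for the special case of a band-limited channel with additive white Gaussian noise, the Shannon–Hartley theorem gives

C = B \log_2\!\left(1 + \frac{S}{N}\right)

where B is the channel bandwidth in hertz and S/N is the signal-to-noise ratio.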
Latency
Latency is a time delay between the cause and the effect of some physical change in the system being observed. Latency is a result of the limited velocity with which any physical interaction can take place. This velocity is always lower than or equal to the speed of light. Therefore, every physical system that has spatial dimensions different from zero will experience some sort of latency.
The precise definition of latency depends on the system being observed and the nature of stimulation. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any one moment. In the field of human-machine interaction, perceptible latency (delay between what the user commands and when the computer provides the results) has a strong effect on user satisfaction and usability.
Computers run sets of instructions called a process. In operating systems, the execution of the process can be postponed if other processes are also executing. In addition, the operating system can schedule when to perform the action that the process is commanding. For example, suppose a process commands that a computer card's voltage output be set high-low-high-low and so on at a rate of 1000 Hz. The operating system may choose to adjust the scheduling of each transition (high-low or low-high) based on an internal clock. The latency is the delay between the process instruction commanding the transition and the hardware actually transitioning the voltage from high to low or low to high.
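A rough way to observe this kind of scheduling latency from user space is to request a fixed sleep interval and measure how late the wake-ups actually are. The following Python sketch (with an arbitrarily chosen 1 ms period and sample count) illustrates the idea; the numbers it prints depend entirely on the operating system and hardware it runs on.

import time

INTERVAL = 0.001   # request a 1 ms period, i.e. roughly 1000 Hz
SAMPLES = 1000

worst = 0.0
for _ in range(SAMPLES):
    start = time.perf_counter()
    time.sleep(INTERVAL)                    # ask the OS to wake us after INTERVAL
    elapsed = time.perf_counter() - start
    worst = max(worst, elapsed - INTERVAL)  # extra delay added by scheduling

print(f"worst observed extra scheduling delay: {worst * 1000:.3f} ms")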
System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has deterministic response.
Bandwidth
In computer networking, bandwidth is a measurement of bit-rate of available or consumed data communication resources, expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.).
Bandwidth sometimes defines the net bit rate (aka. peak bit rate, information rate, or physical layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The reason for this usage is that according to Hartley's law, the maximum data rate of a physical communication link is proportional to its bandwidth in hertz, which is sometimes called frequency bandwidth, spectral bandwidth, RF bandwidth, signal bandwidth or analog bandwidth.
Throughput
In general terms, throughput is the rate of production or the rate at which something can be processed.
In communication networks, throughput is essentially synonymous to digital bandwidth consumption. In wireless networks or cellular communication networks, the system spectral efficiency in bit/s/Hz/area unit, bit/s/Hz/site or bit/s/Hz/cell, is the maximum system throughput (aggregate throughput) divided by the analog bandwidth and some measure of the system coverage area.
In integrated circuits, often a block in a data flow diagram has a single input and a single output, and operates on discrete packets of information. Examples of such blocks are FFT modules or binary multipliers. Because the units of throughput are the reciprocal of the unit for propagation delay, which is 'seconds per message' or 'seconds per output', throughput can be used to relate a computational device performing a dedicated function such as an ASIC or embedded processor to a communications channel, simplifying system analysis.
Relative efficiency
Scalability
Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth.
Power consumption
The amount of electric power used by the computer (power consumption). This becomes especially important for systems with limited power sources such as solar, batteries, human power.
Performance per watt
System designers building parallel computers, such as Google's hardware, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.
For spaceflight computers, the processing speed per watt ratio is a more useful performance criterion than raw processing speed.
Compression ratio
Compression is useful because it helps reduce resource usage, such as data storage space or transmission capacity. Because compressed data must be decompressed before use, this extra processing imposes computational or other costs through decompression; this situation is far from being a free lunch. Data compression is subject to a space–time complexity trade-off.
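The space–time trade-off can be illustrated with a small Python sketch using the standard zlib module; the input data and compression levels are arbitrary, and actual ratios and timings will vary.

import time
import zlib

data = b"computer performance " * 100_000   # arbitrary, highly redundant input

for level in (1, 6, 9):                     # fast, default, and maximum effort
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"level {level}: ratio {ratio:.1f}:1 in {elapsed * 1000:.1f} ms")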
Size and weight
This is an important performance feature of mobile systems, from the smart phones you keep in your pocket to the portable embedded systems in a spacecraft.
Environmental impact
The effect of a computer or computers on the environment, during manufacturing and recycling as well as during use. Measurements are taken with the objectives of reducing waste, reducing hazardous materials, and minimizing a computer's ecological footprint.
Transistor count
The transistor count is the number of transistors on an integrated circuit (IC). Transistor count is the most common measure of IC complexity.
Benchmarks
Because there are so many programs to test a CPU on all aspects of performance, benchmarks were developed.
The most famous benchmarks are the SPECint and SPECfp benchmarks developed by Standard Performance Evaluation Corporation and the Certification Mark benchmark developed by the Embedded Microprocessor Benchmark Consortium (EEMBC).
Software performance testing
In software engineering, performance testing is in general testing performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the implementation, design and architecture of a system.
Profiling (performance analysis)
In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or frequency and duration of function calls. The most common use of profiling information is to aid program optimization.
Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). A number of different techniques may be used by profilers, such as event-based, statistical, instrumented, and simulation methods.
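As a small example of instrumented profiling, the following Python sketch uses the standard cProfile and pstats modules; the workload function is a stand-in for whatever code path is of interest.

import cProfile
import pstats

def workload():
    # Stand-in function; profile whatever code path is of interest.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Report the functions that consumed the most cumulative time.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)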
Performance tuning
Performance tuning is the improvement of system performance. This is typically a computer application, but the same methods can be applied to economic markets, bureaucracies or other complex systems. The motivation for such activity is called a performance problem, which can be real or anticipated. Most systems will respond to increased load with some degree of decreasing performance. A system's ability to accept a higher load is called scalability, and modifying a system to handle a higher load is synonymous to performance tuning.
Systematic tuning follows these steps:
Assess the problem and establish numeric values that categorize acceptable behavior.
Measure the performance of the system before modification.
Identify the part of the system that is critical for improving the performance. This is called the bottleneck.
Modify that part of the system to remove the bottleneck.
Measure the performance of the system after modification.
If the modification makes the performance better, adopt it. If the modification makes the performance worse, put it back to the way it was.
Perceived performance
Perceived performance, in computer engineering, refers to how quickly a software feature appears to perform its task. The concept applies mainly to user acceptance aspects.
The amount of time an application takes to start up, or a file to download, is not made faster by showing a startup screen (see Splash screen) or a file progress dialog box. However, it satisfies some human needs: it appears faster to the user as well as providing a visual cue to let them know the system is handling their request.
In most cases, increasing real performance increases perceived performance, but when real performance cannot be increased due to physical limitations, techniques can be used to increase perceived performance.
Performance Equation
The total amount of time (t) required to execute a particular benchmark program is

t = N × C / f

or equivalently

P = 1/t = f × I / N

where
P = 1/t is "the performance" in terms of time-to-execute
N is the number of instructions actually executed (the instruction path length). The code density of the instruction set strongly affects N. The value of N can either be determined exactly by using an instruction set simulator (if available) or by estimation—itself based partly on estimated or actual frequency distribution of input variables and by examining generated machine code from an HLL compiler. It cannot be determined from the number of lines of HLL source code. N is not affected by other processes running on the same processor. The significant point here is that hardware normally does not keep track of (or at least make easily available) a value of N for executed programs. The value can therefore only be accurately determined by instruction set simulation, which is rarely practiced.
f is the clock frequency in cycles per second.
C is the average cycles per instruction (CPI) for this benchmark.
I is the average instructions per cycle (IPC) for this benchmark.
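A minimal numeric sketch of the equation, with made-up values for N, f and C:

# Illustrative numbers only: an imaginary benchmark of 2 billion executed
# instructions on a 3 GHz processor averaging 1.25 cycles per instruction.
N = 2_000_000_000      # instruction path length (instructions executed)
f = 3_000_000_000      # clock frequency in cycles per second
C = 1.25               # average cycles per instruction (CPI)

t = N * C / f          # time to execute the benchmark, in seconds
P = 1 / t              # performance in terms of time-to-execute
I = 1 / C              # average instructions per cycle (IPC)

print(f"t = {t:.3f} s, performance = {P:.2f} runs/s, IPC = {I:.2f}")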
Even on one machine, a different compiler or the same compiler with different compiler optimization switches can change N and CPI—the benchmark executes faster if the new compiler can improve N or C without making the other worse, but often there is a trade-off between them—is it better, for example, to use a few complicated instructions that take a long time to execute, or to use instructions that execute very quickly, although it takes more of them to execute the benchmark?
A CPU designer is often required to implement a particular instruction set, and so cannot change N.
Sometimes a designer focuses on improving performance by making significant improvements in f (with techniques such as deeper pipelines and faster caches), while (hopefully) not sacrificing too much C—leading to a speed-demon CPU design.
Sometimes a designer focuses on improving performance by making significant improvements in CPI (with techniques such as out-of-order execution, superscalar CPUs, larger caches, caches with improved hit rates, improved branch prediction, speculative execution, etc.), while (hopefully) not sacrificing too much clock frequency—leading to a brainiac CPU design.
For a given instruction set (and therefore fixed N) and semiconductor process, the maximum single-thread performance (1/t) requires a balance between brainiac techniques and speed-demon techniques.
See also
Algorithmic efficiency
Computer performance by orders of magnitude
Network performance
Latency oriented processor architecture
Optimization (computer science)
RAM update rate
Complete instruction set
Hardware acceleration
Speedup
Cache replacement policies
References | Operating System (OS) | 1,149 |
T 1173/97
T 1173/97, also known as Computer program product/IBM or simply Computer program product, is a decision of a Technical Board of Appeal of the European Patent Office (EPO), issued on July 1, 1998. It is a landmark decision for interpreting Article 52(2) and (3) of the European Patent Convention (EPC) and whether computer programs are excluded from patentability under the EPC.
It mainly held that
"a computer program product is not excluded from patentability under if, when it is run on a computer, it produces a further technical effect which goes beyond the "normal" physical interactions between program (software) and computer (hardware)".
Decision T 1173/97 distinguished computer programs with a technical character from those with a non-technical character, and was thus based on an approach differing from the view taken by a number of previous decisions of the Boards of Appeal that all computer programs were excluded under Art. 52(2)(c) and (3) EPC.
T 1173/97, along with T 935/97 (not published in the Official Journal of the EPO), are considered to be "groundbreaking decisions". The Enlarged Board of Appeal has described T 1173/97 as
"seminal in its definition of 'further technical effect' and abandonment of the contribution approach to [the exclusion under Article 52(2) and (3) EPC]".
Reasoning
The Board first examined the relationship between the TRIPS Agreement and the EPC. It confirmed that the TRIPS Agreement could not be directly applied to the EPC, but thought it appropriate to take the TRIPS Agreement into consideration, since it "gives a clear indication of current trends". The Board considered that "it is the clear intention of TRIPS [and in particular Art. 27(1) TRIPS] not to exclude from patentability any inventions, whatever field of technology they belong to, and therefore, in particular, not to exclude programs for computers".
The Board however pointed out that "the only source of substantive patent law for examining European patent applications [at that moment was] the European Patent Convention". The Board therefore considered Art. 52(2) and (3) EPC and concluded that the combination of Art. 52(2) and (3) EPC (exclusion of computer programs, but only when the application relates to computer programs "as such") "demonstrates that the legislators did not want to exclude from patentability all programs for computers".
The Board then endeavored to determine the meaning of the expression "as such" in Article 52(3) EPC. It concluded that, if a computer program has a technical character, it should be considered patentable, since the technical character of an invention is generally accepted as an essential requirement for its patentability. In other words, "having technical character means not being excluded from patentability under the "as such" provision pursuant to Article 52(3) EPC."
Technical character
The Board next held that
"[for] the purpose of interpreting the exclusion from patentability of programs for computers under Article 52(2) and (3) EPC, ... programs for computers cannot be considered as having a technical character for the very reason that they are programs for computers".
otherwise, since all computer programs are suitable to run on a computer, no distinction could be made between, on the one hand, computer programs with a technical character and, on the other hand, computer programs as such. In other words, the mere fact that an invention is a computer program is not a sufficient reason to conclude that it has a technical character, when interpreting these legal provisions.
The technical character of computer programs, in view of these provisions, was found by the Board in the "further effects deriving from the execution (by the hardware) of the instructions given by the computer program", where these further effects have a technical character. An invention which brings about a technical effect may be considered to be an invention. A computer program must be considered to be an invention within the meaning of Art. 52(1) EPC if it produces a technical effect.
Elaborating more on the further technical effect, the Board held that "a computer program product may ... possess a technical character because it has the potential to cause a predetermined further technical effect." Therefore "computer programs products are not excluded from patentability under all circumstances".
To summarize, the Board held that:
"a computer program claimed by itself is not excluded from patentability if the program, when running on a computer or loaded into a computer, brings about, or is capable of bringing about, a technical effect which goes beyond the "normal" physical interactions between the program (software) and the computer (hardware) on which it is run".
The Board also took the view that
"it does not make any difference whether a computer program is claimed by itself or as a record on a carrier".
Remittal
The case was then remitted to the first instance, i.e. the Examining Division, for further prosecution, and "in particular for examination of whether the wording of the ... claims [avoided] exclusion from patentability under Article 52(2) and (3) EPC."
Opinion on the contribution approach
The Board also used the opportunity to state that "determining the technical contribution an invention achieves with respect to the prior art is ... more appropriate for the purpose of examining novelty and inventive step than for deciding on possible exclusion under Article 52(2) and (3)." This was later emphasized in decisions T 931/95 and T 258/03.
Later legal developments
As explained by the Enlarged Board of Appeal in its opinion G 3/08 of May 12, 2010, one particular view taken by the Board in decision T 1173/97 was not followed by later case law, in particular by later decision T 424/03. The Board in T 1173/97 took the view that it did not make any difference whether a computer program is claimed by itself or as a record on a carrier (in both cases a "further technical effect" would be required to comply with Article 52(2) and (3) EPC). This view however has been considered by the Enlarged Board of Appeal as contrary to the own premises of T 1173/97. The Board in T 424/03 (following and extending the reasoning of decision T 258/03) came to the conclusion that a claim to a computer program on a computer-readable medium necessarily avoids exclusion from patentability under Article 52(2) EPC, restricting the need of a "further technical effect" (to meet the provisions of Article 52(2) and (3) EPC) to computer programs claimed by themselves (i.e., not claimed on a computer-readable medium for instance).
See also
Software patents under the European Patent Convention
List of decisions of the EPO Boards of Appeal relating to Article 52(2) and (3) EPC
G 3/08, opinion issued by the Enlarged Board of Appeal following a referral by the President of the European Patent Office on the question of patentability of computer programs; the referral was eventually rejected as inadmissible
Notes and references
External links
Decision T 1173/97 on the "EPO boards of appeal decisions" section of the EPO web site
Software patent case law
European Patent Office case law
1998 in case law
1998 in Europe | Operating System (OS) | 1,150 |
OpenCable Application Platform
The OpenCable Application Platform, or OCAP, is an operating system layer designed for consumer electronics that connect to a cable television system; it is the Java-based middleware portion of the OpenCable platform. Unlike operating systems on a personal computer, the cable company controls what OCAP programs run on the consumer's machine. Designed by CableLabs for the cable networks of North America, OCAP programs are intended for interactive services such as eCommerce, online banking, Electronic program guides, and digital video recording. Cable companies have required OCAP as part of the Cablecard 2.0 specification, a proposal that is controversial and has not been approved by the Federal Communications Commission. Cable companies have stated that two-way communications by third party devices on their networks will require them to support OCAP. The Consumer Electronics Association and other groups argue OCAP is intended to block features that compete with cable company provided services and that consumers should be entitled to add, delete and otherwise control programs as on their personal computers.
On January 8, 2008 CableLabs announced the Tru2Way brand for the OpenCable platform, including OCAP as the application platform.
Technical overview
OCAP is the Java-based software/middleware portion of the OpenCable initiative. OCAP is based on the Globally Executable MHP (GEM) standard, and was defined by CableLabs. Because OCAP is based on GEM, it has a lot in common with the Multimedia Home Platform (MHP) standard defined by the DVB project.
At present two versions of the OCAP standard exist:
OCAP v1.0
OCAP v2.0
See also
Downloadable Conditional Access System (DCAS)
Embedded Java
Java Platform, Micro Edition
ARIB
Interactive digital cable ready
OEDN
References
External links
Sun Microsystems' Java TV
MHP official standards for interactive television and related interactive home entertainment.
MHP tutorials
MHP Knowledge Database
The OCAP/EBIF Developer Network
Cable television
Digital television
Digital cable
Operating system technology
Proprietary hardware | Operating System (OS) | 1,151 |
8-bit computing
In computer architecture, 8-bit integers or other data units are those that are 8 bits wide (1 octet). Also, 8-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers or data buses of that size. Memory addresses (and thus address buses) for 8-bit CPUs are generally wider than 8 bits, usually 16 bits; they could in theory be 8 bits, and in some situations 8-bit addresses are used alongside the predominantly 16-bit addressing. '8-bit' is also a generation of microcomputers in which 8-bit microprocessors were the norm.
The term '8-bit' is also applied to the character sets that could be used on computers with 8-bit bytes, the best known being various forms of extended ASCII, including the ISO/IEC 8859 series of national character sets especially Latin 1 for English and Western European languages.
The IBM System/360 introduced byte-addressable memory with 8-bit bytes, as opposed to bit-addressable or decimal digit-addressable or word-addressable memory, although its general-purpose registers were 32 bits wide, and addresses were contained in the lower 24 bits of a 32-bit word. Different models of System/360 had different internal data path widths; the IBM System/360 Model 30 (1965) implemented the 32-bit System/360 architecture, but had an 8-bit native path width, and performed 32-bit arithmetic 8 bits at a time.
The first widely adopted 8-bit microprocessor was the Intel 8080, being used in many hobbyist computers of the late 1970s and early 1980s, often running the CP/M operating system; it had 8-bit data words and 16-bit addresses. The Zilog Z80 (compatible with the 8080) and the Motorola 6800 were also used in similar computers. The Z80 and the MOS Technology 6502 8-bit CPUs were widely used in home computers and second- and third-generation game consoles of the 1970s and 1980s. Many 8-bit CPUs or microcontrollers are the basis of today's ubiquitous embedded systems.
Details
An 8-bit register can store 2^8 different values. The range of integer values that can be stored in 8 bits depends on the integer representation used. With the two most common representations, the range is 0 through 255 (2^8 − 1) for representation as an (unsigned) binary number, and −128 (−1 × 2^7) through 127 (2^7 − 1) for representation as two's complement.
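Both ranges can be illustrated by emulating 8-bit storage in Python (a sketch only; real 8-bit hardware performs the wrap-around implicitly):

# Emulate an 8-bit register by masking values to the low eight bits.
for value in (0, 127, 128, 255, 256, -1):
    unsigned = value & 0xFF                                   # unsigned: 0..255
    signed = unsigned - 256 if unsigned > 127 else unsigned   # two's complement: -128..127
    print(f"{value:5d} -> unsigned {unsigned:3d}, two's complement {signed:4d}")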
8-bit CPUs use an 8-bit data bus and can therefore access 8 bits of data in a single machine instruction. The address bus is typically a double octet (16 bits) wide, due to practical and economical considerations. This implies a direct address space of 64 KB (65,536 bytes) on most 8-bit processors.
Most home computers from the 8-bit era fully exploited the address space, such as the BBC Micro (Model B) with 32 KB of RAM plus 32 KB of ROM. Others, like the very popular Commodore 64, had a full 64 KB of RAM plus 20 KB of ROM, meaning that with 16-bit addressing not all of the RAM could be used by default (e.g. while the included BASIC language interpreter in ROM was mapped in) without exploiting bank switching, which allows the 64 KB (RAM) limit to be broken in some systems. Other computers had as little as 1 KB of RAM (plus 4 KB of ROM), such as the Sinclair ZX80 (while the later, very popular Sinclair ZX Spectrum had more memory), or even only 128 bytes of RAM (plus storage from a ROM cartridge), as in the early Atari 2600 game console, for which 8-bit addressing would have been enough for the RAM had it not also needed to cover the ROM. The Commodore 128, and other 8-bit systems still using 16-bit addressing, could use more than 64 KB (128 KB of RAM in its case), as could the BBC Master, expandable to 512 KB of RAM.
While 8-bit CPUs generally have 16-bit addressing, some architectures use both: in the MOS Technology 6502, for example, the zero page is used extensively, saving one byte in the instructions that access that page, while the 16-bit addressing instructions take two bytes for the address plus one for the opcode. Index registers are commonly 8-bit, as in the 6502 (while other "8-bit" CPUs, such as the Motorola 6800, had 16-bit index registers), so arrays addressed using indexed addressing instructions are at most 256 bytes without needing longer code; in effect, this amounts to 8-bit addressing within each individual array.
Notable 8-bit CPUs
The first commercial 8-bit processor was the Intel 8008 (1972) which was originally intended for the Datapoint 2200 intelligent terminal. Most competitors to Intel started off with such character oriented 8-bit microprocessors. Modernized variants of these 8-bit machines are still one of the most common types of processor in embedded systems.
Another notable 8-bit CPU is the MOS Technology 6502. It, and variants of it, were used in a number of personal computers, such as the Apple I and Apple II, the Atari 8-bit family, the BBC Micro, and the Commodore PET and Commodore VIC-20, and in a number of video game consoles, such as the Atari 2600 and the Nintendo Entertainment System.
Use for training, prototyping, and general hardware education
8-bit processors continue to be designed today for general education about computer hardware, as well as for hobbyists' interests. Such CPUs have been designed and implemented using 7400-series integrated circuits on a breadboard. Designing 8-bit CPUs and their respective assemblers is a common training exercise for engineering students, engineers, and hobbyists. FPGAs are also used for this purpose.
See also
Kenbak-1
References
Data unit | Operating System (OS) | 1,152 |
Background Intelligent Transfer Service
Background Intelligent Transfer Service (BITS) is a component of Microsoft Windows XP and later iterations of the operating systems, which facilitates asynchronous, prioritized, and throttled transfer of files between machines using idle network bandwidth. It is most commonly used by recent versions of Windows Update, Microsoft Update, Windows Server Update Services, and System Center Configuration Manager to deliver software updates to clients, Microsoft's anti-virus scanner Microsoft Security Essentials (a later version of Windows Defender) to fetch signature updates, and is also used by Microsoft's instant messaging products to transfer files. BITS is exposed through the Component Object Model (COM).
Technology
BITS uses idle bandwidth to transfer data. Normally, BITS transfers data in the background, i.e., BITS will only transfer data whenever there is bandwidth which is not being used by other applications. BITS also supports resuming transfers in case of disruptions.
BITS version 1.0 supports only downloads. From version 1.5, BITS supports both downloads and uploads. Uploads require the IIS web server, with BITS server extension, on the receiving side.
Transfers
BITS transfers files on behalf of requesting applications asynchronously, i.e., once an application requests the BITS service for a transfer, it will be free to do any other task, or even terminate. The transfer will continue in the background as long as the network connection is there and the job owner is logged in. BITS jobs do not transfer when the job owner is not signed in.
BITS suspends any ongoing transfer when the network connection is lost or the operating system is shut down. It resumes the transfer from where it left off when (the computer is turned on later and) the network connection is restored. BITS supports transfers over SMB, HTTP and HTTPS.
Bandwidth
BITS attempts to use only spare bandwidth. For example, when applications use 80% of the available bandwidth, BITS will use only the remaining 20%. BITS constantly monitors network traffic for any increase or decrease in network traffic and throttles its own transfers to ensure that other foreground applications (such as a web browser) get the bandwidth they need.
Note that BITS does not necessarily measure the actual bandwidth. BITS versions 3.0 and up will use Internet Gateway Device counters, if available, to more accurately calculate available bandwidth. Otherwise, BITS will use the speed as reported by the NIC to calculate bandwidth. This can lead to bandwidth calculation errors, for example when a fast network adapter (10 Mbit/s) is connected to the network via a slow link (56 kbit/s).
Jobs
BITS uses a queue to manage file transfers. A BITS session has to be started from an application by creating a Job. A job is a container, which has one or more files to transfer. A newly created job is empty. Files must be added, specifying both the source and destination URIs. While a download job can have any number of files, upload jobs can have only one. Properties can be set for individual files. Jobs inherit the security context of the application that creates them.
BITS provides API access to control jobs. A job can be programmatically started, stopped, paused, resumed, and queried for status. Before starting a job, a priority has to be set for it to specify when the job is processed relative to other jobs in the transfer queue. By default, all jobs are of Normal priority. Jobs can optionally be set to High, Low, or Foreground priority. Background transfers are optimized by BITS, which increases and decreases (or throttles) the rate of transfer based on the amount of idle network bandwidth that is available. If a network application begins to consume more bandwidth, BITS decreases its transfer rate to preserve the user's interactive experience, except for Foreground priority downloads.
Scheduling
BITS schedules each job to receive only a finite time slice, for which only that job is allowed to transfer, before it is temporarily paused to give another job a chance to transfer. Higher priority jobs get a higher chunk of time slice. BITS uses round-robin scheduling to process jobs in the same priority and to prevent a large transfer job from blocking smaller jobs.
When a job is newly created, it is automatically suspended (or paused). It has to be explicitly resumed to be activated. Resuming moves the job to the queued state. On its turn to transfer data, it first connects to the remote server and then starts transferring. After the job's time slice expires, the transfer is temporarily paused, and the job is moved back to the queued state. When the job gets another time slice, it has to connect again before it can transfer. When the job is complete, BITS transfers ownership of the job to the application that created it.
BITS includes a built-in mechanism for error handling and recovery attempts. Errors can be either fatal or transient; either moves a job to the respective state. A transient error is a temporary error that resolves itself after some time. For a transient error, BITS waits for some time and then retries. For fatal errors, BITS transfers control of the job to the creating application, with as much information regarding the error as it can provide.
Command-line interface tools
BITSAdmin command
Microsoft provides a BITS Administration Utility (BITSAdmin) command-line utility to manage BITS jobs. The utility is part of Windows Vista and later. It is also available as a part of the Windows XP Service Pack 2 Support Tools or Windows Server 2003 Service Pack 1 Support Tools.
Usage example:
C:\> bitsadmin /transfer myDownloadJob /download /priority normal https://example.com/file.zip C:\file.zip
PowerShell BitsTransfer
In Windows 7, the BITSAdmin utility is deprecated in favor of Windows PowerShell cmdlets. The BitsTransfer PowerShell module provides eight cmdlets with which to manage BITS jobs.
The following example is the equivalent of the BITSAdmin example above:
Start-BitsTransfer -Source "https://example.com/file.zip" -Destination "C:\file.zip" -DisplayName "myDownloadJob"
List of non-Microsoft applications that use BITS
AppSense – Uses BITS to install Packages on clients.
BITS Download Manager – A download manager for Windows that creates BITS Jobs.
BITSync – An open source utility that uses BITS to perform file synchronization on Server Message Block network shares.
Civilization V – Uses BITS to download mod packages.
Endless OS installer for Windows – Uses BITS to download OS images.
Eve Online – Uses BITS to download all the patches post-Apocrypha (March 10, 2009). It is also now used in the client repair tool.
Some Google services including Chrome, Gears, Pack, Flutter updater and YouTube Uploader used BITS.
Firefox – Uses BITS (since version 68) to download updates.
KBOX Systems Management Appliance – A systems management appliance that can use BITS to deliver files to Windows systems.
RSS Bandit – Uses BITS to download attachments in web feeds.
Oxygen media platform – Uses BITS to distribute Media Content and Software Updates.
SharpBITS – An open source download manager for Windows that handles BITS jobs.
WinBITS – An open source Downloader for Windows that downloads files by creating BITS Jobs.
Novell ZENworks Desktop Management – A systems management software that can use BITS to deliver application files to workstations.
Specops Deploy/App – A systems management software that (when available) uses BITS for delivering packages to the clients in the background.
See also
List of Microsoft Windows components
Protocols for file transfer
References
External links
Background Intelligent Transfer Service in Windows Server 2008
Fix Background Intelligent Transfer Service in Windows 10
BITS version history
bitsadmin | Microsoft Docs
Distributed data storage
Network file transfer protocols
Hypertext Transfer Protocol clients
Windows services
Windows administration | Operating System (OS) | 1,153 |
MAI Systems
MAI Systems Corporation, or simply MAI, was a United States-based computer company best known for its Basic/Four product and the customized computer systems that ran it. It was later known for its computer reservation systems.
The company formed in 1957 as a consulting firm, Management Assistance Inc. In the early 1960s they created a profitable niche leasing IBM mainframes and grew to have income in the millions. When the System/360 was announced, the company was left with hundreds of millions of dollars worth of suddenly outdated equipment, and by 1971 the company was almost insolvent. The company re-launched that year, starting several new subsidiaries. Among these were Basic/Four Corporation, which sold customized minicomputers running their version of Business Basic; Genesis One, which bought and sold obsolete equipment; and Sorbus, which serviced computer equipment. Genesis One was never very successful, and while Sorbus was modestly profitable, by 1975 two-thirds of MAI's income was from Basic/Four.
By the mid-1980s, the introduction of the IBM PC was shaking up the entire computer market and MAI was once again losing money. Asher Edelman began a proxy war to take over the company, and on achieving this in 1985, immediately began liquidation. Sorbus was sold to Bell Atlantic and Basic/Four was sold to Bennett S. LeBow. Basic/Four took over the MAI marque, becoming MAI Basic Four, Inc., with its product now running on the Unix-like UCOS system. The product was re-launched in 1990 as Open BASIC on the PC, and the company became MAI Systems at that time. Using the money LeBow organized during the leveraged buyout, the company purchased a number of other companies in an effort to diversify. Among these was Computerized Lodging Systems, which made booking systems for hotels.
The Business BASIC market disappeared, and the company was declared bankrupt in 1993. They emerged from Chapter 11 later that year, focused on the hotel software market, with the division now named Hotel Information Systems. In this form they continued into the 2000s, until being purchased by Appian Corporation in 2005.
History
Leasing business
Walter Oreamuno, an immigrant to the United States from Costa Rica, won a problem-solving contest run by IBM. Using the winnings, he formed Management Assistance Inc. in 1955 with fellow Costa Rican Jorge González-Martén, performing computer consulting in areas underserved by IBM, like Costa Rica and the Philippines, and then Europe and Canada.
In 1956, IBM entered a consent decree with the Department of Justice that forced them to sell their mainframe computers, not just lease them. Oreamuno and González-Martén saw an opportunity; they approached companies that had recently purchased an IBM system and offered to buy the system and lease it back to them at a rate lower than IBM's. This meant the customer did not have to use its own capital to buy the systems, and their first customers were mainly banks who could easily arrange the required financing.
IBM depreciated its systems very rapidly and this led to a large market for used machines at very low costs. Using the profits from their early leasing arrangements, the company began buying systems of their own. In 1961 the company went public, raising $300,000. The company began a rapid expansion and by 1966 had amassed about $200 million in systems, and its shares soared to $55 a share.
In 1965, IBM introduced the System/360. This rendered almost every other computer obsolete overnight. MAI had two-year contracts for their systems, and customers began cancelling them as they purchased new ones that worked with the 360. Oreamuno continued purchasing older systems to lease to customers who did not need the 360, but this proved unwise. MAI attempted a merger with Transamerica Corporation in 1967, but this fell apart. Oreamuno resigned as Chief Executive Officer and was replaced by Luther Schwalm, formerly of IBM.
Under Schwalm, the company stopped purchasing punch card and related systems that were now outdated, and began purchasing newer systems like hard disk drives. In spite of these corrective measures, in 1967 they had a $17 million write-down of their older hardware. The company's fortunes quickly soured; in 1970 they had $60 million in revenue but $140 million in debt with a total net worth of negative $28 million.
Reorganization and Basic/Four
In 1969, González-Martén went to the board of directors with a new proposal; for $6.5 million he proposed to develop a new minicomputer that would use computer terminals as its primary input and thereby leave behind the punch card-based workflows of the mainframe systems. Shortly after having made the proposal, González-Martén returned to Costa Rica. In 1970, MAI president Sol Gordon asked him to return, setting up a new division, Basic/Four in Santa Ana, California.
In 1971, chief financial officer Raymond Kurshan took over as president and González-Martén returned to Costa Rica. Kurshan reorganized the firm into three divisions: Basic/Four Corporation continued development of their new platform; Genesis One took over the existing leasing business with an eye to selling off the equipment; and Sorbus was a new service organization formed from the service side of MAI's leasing business. The new systems were introduced with four models in June 1972 at the Commodore Hotel in San Francisco. The key concept behind the platform was its use of Business Basic, which offered COBOL-like record handling in the BASIC computer language and multi-user account handling. The systems were an immediate success, and by 1975 sales had grown to $43 million, two-thirds of MAI's income.
Suddenly profitable again, the company began expanding their product line. In 1977, the company bought Wordstream Corporation and sold their word processing systems that ran on IBM terminals. They also introduced a number of applications written in Business Basic, including EASY, a reporting system, and Business Data statistics software. In 1979 they introduced the DataWord II, which operated both as a word processing system and a terminal. However, new entrants into the standalone word processing market, especially Wang, quickly rendered their products unprofitable and they exited that market in 1980.
By 1980, they announced the sale of their 10,000th system, but profitability was once again dropping significantly. To stem the loss of customers, in 1983 they introduced the MAI 8000, a supermini capable of supporting 96 simultaneous users. However, by this point the 1981 introduction of the IBM PC was gaining momentum, and the margins continued to contract. The company then began a diversification process, entering niche markets such as pharmaceutical firms, sewing-goods companies, and non-profit agencies. This was not successful, and for the year the Basic/Four division reported a loss of $10.2 million.
Breakup and the new MAI
In 1984, Asher Edelman purchased 12% of MAI's outstanding shares, and began a proxy war for control of the company. This led to him placing four of the ten seats on the board of directors. He gained outright control in August, causing Kurshan to resign his positions as chairman, president and CEO. Edelman immediately began liquidating the company. Sorbus was sold to a Bell Atlantic subsidiary and Basic/Four was purchased in a $100 million leveraged buyout by Bennett S. LeBow.
LeBow renamed the company as MAI Basic Four, Inc., now privately held. He immediately sold the Canadian division to Bell Atlantic for $23 million. LeBow brought in William Patton to turn around the fortunes of the rest of the company, which they did by focussing on their existing 27,000 strong customer base, and narrowing future sales to eight key markets where they were well represented. In September 1986, the company reported a $17 million profit on sales of $281 million, and the company was once again taken public.
In 1986 the company introduced the MAI 3000 midrange system, and in 1987, the expandable MAI 4000. Although the company represented only about 1% of the minicomputer market, these models were nevertheless successful and the next year was among the company's best, with $22.8 million profits from sales of $321 million. With this cash, the company re-purchased MAI Canada and portions of Sorbus, along with twenty-five smaller software firms aimed at specific industries. In 1988 they had their record year, with $24.5 million profits on $420 million in sales.
1989 downturn and reorganization as MAI Systems
Sales began to slip in the second half of 1988, and LeBow put his shares up for sale. No one expressed an interest, so instead, in November 1988 LeBow decided to use MAI as the basis for a takeover/merger of Prime Computer. By early 1989 the United States computer market had a sudden downturn and sales plummeted. In June, Patton resigned as president. The takeover attempt failed in June, having cost $25 million and generating considerable ill-will among MAI's customer base. In August 1989, LeBow-controlled Brooke Partners invested $55 million and became the largest shareholder. LeBow resigned his control positions, and Fred Anderson became the president and chief operating officer while William Weksel became the CEO and chair.
In April 1990 the company purchased Computerized Lodging Systems, which produced a series of software systems for the hotel industry. They also released Open BASIC, a version of Business Basic that ran on a variety of operating systems. The company name was changed to MAI Systems later that year. In 1991 the company began winding down its manufacturing and became a reseller of commercial off-the-shelf systems. The company continued to lose money, and reported a loss of $182 million for fiscal 1992.
In March 1993 a group of European banks took control of MAI's European operations, and the rest of the company entered Chapter 11. They emerged from Chapter 11 in 1993 as a much smaller company focussed mainly on their niche market software systems like hotel booking and food services (through the Sextant division). The company continued to retrench through the 1990s and the remaining hotel unit was sold to Appian Corporation in 2005.
See also
MAI Systems Corp. v. Peak Computer, Inc.
References
Citations
Bibliography
American companies established in 1957
Software companies of the United States
American companies disestablished in 2005
Leasing companies | Operating System (OS) | 1,154 |
On-board diagnostics
On-board diagnostics (OBD) is an automotive term referring to a vehicle's self-diagnostic and reporting capability. OBD systems give the vehicle owner or repair technician access to the status of the various vehicle sub-systems. The amount of diagnostic information available via OBD has varied widely since its introduction in the early 1980s versions of on-board vehicle computers. Early versions of OBD would simply illuminate a malfunction indicator light or "idiot light" if a problem was detected but would not provide any information as to the nature of the problem. Modern OBD implementations use a standardized digital communications port to provide real-time data in addition to a standardized series of diagnostic trouble codes, or DTCs, which allow a person to rapidly identify and remedy malfunctions within the vehicle.
History
1968: Volkswagen introduces the first on-board computer system, in their fuel-injected Type 3 models. This system is entirely analog with no diagnostic capabilities.
1975: Bosch and Bendix EFI systems are adopted by major automotive manufacturers in an effort to improve tail pipe emissions. These systems are also analog in nature, though some provide rudimentary diagnostic capability through factory tools, such as the Kent Moore J-25400, compatible with the Datsun 280Z, and the Cadillac Seville.
1980: General Motors introduces the first digital OBD system on their 1980 Eldorado and Seville models. A proprietary 5-pin ALDL interfaces with the Engine Control Module (ECM) to initiate a diagnostic request and provide a serial data stream. The protocol communicates at 160 baud with Pulse-width modulation (PWM) signaling and monitors all engine management functions. Real-time sensor data, component overrides, and Diagnostic Trouble Codes (DTCs) are also displayed through the electronic climate control system's digital readout when in diagnostic mode.
1982: RCA defines an analog STE/ICE vehicle diagnostic standard used in the CUCV, M60 tank and other military vehicles of the era for the US Army.
1986: An upgraded version of the ALDL protocol appears which communicates at 8192 baud with half-duplex UART signaling. This protocol is defined in GM XDE-5024B.
1988: The California Air Resources Board (CARB) requires that all new vehicles sold in California in 1988 and newer vehicles have some basic OBD capability. These requirements are generally referred to as "OBD-I", though this name is not applied until the introduction of OBD-II. The data link connector and its position are not standardized, nor is the data protocol. The Society of Automotive Engineers (SAE) recommends a standardized diagnostic connector and set of diagnostic test signals.
~1994: Motivated by a desire for a state-wide emissions testing program, the CARB issues the OBD-II specification and mandates that it be adopted for all cars sold in California starting in model year 1996 (see CCR Title 13 Section 1968.1 and 40 CFR Part 86 Section 86.094). The DTCs and connector suggested by the SAE are incorporated into this specification.
1996: The OBD-II specification is made mandatory for all cars sold in the United States.
2001: The European Union makes EOBD mandatory for all gasoline (petrol) vehicles sold in the European Union, starting in MY2001 (see European emission standards Directive 98/69/EC).
2004: The European Union makes EOBD mandatory for all diesel vehicles sold in the European Union
2006: All vehicles manufactured in Australia and New Zealand are required to be OBD-II compliant after January 1, 2006.
2008: All cars sold in the United States are required to use the ISO 15765-4 signaling standard (a variant of the Controller Area Network (CAN) bus).
2008: Certain light vehicles in China are required by the Environmental Protection Administration Office to implement OBD (standard GB18352) by July 1, 2008. Some regional exemptions may apply.
2010: HDOBD (heavy duty) specification is made mandatory for selected commercial (non-passenger car) engines sold in the United States.
Standard interfaces
ALDL
GM's ALDL (Assembly Line Diagnostic Link) is sometimes referred as a predecessor to, or a manufacturer's proprietary version of, an OBD-I diagnostic. This interface was made in different varieties and changed with power train control modules (aka PCM, ECM, ECU). Different versions had slight differences in pin-outs and baud rates. Earlier versions used a 160 baud rate, while later versions went up to 8192 baud and used bi-directional communications to the PCM.
OBD-I
The regulatory intent of OBD-I was to encourage auto manufacturers to design reliable emission control systems that remain effective for the vehicle's "useful life". The hope was that by forcing annual emissions testing for California, and denying registration to vehicles that did not pass, drivers would tend to purchase vehicles that would more reliably pass the test. OBD-I was largely unsuccessful, as the means of reporting emissions-specific diagnostic information was not standardized. Technical difficulties with obtaining standardized and reliable emissions information from all vehicles led to an inability to implement the annual testing program effectively.
The Diagnostic Trouble Codes (DTCs) of OBD-I vehicles can usually be found without an expensive scan tool. Each manufacturer used their own Diagnostic Link Connector (DLC), DLC location, DTC definitions, and procedure to read the DTCs from the vehicle. DTCs from OBD-I cars are often read through the blinking patterns of the 'Check Engine Light' (CEL) or 'Service Engine Soon' (SES) light. By connecting certain pins of the diagnostic connector, the 'Check Engine' light will blink out a two-digit number that corresponds to a specific error condition. The DTCs of some OBD-I cars are interpreted in different ways, however. Cadillac (gasoline) fuel-injected vehicles are equipped with actual on-board diagnostics, providing trouble codes, actuator tests and sensor data through the new digital Electronic Climate Control display.
Holding down 'Off' and 'Warmer' for several seconds activates the diagnostic mode without the need for an external scan tool. Some Honda engine computers are equipped with LEDs that light up in a specific pattern to indicate the DTC. General Motors, some 1989-1995 Ford vehicles (DCL), and some 1989-1995 Toyota/Lexus vehicles have a live sensor data stream available; however, many other OBD-I equipped vehicles do not. OBD-I vehicles have fewer DTCs available than OBD-II-equipped vehicles.
OBD-1.5
OBD 1.5 refers to a partial implementation of OBD-II which General Motors used on some vehicles in 1994, 1995, & 1996. (GM did not use the term OBD 1.5 in the documentation for these vehicles — they simply have an OBD and an OBD-II section in the service manual.)
For example, the 94–95 Corvettes have one post-catalyst oxygen sensor (although they have two catalytic converters), and have a subset of the OBD-II codes implemented. For a 1994 Corvette the implemented OBD-II codes are P0116-P0118, P0131-P0135, P0151-P0155, P0158, P0160-P0161, P0171-P0175, P0420, P1114-P1115, P1133, P1153 and P1158.
This hybrid system was present on the GM H-body cars in 94–95, W-body cars (Buick Regal, Chevrolet Lumina ('95 only), Chevrolet Monte Carlo ('95 only), Pontiac Grand Prix, Oldsmobile Cutlass Supreme) in 94–95, L-body (Chevrolet Beretta/Corsica) in 94–95, Y-body (Chevrolet Corvette) in 94–95, on the F-body (Chevrolet Camaro and Pontiac Firebird) in 95 and on the J-Body (Chevrolet Cavalier and Pontiac Sunfire) and N-Body (Buick Skylark, Oldsmobile Achieva, Pontiac Grand Am) in 95 and 96 and also on '94-'95 Saab vehicles with the naturally aspirated 2.3.
The pinout for the ALDL connection on these cars is as follows:
For ALDL connections, pin 9 is the data stream, pins 4 and 5 are ground, and pin 16 is battery voltage.
An OBD 1.5 compatible scan tool is required to read codes generated by OBD 1.5.
Additional vehicle-specific diagnostic and control circuits are also available on this connector. For instance, on the Corvette there are interfaces for the Class 2 serial data stream from the PCM, the CCM diagnostic terminal, the radio data stream, the airbag system, the selective ride control system, the low tire pressure warning system, and the passive keyless entry system.
OBD 1.5 was also used in the Ford Scorpio from 1995.
OBD-II
OBD-II is an improvement over OBD-I in both capability and standardization. The OBD-II standard specifies the type of diagnostic connector and its pinout, the electrical signalling protocols available, and the messaging format. It also provides a candidate list of vehicle parameters to monitor along with how to encode the data for each. There is a pin in the connector that provides power for the scan tool from the vehicle battery, which eliminates the need to connect a scan tool to a power source separately. However, some technicians might still connect the scan tool to an auxiliary power source to protect data in the unusual event that a vehicle experiences a loss of electrical power due to a malfunction. Finally, the OBD-II standard provides an extensible list of DTCs. As a result of this standardization, a single device can query the on-board computer(s) in any vehicle. OBD-II came in two models: OBD-IIA and OBD-IIB. OBD-II standardization was prompted by emissions requirements, and though only emission-related codes and data are required to be transmitted through it, most manufacturers have made the OBD-II Data Link Connector the only one in the vehicle through which all systems are diagnosed and programmed. OBD-II Diagnostic Trouble Codes are 4-digit, preceded by a letter: P for powertrain (engine and transmission), B for body, C for chassis, and U for network.
OBD-II diagnostic connector
The OBD-II specification provides for a standardized hardware interface—the female 16-pin (2x8) J1962 connector. Unlike the OBD-I connector, which was sometimes found under the hood of the vehicle, the OBD-II connector is required to be within 2 feet (0.61 m) of the steering wheel (unless an exemption is applied for by the manufacturer, in which case it is still somewhere within reach of the driver).
SAE J1962 defines the pinout of the connector as:
The assignment of unspecified pins is left to the vehicle manufacturer's discretion.
EOBD
The European on-board diagnostics (EOBD) regulations are the European equivalent of OBD-II, and apply to all passenger cars of category M1 (with no more than 8 passenger seats and a Gross Vehicle Weight rating of 2500 kg or less) first registered within EU member states since January 1, 2001 for petrol (gasoline) engined cars and since January 1, 2004 for diesel engined cars.
For newly introduced models, the regulation dates applied a year earlier - January 1, 2000 for petrol and January 1, 2003 for diesel.
For passenger cars with a Gross Vehicle Weight rating of greater than 2500 kg and for light commercial vehicles, the regulation dates applied from January 1, 2002 for petrol models, and January 1, 2007 for diesel models.
The technical implementation of EOBD is essentially the same as OBD-II, with the same SAE J1962 diagnostic link connector and signal protocols being used.
With Euro V and Euro VI emission standards, EOBD emission thresholds are lower than previous Euro III and IV.
EOBD fault codes
Each of the EOBD fault codes consists of five characters: a letter, followed by four numbers. The letter refers to the system being interrogated, e.g. Pxxxx refers to the powertrain system. The next character is a 0 if the code complies with the EOBD standard, so a generic code looks like P0xxx.
The next character would refer to the sub system.
P00xx - Fuel and Air Metering and Auxiliary Emission Controls.
P01xx - Fuel and Air Metering.
P02xx - Fuel and Air Metering (Injector Circuit).
P03xx - Ignition System or Misfire.
P04xx - Auxiliary Emissions Controls.
P05xx - Vehicle Speed Controls and Idle Control System.
P06xx - Computer Output Circuit.
P07xx - Transmission.
P08xx - Transmission.
The following two characters would refer to the individual fault within each subsystem.
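A minimal PowerShell sketch of this decoding, splitting a sample code such as P0301 into the fields described above (the mapping of the subsystem digit to the list above is left out for brevity):

function Split-Dtc {
    param([string]$Code)                           # e.g. 'P0301'
    $system = switch ($Code.Substring(0,1)) {
        'P' { 'Powertrain' }
        'B' { 'Body' }
        'C' { 'Chassis' }
        'U' { 'Network' }
    }
    [pscustomobject]@{
        System    = $system
        Generic   = ($Code.Substring(1,1) -eq '0') # 0 = complies with the EOBD standard
        Subsystem = $Code.Substring(2,1)           # e.g. 3 = ignition system or misfire
        Fault     = $Code.Substring(3,2)           # individual fault within the subsystem
    }
}
Split-Dtc 'P0301'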
EOBD2
The term "EOBD2" is marketing speak used by some vehicle manufacturers to refer to manufacturer-specific features that are not actually part of the OBD or EOBD standard. In this case "E" stands for Enhanced.
JOBD
JOBD is a version of OBD-II for vehicles sold in Japan.
ADR 79/01 & 79/02 (Australian OBD standard)
The ADR 79/01 (Vehicle Standard (Australian Design Rule 79/01 – Emission Control for Light Vehicles) 2005) standard is the Australian equivalent of OBD-II.
It applies to all vehicles of category M1 and N1 with a Gross Vehicle Weight rating of 3500 kg or less, registered from new within Australia and produced since January 1, 2006 for petrol (gasoline) engined cars and since January 1, 2007 for diesel engined cars.
For newly introduced models, the regulation dates applied a year earlier - January 1, 2005 for petrol and January 1, 2006 for diesel.
The ADR 79/01 standard was supplemented by the ADR 79/02 standard which imposed tighter emissions restrictions, applicable to all vehicles of class M1 and N1 with a Gross Vehicle Weight rating of 3500 kg or less, from July 1, 2008 for new models, July 1, 2010 for all models.
The technical implementation of this standard is essentially the same as OBD-II, with the same SAE J1962 diagnostic link connector and signal protocols being used.
OBD-II signal protocols
There are five signaling protocols that are permitted with the OBD-II interface. Most vehicles implement only one of the protocols. It is often possible to deduce the protocol used based on which pins are present on the J1962 connector:
SAE J1850 PWM (pulse-width modulation — 41.6 kbit/s, standard of the Ford Motor Company)
pin 2: Bus+
pin 10: Bus–
High voltage is +5 V
Message length is restricted to 12 bytes, including CRC
Employs a multi-master arbitration scheme called 'Carrier Sense Multiple Access with Non-Destructive Arbitration' (CSMA/NDA)
SAE J1850 VPW (variable pulse width — 10.4/41.6 kbit/s, standard of General Motors)
pin 2: Bus+
Bus idles low
High voltage is +7 V
Decision point is +3.5 V
Message length is restricted to 12 bytes, including CRC
Employs CSMA/NDA
ISO 9141-2. This protocol has an asynchronous serial data rate of 10.4 kbps. It is somewhat similar to RS-232; however, the signal levels are different, and communications happen on a single, bidirectional line without additional handshake signals. ISO 9141-2 is primarily used in Chrysler, European, and Asian vehicles.
pin 7: K-line
pin 15: L-line (optional)
UART signaling
K-line idles high, with a 510 ohm resistor to Vbatt
The active/dominant state is driven low with an open-collector driver.
Maximum message length is 260 bytes; the data field holds at most 255 bytes.
ISO 14230 KWP2000 (Keyword Protocol 2000)
pin 7: K-line
pin 15: L-line (optional)
Physical layer identical to ISO 9141-2
Data rate 1.2 to 10.4 kBaud
Message may contain up to 255 bytes in the data field
ISO 15765 CAN (250 kbit/s or 500 kbit/s). The CAN protocol was developed by Bosch for automotive and industrial control. Unlike other OBD protocols, variants are widely used outside of the automotive industry. While it did not meet the OBD-II requirements for U.S. vehicles prior to 2003, as of 2008 all vehicles sold in the US are required to implement CAN as one of their signaling protocols.
pin 6: CAN High
pin 14: CAN Low
All OBD-II pinouts use the same connector, but different pins are used with the exception of pin 4 (battery ground) and pin 16 (battery positive).
OBD-II diagnostic data available
OBD-II provides access to data from the engine control unit (ECU) and offers a valuable source of information when troubleshooting problems inside a vehicle. The SAE J1979 standard defines a method for requesting various diagnostic data and a list of standard parameters that might be available from the ECU. The various parameters that are available are addressed by "parameter identification numbers" or PIDs which are defined in J1979. For a list of basic PIDs, their definitions, and the formula to convert raw OBD-II output to meaningful diagnostic units, see OBD-II PIDs. Manufacturers are not required to implement all PIDs listed in J1979 and they are allowed to include proprietary PIDs that are not listed. The PID request and data retrieval system gives access to real time performance data as well as flagged DTCs. For a list of generic OBD-II DTCs suggested by the SAE, see Table of OBD-II Codes. Individual manufacturers often enhance the OBD-II code set with additional proprietary DTCs.
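For instance, the generic PID $0C (engine RPM) returns two data bytes, A and B, that decode as (256 × A + B) / 4. A minimal PowerShell sketch of that conversion, using a made-up response frame:

# Hypothetical raw reply to a service $01, PID $0C request
$bytes = '41 0C 1A F8' -split ' '
$A = [Convert]::ToInt32($bytes[2], 16)    # first data byte  (0x1A = 26)
$B = [Convert]::ToInt32($bytes[3], 16)    # second data byte (0xF8 = 248)
$rpm = (256 * $A + $B) / 4
"$rpm RPM"                                # 1726 RPM for this sample frame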
Mode of operation/OBD services
Here is a basic introduction to the OBD communication protocol according to ISO 15031. In SAE J1979 these "modes" were renamed to "services", starting in 2003.
Service / Mode $01 is used to identify what powertrain information is available to the scan tool.
Service / Mode $02 displays Freeze Frame data.
Service / Mode $03 lists the emission-related "confirmed" diagnostic trouble codes stored. It displays exact numeric, 4 digit codes identifying the faults.
Service / Mode $04 is used to clear emission-related diagnostic information. This includes clearing the stored pending/confirmed DTCs and Freeze Frame data.
Service / Mode $05 displays the oxygen sensor monitor screen and the test results gathered about the oxygen sensor. The following test values are available for diagnostics:
$01 Rich-to-Lean O2 sensor threshold voltage
$02 Lean-to-Rich O2 sensor threshold voltage
$03 Low sensor voltage threshold for switch time measurement
$04 High sensor voltage threshold for switch time measurement
$05 Rich-to-Lean switch time in ms
$06 Lean-to-Rich switch time in ms
$07 Minimum voltage for test
$08 Maximum voltage for test
$09 Time between voltage transitions in ms
Service / Mode $06 is a Request for On-Board Monitoring Test Results for Continuously and Non-Continuously Monitored System. There are typically a minimum value, a maximum value, and a current value for each non-continuous monitor.
Service / Mode $07 is a Request for emission-related diagnostic trouble codes detected during current or last completed driving cycle. It enables the external test equipment to obtain "pending" diagnostic trouble codes detected during current or last completed driving cycle for emission-related components/systems. This is used by service technicians after a vehicle repair, and after clearing diagnostic information to see test results after a single driving cycle to determine if the repair has fixed the problem.
Service / Mode $08 could enable the off-board test device to control the operation of an on-board system, test, or component.
Service / Mode $09 is used to retrieve vehicle information. Among others, the following information is available:
VIN (Vehicle Identification Number): Vehicle ID
CALID (Calibration Identification): ID for the software installed on the ECU
CVN (Calibration Verification Number): Number used to verify the integrity of the vehicle software. The manufacturer is responsible for determining the method of calculating CVN(s), e.g. using checksum.
In-use performance counters
Gasoline engine : Catalyst, Primary oxygen sensor, Evaporating system, EGR system, VVT system, Secondary air system, and Secondary oxygen sensor
Diesel engine : NMHC catalyst, NOx reduction catalyst, NOx absorber Particulate matter filter, Exhaust gas sensor, EGR system, VVT system, Boost pressure control, Fuel system.
Service / Mode $0A lists the emission-related "permanent" diagnostic trouble codes stored. As per CARB, any diagnostic trouble code that is commanding the MIL on and is stored in non-volatile memory shall be logged as a permanent fault code.
See OBD-II PIDs for an extensive list of this information.
Applications
Various tools are available that plug into the OBD connector to access OBD functions. These range from simple generic consumer level tools to highly sophisticated OEM dealership tools to vehicle telematic devices.
Hand-held scan tools
A range of rugged hand-held scan tools is available.
Simple fault code readers/reset tools are mostly aimed at the consumer level.
Professional hand-held scan tools may possess more advanced functions
Access more advanced diagnostics
Set manufacturer- or vehicle-specific ECU parameters
Access and control other control units, such as air bag or ABS
Real-time monitoring or graphing of engine parameters to facilitate diagnosis or tuning
Mobile device-based tools and analysis
Mobile device applications allow mobile devices such as cell phones and tablets to display and manipulate the OBD-II data accessed via USB adaptor cables or Bluetooth adapters plugged into the car's OBD II connector. Newer devices on the market are equipped with GPS sensors and the ability to transmit vehicle location and diagnostics data over a cellular network. Modern OBD-II devices can therefore be used, for example, to locate vehicles and monitor driving behavior in addition to reading Diagnostic Trouble Codes (DTCs). Even more advanced devices allow users to reset engine DTCs, effectively turning off engine lights in the dashboard; however, resetting the codes does not address the underlying issues and can, in the worst case, even lead to engine damage where the source issue is serious and left unattended for long periods of time.
OBD2 Software
An OBD2 software package, when installed on a computer (Windows, Mac, or Linux), can help diagnose the on-board system, read and erase DTCs, turn off the MIL, show real-time data, and measure vehicle fuel economy.
To use OBD2 software, one needs a Bluetooth or Wi-Fi OBD2 adapter plugged into the OBD2 port so that the vehicle can connect with the computer where the software is installed.
PC-based scan tools and analysis platforms
PC-based OBD analysis tools convert the OBD-II signals to a serial data standard (USB or serial port) used by PCs or Macs. The software then decodes the received data to a visual display. Many popular interfaces are based on the ELM327 or STN OBD Interpreter ICs, both of which read all five generic OBD-II protocols. Some adapters now use the J2534 API, allowing them to access OBD-II protocols for both cars and trucks.
In addition to the functions of a hand-held scan tool, the PC-based tools generally offer:
Large storage capacity for data logging and other functions
Higher resolution screen than handheld tools
The ability to use multiple software programs adding flexibility
The identification and clearing of fault codes
Data shown by intuitive graphs and charts
The extent that a PC tool may access manufacturer or vehicle-specific ECU diagnostics varies between software products as it does between hand-held scanners.
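As a rough illustration of how such an interface is driven, the following Windows PowerShell sketch talks to an ELM327-style adapter over a serial port; the COM port name, baud rate and fixed delays are assumptions, and real software would parse the responses and handle errors properly:

$port = New-Object System.IO.Ports.SerialPort 'COM3', 38400
$port.NewLine     = "`r"        # ELM327 commands are terminated by a carriage return
$port.ReadTimeout = 2000
$port.Open()
$port.WriteLine('ATZ')          # reset the adapter
Start-Sleep -Seconds 1
$port.WriteLine('ATE0')         # disable command echo
Start-Sleep -Milliseconds 300
$port.WriteLine('010C')         # service 01, PID 0C: engine RPM
Start-Sleep -Milliseconds 300
$port.ReadExisting()            # raw reply, e.g. '41 0C 1A F8 >'
$port.Close()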
Data loggers
Data loggers are designed to capture vehicle data while the vehicle is in normal operation, for later analysis.
Data logging uses include:
Engine and vehicle monitoring under normal operation, for the purposes of diagnosis or tuning.
Some US auto insurance companies offer reduced premiums if OBD-II vehicle data loggers or cameras are installed and the driver's behaviour meets requirements. This is a form of auto insurance risk selection.
Monitoring of driver behaviour by fleet vehicle operators.
Analysis of vehicle black box data may be performed on a periodic basis, automatically transmitted wirelessly to a third party or retrieved for forensic analysis after an event such as an accident, traffic infringement or mechanical fault.
Emission testing
In the United States, many states now use OBD-II testing instead of tailpipe testing in OBD-II compliant vehicles (1996 and newer). Since OBD-II stores trouble codes for emissions equipment, the testing computer can query the vehicle's onboard computer and verify there are no emission related trouble codes and that the vehicle is in compliance with emission standards for the model year it was manufactured.
In the Netherlands, 2006 and later vehicles get a yearly EOBD emission check.
Driver's supplementary vehicle instrumentation
Driver's supplementary vehicle instrumentation is instrumentation installed in a vehicle in addition to that provided by the vehicle manufacturer and intended for display to the driver during normal operation. This is opposed to scanners used primarily for active fault diagnosis, tuning, or hidden data logging.
Auto enthusiasts have traditionally installed additional gauges such as manifold vacuum, battery current etc. The OBD standard interface has enabled a new generation of enthusiast instrumentation accessing the full range of vehicle data used for diagnostics, and derived data such as instantaneous fuel economy.
Instrumentation may take the form of dedicated trip computers, carputer or interfaces to PDAs, smartphones, or a Garmin navigation unit.
As a carputer is essentially a PC, the same software could be loaded as for PC-based scan tools and vice versa, so the distinction is only in the reason for use of the software.
These enthusiast systems may also include some functionality similar to the other scan tools.
Vehicle telematics
OBD II information is commonly used by vehicle telematics devices that perform fleet tracking, monitor fuel efficiency, prevent unsafe driving, as well as for remote diagnostics and by Pay-As-You-Drive insurance.
Although originally not intended for the above purposes, commonly supported OBD II data such as vehicle speed, RPM, and fuel level allow GPS-based fleet tracking devices to monitor vehicle idling times, speeding, and over-revving. By monitoring OBD II DTCs a company can know immediately if one of its vehicles has an engine problem and by interpreting the code the nature of the problem. It can be used to detect reckless driving in real time based on the sensor data provided through the OBD port. This detection is done by adding a complex events processor (CEP) to the backend and on the client's interface. OBD II is also monitored to block mobile phones when driving and to record trip data for insurance purposes.
OBD-II diagnostic trouble codes
OBD-II diagnostic trouble codes (DTCs) contain 1 letter and 4 numbers, and are divided into the following categories:
B – Body (includes air conditioning and airbag) (1164 codes)
C – Chassis (includes ABS) (486 codes)
P – Powertrain (engine and transmission) (1688 codes)
U – Network (wiring bus) (299 codes)
Standards documents
SAE standards documents on OBD-II
J1962 – Defines the physical connector used for the OBD-II interface.
J1850 – Defines a serial data protocol. There are 2 variants: 10.4 kbit/s (single wire, VPW) and 41.6 kbit/s (2 wire, PWM). Mainly used by US manufacturers, also known as PCI (Chrysler, 10.4K), Class 2 (GM, 10.4K), and SCP (Ford, 41.6K)
J1978 – Defines minimal operating standards for OBD-II scan tools
J1979 – Defines standards for diagnostic test modes
J2012 – Defines standards trouble codes and definitions.
J2178-1 – Defines standards for network message header formats and physical address assignments
J2178-2 – Gives data parameter definitions
J2178-3 – Defines standards for network message frame IDs for single byte headers
J2178-4 – Defines standards for network messages with three-byte headers
J2284-3 – Defines 500K CAN physical and data link layer
J2411 – Describes the GMLAN (Single-Wire CAN) protocol, used in newer GM vehicles. Often accessible on the OBD connector as PIN 1 on newer GM vehicles.
SAE standards documents on HD (Heavy Duty) OBD
J1939 – Defines a data protocol for heavy duty commercial vehicles
ISO standards
ISO 9141: Road vehicles – Diagnostic systems. International Organization for Standardization, 1989.
Part 1: Requirements for interchange of digital information
Part 2: CARB requirements for interchange of digital information
Part 3: Verification of the communication between vehicle and OBD II scan tool
ISO 11898: Road vehicles – Controller area network (CAN). International Organization for Standardization, 2003.
Part 1: Data link layer and physical signalling
Part 2: High-speed medium access unit
Part 3: Low-speed, fault-tolerant, medium-dependent interface
Part 4: Time-triggered communication
ISO 14230: Road vehicles – Diagnostic systems – Keyword Protocol 2000, International Organization for Standardization, 1999.
Part 1: Physical layer
Part 2: Data link layer
Part 3: Application layer
Part 4: Requirements for emission-related systems
ISO 15031: Communication between vehicle and external equipment for emissions-related diagnostics, International Organization for Standardization, 2010.
Part 1: General information and use case definition
Part 2: Guidance on terms, definitions, abbreviations and acronyms
Part 3: Diagnostic connector and related electrical circuits, specification and use
Part 4: External test equipment
Part 5: Emissions-related diagnostic services
Part 6: Diagnostic trouble code definitions
Part 7: Data link security
ISO 15765: Road vehicles – Diagnostics on Controller Area Networks (CAN). International Organization for Standardization, 2004.
Part 1: General information
Part 2: Network layer services ISO 15765-2
Part 3: Implementation of unified diagnostic services (UDS on CAN)
Part 4: Requirements for emissions-related systems
Security issues
Researchers at the University of Washington and University of California examined the security around OBD, and found that they were able to gain control over many vehicle components via the interface. Furthermore, they were able to upload new firmware into the engine control units. Their conclusion is that vehicle embedded systems are not designed with security in mind.
There have been reports of thieves using specialist OBD reprogramming devices to enable them to steal cars without the use of a key. The primary causes of this vulnerability lie in the tendency for vehicle manufacturers to extend the bus for purposes other than those for which it was designed, and the lack of authentication and authorization in the OBD specifications, which instead rely largely on security through obscurity.
See also
OBD-II PIDs ("Parameter IDs")
Unified Diagnostic Services
Engine control unit
Immobiliser
References
Notes
Birnbaum, Ralph and Truglia, Jerry. Getting to Know OBD II. New York, 2000. .
SAE International. On-Board Diagnostics for Light and Medium Duty Vehicles Standards Manual. Pennsylvania, 2003. .
External links
Directive 98/69/EC of the European Parliament and of the Council of 13 October 1998.
National OBD Clearing House Center for Automotive Science and Technology at Weber State University
OBD-II Codes Definition OBD-II codes definition, description and repair information.
OBD2 Codes Guides OBD2 trouble codes meaning, fixes, lookup, and full list for free download
United States Environmental Protection Agency OBD information for repair technicians, vehicle owners, and manufacturers
OBD2 Vehicle Plug Pinouts including compatibility lists Manufacturer Specific OBD-II diagnostics pinouts and compatibility information.
Automotive technologies
Industrial computing
Vehicle security systems | Operating System (OS) | 1,155 |
File system API
A file system API is an application programming interface through which a utility or user program requests services of a file system. An operating system may provide abstractions for accessing different file systems transparently.
Some file system APIs may also include interfaces for maintenance operations, such as creating or initializing a file system, verifying the file system for integrity, and defragmentation.
Each operating system includes the APIs needed for the file systems it supports. Microsoft Windows has file system APIs for NTFS and several FAT file systems. Linux systems can include APIs for ext2, ext3, ReiserFS, and Btrfs to name a few.
History
Some early operating systems were capable of handling only tape and disk file systems. These provided the most basic of interfaces with:
Write, read and position
More coordination such as device allocation and deallocation required the addition of:
Open and close
As file systems provided more services, more interfaces were defined:
Metadata management
File system maintenance
As additional file system types, hierarchy structure and supported media increased, additional features needed some specialized functions:
Directory management
Data structure management
Record management
Non-data operations
Multi-user systems required APIs for:
Sharing
Restricting access
Encryption
API overviews
Write, read and position
Writing user data to a file system is provided for use directly by the user program or the run-time library. The run-time library for some programming languages may provide type conversion, formatting and blocking. Some file systems provide identification of records by key and may include re-writing an existing record. This operation is sometimes called PUT, or PUTX if the record already exists.
Reading user data, sometimes called GET, may include a direction (forward or reverse) or, in the case of a keyed file system, a specific key. As with writing, run-time libraries may intercede for the user program.
Positioning includes adjusting the location of the next record. This may include skipping forward or reverse as well as positioning to the beginning or end of the file.
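A minimal PowerShell sketch of these primitives using the .NET FileStream API; the path is illustrative, and the open and close calls discussed in the next section appear only as necessary scaffolding:

$fs = [System.IO.File]::Open('C:\temp\demo.dat', 'OpenOrCreate', 'ReadWrite')
$data = [System.Text.Encoding]::ASCII.GetBytes('record one')
$fs.Write($data, 0, $data.Length)                    # write user data at the current position
$null = $fs.Seek(0, [System.IO.SeekOrigin]::Begin)   # reposition to the beginning of the file
$buffer = New-Object byte[] $data.Length
$null = $fs.Read($buffer, 0, $buffer.Length)         # read the data back
[System.Text.Encoding]::ASCII.GetString($buffer)     # -> 'record one'
$fs.Close()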
Open and close
The open API may be explicitly requested or implicitly invoked upon the issuance of the first operation by a process on an object. It may cause the mounting of removable media, establishing a connection to another host and validating the location and accessibility of the object. It updates system structures to indicate that the object is in use.
Usual requirements for requesting access to a file system object include:
The object which is to be accessed (file, directory, media and location)
The intended type of operations to be performed after the open (reads, updates, deletions)
Additional information may be necessary, for example
a password
a declaration that other processes may access the same object while the opening process is using the object (sharing). This may depend on the intent of the other process. In contrast, a declaration that no other process may access the object regardless of the other processes' intent (exclusive use).
These are requested via a programming language library which may provide coordination among modules in the process in addition to forwarding the request to the file system.
It must be expected that something may go wrong during the processing of the open.
The object or intent may be improperly specified (the name may include an unacceptable character or the intent is unrecognized).
The process may be prohibited from accessing the object (it may be only accessible by a group or specific user).
The file system may be unable to create or update structures required to coordinate activities among users.
In the case of a new (or replacement) object, there may not be sufficient capacity on the media.
Depending on the programming language, additional specifications in the open may establish the modules to handle these conditions. Some libraries specify a library module to the file system permitting analysis should the opening program be unable to perform any meaningful action as a result of a failure. For example, if the failure is on the attempt to open the necessary input file, the only action may be to report the failure and abort the program. Some languages simply return a code indicating the type of failure which always must be checked by the program, which decides what to report and if it can continue.
Close may cause dismounting or ejecting removable media and updating library and file system structures to indicate that the object is no longer in use.
The minimal specification to the close references the object. Additionally, some file systems provide specifying a disposition of the object which may indicate the object is to be discarded and no longer be part of the file system.
Similar to the open, it must be expected that something may go wrong.
The specification of the object may be incorrect.
There may not be sufficient capacity on the media to save any data being buffered or to output a structure indicating that the object was successfully updated.
A device error may occur on the media where the object is stored while writing buffered data, the completion structure or updating meta data related to the object (for example last access time).
A specification to release the object may be inconsistent with other processes still using the object.
Considerations for handling a failure are similar to those of the open.
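A minimal PowerShell illustration of such handling, in which specific open failures are reported and the close is guaranteed by a finally block; the path and messages are illustrative:

$fs = $null
try {
    $fs = [System.IO.File]::Open('C:\data\input.dat', 'Open', 'Read')
    # ... process the file ...
}
catch [System.IO.FileNotFoundException] {
    Write-Error 'The object to be accessed was improperly specified or does not exist.'
}
catch [System.UnauthorizedAccessException] {
    Write-Error 'The process is prohibited from accessing the object.'
}
finally {
    if ($fs) { $fs.Close() }    # release the object even if processing failed
}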
Metadata management
Information about the data in a file is called metadata.
Some of the metadata is maintained by the file system, for example last-modification date (and various other dates depending on the file system),
location of the beginning of the file, the size of the file and if the file system backup utility has saved the current version of the files. These items cannot usually be altered by a user program.
Additional metadata supported by some file systems may include the owner of the file, the group to which the file belongs, as well as permissions and/or access control (i.e. what access and updates various users or groups may perform), and whether the file is normally visible when the directory is listed. These items are usually modifiable by file system utilities which may be executed by the owner.
Some applications store more metadata. For images the metadata may include the camera model and settings used to take the photo. For audio files, the metadata may include the album, the artist who made the recording, and comments about the recording, which may be specific to a particular copy of the file (i.e. different copies of the same recording may have different comments as updated by the owner of the file). Documents may include items like checked-by, approved-by, etc.
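For example, Windows PowerShell exposes some of this metadata directly; in the sketch below the first items are maintained by the file system while the last ones may be altered by a suitably privileged user (the path is illustrative):

$item = Get-Item 'C:\temp\demo.dat'
$item.LastWriteTime                      # last-modification date, maintained by the file system
$item.Length                             # size of the file in bytes
$item.Attributes                         # e.g. Archive, Hidden, ReadOnly
$item.Attributes = 'ReadOnly'            # metadata the owner may alter
(Get-Acl 'C:\temp\demo.dat').Owner       # owner, permissions and access control live in the ACL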
Directory management
Renaming a file, moving a file (or a subdirectory) from one directory to another and deleting a file are examples of the operations provide by the file system for the management of directories.
Metadata operations such as permitting or restricting access to a directory by various users or groups of users are usually included.
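In PowerShell, for instance, these directory operations map onto a handful of cmdlets (the paths are illustrative):

New-Item -ItemType Directory 'C:\archive'           # create a directory
Rename-Item 'C:\projects\draft.txt' 'final.txt'     # rename a file
Move-Item 'C:\projects\final.txt' 'C:\archive\'     # move it into another directory
Remove-Item 'C:\archive\final.txt'                  # delete it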
Filesystem maintenance
As a filesystem is used, directories, files and records may be added, deleted or modified. This usually causes inefficiencies in the underlying data structures. Examples include logically sequential blocks distributed across the media in a way that causes excessive repositioning, and partially used or even empty blocks included in linked structures. Incomplete structures or other inconsistencies may be caused by device or media errors, inadequate time between detection of impending loss of power and actual power loss, improper system shutdown or media removal, and on very rare occasions file system coding errors.
Specialized routines in the file system are included to optimize or repair these structures. They are not usually invoked by the user directly but triggered within the file system itself. Internal counters of the number of levels of structures or the number of inserted objects may be compared against thresholds. These may cause user access to a specific structure to be suspended (usually to the displeasure of the user or users affected), may be started as low-priority asynchronous tasks, or may be deferred to a time of low user activity. Sometimes these routines are invoked or scheduled by the system manager, as in the case of defragmentation.
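On Windows, for example, such verification and optimization can also be requested explicitly through the storage cmdlets; this is a sketch, and availability depends on the Windows version:

Repair-Volume -DriveLetter C -Scan       # verify the file system for integrity (online scan)
Optimize-Volume -DriveLetter C -Defrag   # defragment the volume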
Kernel-level API
The API is "kernel-level" when the kernel not only provides the interfaces for the filesystems developers but is also the space in which the filesystem code resides.
It differs from the old scheme in that the kernel itself uses its own facilities to talk with the filesystem driver and vice versa, as opposed to the kernel handling the filesystem layout itself and the filesystem directly accessing the hardware.
It is not the cleanest scheme, but it avoids the major rewrites that the old scheme required.
With modular kernels it allows adding filesystems as any kernel module, even third-party ones. With non-modular kernels, however, it requires the kernel to be recompiled with the new filesystem code (and in closed-source kernels, this makes third-party filesystems impossible).
Unixes and Unix-like systems such as Linux have used this modular scheme.
There is a variation of this scheme used in MS-DOS (DOS 4.0 onward) and compatibles to support CD-ROM and network file systems. Instead of adding code to the kernel, as in the old scheme, or using kernel facilities as in the kernel-based scheme, it traps all calls to a file and determines whether they should be redirected to the kernel's equivalent function or handled by the specific filesystem driver, and the filesystem driver "directly" accesses the disk contents using low-level BIOS functions.
Driver-based API
The API is "driver-based" when the kernel provides facilities but the file system code resides totally external to the kernel (not even as a module of a modular kernel).
It is a cleaner scheme as the filesystem code is totally independent, it allows filesystems to be created for closed-source kernels and online filesystem additions or removals from the system.
Examples of this scheme are the respective IFSs of Windows NT and OS/2.
Mixed kernel-driver-based API
In this API all filesystems are in the kernel, as in kernel-based APIs, but the OS automatically traps them through another API that is driver-based.
This scheme was used in Windows 3.1 to provide a 32-bit protected-mode, cached FAT filesystem driver (VFAT) that bypassed the DOS FAT driver in the kernel (MSDOS.SYS) completely, and later in the Windows 9x series (95, 98 and Me) for VFAT, the ISO 9660 filesystem driver (along with Joliet), network shares, and third-party filesystem drivers. It also added the LFN API to the original DOS APIs: IFS drivers can not only intercept the existing DOS file APIs but also add new ones from within the 32-bit protected-mode executable.
However that API was not completely documented, and third parties found themselves in a "make-it-by-yourself" scenario even worse than with kernel-based APIs.
User space API
The API is in the user space when the filesystem does not directly use kernel facilities but accesses disks using high-level operating system functions and provides functions in a library that a series of utilities use to access the filesystem.
This is useful for handling disk images.
The advantage is that a filesystem can be made portable between operating systems as the high-level operating system functions it uses can be as common as ANSI C, but the disadvantage is that the API is unique to each application that implements one.
Examples of this scheme are the hfsutils and the adflib.
Interoperability between file system APIs
As all filesystems (at least the disk ones) need equivalent functions provided by the kernel, it is possible to easily port filesystem code from one API to another, even if they are of different types.
For example, the ext2 driver for OS/2 is simply a wrapper from Linux's VFS to OS/2's IFS around Linux's kernel-based ext2 code, and the HFS driver for OS/2 is a port of hfsutils to OS/2's IFS. There is also a project that uses a Windows NT IFS driver to make NTFS work under Linux.
See also
Comparison of file systems
File system
Filename extension
Filing Open Service Interface Definition (OSID)
Installable File System (IFS)
List of file systems
Virtual file system
References
Sources
O'Reilly - Windows NT File System Internals, A Developer's Guide - By Rajeev Nagar -
Microsoft Press - Inside Windows NT File System - By Helen Custer -
Wiley - UNIX Filesystems: Evolution, Design, and Implementation - By Steve D. Pate -
Microsoft Press - Inside Windows NT - By Helen Custer -
External links
Filesystem Specifications and Technical Whitepapers
A Tour of the Linux VFS
Microsoft's IFSKit
hfsutils
adflib
A FileSystem Abstraction System for Go
Application programming interfaces
Computer file systems | Operating System (OS) | 1,156 |
Otus (education)
Otus is an educational technology company providing a learning management system, data warehouse and many classroom management tools for K-12 students, teachers, parents, and administrators. Otus was nominated as a finalist of two 2016 Codie awards in the "Best Classroom Management System" and "Best K-12 Course or Learning Management Solution" categories, one 2018 Codie award for "Best Student Assessment Solution", and two 2019 Codie awards for "Best Data Solution" and "Best Administrative Solution". Otus was a finalist in EdTech Digest's 2016 "District Data Solution" and "Learning Management System" categories. Otus co-founder Chris Hull was also announced as one of the National School Boards Association's "20 to Watch Educators for 2016".
History
Founded in 2012 by Chris Hull and Pete Helfers, two social studies teachers at Illinois' Elm Place Middle School, Otus was designed to replicate the ideal K-12 classroom experience that the two teachers envisioned. After receiving a grant to bring 1:1 computing iPads to their students, both quickly recognized the limitless potential that classroom management tools offer, but were disappointed with the effectiveness of the current apps on the market. Believing that they could do better, Chris and Pete started to develop Otus while continuing to work their respective teaching jobs, their end-goal being to create an all in one classroom management system integrating everything necessary to run a classroom. After reaching out to Chicago-based hedge fund manager Andy Bluhm, who provided them with $2M in funding, Otus was officially launched in August, 2014 and has grown domestically and internationally since. As of May 10, 2016, Otus owns the trademark phrase "Student Performance Platform." Otus has recently partnered with NWEA to become a MAP (Measures of Academic Progress) Authorized Data Partner. Starting Fall 2016, all users of Otus will be able to see and take advantage of MAP data directly in the Otus platform.
Functions
Otus competes with other learning management systems such as Edmodo, Blackboard Learn, Schoology, Moodle and Canvas by Instructure to provide educational tools on a district and school wide basis. Otus in particular aims to offer "everything a mobile classroom could possibly need for both teachers and students" and does so by providing attendance tracking, a digital bookshelf for uploading files, in-app annotations, assignments, papers, polls, blogs, quizzes, and more. Otus integrates third-party apps, such as Khan Academy, PARCC, and many more through their "toolbox" program. Everything a student does through the Otus platform is integrated into a deep learning program called "analytics" on the Otus app, which compiles information from all first- and third-party apps and creates a student profile easily accessible by teachers and parents at the teacher's discretion. Otus is freely available for use by students, teachers, and parents, but requires a subscription fee for district administrators. Otus is available on iOS for teachers and students, and the web for all users.
References
External links
Educational technology companies of the United States
Internet properties established in 2012
Companies based in Chicago
IOS software
Classroom management software
Windows Management Instrumentation
Windows Management Instrumentation (WMI) consists of a set of extensions to the Windows Driver Model that provides an operating system interface through which instrumented components provide information and notification. WMI is Microsoft's implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM) standards from the Distributed Management Task Force (DMTF).
WMI allows scripting languages (such as VBScript or Windows PowerShell) to manage Microsoft Windows personal computers and servers, both locally and remotely. WMI comes preinstalled in Windows 2000 and newer Microsoft operating systems, and is available as a download for Windows NT 4.0, Windows 95 and Windows 98.
Microsoft also provides a command-line interface to WMI called Windows Management Instrumentation Command-line (WMIC).
Purpose of WMI
The purpose of WMI is to define a proprietary set of environment-independent specifications which allow management information to be shared between management applications. WMI prescribes enterprise management standards and related technologies for Windows that work with existing management standards, such as Desktop Management Interface (DMI) and SNMP. WMI complements these other standards by providing a uniform model. This model represents the managed environment through which management data from any source can be accessed in a common way.
Development process
Because WMI abstracts the manageable entities with CIM and a collection of providers, the development of a provider implies several steps. The major steps can be summarized as follows:
Create the manageable entity model
Define a model
Implement the model
Create the WMI provider
Determine the provider type to implement
Determine the hosting model of the provider
Create the provider template with the ATL wizard
Implement the code logic in the provider
Register the provider with WMI and the system
Test the provider
Create consumer sample code.
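As a rough illustration of the final step (consumer sample code), the following is a minimal C# sketch using the .NET System.Management namespace; the WQL query and the standard Win32_OperatingSystem class are ordinary WMI features, but the particular properties read here are chosen purely for illustration and are not tied to any specific provider described in this article.

using System;
using System.Management;   // reference the System.Management assembly

class WmiConsumerSample
{
    static void Main()
    {
        // WQL query against the standard CIMV2 namespace; a newly written
        // provider's classes can be queried the same way once registered.
        var searcher = new ManagementObjectSearcher(
            "root\\CIMV2",
            "SELECT Caption, Version FROM Win32_OperatingSystem");

        foreach (ManagementObject os in searcher.Get())
        {
            Console.WriteLine("{0} (version {1})", os["Caption"], os["Version"]);
        }
    }
}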
Importance of WMI providers
Since the release of the first WMI implementation during the Windows NT 4.0 SP4 era (as an out-of-band download), Microsoft has consistently added WMI providers to Windows:
Under Windows NT 4.0, Microsoft had roughly 15 WMI providers available once WMI was installed
When Windows 2000 was released, there were 29 WMI providers as part of the operating system installation
With the release of Windows Server 2003, Microsoft included in the platform more than 80 WMI providers
Windows Vista includes 13 new WMI providers, bringing the total to around 100 in all
Windows Server 2008 includes more providers, including providers for IIS 7, PowerShell and virtualization
Windows 10 includes 47 providers for the Mobile Device Management (MDM) service.
Many customers have interpreted the growth in the number of providers as a sign that WMI has become Microsoft's "ubiquitous" management layer of Windows, even though Microsoft has never made this commitment explicit.
Because of the constantly increasing exposure of management data through WMI in Windows, people in the IT systems management field started to develop scripts and automation procedures based on WMI. Beyond scripting, most leading management-software packages, such as MOM, SCCM, ADS, HP OpenView for Windows (HPOV), and products from BMC Software or CA, Inc., are WMI-enabled and capable of consuming and providing WMI information through various user interfaces. This enables administrators and operators who cannot script or program on top of WMI to enjoy its benefits without even learning about it. Because WMI is scriptable, however, those who want to can still consume WMI information from scripts or from any WMI-aware enterprise-management software.
Features
For someone willing to develop one or many WMI providers, WMI offers many features out of the box. Here are the most important advantages:
Automation interfaces: Because WMI comes with a set of automation interfaces ready to use, all management features supported by a WMI provider and its set of classes get scripting support for free, out of the box. Beyond the WMI class design and the provider development, the Microsoft development and test teams are not required to create, validate or test a scripting model, as it is already available from WMI.
.NET Management interfaces: Because the System.Management namespace relies on the existing COM/DCOM plumbing, the created WMI provider and its set of WMI classes becomes automatically available to all .NET applications independently of the language used (e.g. C#, VB.NET). Beyond the WMI class design and the provider development, like for scripting, the Microsoft development and test teams are not required to create, validate and test new assemblies to support a new namespace in the .NET Framework as this support is already available from WMI for free.
C/C++ COM/DCOM programming interfaces: Like most components in Windows, COM/DCOM programmers can leverage the features of the provider they develop at the COM/DCOM interfaces level. Like in previous environments (scripting and .NET Framework), a COM/DCOM consumer just needs to interact with the standard set of WMI COM interfaces to leverage the WMI provider capabilities and its set of supported WMI classes. To make all management information available from the native APIs, the WMI provider developer just needs to interact with a set of pre-defined WMI COM interfaces. This will make the management information available at the WMI COM level automatically. Moreover, the scripting COM interface object model is very similar to the COM/DCOM interface object model, which makes it easy for developers to be familiar with the scripting experience.
Remoting capabilities over DCOM and SOAP: Because management is largely about remote access, WMI offers more than local COM capabilities: it uses DCOM as its transport. In addition, SOAP transport became available in Windows Server 2003 R2 through the WS-Management initiative led by Microsoft, Intel, Sun Microsystems and Dell. This initiative allows scripts to be run remotely and WMI data to be consumed through a specific set of interfaces handling SOAP requests and responses. The advantage for the WMI provider developer is that, by exposing all features through WMI, Windows Remote Management/WS-Management can in turn consume that information as well (embedded objects in WMI instances are not supported in Windows Server 2003 R2; they were a target for Vista). All the layering to WS-Management and the mapping of the CIM data model to SOAP come for free with the WMI/WS-Management solution. If DCOM must be used, implementing DCOM requires a proxy DLL deployed on each client machine; because WMI has been available in the Windows operating system since Windows 2000, these issues are eliminated.
Support for Queries: WMI offers support for WQL queries out of the box. This means that if a provider is not designed to support queries, WMI supports them anyway by enumerating the provider's instances and filtering the results itself.
Eventing capabilities: WMI offers the capability to notify a subscriber of any event it is interested in. WMI uses the WMI Query Language (WQL) to submit WQL event queries and defines the type of events to be returned. The eventing mechanism, with all related callbacks, is part of the WMI COM/DCOM and automation interfaces. Anyone writing a WMI provider can have the benefit of this functionality at no cost to their customers; it is up to the consumer to decide how to consume the management information exposed by the WMI provider and its related set of WMI classes. (A minimal event-subscription sketch appears at the end of this section.)
Code template generator: To speed up the process of writing a WMI provider, including all COM/DCOM interfaces and related definitions, the WMI team developed the WMI ATL Wizard to generate the code template implementing a provider. The code generated is based on the WMI class model initially designed by the developer. The WMI provider developer can then connect the pre-defined COM/DCOM interfaces for the WMI provider to the native APIs that retrieve the management information to expose. The exercise consists of filling the “gaps” in the provider code to create the desired interfacing logic.
Predictability: Predictability is an important concern for IT professionals because it defines the capability of someone with experience of a set of interfaces managing a Windows component to apply this knowledge right away, intuitively, to any other manageable Windows component without having to relearn everything from the ground up. Predictability is a real gain for a customer, as it increases the return on investment (ROI). A person facing such a situation simply expects things to work the same way based on previous experience. The constant increase of COM programming/scriptable interfaces has a huge impact on predictability, as it makes it difficult for customers to automate and manage Windows while leveraging their existing knowledge. WMI with CIM addresses this problem by always exposing the same programming object model (COM/DCOM, Automation, .NET) whatever the manageable entity is.
Protect existing customer investments: Protecting customers' and partners' investments motivates customers to invest in technologies. As Microsoft has invested heavily over the years in writing WMI providers, customers and partners have invested in tools leveraging the WMI capabilities of Windows. Therefore, they naturally continue to exploit these capabilities instead of having to use a new set of specific interfaces for each manageable Windows component. A specific set of interfaces means having a specific set of agents or in-house developed software based on a new model or set of interfaces especially dedicated to a component or technology. By leveraging the capabilities of WMI, customers and partners can build on the investments made in the past while minimizing their costs in development, learning curves and new discoveries. This also has a great impact on the stability and reliability of their infrastructure, as they continue to leverage an existing implementation with an improved technology.
Provide a logical and unified administration model: As briefly described in the introduction, this model is based on an industry standard called CIM, defined by the DMTF (http://www.dmtf.org). The CIM class-based schema is defined by a consortium of manufacturers and software developers to meet the requirements of the industry. This implies that not only does Microsoft leverage the WMI capabilities, but any third-party manufacturer or developer can write code that fits into the model. For instance, Intel does this for some of its network adapter drivers and software. HP leverages existing WMI providers and implements its own WMI providers in its HP OpenView Enterprise Management software. IBM's Tivoli management suite consumes WMI, and MOM and SMS also consume and provide WMI information. Lastly, Windows XP SP2 leverages WMI to get status information from anti-virus software and firewalls.
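To make the eventing capability described in this list concrete, here is a minimal C# subscription sketch; the __InstanceCreationEvent/Win32_Process event query is a standard WQL example, while the two-second polling interval and the property printed are arbitrary choices for illustration.

using System;
using System.Management;

class WmiEventSample
{
    static void Main()
    {
        // WQL event query: deliver an event whenever a new Win32_Process
        // instance appears, checking at most every two seconds.
        var query = new WqlEventQuery(
            "SELECT * FROM __InstanceCreationEvent WITHIN 2 " +
            "WHERE TargetInstance ISA 'Win32_Process'");

        using (var watcher = new ManagementEventWatcher(query))
        {
            // Callback invoked by WMI for every matching event.
            watcher.EventArrived += (sender, e) =>
            {
                var target = (ManagementBaseObject)e.NewEvent["TargetInstance"];
                Console.WriteLine("Process started: " + target["Name"]);
            };
            watcher.Start();
            Console.ReadLine();   // keep the subscription alive until Enter is pressed
            watcher.Stop();
        }
    }
}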
WMI tools
Some WMI tools can also be useful during the design and development phases. These tools are:
The MOF compiler (MOFComp.exe): The Managed Object Format (MOF) compiler parses a file containing Managed Object Format statements and adds the classes and class instances defined in the file to the CIM repository. The MOF format is a specific syntax to define CIM class representation in an ASCII file (e.g. MIB are to SNMP what MOF files are to CIM). MOFComp.exe is included in every WMI installation. Every definition existing in the CIM repository is initially defined in an MOF file. MOF files are located in %SystemRoot%\System32\WBEM. During the WMI setup, they are loaded in the CIM repository.
The WMI Administrative Tools: The WMI Administrative Tools consist of four tools: WMI CIM Studio, WMI Object Browser, WMI Event Registration and WMI Event Viewer. The most important tool for a WMI provider developer is WMI CIM Studio, as it helps in the initial WMI class creation in the CIM repository. It uses a web interface to display information and relies on a collection of ActiveX components installed on the system when it runs for the first time. WMI CIM Studio provides the ability to:
Connect to a chosen system and browse the CIM repository in any namespace available.
Search for classes by their name, by their descriptions or by property names.
Review the properties, methods and associations related to a given class.
See the instances available for a given class of the examined system.
Perform Queries in the WQL language.
Generate an MOF file based on selected classes.
Compile an MOF file to load it in the CIM repository.
WinMgmt.exe: WinMgmt.exe is not a tool; it is the executable that implements the WMI Core service. Under the Windows NT family of operating systems, WMI runs as a service. On computers running Windows 98, Windows 95 or Windows Me, WMI runs as an application. Under the Windows NT family of operating systems, it is also possible to run this executable as an application, in which case, the executable runs in the current user context. For this, the WMI service must be stopped first. The executable supports some switches that can be useful when starting WMI as a service or as an application. WMI provider developers who may want to debug their providers essentially need to run the WMI service as an application.
WBEMTest.exe: WBEMTest.exe is a WMI tester tool, which is delivered with WMI. This tool allows an administrator or a developer to perform most of the tasks from a graphical interface that WMI provides at the API level. Although available under all Windows NT-based operating systems, this tool is not officially supported by Microsoft. WBEMTest provides the ability to:
Enumerate, open, create and delete classes.
Enumerate, open, create and delete instances of classes.
Select a namespace.
Perform data and event queries.
Execute methods associated to classes or instances.
Execute every WMI operation asynchronously, synchronously or semi-asynchronously.
The WMI command line tool (WMIC): WMIC is a command-line tool designed to ease WMI information retrieval about a system by using some simple keywords (aliases). WMIC.exe is only available under Windows XP Professional, Windows Server 2003, Windows Vista, Windows 7 and Windows Server 2008. By typing “WMIC /?” from the command-line, a complete list of the switches and reserved keywords is available.
There is a Linux port of WMI command line tool, written in Python, based on Samba4 called 'wmi-client'
WBEMDump.exe: WBEMDump is a tool delivered with the Platform SDK. This command line tool comes with its own Visual C++ project. The tool can show the CIM repository classes, instances, or both. It is possible to retrieve the same information as that retrieved with WMIC. WBEMDump.exe requires more specific knowledge about WMI, as it doesn't abstract WMI as WMIC. However, it runs under Windows NT 4.0 and Windows 2000. It is also possible to execute methods exposed by classes or instances. Even if it is not a standard WMI tool delivered with the system installation, this tool can be quite useful for exploring the CIM repository and WMI features.
WMIDiag.vbs: The WMI Diagnosis Tool is a VBScript-based tool, downloadable from Microsoft, for testing and validating WMI on Windows 2000 and later. The download includes thorough documentation, and the tool supports numerous switches. When run, it generates up to four text files which list the steps taken (the LOG file), give an overview of the results (the REPORT file), provide statistics (in comma-separated-values format), and optionally list the providers registered on the machine (the PROVIDERS file, also in comma-separated-values format). The report file that is generated includes a list of the issues identified and potential ways to fix them.
WMI Explorer: The WMI Explorer Tool is a freely available, open-source program for enumerating and querying WMI providers in a graphical user interface.
Wireless networking example
In the .NET Framework, the ManagementClass class represents a Common Information Model (CIM) management class. A WMI class can be a Win32_LogicalDisk in the case of a disk drive, or a Win32_Process, such as a running program like Notepad.exe.
This example shows how the "MSNdis_80211_ServiceSetIdentifier" WMI class is used to find the SSID of the Wi-Fi network that the system is currently connected to, in C#:
using System;
using System.Management;   // required reference for the WMI classes used below

ManagementClass mc = new ManagementClass("root\\WMI", "MSNdis_80211_ServiceSetIdentifier", null);
ManagementObjectCollection moc = mc.GetInstances();
foreach (ManagementObject mo in moc)
{
    // Name of the wireless adapter instance reporting the SSID
    string wlanCard = (string)mo["InstanceName"];

    // "Active" is returned as a boxed value; convert it to text before
    // parsing so the code does not fail on a direct string cast
    bool active;
    if (!bool.TryParse(Convert.ToString(mo["Active"]), out active))
    {
        active = false;
    }

    // The SSID is exposed as a byte array
    byte[] ssid = (byte[])mo["Ndis80211SsId"];
}
The "MSNdis_80211_ServiceSetIdentifier" WMI class is only supported on Windows XP and Windows Server 2003.
WMI driver extensions
The WMI extensions to WDM provide kernel-level instrumentation such as publishing information, configuring device settings, supplying event notification from device drivers and allowing administrators to set data security through a WMI provider known as the WDM provider. The extensions are part of the WDM architecture; however, they have broad utility and can be used with other types of drivers as well (such as SCSI and NDIS). The WMI Driver Extensions service monitors all drivers and event trace providers that are configured to publish WMI or event trace information. Instrumented hardware data is provided by way of drivers instrumented for WMI extensions for WDM. WMI extensions for WDM provide a set of Windows device driver interfaces for instrumenting data within the driver models native to Windows, so OEMs and IHVs can easily extend the instrumented data set and add value to a hardware/software solution. The WMI Driver Extensions, however, are not supported by Windows Vista and later operating systems.
See also
Open Management Infrastructure
References
External links
WMI at the Microsoft Developer Network
CIM terminology
WMI Overview and Background
WMI and CIM overview
How improved support for WMI makes PowerShell the best environment to use and script WMI
Microsoft WMI Webcast
WMI Code Creator
Microsoft application programming interfaces
Windows components
Windows communication and services
Network management
Management systems
System administration
GOFF
The GOFF (Generalized Object File Format) specification was developed for IBM's MVS operating system to supersede the IBM OS/360 Object File Format to compensate for weaknesses in the older format.
Background
The original IBM OS/360 Object File Format was developed in 1964 for the new IBM System/360 mainframe computer. The format was also used by makers of plug compatible and workalike mainframes, including the Univac 90/60, 90/70 and 90/80 and Fujitsu B2800. The format was expanded to add symbolic records and expanded information about modules, plus support for procedures and functions with names longer than 8 characters. While this helped, it did not provide for the enhanced information necessary for today's more complicated programming languages and more advanced features such as objects, properties and methods, Unicode support, and virtual methods.
The GOFF object file format was developed by IBM in approximately 1995 as a means to overcome these problems. The earliest mention of this format was in the introductory information about the new High Level Assembler. Note that the OS/360 Object File Format was simply superseded by the GOFF format; it was not deprecated, and it is still in use by assemblers and language compilers where the language can work within the limitations of the older format.
Conventions
This article will use the term "module" to refer to any name or equivalent symbol, which is used to provide an identifier for a piece of code or data external to the scope to which it is referenced. A module may refer to a subroutine, a function, Fortran Common or Block Data, an object or class, a method or property of an object or class, or any other named routine or identifier external to that particular scope referencing the external name.
The terms "assembler" for a program that converts assembly language to machine code, as well as as the process of using one, and as the process of using a "compiler," which does the same thing for high-level languages, should, for the purposes of this article. be considered interchangeable; thus where "compile" and "compiler" are used, substitute "assemble" and "assembler" as needed.
Numbers used in this article are expressed as follows: unless specified as hexadecimal (base 16), all numbers used are in decimal (base 10). When necessary to express a number in hexadecimal, the standard mainframe assembler format is used: the capital letter X precedes the number, any hexadecimal letters in the number are written in upper case, and the number is enclosed in single quotes; e.g. the base-16 number 15DEADBEEF would be expressed as X'15DEADBEEF'.
A "byte" as used in this article, is 8-bits, and unless otherwise specified, a "byte" and a "character" are the same thing; characters in EBCDIC are also 8-bit. When multi-byte character sets (such as Unicode) are used in user programs, they will use two (or more) bytes.
Requirements and restrictions
The format is similar to the OS/360 Object File Format but adds additional information for use in building applications.
GOFF files are either fixed- or variable-length records.
A GOFF record must completely fit within a single record of the underlying file system. A GOFF file is not a stream-type file.
Fixed-length records must be 80 bytes. The minimum size of a variable-length record is 56 bytes. In the case of fixed-length records, there will be unused bytes at the end of a record. These bytes must be set to binary zero.
The program reading (or writing) GOFF records is not to make assumptions about the internal format of records, the operating system is presumed to be able to provide fixed- or variable-length records without the program reading them needing to be aware of the operating system internal file management. The length of a record is not part of the record itself.
Binary values are stored in big endian format, e.g. the value 1 is X'01' for an 8-bit value, X'0001' for a 16-bit value, X'00000001' for a 32-bit value, and X'0000000000000001' for a 64-bit value.
Bits are counted from left to right; bit 0 is the left-most bit in a byte or word.
Fixed-length records are required for GOFF files deployed on Unix systems.
A record may be continued on a subsequent record. Where a record is continued, no intervening record(s) shall occur between the record being continued and the continuation record.
A GOFF object file starts with an HDR record and ends with an END record. The END record should include the number of GOFF records (not the number of physical records) in the file.
A language compiler or assembler can produce multiple GOFF files in one compilation/assembly, but the individual GOFF files must be separate from each other. This means that a module or compilation unit, consisting of an HDR record, intervening ESD, TXT and others, finishing with an END record, may then be followed by another compilation unit starting with HDR and ending with END, and so on, as needed.
Module and Class names are case sensitive. A module named "exit" (as used by the C language) need not be the same as "EXIT" used by the Fortran language.
Some conventions applicable to the OS/360 Object File Format are carried over to the GOFF Object File Format, including:
Unless otherwise specified, all characters are in the EBCDIC character set, except for external names, as stated below.
ESD items (Main programs, subroutines, functions, FORTRAN Common, methods and properties in objects) must be numbered starting with 1 and each new item is to have the next number in sequence, without any 'gaps' in the numbering sequence.
An ESD item must be defined before any other record (such as a TXT or RLD record) references it.
Each ESD record contains exactly one ESD item. (This is different from the old format, which permitted up to 3 ESD items in each ESD record.)
An RLD record (relocation dictionary) may contain one or more items, and an RLD record may be continued to a subsequent record.
To ensure future compatibility, fields indicated as 'reserved' should be set to binary zero.
Character sets used for external names are not defined by the GOFF standard, but there is a provision for a file to indicate what character set is being used. (This is to support double-byte character set Unicode-based module names.) Some IBM products, however, only allow characters for external names and other identifiers to a restricted range, typically (EBCDIC) hexadecimal values of X'41' through X'FE' plus the shift-in and shift out characters, X'0F' and X'0E', respectively.
The new format supports Class names, of which there are two types, reserved and user supplied or non-reserved. All class names have a maximum length of 16 characters.
Reserved Class names consist of a single letter, an underscore, and 1 to 14 characters. Reserved Class names beginning with B_ are reserved for the binder; Reserved Class names beginning with C_ marked as loadable are reserved for programs created for use with IBM's Language Environment (LE). Class names beginning with C_ which are not marked as loadable, as well as classes beginning with X_, Y_ or Z_ are available for general use as non-reserved.
User Supplied class names may be lower-case.
Class names are not external symbols.
The following classes used by the binder may be referenced if needed for compilation purposes:
The following class names are reserved by the binder and are not accessible to user applications:
The SYM object file symbolic table information from the 360 Object File format record is not available for GOFF object files; the ADATA record (sub-record to TXT) must be used instead.
Record Types
Similarly to the older OS/360 format, object file records are divided into 6 different record types, some added, some deleted, some altered:
HDR record (this is new) must occur first, it defines the header for the object file.
ESD records define main programs, subroutines, functions, dummy sections, Fortran Common, methods and properties, and any module or routine that can be called by another module. They are used to define the program(s) or program segments that were compiled in this execution of the compiler, and external routines used by the program (such as exit() in C, CALL EXIT in Fortran; new() and dispose() in Pascal). ESD records should occur before any reference to an ESD symbol.
TXT records have been expanded, and in addition to containing the machine instructions or data which is held by the module, they also contain Identification Data (IDR) records (20 or more types), Associated Data (ADATA) records, and additional information related to the module.
RLD records are used to relocate addresses. For example, a program referencing an address located 500 bytes inside the module, will internally store the address as 500, but when the module is loaded into memory it's bound to be located someplace else, so an RLD record informs the linkage editor or loader what addresses to change. Also, when a module references an external symbol, it will usually set the value of the symbol to zero, then include an RLD entry for that symbol to allow the loader or linkage editor to alter the address to the correct value.
LEN records are new, and supply certain length information.
END records indicate the end of a module, and optionally where the program is to begin execution. This must be the last record in the file.
Format
GOFF records may be fixed or variable length; the minimum length when using variable-length records is 56 characters, although most records will be longer than this. Except for module and class names, all characters are in the EBCDIC character set. Unix-based systems must use fixed-length (80-byte) records. Records in fixed-length files that are shorter than the fixed length should be zero-filled. To distinguish GOFF records from the older OS/360 object format (where the first byte of a record is X'02') or from commands that may be present in the file, the first byte of each GOFF record is always the binary value X'03', while commands must start with a character value of at least space (X'40'). The next 2 bytes of a GOFF record indicate the record type, continuation and version of the file format. These first 3 bytes are known as the PTV field.
PTV
The PTV field represents the first 3 bytes of every GOFF record.
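As a rough, hedged illustration of how a reader of a GOFF file might use the PTV marker byte and the big-endian convention described above, the following C# sketch checks the X'03' marker and decodes a 16-bit big-endian value; the method names are invented for illustration, and the exact bit layout of the record type, continuation and version bytes is not reproduced here.

using System;

static class GoffRecordCheck
{
    // A GOFF record always starts with the marker byte X'03'; the old
    // OS/360 object format starts with X'02', and command records start
    // with a character value of at least X'40'.
    public static bool IsGoffRecord(byte[] record)
    {
        return record.Length >= 3 && record[0] == 0x03;
    }

    // GOFF stores binary values big-endian, so a 16-bit value is
    // (high byte << 8) | low byte regardless of the host's endianness.
    public static ushort ReadUInt16BigEndian(byte[] record, int offset)
    {
        return (ushort)((record[offset] << 8) | record[offset + 1]);
    }
}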
HDR
The HDR record is required, and must be the first record.
ESD
An ESD record gives the public name for a module, a main program, a subroutine, procedure, function, property or method in an object, Fortran Common or alternate entry point. An ESD record for a public name must be present in the file before any reference to that name is made by any other record.
Continuation
In the case of fixed-length records where the name requires continuation records, the following is used:
Behavior Attributes
ADATA records
ADATA ("associated data") records are used to provide additional symbol information about a module. They replaced the older SYM records in the 360 object file format. To create an ADATA record
Create an ESD record of type ED for the class name that the records are part of
Set all fields in the Behavioral Attributes record to 0 except
Class Loading (bits 0-1 of byte 5) is X'10'
Binding Algorithm is 0
Text Record Style (bits 0-3 of byte 2) is X'0010'
Optionally set the Read Only (bit 4 of byte 3) and Not Executable (bits 5-7 of byte 3) values if appropriate
Create a TXT record for each ADATA item
Element ESDID is the value of the ADATA ED record for that particular ADATA entry
Offset is zero
Data Length is the length of the ADATA record
Data field contains the actual ADATA record itself
ADATA records will be appended to the end of the class in the order they are declared.
Class names assigned to ADATA records are translated by IBM programs by converting the binary value to text and appending it to the name C_ADATA, So an item numbered X'0033' would become the text string C_ADATA0033.
TXT
TXT records specify the machine code instructions and data to be placed at a specific address location in the module. Note that wherever a "length" must be specified for this record, the length value must include any continuations to this record.
Continuation
Compression Table
A compression table is used if bytes 20-21 of the TXT record is nonzero. The R value is used to determine the number of times to repeat the string; the L value indicates the length of the text to be repeated "R" times. This could be used for pre-initializing tables or arrays to blanks or zero or for any other purpose where it is useful to express repeated data as a repeat count and a value.
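As a sketch of the repeat semantics only, assuming the R (repeat count) and L (length) values have already been read from the record as described above, a hypothetical expansion helper might look like this in C#:

using System;

static class GoffCompression
{
    // Expands one compression-table entry: the text of length L is
    // repeated R times. How R, L and the text are laid out within the
    // record is not shown here; only the repeat semantics come from
    // the format description above.
    public static byte[] Expand(int repeatCount, byte[] text)
    {
        var output = new byte[repeatCount * text.Length];
        for (int i = 0; i < repeatCount; i++)
        {
            Buffer.BlockCopy(text, 0, output, i * text.Length, text.Length);
        }
        return output;
    }
}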
IDR Data Table
The IDR Table, which is located starting at byte 24 of the TXT record, identifies the compiler or assembler (and its version number) that created this object file.
IDR Format 1
Note that unlike most number values stored in a GOFF file, the "version", "release" and "trans_date" values are stored as text characters instead of binary.
IDR Format 2
Normally compilers and assemblers do not generate this format record, it is typically created by the binder.
IDR Format 3
All text in this item are character data; no binary information is used.
RLD
RLD records allow a module to show where it references an address that must be relocated, such as references to specific locations in itself, or to external modules.
Relocation Data
[A] If R_Pointer (bit 0 of byte 0 of Flags field is 1) is omitted, this field starts 4 bytes lower, in bytes 8-11.
[B] If R_Pointer or P_Pointer (bit 1 of byte 0 of Flags field is 1) is omitted, this field starts 4 bytes lower, in bytes 12-15. If both fields are omitted, this field starts 8 bytes lower, in bytes 8-11.
[C] If R_Pointer, P_Pointer, or Offset (bit 2 of byte 0 of Flags field is 1) are omitted, this field starts 4 bytes lower. If any two of them are omitted, this field starts 8 bytes lower. If all of them are omitted, this field starts 12 bytes lower.
To clarify, if a module in a C program named "Basura" was to issue a call to the "exit" function to terminate itself, the R_Pointer address would be the ESDID of the routine "exit" while the P_Pointer would be the ESDID of "Basura". If the address was in the same module (like internal subroutines, or a reference to data within the same module) R_Pointer and P_Pointer would be the same.
Flags
LEN
LEN records are used to declare the length of a module where it was not known at the time the ESD record was created, e.g. for one-pass compilers.
Elements
A deferred-length element entry cannot be continued or split
END
END must be the last record for a module. An 'Entry Point' is used when an address other than the beginning of the module is to be used as the start point for its execution. This is used either because the program has non-executable data appearing before the start of the module (very common for older assembly programmers, as older versions of the assembler were much slower to assemble data stored in programs once instructions were specified), or because the module calls an external module first, such as a run-time library to initialize itself.
Continuation
If an entry-point name specified on a fixed-length END record is longer than 54 bytes (or, if this record itself is also continued, longer than an additional 77 bytes), the following continuation record is used.
References
Executable file formats
IBM mainframe operating systems
ZENworks
ZENworks, a suite of software products developed and maintained by Micro Focus International for computer systems management, aims to manage the entire life cycle of servers, of desktop PCs (Windows, Linux or Mac), of laptops, and of handheld devices such as Android and iOS mobile phones and tablets. Novell planned to include Full Disk Encryption (FDE) functionality within ZENworks.
ZENworks supports multiple server platforms and multiple directory services.
History
The name, "ZENworks", first appeared as "Z.E.N.works" in 1998 with ZENworks 1.0
and with ZENworks Starter Pack - a limited version of ZENworks 1.0 that came bundled with NetWare 5.0 (1998). Novell added server-management functionality, and the product grew into a suite consisting of:
"ZENworks for Desktops" (ZfD)
"ZENworks for Servers" (ZfS)
"ZENworks for Handhelds" (ZfH)
Novell has continued to add components to the suite, which it sells under the consolidated name "ZENworks Suite".
The initial ZENworks products had a tight integration with Novell Directory Service (NDS). With the release of ZENworks Configuration Management 10 (2007) the product architecture completely changed, the product became directory agnostic and ZENworks Suite products were integrated into a single management framework.
ZENworks Releases:
Elements of the ZENworks Suite
In the latest version, ZENworks 2017, the ZENworks Suite consists of seven individual products:
Additionally, Novell offers an ITIL version of "Novell Service Desk". This version is ITIL-certified by PinkVERIFY and supports ten ITIL v3 processes, e.g. Change, Incident, Problem and Service Level Management.
In terms of implementation, the ZENworks Agent (also known as the "ZENworks Management Daemon" or "zmd") installs, updates and removes software. The ZENworks Configuration Management (ZCM) addresses patching, endpoint security, asset management and provisioning.
See also
Systems Management
Patch Management
Mobile Device Management
Full Disk Encryption
Antimalware
References
Further reading
External links
Novell ZENworks Product page
ZENworks
Remote administration software
System administration
Windowing
Windowing may refer to:
Windowing system, a graphical user interface (GUI) which implements windows as a primary metaphor
In signal processing, the application of a window function to a signal
In computer networking, a flow control mechanism to manage the amount of transmitted data sent without receiving an acknowledgement (e.g. TCP windowing)
Date windowing, a method to interpret a two-digit year as a regular four-digit year, see Year 2000 problem
Address Windowing Extensions, a Microsoft Windows Application Programming Interface
A process used to produce images in a computed tomography (CT) scan
A method of publication wherein a work is published on different media at different times (e.g. first in cinemas, then on Blu-ray)
See also
Window (disambiguation)
Windows (disambiguation)
Microsoft BASIC
Microsoft BASIC is the foundation software product of the Microsoft company and evolved into a line of BASIC interpreters adapted for many different microcomputers. It first appeared in 1975 as Altair BASIC, which was the first version of BASIC published by Microsoft as well as the first high-level programming language available for the Altair 8800 microcomputer.
During the home computer craze during the late-1970s and early-1980s, Microsoft BASIC was ported to and supplied with practically every computer design. Slight variations to add support for machine-specific functions, especially graphics, led to a profusion of related designs like Commodore BASIC and Atari Microsoft BASIC.
As the early home computers gave way to newer designs like the IBM Personal Computer and Apple Macintosh, BASIC was no longer as widely used, although it retained a strong following. The release of Visual Basic revived its popularity, and it remains in wide use on Microsoft Windows platforms in its most recent incarnation, Visual Basic .NET.
Altair BASIC and early microcomputers
The Altair BASIC interpreter was developed by Microsoft founders Paul Allen and Bill Gates using a self-made Intel 8080 emulator running on a PDP-10 minicomputer. The MS dialect is patterned on Digital Equipment Corporation's BASIC-PLUS on the PDP-11, which Gates had used in high school. The first versions supported integer math only, but Monte Davidoff convinced them that floating-point arithmetic was possible, and wrote a library which became the Microsoft Binary Format.
Altair BASIC was delivered on paper tape and in its original version took 4 KB of memory. The following functions and statements were available:
LIST, NEW, PRINT, INPUT, IF...THEN, FOR...NEXT, SQR, RND, SIN, LET, USR, DATA, READ, REM, CLEAR, STOP, TAB, RESTORE, ABS, END, INT, RETURN, STEP, GOTO, and GOSUB.
There were no string variables in 4k BASIC and single-precision 32-bit floating point was the only numeric type supported. Variable names consisted of one letter (A–Z) or one letter followed by one digit (0–9), thus allowing up to 286 numeric variables.
For machines with more memory, the 8 KB version added 31 additional statements and support for string variables and their related operations like MID$ and string concatenation. String variables were denoted with a $ suffix, which remained in later versions of the language. Later on, Microsoft released the 12K Extended BASIC, which included double precision 64-bit variables, IF...THEN...ELSE structures, user defined functions, more advanced program editing commands, and descriptive error messages as opposed to error numbers. Numeric variables now had three basic types, % denoted 16-bit integers, # denoted 64-bit doubles, and ! denoted 32-bit singles, but this was the default format so the ! is rarely seen in programs.
The extended 8 KB version was then generalized into BASIC-80 (8080/85, Z80), and ported into BASIC-68 (6800), BASIC-69 (6809), and 6502-BASIC. The 6502 had somewhat less dense assembler code and expanded in size to just under 8K for the single precision version, or 9K for a version using an intermediate 40-bit floating point format in place of the original 32-bit version. This new 40-bit format became the most common as it was used on most 6502-based machines of the era. It was also ported to the 16-bit BASIC-86 (8086/88).
The final major release of BASIC-80 was version 5.x, which appeared in 1981 and added support for 40-character variable names, WHILE...WEND loops, dynamic string allocation, and several other features. BASIC 5.x removed the ability to crunch program lines.
The core command set and syntax are the same in all implementations of Microsoft BASIC and, generally speaking, a program can be run on any version if it does not use hardware-specific features or double precision numbers (not supported in some implementations).
Licenses to home computer makers
After the initial success of Altair BASIC, Microsoft BASIC became the basis for a lucrative software licensing business, being ported to the majority of the numerous home and other personal computers of the 1970s and especially the 1980s, and extended along the way. Contrary to the original Altair BASIC, most home computer BASICs are resident in ROM, and thus are available on the machines at power-on in the form of the characteristic "READY." prompt. Hence, Microsoft's and other variants of BASIC constitute a significant and visible part of the user interface of many home computers' rudimentary operating systems.
By 1981, Microsoft BASIC was so popular that even companies that already had a BASIC licensed the language, such as IBM for its Personal Computer, and Atari, which sold both Atari Microsoft BASIC and its own Atari BASIC. IBM's Don Estridge said, "Microsoft BASIC had hundreds of thousands of users around the world. How are you going to argue with that?" Microsoft licensed similar versions to companies that competed with each other. After licensing IBM Advanced BASIC (BASICA) to IBM, for example, Microsoft licensed the compatible GW-BASIC to makers of PC clones, and also sold copies to retail customers. The company similarly licensed an Applesoft-compatible BASIC to VTech for its Laser 128 clone.
Extended BASIC-80
Tangerine Microtan 65
Spectravideo SV-318 and SV-328
Known variants:
NCR Basic Plus 6, released in the first quarter of 1977 for the NCR 7200 model VI data-entry terminal. The adaptation of Microsoft's Extended BASIC-80 was carried out by Marc McDonald in 1976/1977.
Disk BASIC-80
MBASIC is available for CP/M-80 and ISIS-II. Also available for TEKDOS.
MBASIC is a stripped-down BASIC-80 with only hardware-neutral functions. However, due to the popularity of CP/M, the great majority of Z80 machines ran MBASIC, rather than a version customized for specific hardware (TRS-80 BASIC was one of the few exceptions). Microsoft's CP/M card for the Apple II included a modified version of MBASIC that incorporated some of the graphics commands from Applesoft BASIC, such as HPLOT, but the full command set is not supported.
Standalone Disk BASIC-80
The first implementation to use an 8-bit variant of the File Allocation Table was a BASIC adaptation for an Intel 8080-based NCR 7200, 7520 or 7530 data-entry terminal with 8-inch floppy disks in 1977/1978.
TRS-80 Level II/III BASIC
The TRS-80 computer was offered initially with an adaption of Li-Chen Wang's Tiny BASIC (Level I BASIC); within a few months this was replaced by a port of BASIC-80 which incorporated some of Level I BASIC's command set, particularly the commands for setting graphics characters. Level II BASIC contained some of the features of Extended BASIC, although due to the need to include Level I commands such as SET and PSET, other features such as descriptive error messages still had to be left out; these were subsequently added into TRS-80 Disk BASIC.
The TRS-80 Model 4 had a newer disk-based BASIC that utilized the BASIC-80 5.x core, which included support for 40-character variable names. Thus the ability to crunch program lines (without spaces between keywords and arguments) was no longer possible as it had been in Level II. It was no longer necessary to reserve string space. New features included user defined functions (DEF FN) and access to TRSDOS 6 system functions via a SYSTEM keyword. A modified version published later by OS provider Logical Systems, in the LS-DOS Version 6.3 update, added single-letter access to BASIC control functions (like LIST and EDIT) and direct access to LS-DOS supervisor calls. The program edit environment was still line-oriented. The facility available in Level II to sort arrays (CMD"O") was not available; programmers and users had to devise their own workarounds.
BASIC-86
The first implementation as a standalone disk based language system was for Seattle Computer Products S-100 bus 8086 CPU card in 1979. It was utilizing an 8-bit FAT file system.
Microsoft also offered a version of Standalone BASIC-86 for SBC-86/12 for Intel's 8086 Single Board Computer platform in 1980.
Texas Instruments BASIC
This is the version of BASIC used on Texas Instruments' TI-99/4A computer line. Although very similar to Microsoft BASIC, TI-99/4 BASIC was not written by Microsoft, as was widely rumored. According to TI engineer H. Schuurman: 'They (in the form of Bob Greenberg of Microsoft) were contracted to develop BASIC for the SR-70 (which is also sometimes referred to as the 99/7), but the BASIC for the 99/4 was developed in-house.' TI-99/4 BASIC was based on Dartmouth BASIC and complies with the American National Standard for Minimal BASIC (ANSI X3.60-1978).
6502 BASIC
Microsoft ported BASIC-80 to the 6502 during the summer of 1976; it was mostly a straight port of the 8K version of BASIC-80 and included the same prompts asking for memory size and if the user wanted floating point functions enabled or not (having them active used an extra 135 bytes of memory). The earliest machines to use 6502 BASIC were the OSI Model 500 and KIM-1 in 1977. 6502 BASIC included certain features from Extended BASIC such as user-defined functions and descriptive error messages, but omitted other features like double precision variables and the PRINT USING statement. As compensation for not having double precision variables, Microsoft included 40-bit floating point support instead of BASIC-80's 32-bit floating point and string allocation was dynamic (thus the user did not have to reserve string space like in BASIC-80). However, vendors could still request BASIC with 32-bit floating point for a slightly smaller memory footprint; as one example, Disk BASIC for the Atari 8-bits used 32-bit floating point rather than 40-bit.
Standard features of the 9K version of Microsoft 6502 BASIC included:
GET statement to detect a key press.
Line crunching: program lines do not require any spaces except between the line number and statement.
Only supported variable types are string, single precision, and integer (arrays only).
Long variable names are not supported and only the first two characters are recognized.
Dynamic string allocation.
6502 BASIC lacked a standardized set of commands for disk and printer output; these were up to the vendor to add and varied widely with each implementation.
Later implementations of 6502 Basic (1983–) were improved in many aspects.
While early Commodore machines (VIC-20, C64) had a BASIC very close to 6502 MS BASIC, later Commodore 8-bit machines (the C=264 series, PET and C=128, with BASIC V3.5, V4.0 and V7.0 respectively) had numerous improvements to make BASIC more useful and user friendly:
Disk commands (DIRECTORY, DSAVE, DLOAD, BACKUP, HEADER, SCRATCH, COLLECT, DVERIFY, COPY, DELETE, RENAME, etc.)
Graphics commands (CIRCLE, DRAW, BOX, COLOR (of background, border, etc.), PAINT, SCALE)
Graphics block copy and logical operation with the existing graphical screen (SSHAPE and GSHAPE with OR, AND, XOR, etc.)
Sprite definition, displaying and animation commands on C128, even saving sprites to binaries
Sound commands (VOL, SOUND), later on at C=128 Music commands (ADSR and SID filter programming (ENVELOPE and FILTER), PLAY, TEMPO commands)
Signs of more structured programming: IF–THEN–ELSE, DO–LOOP–WHILE/UNTIL–EXIT, ON–GOSUB
Extended I/O commands for special features: JOY, Function keys
Debugging commands: STOP, CONT, TRON, TROFF, RESUME
Extended handling of character screen: WINDOW
Support easier program development: RENUMBER, NEW, MONITOR, RREG
Spectravideo CompuMate on the Atari 2600's MOS Technology 6507 CPU in 1983
BASIC-68 and BASIC-69
Microsoft catalogs from the 1980s also showed the availability of BASIC-68 and BASIC-69 for the Motorola 6800 and 6809 microprocessors respectively, running the FLEX operating systems, and also mention OEM versions for Perkin-Elmer, Ohio Nuclear, Pertec and Societe Occitane d'Electronique systems.
It seems likely this is what is also the basis for the Microsoft/Epson BASIC in the Epson HX-20 portable computer, which has two Hitachi 6301 CPUs, which are essentially a "souped up" 6801. Most of the core features in BASIC-68 and BASIC-69 were copied directly from BASIC-80.
BASIC-69 was notably also licensed to Tandy, where it formed the nucleus of Color BASIC on the TRS-80 Color Computer. Not to be confused with BASIC09, a very different BASIC created by Microware as the main language for its OS-9, the other OS available on the Color Computer (Microware also wrote version 2.0 of Extended Color BASIC when Microsoft refused to do it). Microsoft BASIC was also included in the Dragon 32 / 64 computers that were built in Wales and enjoyed some limited success in the UK home computer market in the early 1980s. Dragon computers were somewhat compatible with the Tandy TRS-80, as they were built on very similar hardware.
MSX
Microsoft produced a ROM-based MSX BASIC for use in MSX home computers, which used a Z80 processor. This version supported the graphics and sound systems of the MSX computers; some variants also had support for disk drives.
Modern descendants
No variety of Microsoft BASIC (BASICA, GW-BASIC, QuickBasic, QBasic) is currently distributed with Microsoft Windows or DOS. However, versions that will still run on modern machines can be downloaded from various Internet sites or be found on old DOS disks.
The latest incarnation of Microsoft BASIC is Visual Basic .NET, which incorporates some features from C++ and C# and can be used to develop Web forms, Windows forms, console applications and server-based applications. Most .NET code samples are presented in VB.NET as well as C#, and VB.NET continues to be favored by former Visual Basic programmers.
In October 2008, Microsoft released Small Basic. The language has only 14 keywords. Small Basic Version 1.0 (12 June 2011) was released with an updated Microsoft MSDN Web site that included a full teacher curriculum, a Getting Started Guide, and several e-books. Small Basic exists to help students as young as age eight learn the foundations of computer programming and then graduate to Visual Basic via the downloadable software, Visual Studio Express, where they can continue to build on the foundation by learning Visual C#, VB.NET, and Visual C++.
Variants and derivatives of Microsoft BASIC
Altair BASIC (MITS Altair and other S-100 computers)
Amiga BASIC (Commodore Amiga family)
Applesoft BASIC (Apple II family)
Atari Microsoft BASIC I and II (Atari 8-bit family)
BASICA ("BASIC Advanced") (PC DOS, on IBM PC)
Color BASIC (TRS-80 Color Computer)
Commodore BASIC (Commodore 8-bit family, incl C64)
Oric Extended Basic (Oric 8-bit family)
Color BASIC and Disk Extended Color BASIC (TRS-80 Color Computer and Dragon 32/64)
IBM Cassette BASIC (Original IBM PC, built into ROM)
Galaksija BASIC (Galaksija home computer)
GW-BASIC (BASICA for MS-DOS, on PC compatibles)
Microsoft Level III BASIC (Tandy/Radio-Shack TRS-80)
Basic 1.0 (Thomson computer family)
MBASIC (CP/M, on 8080/85 and Z80 based computers)
MS BASIC for Macintosh (Mac OS on Apple Macintosh)
MSX BASIC (MSX standard home computers)
N88-BASIC (NEC PC8801/9801)
N82-BASIC (NEC PC-8201/8201A, TRS-80 Model 100)
QBasic (PC DOS/MS-DOS on IBM PC and compatibles)
QuickBASIC (PC MS-DOS on IBM PC and compatibles)
Small Basic (MS Windows on IBM PC and compatibles)
TRS-80 Level II BASIC (Tandy/Radio-Shack TRS-80)
T-BASIC (Toshiba Pasopia) and T-BASIC7 (Toshiba Pasopia 7)
Visual Basic (classic and .NET) (PC DOS/MS-DOS/MS Windows on IBM PC and compatibles)
Video Technology Basic (Laser 350/500/700)
WordBasic (pre-VBA) (MS Windows)
HP2640 HP2647 Programmable Terminal with AGL graphics extensions
FreeBASIC a free clone of the QuickBasic system
Gambas free implementation inspired by Visual Basic
See also
Locomotive BASIC
Atari BASIC
Integer BASIC
Tiny BASIC
BBC BASIC
Open Letter to Hobbyists
Notes
References
External links
Bill Gates’ Personal Easter Eggs in 8 Bit BASIC
BASIC
BASIC
BASIC programming language family
Computer-related introductions in 1975
Programming languages created in 1975
Antivirus software
Antivirus software, or anti-virus software (abbreviated to AV software), also known as anti-malware, is a computer program used to prevent, detect, and remove malware.
Antivirus software was originally developed to detect and remove computer viruses, hence the name. However, with the proliferation of other malware, antivirus software started to protect from other computer threats. In particular, modern antivirus software can protect users from malicious browser helper objects (BHOs), browser hijackers, ransomware, keyloggers, backdoors, rootkits, trojan horses, worms, malicious LSPs, dialers, fraud tools, adware, and spyware. Some products also include protection from other computer threats, such as infected and malicious URLs, spam, scam and phishing attacks, online identity (privacy), online banking attacks, social engineering techniques, advanced persistent threat (APT), and botnet DDoS attacks.
History
1949–1980 period (pre-antivirus days)
Although the roots of the computer virus date back as early as 1949, when the Hungarian scientist John von Neumann published the "Theory of self-reproducing automata", the first known computer virus appeared in 1971 and was dubbed the "Creeper virus". This computer virus infected Digital Equipment Corporation's (DEC) PDP-10 mainframe computers running the TENEX operating system.
The Creeper virus was eventually deleted by a program created by Ray Tomlinson and known as "The Reaper". Some people consider "The Reaper" the first antivirus software ever written – it may be the case, but it is important to note that the Reaper was actually a virus itself specifically designed to remove the Creeper virus.
The Creeper virus was followed by several other viruses. The first known that appeared "in the wild" was "Elk Cloner", in 1981, which infected Apple II computers.
In 1983, the term "computer virus" was coined by Fred Cohen in one of the first ever published academic papers on computer viruses. Cohen used the term "computer virus" to describe programs that: "affect other computer programs by modifying them in such a way as to include a (possibly evolved) copy of itself." (note that a more recent, and precise, definition of computer virus has been given by the Hungarian security researcher Péter Szőr: "a code that recursively replicates a possibly evolved copy of itself").
The first IBM PC compatible "in the wild" computer virus, and one of the first real widespread infections, was "Brain" in 1986. From then, the number of viruses has grown exponentially. Most of the computer viruses written in the early and mid-1980s were limited to self-reproduction and had no specific damage routine built into the code. That changed when more and more programmers became acquainted with computer virus programming and created viruses that manipulated or even destroyed data on infected computers.
Before internet connectivity was widespread, computer viruses were typically spread by infected floppy disks. Antivirus software came into use, but was updated relatively infrequently. During this time, virus checkers essentially had to check executable files and the boot sectors of floppy disks and hard disks. However, as internet usage became common, viruses began to spread online.
1980–1990 period (early days)
There are competing claims for the innovator of the first antivirus product. Possibly, the first publicly documented removal of an "in the wild" computer virus (i.e. the "Vienna virus") was performed by Bernd Fix in 1987.
In 1987, Andreas Lüning and Kai Figge, who founded G Data Software in 1985, released their first antivirus product for the Atari ST platform. In 1987, the Ultimate Virus Killer (UVK) was also released. This was the de facto industry standard virus killer for the Atari ST and Atari Falcon, the last version of which (version 9.0) was released in April 2004. In 1987, in the United States, John McAfee founded the McAfee company (later part of Intel Security) and, at the end of that year, he released the first version of VirusScan. Also in 1987 (in Czechoslovakia), Peter Paško, Rudolf Hrubý, and Miroslav Trnka created the first version of NOD antivirus.
In 1987, Fred Cohen wrote that there is no algorithm that can perfectly detect all possible computer viruses.
Finally, at the end of 1987, the first two heuristic antivirus utilities were released: Flushot Plus by Ross Greenberg and Anti4us by Erwin Lanting. In his O'Reilly book, Malicious Mobile Code: Virus Protection for Windows, Roger Grimes described Flushot Plus as "the first holistic program to fight malicious mobile code (MMC)."
However, the kind of heuristic used by early AV engines was totally different from those used today. The first product with a heuristic engine resembling modern ones was F-PROT in 1991. Early heuristic engines were based on dividing the binary into different sections: data section, code section (in a legitimate binary, the code section usually starts from the same location). Indeed, the initial viruses re-organized the layout of the sections, or overwrote the initial portion of a section in order to jump to the very end of the file where malicious code was located—only going back to resume execution of the original code. This was a very specific pattern, not used at the time by any legitimate software, which represented an elegant heuristic to catch suspicious code. Other kinds of more advanced heuristics were later added, such as suspicious section names, incorrect header size, regular expressions, and partial pattern in-memory matching.
In 1988, the growth of antivirus companies continued. In Germany, Tjark Auerbach founded Avira (H+BEDV at the time) and released the first version of AntiVir (named "Luke Filewalker" at the time). In Bulgaria, Vesselin Bontchev released his first freeware antivirus program (he later joined FRISK Software). Frans Veldman also released the first version of ThunderByte Antivirus, also known as TBAV (he sold his company to Norman Safeground in 1998). In Czechoslovakia, Pavel Baudiš and Eduard Kučera started avast! (at the time ALWIL Software) and released their first version of avast! antivirus. In June 1988, in South Korea, Ahn Cheol-Soo released his first antivirus software, called V1 (he founded AhnLab later in 1995). Finally, in the autumn of 1988, in the United Kingdom, Alan Solomon founded S&S International and created his Dr. Solomon's Anti-Virus Toolkit (although he launched it commercially only in 1991 – in 1998 Solomon's company was acquired by McAfee). In November 1988 a professor at the Panamerican University in Mexico City named Alejandro E. Carriles copyrighted the first antivirus software in Mexico under the name "Byte Matabichos" (Byte Bugkiller) to help solve the rampant virus infestation among students.
Also in 1988, a mailing list named VIRUS-L was started on the BITNET/EARN network where new viruses and the possibilities of detecting and eliminating viruses were discussed. Some members of this mailing list were: Alan Solomon, Eugene Kaspersky (Kaspersky Lab), Friðrik Skúlason (FRISK Software), John McAfee (McAfee), Luis Corrons (Panda Security), Mikko Hyppönen (F-Secure), Péter Szőr, Tjark Auerbach (Avira) and Vesselin Bontchev (FRISK Software).
In 1989, in Iceland, Friðrik Skúlason created the first version of F-PROT Anti-Virus (he founded FRISK Software only in 1993). Meanwhile in the United States, Symantec (founded by Gary Hendrix in 1982) launched its first Symantec antivirus for Macintosh (SAM). SAM 2.0, released March 1990, incorporated technology allowing users to easily update SAM to intercept and eliminate new viruses, including many that didn't exist at the time of the program's release.
At the end of the 1980s, in the United Kingdom, Jan Hruska and Peter Lammer founded the security firm Sophos and began producing their first antivirus and encryption products. In the same period, in Hungary, VirusBuster was also founded (it was later incorporated into Sophos).
1990–2000 period (emergence of the antivirus industry)
In 1990, in Spain, Mikel Urizarbarrena founded Panda Security (Panda Software at the time). In Hungary, the security researcher Péter Szőr released the first version of Pasteur antivirus. In Italy, Gianfranco Tonello created the first version of VirIT eXplorer antivirus, then founded TG Soft one year later.
In 1990, the Computer Antivirus Research Organization (CARO) was founded. In 1991, CARO released the "Virus Naming Scheme", originally written by Friðrik Skúlason and Vesselin Bontchev. Although this naming scheme is now outdated, it remains the only existing standard that most computer security companies and researchers ever attempted to adopt. CARO members include: Alan Solomon, Costin Raiu, Dmitry Gryaznov, Eugene Kaspersky, Friðrik Skúlason, Igor Muttik, Mikko Hyppönen, Morton Swimmer, Nick FitzGerald, Padgett Peterson, Peter Ferrie, Righard Zwienenberg and Vesselin Bontchev.
In 1991, in the United States, Symantec released the first version of Norton AntiVirus. In the same year, in the Czech Republic, Jan Gritzbach and Tomáš Hofer founded AVG Technologies (Grisoft at the time), although they released the first version of their Anti-Virus Guard (AVG) only in 1992. On the other hand, in Finland, F-Secure (founded in 1988 by Petri Allas and Risto Siilasmaa – with the name of Data Fellows) released the first version of their antivirus product. F-Secure claims to be the first antivirus firm to establish a presence on the World Wide Web.
In 1991, the European Institute for Computer Antivirus Research (EICAR) was founded to further antivirus research and improve development of antivirus software.
In 1992, in Russia, Igor Danilov released the first version of SpiderWeb, which later became Dr. Web.
In 1994, AV-TEST reported that there were 28,613 unique malware samples (based on MD5) in their database.
Over time other companies were founded. In 1996, in Romania, Bitdefender was founded and released the first version of Anti-Virus eXpert (AVX). In 1997, in Russia, Eugene Kaspersky and Natalya Kaspersky co-founded security firm Kaspersky Lab.
In 1996, there was also the first "in the wild" Linux virus, known as "Staog".
In 1999, AV-TEST reported that there were 98,428 unique malware samples (based on MD5) in their database.
2000–2005 period
In 2000, Rainer Link and Howard Fuhs started the first open source antivirus engine, called OpenAntivirus Project.
In 2001, Tomasz Kojm released the first version of ClamAV, the first ever open source antivirus engine to be commercialised. In 2007, ClamAV was bought by Sourcefire, which in turn was acquired by Cisco Systems in 2013.
In 2002, in the United Kingdom, Morten Lund and Theis Søndergaard co-founded the antivirus firm BullGuard.
In 2005, AV-TEST reported that there were 333,425 unique malware samples (based on MD5) in their database.
2005–2014 period
In 2007, AV-TEST reported 5,490,960 new unique malware samples (based on MD5) for that year alone. In 2012 and 2013, antivirus firms reported new malware samples ranging from 300,000 to over 500,000 per day.
Over the years it has become necessary for antivirus software to use several different strategies (e.g. specific email and network protection or low level modules) and detection algorithms, as well as to check an increasing variety of files, rather than just executables, for several reasons:
Powerful macros used in word processor applications, such as Microsoft Word, presented a risk. Virus writers could use the macros to write viruses embedded within documents. This meant that computers could now also be at risk from infection by opening documents with hidden attached macros.
The possibility of embedding executable objects inside otherwise non-executable file formats can make opening those files a risk.
Later email programs, in particular Microsoft's Outlook Express and Outlook, were vulnerable to viruses embedded in the email body itself. A user's computer could be infected by just opening or previewing a message.
In 2005, F-Secure was the first security firm to develop an anti-rootkit technology, called BlackLight.
Because most users are usually connected to the Internet on a continual basis, Jon Oberheide first proposed a Cloud-based antivirus design in 2008.
In February 2008 McAfee Labs added the industry-first cloud-based anti-malware functionality to VirusScan under the name Artemis. It was tested by AV-Comparatives in February 2008 and officially unveiled in August 2008 in McAfee VirusScan.
Cloud AV created problems for comparative testing of security software – part of the AV definitions was out of the testers' control (on constantly updated AV company servers), thus making results non-repeatable. As a result, the Anti-Malware Testing Standards Organisation (AMTSO) started working on a method of testing cloud products, which was adopted on May 7, 2009.
In 2011, AVG introduced a similar cloud service, called Protective Cloud Technology.
2014–present (rise of next-gen)
Following the 2013 release of the APT 1 report from Mandiant, the industry has seen a shift towards signature-less approaches to the problem capable of detecting and mitigating zero-day attacks. Numerous approaches to address these new forms of threats have appeared, including behavioral detection, artificial intelligence, machine learning, and cloud-based file detonation. According to Gartner, it is expected that the rise of new entrants, such as Carbon Black, Cylance and CrowdStrike, will force EPP incumbents into a new phase of innovation and acquisition. One method from Bromium involves micro-virtualization to protect desktops from malicious code execution initiated by the end user. Another approach from SentinelOne and Carbon Black focuses on behavioral detection by building a full context around every process execution path in real time, while Cylance leverages an artificial intelligence model based on machine learning. Increasingly, these signature-less approaches have been defined by the media and analyst firms as "next-generation" antivirus and are seeing rapid market adoption as certified antivirus replacement technologies by firms such as Coalfire and DirectDefense. Traditional antivirus vendors such as Trend Micro, Symantec and Sophos have responded by incorporating "next-gen" offerings into their portfolios as analyst firms such as Forrester and Gartner have called traditional signature-based antivirus "ineffective" and "outdated".
Identification methods
One of the few solid theoretical results in the study of computer viruses is Frederick B. Cohen's 1987 demonstration that there is no algorithm that can perfectly detect all possible viruses. However, using different layers of defense, a good detection rate may be achieved.
There are several methods which antivirus engines can use to identify malware:
Sandbox detection: a particular behavioural-based detection technique that, instead of detecting the behavioural fingerprint at run time, executes the program in a virtual environment, logging what actions the program performs. Depending on the actions logged, the antivirus engine can determine if the program is malicious or not. If not, the program is then executed in the real environment. Although this technique has been shown to be quite effective, given its heaviness and slowness, it is rarely used in end-user antivirus solutions.
Data mining techniques: one of the latest approaches applied in malware detection. Data mining and machine learning algorithms are used to try to classify the behaviour of a file (as either malicious or benign) given a series of file features that are extracted from the file itself.
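As an illustration of the data-mining approach just described, the following minimal sketch trains a decision-tree classifier on a handful of static file features and uses it to label an unknown sample. The feature set, values and labels are invented for illustration and are not taken from any real product or dataset; scikit-learn is assumed to be available.

from sklearn.tree import DecisionTreeClassifier

# Each sample: [file size in KB, number of sections, entropy x 100, import count]
training_features = [
    [120, 4, 512, 80],    # benign
    [340, 5, 498, 150],   # benign
    [95, 9, 790, 4],      # malicious (packed: high entropy, few imports)
    [60, 11, 810, 2],     # malicious
]
training_labels = ["benign", "benign", "malicious", "malicious"]

classifier = DecisionTreeClassifier(random_state=0)
classifier.fit(training_features, training_labels)

# Classify an unknown file by the same static features.
unknown_sample = [[70, 10, 800, 3]]
print(classifier.predict(unknown_sample))  # expected: ['malicious']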
Signature-based detection
Traditional antivirus software relies heavily upon signatures to identify malware.
Substantially, when a malware sample arrives in the hands of an antivirus firm, it is analysed by malware researchers or by dynamic analysis systems. Then, once it is determined to be malware, a proper signature of the file is extracted and added to the signatures database of the antivirus software.
Although the signature-based approach can effectively contain malware outbreaks, malware authors have tried to stay a step ahead of such software by writing "oligomorphic", "polymorphic" and, more recently, "metamorphic" viruses, which encrypt parts of themselves or otherwise modify themselves as a method of disguise, so as to not match virus signatures in the dictionary.
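A minimal sketch of the simplest form of signature matching described above – comparing a file's cryptographic digest against a database of known-malware digests. The digest value and the file name are placeholders, not real data, and the exact-match nature of the lookup is precisely what polymorphic and metamorphic code is written to defeat.

import hashlib

# Placeholder digest; a real signature database would hold millions of entries.
KNOWN_MALWARE_SHA256 = {
    "9f2c08f1e6b6d9f0c1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4",
}

def is_known_malware(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):  # read in 64 KB chunks
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_MALWARE_SHA256

print(is_known_malware("suspect.bin"))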
Heuristics
Many viruses start as a single infection and through either mutation or refinements by other attackers, can grow into dozens of slightly different strains, called variants. Generic detection refers to the detection and removal of multiple threats using a single virus definition.
For example, the Vundo trojan has several family members, depending on the antivirus vendor's classification. Symantec classifies members of the Vundo family into two distinct categories, Trojan.Vundo and Trojan.Vundo.B.
While it may be advantageous to identify a specific virus, it can be quicker to detect a virus family through a generic signature or through an inexact match to an existing signature. Virus researchers find common areas that all viruses in a family share uniquely and can thus create a single generic signature. These signatures often contain non-contiguous code, using wildcard characters where differences lie. These wildcards allow the scanner to detect viruses even if they are padded with extra, meaningless code. A detection that uses this method is said to be "heuristic detection."
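A minimal sketch of such a generic, wildcard-based signature, expressed here as a byte-level regular expression. The byte pattern is an invented placeholder rather than a signature for any real virus family; it shows how two fixed byte runs can be matched even when variants insert padding between them.

import re

GENERIC_SIGNATURE = re.compile(
    rb"\x55\x8b\xec"    # fixed bytes shared by all variants of the family
    rb".{0,16}"         # wildcard: up to 16 arbitrary padding bytes
    rb"\xcd\x21",       # fixed bytes that follow in every variant
    re.DOTALL,
)

def matches_family(data):
    return GENERIC_SIGNATURE.search(data) is not None

# A variant padded with extra, meaningless bytes still matches.
sample = b"\x55\x8b\xec" + b"\x90" * 7 + b"\xcd\x21"
print(matches_family(sample))  # True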
Rootkit detection
Anti-virus software can attempt to scan for rootkits. A rootkit is a type of malware designed to gain administrative-level control over a computer system without being detected. Rootkits can change how the operating system functions and in some cases can tamper with the anti-virus program and render it ineffective. Rootkits are also difficult to remove, in some cases requiring a complete re-installation of the operating system.
Real-time protection
Real-time protection, on-access scanning, background guard, resident shield, autoprotect, and other synonyms refer to the automatic protection provided by most antivirus, anti-spyware, and other anti-malware programs. This monitors computer systems for suspicious activity such as computer viruses, spyware, adware, and other malicious objects. Real-time protection detects threats in opened files and scans apps in real time as they are installed on the device, for example when a CD is inserted, an email is opened, the web is browsed, or when a file already on the computer is opened or executed.
Issues of concern
Unexpected renewal costs
Some commercial antivirus software end-user license agreements include a clause that the subscription will be automatically renewed, and the purchaser's credit card automatically billed, at the renewal time without explicit approval. For example, McAfee requires users to unsubscribe at least 60 days before the expiration of the present subscription while BitDefender sends notifications to unsubscribe 30 days before the renewal. Norton AntiVirus also renews subscriptions automatically by default.
Rogue security applications
Some apparent antivirus programs are actually malware masquerading as legitimate software, such as WinFixer, MS Antivirus, and Mac Defender.
Problems caused by false positives
A "false positive" or "false alarm" is when antivirus software identifies a non-malicious file as malware. When this happens, it can cause serious problems. For example, if an antivirus program is configured to immediately delete or quarantine infected files, as is common on Microsoft Windows antivirus applications, a false positive in an essential file can render the Windows operating system or some applications unusable. Recovering from such damage to critical software infrastructure incurs technical support costs and businesses can be forced to close whilst remedial action is undertaken.
Examples of serious false-positives:
May 2007: a faulty virus signature issued by Symantec mistakenly removed essential operating system files, leaving thousands of PCs unable to boot.
May 2007: the executable file required by Pegasus Mail on Windows was falsely detected by Norton AntiVirus as being a Trojan and it was automatically removed, preventing Pegasus Mail from running. Norton AntiVirus had falsely identified three releases of Pegasus Mail as malware, and would delete the Pegasus Mail installer file when that happened, prompting a public response from Pegasus Mail.
April 2010: McAfee VirusScan detected svchost.exe, a normal Windows binary, as a virus on machines running Windows XP with Service Pack 3, causing a reboot loop and loss of all network access.
December 2010: a faulty update on the AVG anti-virus suite damaged 64-bit versions of Windows 7, rendering them unable to boot due to an endless boot loop.
October 2011: Microsoft Security Essentials (MSE) removed the Google Chrome web browser, rival to Microsoft's own Internet Explorer. MSE flagged Chrome as a Zbot banking trojan.
September 2012: Sophos' anti-virus suite identified various update mechanisms, including its own, as malware. If it was configured to automatically delete detected files, Sophos Antivirus could render itself unable to update, requiring manual intervention to fix the problem.
September 2017: the Google Play Protect anti-virus started identifying Motorola's Moto G4 Bluetooth application as malware, causing Bluetooth functionality to become disabled.
System and interoperability related issues
Running (the real-time protection of) multiple antivirus programs concurrently can degrade performance and create conflicts. However, using a concept called multiscanning, several companies (including G Data Software and Microsoft) have created applications which can run multiple engines concurrently.
It is sometimes necessary to temporarily disable virus protection when installing major updates such as Windows Service Packs or updating graphics card drivers. Active antivirus protection may partially or completely prevent the installation of a major update. Anti-virus software can cause problems during the installation of an operating system upgrade, e.g. when upgrading to a newer version of Windows "in place"—without erasing the previous version of Windows. Microsoft recommends that anti-virus software be disabled to avoid conflicts with the upgrade installation process. Active anti-virus software can also interfere with a firmware update process.
The functionality of a few computer programs can be hampered by active anti-virus software. For example, TrueCrypt, a disk encryption program, states on its troubleshooting page that anti-virus programs can conflict with TrueCrypt and cause it to malfunction or operate very slowly. Anti-virus software can impair the performance and stability of games running in the Steam platform.
Support issues also exist around antivirus application interoperability with common solutions like SSL VPN remote access and network access control products. These technology solutions often have policy assessment applications that require an up-to-date antivirus to be installed and running. If the antivirus application is not recognized by the policy assessment, whether because the antivirus application has been updated or because it is not part of the policy assessment library, the user will be unable to connect.
Effectiveness
Studies in December 2007 showed that the effectiveness of antivirus software had decreased in the previous year, particularly against unknown or zero day attacks. The computer magazine c't found that detection rates for these threats had dropped from 40-50% in 2006 to 20–30% in 2007. At that time, the only exception was the NOD32 antivirus, which managed a detection rate of 68%. According to the ZeuS tracker website the average detection rate for all variants of the well-known ZeuS trojan is as low as 40%.
The problem is magnified by the changing intent of virus authors. Some years ago it was obvious when a virus infection was present. At the time, viruses were written by amateurs and exhibited destructive behavior or pop-ups. Modern viruses are often written by professionals, financed by criminal organizations.
In 2008, Eva Chen, CEO of Trend Micro, stated that the anti-virus industry has over-hyped how effective its products are—and so has been misleading customers—for years.
Independent testing on all the major virus scanners consistently shows that none provides 100% virus detection. The best ones provided as high as 99.9% detection for simulated real-world situations, while the lowest provided 91.1% in tests conducted in August 2013. Many virus scanners produce false positive results as well, identifying benign files as malware.
Although methods may differ, some notable independent quality testing agencies include AV-Comparatives, ICSA Labs, West Coast Labs, Virus Bulletin, AV-TEST and other members of the Anti-Malware Testing Standards Organization.
New viruses
Anti-virus programs are not always effective against new viruses, even those that use non-signature-based methods that should detect new viruses. The reason for this is that the virus designers test their new viruses on the major anti-virus applications to make sure that they are not detected before releasing them into the wild.
Some new viruses, particularly ransomware, use polymorphic code to avoid detection by virus scanners, as noted by Jerome Segura, a security analyst with ParetoLogic.
A proof of concept virus has used the Graphics Processing Unit (GPU) to avoid detection from anti-virus software. The potential success of this involves bypassing the CPU in order to make it much harder for security researchers to analyse the inner workings of such malware.
Rootkits
Detecting rootkits is a major challenge for anti-virus programs. Rootkits have full administrative access to the computer and are invisible to users and hidden from the list of running processes in the task manager. Rootkits can modify the inner workings of the operating system and tamper with antivirus programs.
Damaged files
If a file has been infected by a computer virus, anti-virus software will attempt to remove the virus code from the file during disinfection, but it is not always able to restore the file to its undamaged state. In such circumstances, damaged files can only be restored from existing backups or shadow copies (this is also true for ransomware); installed software that is damaged requires re-installation (however, see System File Checker).
Firmware infections
Any writeable firmware in the computer can be infected by malicious code. This is a major concern, as an infected BIOS could require the actual BIOS chip to be replaced to ensure the malicious code is completely removed. Anti-virus software is not effective at protecting firmware and the motherboard BIOS from infection. In 2014, security researchers discovered that USB devices contain writeable firmware which can be modified with malicious code (dubbed "BadUSB"), which anti-virus software cannot detect or prevent. The malicious code can run undetected on the computer and could even infect the operating system prior to it booting up.
Performance and other drawbacks
Antivirus software has some drawbacks, the first of which is that it can impact a computer's performance.
Furthermore, inexperienced users can be lulled into a false sense of security when using the computer, considering their computers to be invulnerable, and may have problems understanding the prompts and decisions that antivirus software presents them with. An incorrect decision may lead to a security breach. If the antivirus software employs heuristic detection, it must be fine-tuned to minimize misidentifying harmless software as malicious (false positive).
Antivirus software itself usually runs at the highly trusted kernel level of the operating system to allow it access to all potentially malicious processes and files, creating a potential avenue of attack. The US National Security Agency (NSA) and the UK Government Communications Headquarters (GCHQ) intelligence agencies have been exploiting anti-virus software to spy on users. Anti-virus software has highly privileged and trusted access to the underlying operating system, which makes it a much more appealing target for remote attacks. Additionally, anti-virus software is "years behind security-conscious client-side applications like browsers or document readers. It means that Acrobat Reader, Microsoft Word or Google Chrome are harder to exploit than 90 percent of the anti-virus products out there", according to Joxean Koret, a researcher with Coseinc, a Singapore-based information security consultancy.
Alternative solutions
Antivirus software running on individual computers is the most commonly employed method of guarding against malware, but it is not the only solution. Other solutions can also be employed by users, including Unified Threat Management (UTM), hardware and network firewalls, cloud-based antivirus and online scanners.
Hardware and network firewall
Network firewalls prevent unknown programs and processes from accessing the system. However, they are not antivirus systems and make no attempt to identify or remove anything. They may protect against infection from outside the protected computer or network, and limit the activity of any malicious software which is present by blocking incoming or outgoing requests on certain TCP/IP ports. A firewall is designed to deal with broader system threats that come from network connections into the system and is not an alternative to a virus protection system.
Cloud antivirus
Cloud antivirus is a technology that uses lightweight agent software on the protected computer, while offloading the majority of data analysis to the provider's infrastructure.
One approach to implementing cloud antivirus involves scanning suspicious files using multiple antivirus engines. This approach was proposed by an early implementation of the cloud antivirus concept called CloudAV. CloudAV was designed to send programs or documents to a network cloud where multiple antivirus and behavioral detection programs are used simultaneously in order to improve detection rates. Parallel scanning of files using potentially incompatible antivirus scanners is achieved by spawning a virtual machine per detection engine and therefore eliminating any possible issues. CloudAV can also perform "retrospective detection," whereby the cloud detection engine rescans all files in its file access history when a new threat is identified thus improving new threat detection speed. Finally, CloudAV is a solution for effective virus scanning on devices that lack the computing power to perform the scans themselves.
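A minimal sketch of the multi-engine idea behind CloudAV as described above: the client submits a sample and collects verdicts from several independent engines, flagging the file if any engine detects it. The engine functions here are hypothetical stand-ins for real detection engines, and real services use far more sophisticated aggregation policies.

def cloud_scan(sample, engines):
    """Run every engine on the sample and collect the verdicts."""
    return {name: engine(sample) for name, engine in engines.items()}

def is_malicious(verdicts):
    """Simple policy: flag the sample if any engine flags it."""
    return any(verdicts.values())

# Two toy engines standing in for real antivirus and behavioural engines.
engines = {
    "engine_a": lambda data: b"EVIL-MARKER" in data,
    "engine_b": lambda data: len(data) > 1024 * 1024,  # e.g. suspiciously large
}
verdicts = cloud_scan(b"MZ...EVIL-MARKER...", engines)
print(verdicts, is_malicious(verdicts))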
Some examples of cloud anti-virus products are Panda Cloud Antivirus and Immunet. Comodo Group has also produced cloud-based anti-virus.
Online scanning
Some antivirus vendors maintain websites with free online scanning capability of the entire computer, critical areas only, local disks, folders or files. Periodic online scanning is a good idea for those that run antivirus applications on their computers because those applications are frequently slow to catch threats. One of the first things that malicious software does in an attack is disable any existing antivirus software and sometimes the only way to know of an attack is by turning to an online resource that is not installed on the infected computer.
Specialized tools
Virus removal tools are available to help remove stubborn infections or certain types of infection. Examples include Avast Free Anti-Malware, AVG Free Malware Removal Tools, and Avira AntiVir Removal Tool. It is also worth noting that sometimes antivirus software can produce a false positive result, indicating an infection where there is none.
A rescue disk that is bootable, such as a CD or USB storage device, can be used to run antivirus software outside of the installed operating system, in order to remove infections while they are dormant. A bootable antivirus disk can be useful when, for example, the installed operating system is no longer bootable or has malware that is resisting all attempts to be removed by the installed antivirus software. Examples of some of these bootable disks include the Bitdefender Rescue CD, Kaspersky Rescue Disk 2018, and Windows Defender Offline (integrated into Windows 10 since the Anniversary Update). Most of the Rescue CD software can also be installed onto a USB storage device, that is bootable on newer computers.
Usage and risks
According to an FBI survey, major businesses lose $12 million annually dealing with virus incidents. A survey by Symantec in 2009 found that a third of small to medium-sized businesses did not use antivirus protection at that time, whereas more than 80% of home users had some kind of antivirus installed. According to a sociological survey conducted by G Data Software in 2010, 49% of women did not use any antivirus program at all.
See also
Anti-virus and anti-malware software
CARO, the Computer Antivirus Research Organization
Comparison of antivirus software
Comparison of computer viruses
EICAR, the European Institute for Computer Antivirus Research
Firewall software
Internet security
Linux malware
Quarantine (computing)
Sandbox (computer security)
Timeline of computer viruses and worms
Virus hoax
Citations
General bibliography
Utility software types
Z/VM
z/VM is the current version in IBM's VM family of virtual machine operating systems. z/VM was first released in October 2000 and remains in active use and development. It is directly based on technology and concepts dating back to the 1960s, with IBM's CP/CMS on the IBM System/360-67 (see article History of CP/CMS for historical details). z/VM runs on IBM's IBM Z family of computers. It can be used to support large numbers (thousands) of Linux virtual machines. (See Linux on IBM Z.)
On October 16, 2018, IBM released z/VM Version 7.1 which requires z/Architecture, implemented in IBM's EC12, BC12 and later models.
See also
z/OS
OpenSolaris for System z
z/TPF
z/VSE
PR/SM
Time-sharing system evolution
References
Citations
External links
IBM z/VM Evaluation Edition (free download)
Virtualization software
IBM mainframe operating systems
List of computer system emulators
This article lists software and hardware that emulates computing platforms.
The host in this article is the system running the emulator, and the guest is the system being emulated.
The list is organized by guest operating system (the system being emulated), grouped by word length. Each section contains a list of emulators capable of emulating the specified guest, details of the range of guest systems able to be emulated, and the required host environment and licensing.
64-bit guest systems
AlphaServer
IBM
Silicon Graphics
UltraSPARC
x86-64 platforms (64-bit PC and compatible hardware)
60-bit guest systems
60-bit CDC 6000 series and Cyber mainframe
48-bit guest systems
English Electric KDF9
36-bit guest systems
DEC PDP-10
GE-600 series
IBM 7094
32-bit guest systems
32-bit IBM mainframe
Acorn Archimedes, A7000, RiscPC, Phoebe
While the ARM processor in the Acorn Archimedes is a 32-bit chip, it only had 26-bit addressing, making an ARM/Archimedes emulator such as Aemulor (or others below) necessary for 26-bit compatibility, since later ARM processors have mostly dropped it.
Amiga
Apple Lisa
Apple Macintosh with 680x0 CPU
Apple Macintosh with PowerPC CPU
Atari ST/STE/Falcon
AT&T UNIX PC
Cobalt Qube
Corel NetWinder
DEC VAX
DECstation
Motorola 88000
Sharp X68000
Sinclair QL
SPARCstation
x86 platforms (32-bit PC and compatible hardware)
24-bit guest systems
ICL 1900
SDS 900-series
20-bit guest systems
GE-200 series
PERQ
18-bit guest systems
DEC PDP-1
DEC PDP-4/7/9/15
16-bit guest systems
Apple IIGS
NEC PC-9800 series
DEC PDP-11
Mera 400
The Mera 400 was a Polish minicomputer. A hardware emulator in FPGA is also in development.
Texas Instruments TI-99/4 and TI-99/4A
Texas Instruments TI-980
Texas Instruments TI-990
Varian Data Machines
x86-16 IBM PC/XT/AT compatible
12-bit guest systems
DEC PDP-8
8-bit guest systems
Acorn Atom
Acorn Electron
Altair 8800
Amstrad CPC
Apple-1
Apple II
Apple ///
Atari 8-bit family
BBC Micro
Commodore 64
Commodore Plus/4
Commodore VIC-20
Enterprise 64/128
Fairlight CMI IIx
Jupiter ACE
Mattel Aquarius
MicroBee
MSX
NEC PC-8800 series
Oric
SAM Coupé
Sharp MZ
Sinclair ZX80
Sinclair ZX81
Sinclair ZX Spectrum and clones
Tandy 1000
Thomson MO5
TRS-80
PDA and smartphone guest systems
Pocket PC
Calculator guest systems
Hewlett-Packard calculators
Texas Instruments calculators
See also
Comparison of platform virtualization software
List of emulators
List of video game emulators
References
Computer system emulators
Android 12
Android 12 is the twelfth major release and 19th version of Android, the mobile operating system developed by the Open Handset Alliance led by Google. The first beta was released on May 18, 2021. Android 12 was released publicly on October 4, 2021, through Android Open Source Project (AOSP) and was released to supported Google Pixel devices on October 19, 2021.
History
Android 12 (internally codenamed Snow Cone) was announced in an Android blog posted on February 18, 2021. A developer preview was released immediately, with two additional ones planned the following two months. After that, four monthly beta releases were planned, beginning in May, the last one of them reaching platform stability in August, with general availability coming shortly after that.
The second developer preview was released on March 17, 2021, followed by a third preview on April 21, 2021. The first beta build was then released on May 18, 2021. It was followed by beta 2 on June 9, 2021, which got a bugfix update to 2.1 on June 23. Then beta 3 was released on July 14, 2021, getting a bugfix update to beta 3.1 on July 26. Beta 4 was released on August 11, 2021. A fifth beta, not planned in the original roadmap, was released on September 8, 2021. Android 12 stable got released on the Android Open Source Project on October 4, getting its public over-the-air rollout on October 19, coinciding with the launch event for the Pixel 6.
Android 12L
In October 2021, Google announced Android 12L, an interim release of Android 12 including improvements specific for foldable phones, tablets, desktop-sized screens and Chromebooks, and modifications to the user interface to tailor it to larger screens. It is planned to launch in early 2022. Developer Preview 1 of Android 12L was released in October 2021, followed by Beta 1 in December 2021, Beta 2 in January 2022, and Beta 3 in February 2022.
Features
User interface
Android 12 introduces a major refresh to the operating system's Material Design language branded as "Material You", which features larger buttons, an increased amount of animation, and a new style for home screen widgets. A feature, internally codenamed "monet", allows the operating system to automatically generate a color theme for system menus and supported apps using the colors of the user's wallpaper. The smart home and Wallet areas added to the power menu on Android 11 have been relocated to the notification shade, while Google Assistant is now activated by holding the power button.
Android 12 also features native support for taking scrolling screenshots.
In addition to the user interface, widgets on Android 12 are also updated with the new Material You design language.
Platform
Performance improvements have been made to system services such as the WindowManager, PackageManager, system server, and interrupts. It also adds accessibility improvements for those who are visually impaired. The Android Runtime has been added to Project Mainline, allowing it to be serviced via Play Store.
Android 12 adds support for spatial audio, and MPEG-H 3D Audio, and will support transcoding of HEVC video for backwards compatibility with apps which do not support it. A "rich content insertion" API eases the ability to transfer formatted text and media between apps, such as via the clipboard. Third party app stores now have the ability to update apps without constantly asking the user for permission.
Privacy
OS-level machine learning functions are sandboxed within the "Android Private Compute Core", which is expressly prohibited from accessing networks.
Apps requesting location data can now be restricted to having access only to "approximate" location data rather than "precise". Controls to prevent apps from using the camera and microphone system-wide have been added to the quick settings toggles. An indicator will also be displayed on-screen if they are active.
See also
iOS 15
macOS Monterey
Windows 11
References
External links
Video: 60+ changes in Android 12
Android (operating system)
2021 software
Open Shortest Path First
Open Shortest Path First (OSPF) is a routing protocol for Internet Protocol (IP) networks. It uses a link state routing (LSR) algorithm and falls into the group of interior gateway protocols (IGPs), operating within a single autonomous system (AS).
OSPF gathers link state information from available routers and constructs a topology map of the network. The topology is presented as a routing table to the Internet Layer for routing packets by their destination IP address. OSPF supports Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) networks and supports the Classless Inter-Domain Routing (CIDR) addressing model.
OSPF is widely used in large enterprise networks. IS-IS, another LSR-based protocol, is more common in large service provider networks.
Originally designed in the 1980s, OSPF is defined for IPv4 in protocol version 2 by RFC 2328 (1998). The updates for IPv6 are specified as OSPF Version 3 in RFC 5340 (2008).
Concepts
OSPF is an interior gateway protocol (IGP) for routing Internet Protocol (IP) packets within a single routing domain, such as an autonomous system. It gathers link state information from available routers and constructs a topology map of the network. The topology is presented as a routing table to the Internet Layer which routes packets based solely on their destination IP address.
OSPF detects changes in the topology, such as link failures, and converges on a new loop-free routing structure within seconds. It computes the shortest-path tree for each route using a method based on Dijkstra's algorithm. The OSPF routing policies for constructing a route table are governed by link metrics associated with each routing interface. Cost factors may be the distance of a router (round-trip time), data throughput of a link, or link availability and reliability, expressed as simple unitless numbers. This provides a dynamic process of traffic load balancing between routes of equal cost.
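A minimal sketch of the shortest-path-first computation described above, using Dijkstra's algorithm over a toy link-state database represented as a cost-weighted adjacency map. Router names and link costs are illustrative only.

import heapq

def shortest_paths(lsdb, source):
    """Return the lowest total cost from the source router to every reachable router."""
    dist = {source: 0}
    visited = set()
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        for neighbour, link_cost in lsdb.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(heap, (new_cost, neighbour))
    return dist

lsdb = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
print(shortest_paths(lsdb, "R1"))  # {'R1': 0, 'R2': 10, 'R3': 1, 'R4': 11}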
OSPF divides the network into routing areas to simplify administration and optimize traffic and resource utilization. Areas are identified by 32-bit numbers, expressed either simply in decimal, or often in the same octet-based dot-decimal notation used for IPv4 addresses. By convention, area 0 (zero), or 0.0.0.0, represents the core or backbone area of an OSPF network. While the identifications of other areas may be chosen at will, administrators often select the IP address of a main router in an area as the area identifier. Each additional area must have a connection to the OSPF backbone area. Such connections are maintained by an interconnecting router, known as an area border router (ABR). An ABR maintains separate link-state databases for each area it serves and maintains summarized routes for all areas in the network.
OSPF runs over Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6), but does not use a transport protocol, such as UDP or TCP. It encapsulates its data directly in IP packets with protocol number 89. This is in contrast to other routing protocols, such as the Routing Information Protocol (RIP) and the Border Gateway Protocol (BGP). OSPF implements its own transport error detection and correction functions. OSPF uses multicast addressing for distributing route information within a broadcast domain. It reserves the multicast addresses 224.0.0.5 (IPv4) and FF02::5 (IPv6) for all SPF/link state routers (AllSPFRouters) and 224.0.0.6 (IPv4) and FF02::6 (IPv6) for all Designated Routers (AllDRouters). For non-broadcast networks, special provisions for configuration facilitate neighbor discovery. OSPF multicast IP packets never traverse IP routers; they never travel more than one hop. The protocol may therefore be considered a link layer protocol, but is often also attributed to the application layer in the TCP/IP model. It has a virtual link feature that can be used to create an adjacency tunnel across multiple hops. OSPF over IPv4 can operate securely between routers, optionally using a variety of authentication methods to allow only trusted routers to participate in routing. OSPFv3 (IPv6) relies on standard IPv6 protocol security (IPsec), and has no internal authentication methods.
For routing IP multicast traffic, OSPF supports the Multicast Open Shortest Path First (MOSPF) protocol. Cisco does not include MOSPF in their OSPF implementations. Protocol Independent Multicast (PIM) in conjunction with OSPF or other IGPs, is widely deployed.
OSPF version 3 introduces modifications to the IPv4 implementation of the protocol. Except for virtual links, all neighbor exchanges use IPv6 link-local addressing exclusively. The IPv6 protocol runs per link, rather than based on the subnet. All IP prefix information has been removed from the link-state advertisements and from the hello discovery packet making OSPFv3 essentially protocol-independent. Despite the expanded IP addressing to 128 bits in IPv6, area and router Identifications are still based on 32-bit numbers.
Router relationships
OSPF supports complex networks with multiple routers, including backup routers, to balance traffic load on multiple links to other subnets. Neighboring routers in the same broadcast domain or at each end of a point-to-point link communicate with each other via the OSPF protocol. Routers form adjacencies when they have detected each other. This detection is initiated when a router identifies itself in a hello protocol packet. Upon acknowledgment, this establishes a two-way state and the most basic relationship. The routers in an Ethernet or Frame Relay network select a designated router (DR) and a backup designated router (BDR) which act as a hub to reduce traffic between routers. OSPF uses both unicast and multicast transmission modes to send "hello" packets and link-state updates.
As a link-state routing protocol, OSPF establishes and maintains neighbor relationships for exchanging routing updates with other routers. The neighbor relationship table is called an adjacency database. Two OSPF routers are neighbors if they are members of the same subnet and share the same area ID, subnet mask, timers and authentication. In essence, OSPF neighborship is a relationship between two routers that allows them to see and understand each other but nothing more. OSPF neighbors do not exchange any routing information – the only packets they exchange are hello packets. OSPF adjacencies are formed between selected neighbors and allow them to exchange routing information. Two routers must first be neighbors; only then can they become adjacent. Two routers become adjacent if at least one of them is a designated router or backup designated router (on multiaccess-type networks), or if they are interconnected by a point-to-point or point-to-multipoint network type. To form a neighbor relationship, the interfaces used to form the relationship must be in the same OSPF area. While an interface may be configured to belong to multiple areas, this is generally not practiced. When configured in a second area, an interface must be configured as a secondary interface.
Operation modes
The OSPF can have different operation modes on the following setups on an interface/network:
Broadcast (default): each router advertises itself by periodically multicasting hello packets, and designated routers are used. Packets are sent as multicast.
Non-broadcast multi-access: designated routers are used, and static configuration of neighbors may be needed. Packets are sent as unicast.
Point-to-multipoint: OSPF treats neighbours as a collection of point-to-point links. No designated router is elected. Separate hello packets are sent to each neighbor. Packets are sent as multicast.
Point-to-point: each router advertises itself by periodically multicasting hello packets. No designated router is elected. The interface can be IP unnumbered (without assigning a unique IP address to it). Packets are sent as multicast.
Virtual links: packets are sent as unicast. A virtual link can only be configured on a non-backbone area (but not a stub area). The endpoints need to be ABRs, and the virtual link behaves as an unnumbered point-to-point connection. The cost of the intra-area path between the two routers is added to the link.
Virtual link over Generic Routing Encapsulation (GRE): since OSPF does not support virtual links for areas other than the backbone, a workaround is to use GRE over the backbone area. Note that if the same IP address or router ID is used, the link creates two equal-cost routes to the destination.
Sham link: a link that connects sites that belong to the same OSPF area and share an OSPF backdoor link via an MPLS VPN backbone.
Adjacency state machine
Each OSPF router within a network communicates with other neighboring routers on each connecting interface to establish the states of all adjacencies. Every such communication sequence is a separate conversation identified by the pair of router IDs of the communicating neighbors. RFC 2328 specifies the protocol for initiating these conversations (Hello Protocol) and for establishing full adjacencies (Database Description Packets, Link State Request Packets). During its course, each router conversation transitions through a maximum of eight conditions defined by a state machine:
Down: The state down represents the initial state of a conversation when no information has been exchanged and retained between routers with the Hello Protocol.
Attempt: The Attempt state is similar to the Down state, except that the router is actively trying to establish a conversation with the other router; it is only used on NBMA networks.
Init: The Init state indicates that a HELLO packet has been received from a neighbor, but the router has not established a two-way conversation.
2-Way: The 2-Way state indicates the establishment of a bidirectional conversation between two routers. This state immediately precedes the establishment of adjacency. This is the lowest state of a router that may be considered as a Designated Router.
ExStart: The ExStart state is the first step of adjacency of two routers.
Exchange: In the Exchange state, a router is sending its link-state database information to the adjacent neighbor. At this state, a router is able to exchange all OSPF routing protocol packets.
Loading: In the Loading state, a router requests the most recent link-state advertisements (LSAs) from its neighbor discovered in the previous state.
Full: The Full state concludes the conversation when the routers are fully adjacent, and the state appears in all router- and network-LSAs. The link state databases of the neighbors are fully synchronized.
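A minimal sketch of the state progression listed above: the eight states as an ordered enumeration, plus the events that drive the normal forward path toward a full adjacency. This is a simplification – the full RFC 2328 machine has many more events, including ones that regress a conversation to an earlier state – and the event names are informal approximations.

from enum import IntEnum

class NeighborState(IntEnum):
    DOWN = 0
    ATTEMPT = 1
    INIT = 2
    TWO_WAY = 3
    EXSTART = 4
    EXCHANGE = 5
    LOADING = 6
    FULL = 7

# Forward transitions only; keys are (current state, event).
FORWARD_TRANSITIONS = {
    (NeighborState.DOWN, "hello_received"): NeighborState.INIT,
    (NeighborState.INIT, "two_way_received"): NeighborState.TWO_WAY,
    (NeighborState.TWO_WAY, "adjacency_ok"): NeighborState.EXSTART,
    (NeighborState.EXSTART, "negotiation_done"): NeighborState.EXCHANGE,
    (NeighborState.EXCHANGE, "exchange_done"): NeighborState.LOADING,
    (NeighborState.LOADING, "loading_done"): NeighborState.FULL,
}

def next_state(state, event):
    """Advance on a known forward event; otherwise stay in the current state."""
    return FORWARD_TRANSITIONS.get((state, event), state)

state = NeighborState.DOWN
for event in ("hello_received", "two_way_received", "adjacency_ok",
              "negotiation_done", "exchange_done", "loading_done"):
    state = next_state(state, event)
print(state)  # NeighborState.FULL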
OSPF areas
A network is divided into OSPF areas that are logical groupings of hosts and networks. An area includes its connecting router having an interface for each connected network link. Each router maintains a separate link-state database for the area whose information may be summarized towards the rest of the network by the connecting router. Thus, the topology of an area is unknown outside the area. This reduces the routing traffic between parts of an autonomous system.
OSPF can handle thousands of routers; the greater concern is reaching the capacity of the forwarding information base (FIB) table when the network contains many routes and lower-end devices. Modern low-end routers have a full gigabyte of RAM, which allows them to handle many routers in an area 0. Many resources refer to OSPF guides from over 20 years ago, when it was impressive to have 64 MB of RAM.
Areas are uniquely identified with 32-bit numbers. The area identifiers are commonly written in the dot-decimal notation, familiar from IPv4 addressing. However, they are not IP addresses and may duplicate, without conflict, any IPv4 address. The area identifiers for IPv6 implementations (OSPFv3) also use 32-bit identifiers written in the same notation. When dotted formatting is omitted, most implementations expand area 1 to the area identifier 0.0.0.1, but some have been known to expand it as 1.0.0.0.
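A minimal sketch of the identifier notation discussed above, converting between a 32-bit area number and its dotted form. It shows the usual expansion of "area 1" to 0.0.0.1 as well as the 1.0.0.0 value that some implementations have been known to produce.

def area_to_dotted(area_id):
    """Render a 32-bit area identifier in dotted notation."""
    return ".".join(str((area_id >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def dotted_to_area(dotted):
    """Parse a dotted area identifier back into a 32-bit number."""
    a, b, c, d = (int(part) for part in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(area_to_dotted(1))          # 0.0.0.1 (the usual expansion of "area 1")
print(area_to_dotted(1 << 24))    # 1.0.0.0 (what some implementations produce instead)
print(dotted_to_area("0.0.0.1"))  # 1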
Several vendors (Cisco, Allied Telesis, Juniper, Alcatel-Lucent, Huawei, Quagga), implement Totally stubby and NSSA totally stubby area for stub and not-so-stubby areas. Although not covered by RFC standards, they are considered by many to be standard features in OSPF implementations.
OSPF defines several area types:
Backbone
Non-Backbone/regular
Stub,
Totally stubby
Not-so-stubby
Totally Not-so-stubby
Transit.
Backbone area
The backbone area (also known as area 0 or area 0.0.0.0) forms the core of an OSPF network. All other areas are connected to it, either directly or through other routers. OSPF requires this to prevent routing loops. Inter-area routing happens via routers connected to the backbone area and to their own associated areas. It is the logical and physical structure for the 'OSPF domain' and is attached to all nonzero areas in the OSPF domain. Note that in OSPF the term Autonomous System Boundary Router (ASBR) is historic, in the sense that many OSPF domains can coexist in the same Internet-visible autonomous system (see RFC 1930, 1996).
All OSPF areas must connect to the backbone area. This connection, however, can be through a virtual link. For example, assume area 0.0.0.1 has a physical connection to area 0.0.0.0. Further assume that area 0.0.0.2 has no direct connection to the backbone, but this area does have a connection to area 0.0.0.1. Area 0.0.0.2 can use a virtual link through the transit area 0.0.0.1 to reach the backbone. To be a transit area, an area has to have the transit attribute, so it cannot be stubby in any way.
Regular area
A regular area is just a non-backbone (nonzero) area without specific feature, generating and receiving summary and external LSAs. The backbone area is a special type of such area.
Transit area
A transit area is an area with two or more OSPF border routers and is used to pass network traffic from one adjacent area to another. The transit area does not originate this traffic and is not the destination of such traffic. The backbone area is a special type of transit area.
Examples of this:
Backbone area
OSPF requires all areas to be directly connected to the backbone area; if they are not, virtual links have to be used, and the area that a virtual link transits is called a transit area.
Stub area
In hello packets the E flag is not set, indicating "External routing: not capable".
A stub area is an area that does not receive route advertisements external to the AS and routing from within the area is based entirely on a default route. An ABR deletes type 4, 5 LSAs from internal routers, sends them a default route of 0.0.0.0 and turns itself into a default gateway. This reduces LSDB and routing table size for internal routers.
Modifications to the basic concept of stub area have been implemented by systems vendors, such as the totally stubby area (TSA) and the not-so-stubby area (NSSA), both an extension in Cisco Systems routing equipment.
Totally stubby area
A totally stubby area is similar to a stub area. However, this area does not allow summary routes in addition to not having external routes, that is, inter-area (IA) routes are not summarized into totally stubby areas. The only way for traffic to get routed outside the area is a default route which is the only Type-3 LSA advertised into the area. When there is only one route out of the area, fewer routing decisions have to be made by the route processor, which lowers system resource utilization.
Occasionally, it is said that a TSA can have only one ABR.
Not-so-stubby area
In hello packets the N flag is set, indicating "NSSA: supported".
A not-so-stubby area (NSSA) is a type of stub area that can import autonomous system external routes and send them to other areas, but still cannot receive AS-external routes from other areas.
NSSA is an extension of the stub area feature that allows the injection of external routes in a limited fashion into the stub area. A case study simulates an NSSA getting around the Stub Area problem of not being able to import external addresses. It visualizes the following activities: the ASBR imports external addresses with a type 7 LSA, the ABR converts a type 7 LSA to type 5 and floods it to other areas, the ABR acts as an "ASBR" for other areas.
The ASBRs do not take type 5 LSAs and then convert to type 7 LSAs for the area.
Totally Not-so-stubby area
An addition to the standard functionality of an NSSA, the totally stubby NSSA is an NSSA that takes on the attributes of a TSA, meaning that type 3 and 4 summary routes are not flooded into this type of area. It is also possible to declare an area both totally stubby and not-so-stubby, which means that the area will receive only the default route from area 0.0.0.0, but can also contain an autonomous system boundary router (ASBR) that accepts external routing information and injects it into the local area, and from the local area into area 0.0.0.0.
Redistribution into an NSSA area creates a special type of LSA known as type 7, which can exist only in an NSSA area. An NSSA ASBR generates this LSA, and an NSSA ABR router translates it into type 5 LSA which gets propagated into the OSPF domain.
A newly acquired subsidiary is one example of where it might be suitable for an area to be simultaneously not-so-stubby and totally stubby if the practical place to put an ASBR is on the edge of a totally stubby area. In such a case, the ASBR does send externals into the totally stubby area, and they are available to OSPF speakers within that area. In Cisco's implementation, the external routes can be summarized before injecting them into the totally stubby area. In general, the ASBR should not advertise default into the TSA-NSSA, although this can work with extremely careful design and operation, for the limited special cases in which such an advertisement makes sense.
By declaring the totally stubby area as NSSA, no external routes from the backbone, except the default route, enter the area being discussed. The externals do reach area 0.0.0.0 via the TSA-NSSA, but no routes other than the default route enter the TSA-NSSA. Routers in the TSA-NSSA send all traffic to the ABR, except to routes advertised by the ASBR.
Router types
OSPF defines the following overlapping categories of routers:
Internal router (IR): an internal router has all its interfaces belonging to the same area.
Area border router (ABR): an area border router is a router that connects one or more areas to the main backbone network. It is considered a member of all areas it is connected to. An ABR keeps multiple instances of the link-state database in memory, one for each area to which that router is connected.
Backbone router (BR): a backbone router has an interface to the backbone area. Backbone routers may also be area routers, but do not have to be.
Autonomous system boundary router (ASBR): an autonomous system boundary router is a router that is connected by using more than one routing protocol and that exchanges routing information with routers in other autonomous systems. ASBRs typically also run an exterior routing protocol (e.g., BGP), or use static routes, or both. An ASBR is used to distribute routes received from other, external ASs throughout its own autonomous system. An ASBR creates External LSAs for external addresses and floods them to all areas via ABRs. Routers in other areas use ABRs as next hops to access external addresses. Then ABRs forward packets to the ASBR that announces the external addresses.
The router type is an attribute of an OSPF process. A given physical router may have one or more OSPF processes. For example, a router that is connected to more than one area, and which receives routes from a BGP process connected to another AS, is both an area border router and an autonomous system boundary router.
Each router has an identifier, customarily written in the dotted-decimal format (e.g., 1.2.3.4) of an IP address. This identifier must be established in every OSPF instance. If not explicitly configured, the highest logical IP address will be used as the router identifier. However, since the router identifier is not an IP address, it does not have to be a part of any routable subnet in the network, and often isn't, in order to avoid confusion.
Non-point-to-point network
On multi-access networks (routers in the same subnet) with more than two OSPF routers, a system of designated router (DR) and backup designated router (BDR) is used to reduce network traffic by providing a single source for routing updates.
This is done using two multicast addresses:
224.0.0.5 (AllSPFRouters): all OSPF routers in the topology listen on this multicast address.
224.0.0.6 (AllDRouters): the DR and BDR listen on this multicast address.
The DR and BDR maintain a complete topology table of the network and send the updates to the other routers via multicast. All routers in a multi-access network segment will form a slave/master relationship with the DR and BDR. They will form adjacencies with the DR and BDR only. Every time a router sends an update, it sends it to the DR and BDR on the multicast address 224.0.0.6. The DR will then send the update out to all other routers in the area, on the multicast address 224.0.0.5. This way all the routers do not have to constantly update each other, and can instead get all their updates from a single source. The use of multicasting further reduces the network load. DRs and BDRs are always set up/elected on OSPF broadcast networks. DRs can also be elected on NBMA (non-broadcast multi-access) networks such as Frame Relay or ATM. DRs or BDRs are not elected on point-to-point links (such as a point-to-point WAN connection) because the two routers on either side of the link must become fully adjacent and the bandwidth between them cannot be further optimized. DR and non-DR routers evolve from 2-way to full adjacency relationships by exchanging Database Description, Link State Request, and Link State Update packets.
Designated router
A designated router (DR) is the router interface elected among all routers on a particular multiaccess network segment, generally assumed to be broadcast multiaccess. Special techniques, often vendor-dependent, may be needed to support the DR function on non-broadcast multiaccess (NBMA) media. It is usually wise to configure the individual virtual circuits of an NBMA subnet as individual point-to-point lines; the techniques used are implementation-dependent.
Backup designated router
A backup designated router (BDR) is a router that becomes the designated router if the current designated router has a problem or fails. The BDR is the OSPF router with the second-highest priority at the time of the last election.
A given router can have some interfaces that are designated (DR) and others that are backup designated (BDR), and others that are non-designated. If no router is a DR or a BDR on a given subnet, the BDR is first elected, and then a second election is held for the DR.
Designated router election
The DR is elected based on the following default criteria (a minimal sketch of this election logic follows the list):
If the priority setting on an OSPF router is set to 0, that means it can NEVER become a DR or BDR.
When a DR fails and the BDR takes over, there is another election to see who becomes the replacement BDR.
The router sending the Hello packets with the highest priority wins the election.
If two or more routers tie with the highest priority setting, the router sending the Hello with the highest RID (router ID) wins. Note: the RID is the highest logical (loopback) IP address configured on a router; if no logical/loopback IP address is set, the router uses the highest IP address configured on its active interfaces (e.g. 192.168.0.1 would be higher than 10.1.1.1).
Usually the router with the second-highest priority number becomes the BDR.
Priority values range from 0 to 255, with a higher value increasing the chances of becoming DR or BDR.
If a higher priority OSPF router comes online after the election has taken place, it will not become DR or BDR until (at least) the DR and BDR fail.
If the current DR 'goes down', the current BDR becomes the new DR and a new election takes place to find another BDR. If the new DR then 'goes down' and the original DR is now available, the previously elected BDR will still become the DR; the original DR does not automatically reclaim its role.
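A minimal sketch of this election logic, in Python with hypothetical router records (an illustration only, not an actual OSPF implementation; it also ignores the non-preemptive behaviour described above, where an existing DR keeps its role even if a higher-priority router later appears):

def elect_dr_and_bdr(routers):
    # Routers are dicts with 'priority' (0-255) and 'rid' (dotted-decimal router ID).
    # Priority 0 routers can never become DR or BDR; ties are broken by highest RID.
    def rid_value(dotted):
        a, b, c, d = (int(x) for x in dotted.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    eligible = [r for r in routers if r["priority"] > 0]
    eligible.sort(key=lambda r: (r["priority"], rid_value(r["rid"])), reverse=True)
    dr = eligible[0] if eligible else None
    bdr = eligible[1] if len(eligible) > 1 else None
    return dr, bdr

routers = [
    {"rid": "10.0.0.1", "priority": 1},
    {"rid": "10.0.0.2", "priority": 1},
    {"rid": "10.0.0.3", "priority": 0},   # priority 0: can never be DR or BDR
]
dr, bdr = elect_dr_and_bdr(routers)
print(dr["rid"], bdr["rid"])              # 10.0.0.2 becomes DR, 10.0.0.1 becomes BDR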
Protocol messages
Unlike other routing protocols, OSPF does not carry data via a transport protocol, such as the User Datagram Protocol (UDP) or the Transmission Control Protocol (TCP). Instead, OSPF forms IP datagrams directly, packaging them using protocol number 89 for the IP Protocol field. OSPF defines five different message types, for various types of communication. Multiple packets can be sent per frame.
OSPF uses the following five packet types (a minimal sketch of the common OSPFv2 packet header follows the list):
Hello
Database description
Link State Request
Link State Update
Link State Acknowledgement
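As an illustration of the shared packet layout, the 24-byte OSPFv2 packet header (version, type, packet length, router ID, area ID, checksum, authentication type and authentication data) could be packed as in the following Python sketch; the router ID and area ID are hypothetical, and the checksum is left at zero rather than computed:

import socket
import struct

OSPF_PACKET_TYPES = {1: "Hello", 2: "Database Description", 3: "Link State Request",
                     4: "Link State Update", 5: "Link State Acknowledgment"}

def ospfv2_header(pkt_type, length, router_id, area_id):
    # 24-byte OSPFv2 header; checksum and authentication are left zeroed in this sketch.
    return struct.pack(
        "!BBH4s4sHH8s",
        2,                            # version 2
        pkt_type,                     # one of the five packet types above
        length,                       # total packet length including this header
        socket.inet_aton(router_id),  # router ID in dotted-decimal form
        socket.inet_aton(area_id),    # area ID in dotted-decimal form
        0,                            # checksum (not computed here)
        0,                            # AuType 0 = null authentication
        b"\x00" * 8,                  # authentication data
    )

header = ospfv2_header(1, 44, "1.2.3.4", "0.0.0.0")
print(len(header), OSPF_PACKET_TYPES[1])   # 24 Hello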
Hello Packet
OSPF's Hello messages are used as a form of greeting, to allow a router to discover other adjacent routers on its local links and networks. The messages establish relationships between neighboring devices (called adjacencies) and communicate key parameters about how OSPF is to be used in the autonomous system or area. During normal operation, routers send hello messages to their neighbors at regular intervals (the hello interval); if a router stops receiving hello messages from a neighbor, after a set period (the dead interval) the router will assume the neighbor has gone down.
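A minimal sketch of the dead-interval bookkeeping just described, using the commonly cited broadcast-network defaults of a 10-second hello interval and a 40-second dead interval (an illustration only, not an actual implementation):

import time

HELLO_INTERVAL = 10   # seconds between hello packets (common broadcast default)
DEAD_INTERVAL = 40    # neighbor is assumed down after this much silence

class Neighbor:
    def __init__(self, router_id):
        self.router_id = router_id
        self.last_hello = time.monotonic()

    def hello_received(self):
        # Reset the inactivity timer whenever a hello arrives from this neighbor.
        self.last_hello = time.monotonic()

    def is_dead(self):
        # True once no hello has been seen for a full dead interval.
        return time.monotonic() - self.last_hello > DEAD_INTERVAL

neighbor = Neighbor("10.0.0.2")
if neighbor.is_dead():
    print("neighbor", neighbor.router_id, "assumed down")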
Database description (DBD)
Database description messages contain descriptions of the topology of the autonomous system or area. They convey the contents of the link-state database (LSDB) for the area from one router to another. Communicating a large LSDB may require several messages to be sent by having the sending device designated as a master device and sending messages in sequence, with the slave (recipient of the LSDB information) responding with acknowledgments.
Link state packets
Link state request (LSR) Link state request messages are used by one router to request updated information about a portion of the LSDB from another router. The message specifies the link(s) for which the requesting device wants more current information.
Link state update (LSU) Link-state update messages contain updated information about the state of certain links on the LSDB. They are sent in response to a link state request message, and also broadcast or multicast by routers on a regular basis. Their contents are used to update the information in the LSDBs of routers that receive them.
Link state acknowledgment (LSAck) Link-state acknowledgment messages provide reliability to the link-state exchange process, by explicitly acknowledging receipt of a Link State Update message.
OSPF v2 area types and accepted LSAs
Not all area types accept all LSA types; for example, stub areas do not accept type 5 external LSAs, and NSSAs carry external information in type 7 LSAs instead.
Routing metrics
OSPF uses path cost as its basic routing metric, which was defined by the standard not to equate to any standard value such as speed, so the network designer could pick a metric important to the design. In practice, it is determined by comparing the speed of the interface to a reference bandwidth for the OSPF process. The cost is determined by dividing the reference bandwidth by the interface speed (although the cost for any interface can be manually overridden), and any computed cost less than 1 is rounded up to 1. For example, if the reference bandwidth is set to 10,000 (Mbit/s), a 10 Gbit/s link has a cost of 1, a 1 Gbit/s link a cost of 10, and a 100 Mbit/s link a cost of 100.
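A minimal sketch of this calculation in Python (the interface speeds are example values in Mbit/s):

def ospf_cost(reference_bandwidth_mbps, interface_speed_mbps):
    # Cost = reference bandwidth / interface speed, rounded up to at least 1.
    return max(reference_bandwidth_mbps // interface_speed_mbps, 1)

print(ospf_cost(10000, 10000))  # 10 Gbit/s link  -> cost 1
print(ospf_cost(10000, 1000))   # 1 Gbit/s link   -> cost 10
print(ospf_cost(10000, 100))    # 100 Mbit/s link -> cost 100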
A type 1 (router) LSA carries the metric in a 16-bit field (maximum value 65,535), while a type 3 (summary) LSA uses a 24-bit field (maximum value 16,777,215).
OSPF is a layer 3 protocol: if a layer 2 switch is between the two devices running OSPF, one side may negotiate a speed different from the other side. This can create an asymmetric routing on the link (Router 1 to Router 2 could cost '1' and the return path could cost '10'), which may lead to unintended consequences.
Metrics, however, are only directly comparable when of the same type. Four types of metrics are recognized. In decreasing preference, these types are (for example, an intra-area route is always preferred to an external route regardless of metric; a minimal sketch of this ordering follows the list):
Intra-area
Inter-area
External Type 1, which includes both the external path cost and the sum of internal path costs to the ASBR that advertises the route.
External Type 2, the value of which is solely that of the external path cost.
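A minimal sketch of this ordering, using hypothetical route records rather than the data structures of any real OSPF implementation:

# Lower rank is more preferred; the metric only breaks ties within the same type.
TYPE_RANK = {"intra-area": 0, "inter-area": 1, "external-1": 2, "external-2": 3}

def preference_key(route):
    return (TYPE_RANK[route["type"]], route["metric"])

routes = [
    {"type": "external-2", "metric": 5},
    {"type": "intra-area", "metric": 400},
    {"type": "inter-area", "metric": 20},
]
best = min(routes, key=preference_key)
print(best)   # the intra-area route wins despite its larger metric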
OSPF v3
OSPF version 3 introduces modifications to the IPv4 implementation of the protocol.
Despite the expanded IP addressing to 128-bits in IPv6, area and router identifications are still based on 32-bit numbers.
High-level changes
Except for virtual links, all neighbor exchanges use IPv6 link-local addressing exclusively. OSPFv3 runs per link rather than per IP subnet.
All IP prefix information has been removed from the link-state advertisements and from the hello discovery packet making OSPFv3 essentially protocol-independent.
Three separate flooding scopes for LSAs:
Link-local scope, LSA is only flooded on the local link and no further.
Area scope, LSA is flooded throughout a single OSPF area.
AS scope, LSA is flooded throughout the routing domain.
Use of IPv6 link-local addresses for neighbor discovery and auto-configuration.
Authentication has been moved to the IP Authentication Header.
Changes introduced in OSPF v3, then backported by vendors to v2
Explicit Support for Multiple Instances per Link
Packet Format Changes
OSPF version number changed to 3
The Options field has been removed from the LSA header.
In Hello packets and Database Description packets, the Options field has been widened from 16 to 24 bits.
In the Hello packet, the address information has been removed and the Interface ID has been added.
In router-LSAs, two option bits, the "R-bit" and the "V6-bit", have been added.
The "R-bit" allows multi-homed hosts to participate in the routing protocol.
The "V6-bit" specializes the R-bit.
An "Instance ID" has been added that allows multiple OSPF protocol instances on the same logical interface.
LSA Format Changes
The LSA Type field has been expanded to 16 bits.
Support for handling unknown LSA types has been added.
Three bits are used for encoding the flooding scope.
With IPv6, addresses in LSAs are now expressed as a prefix and prefix length.
In router-LSAs and network-LSAs, the address information is removed.
Router-LSAs and network-LSAs are made network-protocol independent.
A new LSA type, the link-LSA, has been added. A link-LSA provides the router's link-local address to all other routers attached to the link, lists the IPv6 prefixes to associate with the link, and can carry information reflecting the router's capabilities.
LSA Type-3 summary-LSAs have been renamed "inter-area-prefix-LSAs".
LSA Type-4 summary LSAs have been renamed "inter-area-router-LSAs".
An intra-area-prefix-LSA has been added, an LSA that carries all IPv6 prefix information.
OSPF over MPLS-VPN
A customer can use OSPF over a MPLS-VPN, where the service provider uses BGP or RIP as their interior gateway protocol.
When using OSPF over MPLS-VPN, the VPN backbone becomes part of the OSPF backbone area 0. In all areas, isolated copies of the IGP are run.
Advantages:
The MPLS-VPN is transparent to the customer's OSPF standard routing.
Customer's equipment only needs to support OSPF.
Reduces the need for tunnels (Generic Routing Encapsulation, IPsec, WireGuard) to run OSPF.
To achieve this, a non-standard OSPF-BGP redistribution is used. All OSPF routes retain the source LSA type and metric.
To prevent loops, an optional DN bit is used in LSAs to indicate that a route has already been sent from the provider edge to the customer's equipment.
OSPF extensions
Traffic engineering
OSPF-TE is an extension to OSPF extending the expressivity to allow for traffic engineering and use on non-IP networks. Using OSPF-TE, more information about the topology can be exchanged using opaque LSA carrying type–length–value elements. These extensions allow OSPF-TE to run completely out of band of the data plane network. This means that it can also be used on non-IP networks, such as optical networks.
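A minimal sketch of encoding a single type-length-value element of the kind carried in opaque LSAs: a 2-byte type, a 2-byte length, and the value padded to a 4-byte boundary. The type code and value below are arbitrary examples, not real OSPF-TE assignments:

import struct

def encode_tlv(tlv_type, value):
    # 2-byte type, 2-byte length of the value, then the value padded to 4-byte alignment.
    header = struct.pack("!HH", tlv_type, len(value))
    padding = b"\x00" * ((4 - len(value) % 4) % 4)
    return header + value + padding

tlv = encode_tlv(99, struct.pack("!f", 125000000.0))   # made-up type carrying a bandwidth value
print(tlv.hex())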
OSPF-TE is used in GMPLS networks as a means to describe the topology over which GMPLS paths can be established. GMPLS uses its own path setup and forwarding protocols, once it has the full network map.
In the Resource Reservation Protocol (RSVP), OSPF-TE is used for recording and flooding RSVP signaled bandwidth reservations for label switched paths within the link-state database.
Optical routing
IETF documents describe work on optical routing for IP based on extensions to OSPF and IS-IS.
Multicast Open Shortest Path First
The Multicast Open Shortest Path First (MOSPF) protocol is an extension to OSPF to support multicast routing. MOSPF allows routers to share information about group memberships.
OSPF in broadcast and non-broadcast networks
In broadcast multiple-access networks, neighbor adjacency is formed dynamically using multicast hello packets to 224.0.0.5. A DR and BDR are elected normally, and function normally.
For non-broadcast multiple-access networks (NBMA), the following two official modes are defined:
non-broadcast
point-to-multipoint
Cisco has defined the following three additional modes for OSPF in NBMA topologies:
point-to-multipoint non-broadcast
broadcast
point-to-point
Notable implementations
Allied Telesis implements OSPFv2 & OSPFv3 in Allied Ware Plus (AW+)
Arista Networks implements OSPFv2 and OSPFv3
BIRD implements both OSPFv2 and OSPFv3
Cisco IOS and NX-OS
Cisco Meraki
D-Link implements OSPFv2 on Unified Services Router.
Dell's FTOS implements OSPFv2 and OSPFv3
ExtremeXOS
GNU Zebra, a GPL routing suite for Unix-like systems supporting OSPF
Juniper Junos
NetWare implements OSPF in its Multi Protocol Routing module.
OpenBSD includes OpenOSPFD, an OSPFv2 implementation.
Quagga, a fork of GNU Zebra for Unix-like systems
FRRouting, the successor of Quagga
XORP, a routing suite implementing RFC2328 (OSPFv2) and RFC2740 (OSPFv3) for both IPv4 and IPv6
Windows NT 4.0 Server, Windows 2000 Server and Windows Server 2003 implemented OSPFv2 in the Routing and Remote Access Service, although the functionality was removed in Windows Server 2008.
Applications
OSPF is a widely deployed routing protocol that can converge a network in a few seconds and guarantee loop-free paths. It has many features that allow the imposition of policies about the propagation of routes that it may be appropriate to keep local, for load sharing, and for selective route importing. IS-IS, in contrast, can be tuned for lower overhead in a stable network, the sort more common in ISP than enterprise networks. There are some historical accidents that made IS-IS the preferred IGP for ISPs, but ISPs today may well choose to use the features of the now-efficient implementations of OSPF, after first considering the pros and cons of IS-IS in service provider environments.
OSPF can provide better load-sharing on external links than other IGPs. When the default route to an ISP is injected into OSPF from multiple ASBRs as a Type I external route and the same external cost specified, other routers will go to the ASBR with the least path cost from its location. This can be tuned further by adjusting the external cost. If the default route from different ISPs is injected with different external costs, as a Type II external route, the lower-cost default becomes the primary exit and the higher-cost becomes the backup only.
The only real limiting factor that may compel major ISPs to select IS-IS over OSPF is if they have a network with more than 850 routers.
See also
Fabric Shortest Path First
Mesh networking
Route analytics
Routing
Shortest path problem
References
Further reading
External links
IETF OSPF Working Group
Cisco OSPF
Cisco OSPF Areas and Virtual Links
Summary of OSPF v2
Internet protocols
Internet Standards
Routing protocols
Application layer protocols
Famicom Disk System
The Family Computer Disk System, commonly shortened to the Famicom Disk System or just Disk System, is a peripheral for Nintendo's Family Computer home video game console, released only in Japan on February 21, 1986. It uses proprietary floppy disks called "Disk Cards" for cheaper data storage and it adds a new high-fidelity sound channel for supporting Disk System games.
Fundamentally, the Disk System serves simply to enhance some aspects already inherent to the base Famicom system, with better sound and cheaper games, though with the disadvantages of high initial price, slow speed, and lower reliability. However, this boost to the market of affordable and writable mass storage temporarily served as an enabling technology for the creation of new types of video games. These include the vast, open-world, progress-saving adventures of the best-selling The Legend of Zelda (1986) and Metroid (1986), games with a cost-effective and swift release such as the best-selling Super Mario Bros. 2, and nationwide leaderboards and contests via the in-store Disk Fax kiosks, which are considered to be forerunners of today's online achievement and distribution systems.
By 1989, the Famicom Disk System had inevitably been made obsolete by the improving semiconductor technology of game cartridges. The Disk System's lifetime sales reached 4.4 million units by 1990. Its final game was released in 1992, its disk-writing service was discontinued in 2003, and Nintendo officially discontinued its technical support in 2007.
History
By 1985, Nintendo's Family Computer was dominating the Japanese home video game market, selling over three million units within a year and a half. Because of its success, the company had difficulty keeping up with demand for new stock, often getting flooded with calls from retailers asking for more systems. Retailers also requested cheaper games; the cost of chips and semiconductors made cartridges expensive to make, and often costly for both stores and consumers to purchase. Chip shortages also created supply issues. To satisfy these requests, Nintendo began thinking of ways to potentially lower the cost of games. It turned towards the home computer market for inspiration; Nintendo specifically looked to floppy disks, which were quickly becoming the standard storage media for personal computers. Floppy disks were cheap to produce and rewritable, allowing games to be easily produced during the manufacturing process. Seeing its potential, Nintendo began work on a disk-based peripheral for the Famicom.
Nintendo based its proprietary diskette format, which it dubbed the "Disk Card", on Mitsumi's Quick Disk media format, a cheaper alternative to floppy disks for Japanese home computers. The Disk Card format presented a number of advantages over cartridges, such as increased storage capacity that allowed for larger games, additional sound channels, and the ability to save player progress. The add-on itself was produced by Masayuki Uemura and Nintendo Research & Development 2, the same team that designed the Famicom itself. Following several delays, the Famicom Disk System was released on February 21, 1986, at a retail price of ¥15,000 (US$80). The same day, Nintendo released The Legend of Zelda as a launch title, alongside disk re-releases of earlier Famicom games. Marketing material for the Disk System featured a yellow mascot character named Diskun, or Mr. Disk. The Famicom Disk System sold over 300,000 units within three months, jumping to over 2 million by the end of the year. Nintendo remained confident the Disk System would be a sure-fire success, and ensured that all future first-party releases would be exclusive to the peripheral.
Coinciding with the Disk System's release, Nintendo installed several "Disk Writer" kiosks in various toy and electronic stores across the country. These kiosks allowed customers to bring in their disk games and have a new game rewritten onto them for a ¥500 fee; blank disks could also be purchased for ¥2000. Nintendo also introduced special high-score tournaments for specific Disk System games, where players could submit their scores directly to Nintendo via "Disk Fax" machines found in retail stores. Winners would receive exclusive prizes, including Famicom-branded stationery sets and a gold-colored Punch-Out!! cartridge. Nintendo of America announced plans to release the Disk System for the Famicom's international counterpart, the Nintendo Entertainment System; however, these plans were eventually scrapped.
Despite the Famicom Disk System's success and advantages over the Famicom itself, it also imposed many problems of its own. Most common was the quality of the Disk Cards; Nintendo removed the shutters on most Disk System games to reduce costs, instead placing them in a wax sleeve and clear plastic shell. The disks themselves are fragile, and the lack of a shutter made them collect dust and eventually become unplayable as a result. Piracy was also rampant, with disk copying devices and bootleg games becoming commonplace in stores and in magazine advertisements. Third-party developers for the Disk System were also angered by Nintendo's strict licensing terms, which required that Nintendo receive 50% copyright ownership of any and all software released; this led several major developers, such as Namco and Hudson Soft, to refuse to produce games for it. Four months after the Disk System was released, Capcom released a Famicom conversion of Ghosts 'n Goblins on a 128k cartridge, which made consumers and developers less impressed with the Disk System's technological features. Retailers disliked the Disk Writer kiosks for taking up too much space and for generally being unprofitable. The Disk System's vague error messages, long loading times, and the poor quality of the rubber drive belt that spun the disks are also cited as contributing to its downfall.
By 1989, advancements in technology made cartridge games much cheaper and easier to produce, leaving the Famicom Disk System obsolete. Retailers were critical of Nintendo simply abandoning the Disk Writers and leaving stores with large kiosks that took up vital space, while companies began to release or move their games from the Disk System to a standard cartridge; towards the end of development, Squaresoft ported Final Fantasy over to the Famicom as a cartridge game, with its own battery backup save feature. Nintendo officially discontinued the Famicom Disk System in 1990, selling around 4.4 million units total. Disk-writing services were kept in operation until 2003, while technical support continued until 2007.
Hardware versions
Sharp released the Twin Famicom, a Famicom model that features a built-in Disk System.
Disk Writer and Disk Fax kiosks
Widespread copyright violation in Japan's predominantly personal-computer-based game rental market inspired corporations to petition the government to ban the rental of all video games in 1984. With games then being available only via full purchase, demand rose for a new and less expensive way to access more games. In 1986, as video gaming had increasingly expanded from computers into the video game console market, Nintendo advertised a promise to install 10,000 Famicom Disk Writer kiosks in toy and hobby stores across Japan within one year. These jukebox-style stations allowed users to copy from a rotating stock of the latest games to their disks and keep each one for an unlimited time. Writing an existing disk with a new game from the available roster cost ¥500, then about one sixth of the price of many new games. Instruction sheets were given by the retailer or were available by mail order. Some game releases, such as Kaette Kita Mario Bros. (lit. The Return of Mario Bros.), are exclusive to these kiosks.
In 1987, Disk Writer kiosks in select locations were also provisioned as Disk Fax systems as Nintendo's first online concept. Players could take advantage of the dynamic rewritability of blue floppy disk versions of Disk System games (such as Famicom Grand Prix: F1 Race and Golf Japan Course) in order to save their high scores at their leisure at home, and then bring the disk to a retailer's Disk Fax kiosk, which collated and transmitted the players' scores via facsimile to Nintendo. Players participated in a nationwide leaderboard, with unique prizes.
The kiosk service was very popular and remained available until 2003. In subsequent console generations, Nintendo would relaunch this online national leaderboard concept with the home satellite-based Satellaview subscription service in Japan from 1995 to 2000 for the Super Famicom. It would relaunch the model of games downloadable to rewritable portable media from store kiosks with the Nintendo Power service in Japan, based on rewritable flash media cartridges for the Super Famicom and Game Boy, from 1997 to 2007.
Calling the Disk Writer "one of the coolest things Nintendo ever created", Kotaku says modern "digital distribution could learn from [Disk Writer]", and that the system's premise of game rental and achievements would still be innovative in today's retail and online stores. NintendoLife said it "was truly ground-breaking for its time and could be considered a forerunner of more modern distribution methods [such as] Xbox Live Arcade, PlayStation Network, and Steam".
Technology
The device is connected to the Famicom console by plugging its RAM Adapter cartridge into the system's cartridge port, and attaching that cartridge's cable to the disk drive. The RAM Adapter contains 32 kilobytes (KB) of RAM for temporarily caching program data from disk, 8 KB of RAM for tile and sprite data storage, and an ASIC named the 2C33. The ASIC acts as a disk controller, plus single-cycle wavetable-lookup synthesizer sound hardware. Finally, embedded in the 2C33 is an 8KB BIOS ROM. The Disk Cards used are double-sided, with a total capacity of 112 KB per disk. Many games span both sides of a disk and a few span multiple disks, requiring the user to switch at some point during gameplay. The Disk System is capable of running on six C-cell batteries or the supplied AC adapter. Batteries usually last five months with daily game play. The inclusion of a battery option is due to the likelihood of a standard set of AC plugs already being occupied by a Famicom and a television.
The Disk System's Disk Cards are a somewhat proprietary 71 mm × 76 mm (2.8 in × 3 in), 56 KB-per-side, double-sided floppy format. They are a slight modification of Mitsumi's Quick Disk, a 71 mm (2.8 in) square disk format used in a handful of Japanese computers and various synthesizer keyboards, along with a few word processors. Quick Disk drives are found in a few devices in Europe and North America. Mitsumi already had close relations with Nintendo, as it manufactured the Famicom and NES consoles, and possibly other Nintendo hardware.
Modifications to the standard Quick Disk format include the "NINTENDO" moulding along the bottom of each Disk Card. In addition to branding the disk, this acts as a rudimentary form of copy protection - a device inside the drive bay contains raised protrusions which fit into their recessed counterparts, ostensibly ensuring that only official disks are used. If a disk without these recessed areas is inserted, the protrusions cannot raise, and the system will not allow the game to be loaded. This was combined with technical measures in the way data was stored on the disk to prevent users from physically swapping copied disk media into an official shell. However, both of these measures were defeated by pirate game distributors; in particular, special disks with cutouts alongside simple devices to modify standard Quick Disks were produced to defeat the physical hardware check, enabling rampant piracy. An advertisement containing a guide for a simple modification to a Quick Disk to allow its use with a Famicom Disk System was printed in at least one magazine.
Games
There are about 200 games in the Famicom Disk System's library. Some are FDS exclusives, some are Disk Writer exclusives, and many were re-released years later on the cartridge format such as The Legend of Zelda for NES in 1987 and for Famicom in 1994. The most notable FDS originals include The Legend of Zelda, Zelda II: The Adventure of Link, Kid Icarus, Ice Hockey, and Akumajō Dracula (Castlevania).
Square Co., Ltd. had a branch called Disk Original Group, a software label that published Disk System games from Japanese PC software companies. The venture was largely a failure and almost pushed a pre-Final Fantasy Square into bankruptcy. Final Fantasy was to be released for the FDS, but a disagreement over Nintendo's copyright policies caused Square to change its position and release the game as a cartridge.
Nintendo released a disk version of Super Mario Bros. in addition to the cartridge version. The Western-market Super Mario Bros. 2 originated from a disk-only game called Yume Kōjō: Doki Doki Panic.
Nintendo utilized the cheaper and more dynamic disk medium for a Disk Writer exclusive cobranded advertisement-based game, a genre now called advergames. Kaettekita Mario Bros. (lit. The Return of Mario Bros.) is a remastered version of Mario Bros. with enhanced jump controls and high score saving, plus a new slot machine minigame branded for the Nagatanien food company.
The final FDS game release was Janken Disk Jō in December 1992, a rock paper scissors game featuring the Disk System mascot, Disk-kun.
Legacy
The Famicom Disk System briefly served as an enabling technology for the creation of a new wave of home console video games and a new type of video game experience, mostly due to tripling the size of cheap game storage compared to affordable cartridge ROMs, and by storing gamers' progress within their vast new adventures. These games include the open world design and enduring series launches of The Legend of Zelda (1986) and Metroid (1986), with its launch game Zelda becoming very popular and leading to sequels which are considered some of the greatest games of all time. Almost one decade ahead of Nintendo's Satellaview service, the FDS's writable and portable storage technology served as an enabling technology for the innovation of online leaderboards and contests via the in-store Disk Fax kiosks, which are now seen as the earliest forerunners of modern online gaming and distribution.
Within its library of 200 original games, some are FDS-exclusive and many were re-released one or two years later on cartridges for Famicom and NES, though without the FDS's additional sound channel.
See also
Sega CD - A similar peripheral for the Sega Genesis.
64DD
Notes
References
Nintendo Entertainment System accessories
Video game console add-ons
Video game storage media
Japan-only video game hardware
Computer-related introductions in 1986
BioLinux
BioLinux is a term used in a variety of projects involved in making access to bioinformatics software on a Linux platform easier using one or more of the following methods:
Provision of complete systems
Provision of bioinformatics software repositories
Addition of bioinformatics packages to standard distributions
Live DVD/CDs with bioinformatics software added
Community building and support systems
There are now various projects with similar aims, on both Linux systems and other Unices, and a selection of these are given below. There is also an overview in the Canadian Bioinformatics Helpdesk Newsletter that details some of the Linux-based projects.
Package repositories
Apple/Mac
Many Linux packages are compatible with Mac OS X and there are several projects which attempt to make it easy to install selected Linux packages (including bioinformatics software) on a computer running Mac OS X.
BioArchLinux
The BioArchLinux repository contains more than 3,770 packages for Arch Linux and Arch Linux-based distributions.
Debian
Debian is another very popular Linux distribution in use in many academic institutions, and some bioinformaticians have made their own software packages available for this distribution in the deb format.
Red Hat
Package repositories are generally specific to the distribution of Linux the bioinformatician is using. A number of Linux variants are prevalent in bioinformatics work. Fedora is a freely-distributed version of the commercial Red Hat system. Red Hat is widely used in the corporate world as they offer commercial support and training packages. Fedora Core is a community supported derivative of Red Hat and is popular amongst those who like Red Hat's system but don't require commercial support. Many users of bioinformatics applications have produced RPMs (Red Hat's package format) designed to work with Fedora, which you can potentially also install on Red Hat Enterprise Linux systems. Other distributions such as Mandriva and SUSE use RPMs, so these packages may also work on these distributions.
Slackware
Slackware is one of the less used Linux distributions. It is popular with those who have better knowledge of the Linux operating system and who prefer the command line over the various GUIs available. Packages are in the tgz or tgx format. The most widely known live distribution based on Slackware is Slax and it has been used as a base for many of the bioinformatics distributions.
BioSLAX
Live DVDs/CDs
Live DVDs or CDs are not an ideal way to provide bioinformatics computing, as they run from a CD/DVD drive. This means they are slower than a traditional hard disk installation and have limited ability to be configured. However, they can be suitable for providing ad hoc solutions where no other Linux access is available, and may even be used as the basis for a Linux installation.
Standard distributions with good bioinformatics support
In general, Linux distributions have a wide range of official packages available, but this does not usually include much in the way of scientific support. There are exceptions, such as those detailed below.
Gentoo Linux
Gentoo Linux provides over 156 bioinformatics applications (see the Gentoo sci-biology herd in the main tree) in the form of ebuilds, which build the applications from source code. An additional 315 packages are in the Gentoo science overlay (for testing).
Although a very flexible system with excellent community support, the requirement to install from source means that Gentoo systems are often slow to install, and require considerable maintenance. It is possible to reduce some of the compilation time by using a central server to generate binary packages. On the other hand, building from source allows everything to be tuned to run at the highest speed on the local processor (for example, to actually use the SSE, AVX and AVX2 CPU instructions), whereas binary-based distributions usually provide binaries built only for the i686 or even just the i386 instruction set.
FreeBSD
FreeBSD is not a Linux distribution, but as a version of Unix it is very similar. Its ports are like Gentoo's ebuilds, and the same caveats apply. However, there are also pre-compiled binary packages available. There are over 60 biological sciences applications, and they are listed on the FreshPorts site.
Debian
There are more than a hundred bioinformatics packages provided as part of the standard Debian installation. NEBC Bio-Linux packages can also be installed on a standard Debian system as long as the bio-linux-base package is also installed. This creates a /usr/local/bioinf directory where the other NEBC packages install their software. Debian packages may also work on Ubuntu Linux or other Debian-derived installations.
Community building and support systems
Providing support and documentation should be an important part of any BioLinux project, so that scientists who are not IT specialists may quickly find answers to their specific problems. Support forums or mailing lists are also useful to disseminate knowledge within the research community. Some of these resources are linked to here.
See also
List of open-source bioinformatics software
List of biomedical cybernetics software
References
External links
Bioinformatics software
Linux
Computational science
SunView
SunView (Sun Visual Integrated Environment for Workstations, originally SunTools) was a windowing system from Sun Microsystems developed in the early 1980s. It was included as part of SunOS, Sun's UNIX implementation; unlike later UNIX windowing systems, much of it was implemented in the system kernel. SunView ran on Sun's desktop and deskside workstations, providing an interactive graphical environment for technical computing, document publishing, medical, and other applications of the 1980s, on high resolution monochrome, greyscale and color displays.
Bundled productivity applications
SunView included a full suite of productivity applications, including an email reader, calendaring tool, text editor, clock, preferences, and menu management interface (all GUIs). The idea of shipping such clients and the associated server software with the base OS was several years ahead of the rest of the industry.
Sun’s original SunView application suite was later ported to X, featuring the OPEN LOOK look and feel. Known as the DeskSet productivity tool set, this was one distinguishing element of Sun's OpenWindows desktop environment.
The DeskSet tools became a unifying element at the end of the Unix wars, where the open systems industry was embroiled in a battle which would last for years. As part of the COSE initiative, it was decided that Sun’s bundled applications would be ported yet again, this time to the Motif widget toolkit, and the result would be part of CDE. This became the standard for a time across all open systems vendors.
The full suite of group productivity applications that Sun had bundled with the desktop workstations turned out to be a significant legacy of SunView. While the underlying windowing infrastructure changed, protocols changed, and windowing systems changed, the Sun applications remained largely the same, maintaining interoperability with previous implementations.
Successors
SunView was intended to be superseded by NeWS, a more sophisticated window system based on PostScript; however, the actual successor turned out to be OpenWindows, whose window server supported SunView, NeWS and the X Window System. Support for the display of SunView programs was phased out after Solaris 2.2. Sun provided a toolkit for X called XView, with an API similar to that of SunView, simplifying the transition for developers between the two environments.
Sun later announced its migration to the GNOME desktop environment from CDE, presumably marking the end of the 20-year-plus history of the SunView/DeskSet code base.
Sun Microsystems software
Windowing systems
Widget toolkits
Live USB
A live USB is a portable USB-attached external data storage device containing a full operating system that can be booted from. The term is reminiscent of USB flash drives but may encompass an external hard disk drive or solid-state drive, though they may be referred to as "live HDD" and "live SSD" respectively. They are the evolutionary next step after live CDs, but with the added benefit of writable storage, allowing customizations to the booted operating system. Live USBs can be used in embedded systems for system administration, data recovery, or test driving, and can persistently save settings and install software packages on the USB device.
Many operating systems, including Windows XP Embedded and a large portion of Linux and BSD distributions, can run from a USB flash drive, and Windows 8 Enterprise has a feature titled Windows To Go for a similar purpose.
Background
To repair a computer with booting issues, technicians often use lightweight operating systems on bootable media and a command-line interface. The development of the first live CDs with graphical user interface made it feasible for non-technicians to repair malfunctioning computers. Most Live CDs are Linux-based, and in addition to repairing computers, these would occasionally be used in their own right as operating systems.
Personal computers introduced USB booting in the early 2000s, with the Macintosh computers introducing the functionality in 1999 beginning with the Power Mac G4 with AGP graphics and the slot-loading iMac G3 models. Intel-based Macs carried this functionality over with booting macOS from USB. Specialized USB-based booting was proposed by IBM in 2004 with Reincarnating PCs with Portable SoulPads and Boot Linux from a FireWire device.
Benefits and limitations
Live USBs share many of the benefits and limitations of live CDs, and also incorporate their own.
Benefits
In contrast to live CDs, the data contained on the booting device can be changed and additional data stored on the same device. A user can carry their preferred operating system, applications, configuration, and personal files with them, making it easy to share a single system between multiple users.
Live USBs provide the additional benefit of enhanced privacy because users can easily carry the USB device with them or store it in a secure location (e.g. a safe), reducing the opportunities for others to access their data. On the other hand, a USB device is easily lost or stolen, so data encryption and backup is even more important than with a typical desktop system.
The absence of moving parts in USB flash devices allows true random access, thereby avoiding the rotational latency and seek time of hard drives or optical media, meaning small programs will start faster from a USB flash drive than from a local hard disk or live CD. However, as USB devices typically achieve lower data transfer rates than internal hard drives, booting from older computers that lack support for USB 2.0 or newer can be very slow.
Limitations
LiveUSB OSes like Ubuntu Linux apply all filesystem writes to a casper filesystem overlay (casper-rw) that, once full or out of flash drive space, becomes unusable and the OS ceases to boot.
USB controllers on add-in cards (e.g. ISA, PCI, and PCI-E) are almost never capable of being booted from, so systems that do not have native USB controllers in their chipset (e.g. such as older ones before USB) likely will be unable to boot from USB even when USB is enabled via such an add-in card.
Some computers, particularly older ones, may not have a BIOS that supports USB booting. Many which do support USB booting may still be unable to boot the device in question. In these cases a computer can often be "redirected" to boot from a USB device through use of an initial bootable CD or floppy disk.
Some Intel-based Macintosh computers have limitations when booting from USB devices – while the Extensible Firmware Interface (EFI) firmware can recognize and boot from USB drives, it can do this only in EFI mode. When the firmware switches to "legacy" BIOS mode, it no longer recognizes USB drives. Non-Macintosh systems, notably Windows and Linux, may not be typically booted in EFI mode and thus USB booting may be limited to supported hardware and software combinations that can easily be booted via EFI. However, programs like Mac Linux USB Loader can alleviate the difficulties of the task of booting a Linux-live USB on a Mac. This limitation could be fixed by either changing the Apple firmware to include a USB driver in BIOS mode, or changing the operating systems to remove the dependency on the BIOS.
Due to the additional write cycles that occur on a full-blown installation, the life of the flash drive may be slightly reduced. This does not apply to systems particularly designed for live systems which keep all changes in RAM until the user logs off. A write-locked SD card (known as a Live SD, the solid-state counterpart to a live CD) in a USB flash card reader adapter is an effective way to avoid any duty cycles on the flash medium from writes and circumvent this problem. The SD card as a WORM device has an essentially unlimited life. An OS such as Linux can then run from the live USB/SD card and use conventional media for writing, such as magnetic disks, to preserve system changes.
Setup
Various applications exist to create live USBs; examples include Universal USB Installer, Rufus, Fedora Live USB Creator, and UNetbootin. There are also software applications available that can be used to create a Multiboot live USB; some examples include YUMI Multiboot Bootable USB Creator and Ventoy. A few Linux distributions and live CDs have ready-made scripts which perform the steps below automatically. In addition, on Knoppix and Ubuntu extra applications can be installed, and a persistent file system can be used to store changes. A base install ranges between as little as 16 MiB (Tiny Core Linux) to a large DVD-sized install (4 gigabytes).
To set up a live USB system for commodity PC hardware, the following steps must be taken (a minimal sketch of a common shortcut, writing a prepared hybrid image directly to the drive, follows the list):
A USB flash drive needs to be connected to the system, and be detected by it
One or more partitions may need to be created on the USB flash drive
The "bootable" flag must be set on the primary partition on the USB flash drive
An MBR must be written to the primary partition of the USB flash drive
The partition must be formatted (most often in FAT32 format, but other file systems can be used too)
A bootloader must be installed to the partition (most often using syslinux when installing a Linux system)
A bootloader configuration file (if used) must be written
The necessary files of the operating system and default applications must be copied to the USB flash drive
Language and keyboard files (if used) must be written to the USB flash drive
The BIOS must support booting from USB devices (although there are ways to get around this; actually using a CD or DVD to start the boot can allow the user to choose whether the medium can later be written to, and Write Once Read Many discs give certainty that the live system will be clean the next time it is rebooted).
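In practice, many modern distributions publish "hybrid" images that already contain the partition table, bootloader and filesystem, so most of the steps above can be collapsed into copying the image block-for-block onto the drive. A minimal Python sketch of that shortcut follows; the image filename and device path are placeholders, and writing to the device destroys any data already on it:

import os
import shutil

IMAGE = "distro-live.iso"   # placeholder for a downloaded hybrid live image
DEVICE = "/dev/sdX"         # placeholder target device; double-check before running

# Copy the image byte-for-byte onto the raw device, much like the Unix dd utility.
with open(IMAGE, "rb") as src, open(DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)   # 4 MiB chunks
    dst.flush()
    os.fsync(dst.fileno())   # make sure everything actually reaches the drive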
Knoppix live CDs have a utility that, on boot, allows users to declare their intent to write the operating system's file structures either temporarily, to a RAM disk, or permanently, on disk or flash media to preserve any added configurations and security updates. This can be easier than recreating the USB system but may be moot since many live USB tools are simple to use.
Full installation
An alternative to a live solution is a traditional operating system installation with the elimination of swap partitions. This has the advantage of using space more efficiently, since a live installation still contains software the user has removed, because the operating system's read-only installer image remains on the media. However, a full installation is not without disadvantages; due to the additional write cycles that occur on a full installation, the life of the flash drive may be slightly reduced. To mitigate this, some live systems are designed to store changes in RAM until the user powers down the system, which then writes such changes. Another factor is the speed of the storage device: if the flash drive only delivers low transfer speeds, performance can be comparable to legacy computers even on machines with modern parts. One way to solve this is to use a USB hard drive, as they generally give better performance than flash drives regardless of the connector.
Microsoft Windows
Although many live USBs rely on booting an open-source operating system such as Linux, it is possible to create live USBs for Microsoft Windows by using Diskpart or WinToUSB.
See also
Boot disk
dd (Unix)
Disk cloning
Extensible Firmware Interface
External hard disk
extlinux
initramfs
ISO file
Lightweight Linux distribution
List of live CDs
List of tools to create Live USB systems
List of Linux distributions that run from RAM
Live USB creator
Multiboot Specification
Comparison of Linux Live CDs
Partitionless
Persistence (computer science)
Portable Apps
Portable-VirtualBox
PXE
Self-booting diskette
UNetbootin
Virtualization
References
External links
The Differences Between Persistent Live USB and Full Linux Install on USB
Universal USB Installer
Partitionless Installation
Tutorial – How to Set your BIOS to boot from CD or USB
HOW TO: Create a working Live USB
Debian Live project
How to create a Live USB in Ubuntu
Casper
USB
Headless software
Headless software (e.g. "headless Java" or "headless Linux") is software capable of working on a device without a graphical user interface. Such software receives inputs and provides output through other interfaces like a network or serial port and is common on servers and embedded devices.
The term "headless" is most often used when the ordinary version of the program requires that a graphics card or similar graphical interface device be present. For instance, the absence of a graphic card, mouse or keyboard may cause an initialization process that assumes their presence to fail, or the graphics card may be relied upon to build some offline image that is later served through network.
A headless computer (for example, and most commonly, a server) may be missing many of the system libraries that support the display of graphical interfaces. Software that expects these libraries may fail to start or even to compile if such libraries are not present. Software built on a headless machine must be built within command line tools only, without the aid of an IDE.
Headless websites
Next to headless computers and headless software, the newest form of headless technology can be found in websites. Traditional websites have their own back-end and front-end (graphical user interface). All the pieces work with the same code base and communicate directly with each other, making the website as a whole. In a headless installation, however, the front-end is a stand-alone piece of software which communicates with the back-end through an API. Both parts operate separately from each other, and can even be placed on separate servers, creating a minimum version of a multi-server architecture. The bridge between both parts is the API client, which connects the endpoints of the API to each other.
The biggest advantages of this technology can be found in performance optimisation and flexibility of the software stack.
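As an illustration, a stand-alone front-end typically pulls its content from the back-end over HTTP. The following Python sketch assumes the requests library is installed; the endpoint URL is a made-up placeholder rather than a real API:

import requests

API_URL = "https://backend.example.com/api/pages/home"   # hypothetical content endpoint

response = requests.get(API_URL, timeout=10)
response.raise_for_status()
page = response.json()          # structured content with no HTML or styling attached

# The front-end alone decides how the data it received is presented.
print(page.get("title"), page.get("body"))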
See also
Secure Shell
Headless browser
Headless computer
Headless content management system
References
Software engineering terminology
OrangeFS
OrangeFS is an open-source parallel file system, the next generation of Parallel Virtual File System (PVFS). A parallel file system is a type of distributed file system that distributes file data across multiple servers and provides for concurrent access by multiple tasks of a parallel application. OrangeFS was designed for use in large-scale cluster computing and is used by companies, universities, national laboratories and similar sites worldwide.
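To illustrate the general idea of distributing file data across servers, the following Python sketch stripes a byte string across three purely hypothetical, in-memory "servers" in round-robin fashion; real OrangeFS distribution functions are considerably more sophisticated and configurable:

STRIPE_SIZE = 4                                      # bytes per stripe unit (real systems use far larger units)
SERVERS = [bytearray(), bytearray(), bytearray()]    # stand-ins for three data servers

def write_striped(data):
    # Split the data into stripe units and deal them out to the servers round-robin.
    for i in range(0, len(data), STRIPE_SIZE):
        SERVERS[(i // STRIPE_SIZE) % len(SERVERS)].extend(data[i:i + STRIPE_SIZE])

def read_striped(length):
    # Reassemble the original byte order by reading the stripes back in the same order.
    out = bytearray()
    positions = [0] * len(SERVERS)
    unit = 0
    while len(out) < length:
        s = unit % len(SERVERS)
        out.extend(SERVERS[s][positions[s]:positions[s] + STRIPE_SIZE])
        positions[s] += STRIPE_SIZE
        unit += 1
    return bytes(out[:length])

write_striped(b"parallel file systems stripe data")
print(read_striped(33))                              # b'parallel file systems stripe data'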
Versions and features
2.8.5
Server-to-server communication infrastructure
SSD option for storage of distributed metadata
Full native Windows client support
Replication for immutable files
2.8.6
Direct interface for applications
Client caching for the direct interface with multi-process single-system coherence
Initial release of the webpack supporting WebDAV and S3 via Apache modules
2.8.7
Updates, fixes and performance improvements
2.8.8
Updates, fixes and performance improvements, native Hadoop support via JNI shim, support for newer Linux kernels
2.9
Distributed Metadata for Directory Entries
Capability-based security in 3 modes
Standard security
Key-based security
Certificate-based security with LDAP interface support
Extended documentation
History
OrangeFS emerged as a development branch of PVFS2, so much of its history is shared with the history of PVFS. Spanning twenty years, the extensive history behind OrangeFS is summarized in the time line below.
A development branch is a new direction in development. The OrangeFS branch was begun in 2007, when leaders in the PVFS2 user community determined that:
Many were satisfied with the design goals of PVFS2 and needed it to remain relatively unchanged for future stability
Others envisioned PVFS2 as a foundation on which to build an entirely new set of design objectives for more advanced applications of the future.
This is why OrangeFS is often described as the next generation of PVFS2.
1993
Parallel Virtual File System (PVFS) was developed by Walt Ligon and Eric Blumer under a NASA grant to study I/O patterns of parallel programs. PVFS version 0 was based on the Vesta parallel file system developed at IBM's Thomas J. Watson Research Center, and its name was derived from its development to work on Parallel Virtual Machine (PVM).
1994
Rob Ross rewrote PVFS to use TCP/IP, departing significantly from the original Vesta design. PVFS version 1 was targeted to a cluster of DEC Alpha workstations on FDDI, a predecessor to Fast Ethernet networking. PVFS made significant gains over Vesta in the area of scheduling disk I/O while multiple clients access a common file.
Late 1994
The Goddard Space Flight Center chose PVFS as the file system for the first Beowulf (early implementations of Linux-based commodity computers running in parallel). Ligon and Ross worked with key GSFC developers, including Thomas Sterling, Donald Becker, Dan Ridge, and Eric Hendricks over the next several years.
1997
PVFS released as an open-source package
1999
Ligon proposed the development of a new PVFS version. Initially developed at Clemson University, the design was completed in a joint effort among contributors from Clemson, Argonne National Laboratory and the Ohio Supercomputer Center, including major contributions by Phil Carns, a PhD student at Clemson.
2003
PVFS2 released, featuring object servers, distributed metadata, accommodation of multiple metadata servers, file views based on MPI (Message Passing Interface, a protocol optimized for high performance computing) for multiple network types, and a flexible architecture for easy experimentation and extensibility. PVFS2 becomes an “Open Community” project, with contributions from many universities and companies around the world.
2005
PVFS version 1 was retired. PVFS2 is still supported by Clemson and Argonne. In recent years, various contributors (many of them charter designers and developers) continued to improve PVFS performance.
2007
Argonne National Laboratories chose PVFS2 for its IBM Blue Gene/P, a super computer sponsored by the U.S. Department of Energy.
2008
Ligon and others at Clemson began exploring possibilities for the next generation of PVFS in a roadmap that included the growing needs of mainstream cluster computing in the business sector. As they began developing extensions for supporting large directories of small files, security enhancements, and redundancy capabilities, many of these goals conflicted with development for Blue Gene. With diverging priorities, the PVFS source code was divided into two branches. The branch for the new roadmap became "Orange" in honor of Clemson school colors, and the branch for legacy systems was dubbed "Blue" for its pioneering customer installation at Argonne. OrangeFS became the new open systems brand to represent this next-generation virtual file system, with an emphasis on security, redundancy and a broader range of applications.
Fall 2010
OrangeFS became the main branch of PVFS, and Omnibond began offering commercial support for OrangeFS/PVFS, with new feature requests from paid support customers receiving highest development priority. First production release of OrangeFS introduced.
Spring 2011 OrangeFS 2.8.4 released
September 2011 OrangeFS adds Windows client
February 2012 OrangeFS 2.8.5 released
June 2012 OrangeFS 2.8.6 released, offering improved performance, web clients and direct-interface libraries. The new OrangeFS Web pack provides integrated support for WebDAV and S3.
January 2013 OrangeFS 2.8.7 released
May 2013 OrangeFS available on Amazon Web Services marketplace. OrangeFS 2.9 Beta Version available, adding two new security modes and allowing distribution of directory entries among multiple data servers.
April 2014 OrangeFS 2.8.8 released adding shared mmap support, JNI support for Hadoop Ecosystem Applications supporting direct replacement of HDFS
November 2014 OrangeFS 2.9.0 released, adding support for distributed metadata for directory entries using an extensible hashing algorithm modeled after GIGA+, and POSIX-backward-compatible capability-based security supporting multiple modes.
January 2015 OrangeFS 2.9.1 released
March 2015 OrangeFS 2.9.2 released
June 2015 OrangeFS 2.9.3 released
November 2015 OrangeFS included in CloudyCluster 1.0 release on AWS
May 2016 OrangeFS supported in Linux Kernel 4.6
October 2017 2.9.6 Released
January 2018 2.9.7 Released, OrangeFS rpm will now be included in Fedora distribution
February 2019 CloudyCluster v2 released on AWS marketplace featuring OrangeFS
June 2019 CloudyCluster v2 released on GCP featuring OrangeFS
July 2019 OrangeFS is integrated with the Linux page cache in Linux kernel 5.2
January 2020 OrangeFS interim fix for write after open issues, merged into the Linux kernel 5.5
August 2020 Kernel patch backported to 5.4 LTS that fixes issues with nonstandard block sizes.
September 2020 2.9.8 Released
June 2021 Linux 5.13 kernel: OrangeFS readahead in the Linux kernel has been reworked to take advantage of the new xarray and readahead_expand logic. This significantly improved read performance.
July 2021 df results bug - df on OrangeFS was reporting way too small vs. reality and causing canned installer (and confused human) issues. This has been backported to several previous kernels in addition to pulled into the latest.
References
External links
Orange File System - Next Generation of the Parallel Virtual File System
Architecture of a Next-Generation Parallel File System (Video archive)
Scalable Distributed Directory Implementation on Orange File System
Elasticluster with OrangeFS
OrangeFS in the AWS Marketplace
Free software
Distributed file systems supported by the Linux kernel
Distributed file systems | Operating System (OS) | 1,173 |
Microsoft Open Specification Promise
The Microsoft Open Specification Promise (or OSP) is a promise by Microsoft, published in September 2006, to not assert its patents, in certain conditions, against implementations of a certain list of specifications.
The OSP is not a licence, but rather a covenant not to sue. It promises protection but does not grant any rights.
The OSP is limited to implementations to the extent that they conform to those specifications, which allows conformance to be partial. If an implementation follows the specification in some aspects and deviates in others, the covenant not to sue applies only to those aspects of the implementation that follow the specification.
Relations with free software / open source projects
The protections granted by the OSP are independent of the licence of implementations. There is disagreement as to whether the conditions of the OSP can be fulfilled by free software / open source projects, and whether such projects thus gain any protection from the OSP.
An article in Cover Pages quotes Lawrence Rosen, an attorney and lecturer at Stanford Law School, as saying,
"I'm pleased that this OSP is compatible with free and open-source licenses."
Linux vendor Red Hat's stance, as communicated by lawyer Mark Webbink in 2006, is:
"Red Hat believes that the text of the OSP gives sufficient flexibility to implement the listed specifications in software licensed under free and open-source licenses. We commend Microsoft’s efforts to reach out to representatives from the open source community and solicit their feedback on this text, and Microsoft's willingness to make modifications in response to our comments."
Standards lawyer Andy Updegrove said in 2006 the Open Specification Promise was
"what I consider to be a highly desirable tool for facilitating the implementation of open standards, in particular where those standards are of interest to the open source community."
However, the Software Freedom Law Center, a law firm for free software and open source software, has warned of problems with the OSP for use in free software / open source software projects. In a published analysis of the promise it states that
"...it permits implementation under free software licenses so long as the resulting code isn't used freely."
Their analysis warned of a possible inconsistency with the GPL. The concern relates specifically to the scope of the patent promise being limited to conforming implementations of covered specifications only.
Effectively, when an implementer owns a patent and builds that patented technology into GPLv3-licensed code, the implementer grants those first-party patent rights downstream to all re-users of that code. When the code is reused, however, the OSP applies only as long as the reuse of that code is limited to implementing the covered specifications.
Other patent promises with similar limitations include IBM's Interoperability Specifications Pledge (ISP) and Sun Microsystems' OpenDocument Patent Statement. This means, for example, that use of the required Sun patented StarOffice-related technology for OpenDocument should be protected by the Sun Covenant, but reuse of the code with the patented technology for non-OpenDocument implementations is no longer protected by the related Sun covenant.
For this reason the SFLC has stated:
"The OSP cannot be relied upon by GPL developers for their implementations not because its provisions conflict with GPL, but because it does not provide the freedom that the GPL requires."
The SFLC specifically point out:
new versions of listed specifications could be issued at any time by Microsoft, and be excluded from the OSP.
any code resulting from an implementation of one of the covered specifications could not safely be used outside the very limited field of use defined by Microsoft in the OSP.
The Microsoft OSP itself mentions the GPL in two of its FAQs. In one it says,
"we can’t give anyone a legal opinion about how our language relates to the GPL or other OSS licenses".
In another, it specifically mentions only the "developers, distributors, and users of Covered Implementations", thereby excluding downstream developers, distributors, and users of code later derived from these "Covered Implementations". It also does not state which version of the GPL is addressed, leading some commentators to conclude that the current GPLv3 may be excluded.
Q: I am a developer/distributor/user of software that is licensed under the GPL, does the Open Specification Promise apply to me?
A: Absolutely, yes. The OSP applies to developers, distributors, and users of Covered Implementations without regard to the development model that created such implementations, or the type of copyright licenses under which they are distributed, or the business model of distributors/implementers. The OSP provides the assurance that Microsoft will not assert its Necessary Claims against anyone who make, use, sell, offer for sale, import, or distribute any Covered Implementation under any type of development or distribution model, including the GPL.
Licensed technologies
Technologies on which the Open Specification Promise applies are:
Web Services
Devices Profile for Web Services (DPWS)
Identity Selector Interoperability Profile v1.0
Identity Selector Interoperability Profile v1.5
Open Data Protocol (OData)
Remote Shell Web Services Protocol
SOAP
SOAP 1.1 Binding for MTOM 1.0
SOAP MTOM / XOP
SOAP-over-UDP
Web Single Sign-On Interoperability Profile
Web Single Sign-On Metadata Exchange Protocol
WS-Addressing
WS-Addressing End Point References and Identity
WS-AtomicTransaction
WS-BusinessActivity
WS-Coordination
WS-Discovery
WSDL
WSDL 1.1 Binding Extension for SOAP 1.2
WS-Enumeration
WS-Eventing
WS-Federation
WS-Federation Active Requestor Profile
WS-Federation Passive Requestor Profile
WS-I Basic Profile
WS-Management
WS-Management Catalog
WS-MetadataExchange
WS-Policy
WS-PolicyAttachment
WS-ReliableMessaging
WS-RM Policy
WS-SecureConversation
WS-Security: Kerberos Binding
WS-Security: Kerberos Token Profile
WS-Security: Rights Expression Language (REL) Token Profile
WS-Security: SAML Token profile
WS-Security: SOAP Message Security
WS-Security: UsernameToken Profile
WS-Security: X.509 Certificate Token Profile
WS-SecurityPolicy
WS-Transfer
WS-Trust
Web
OpenService Format Specification (a.o. Accelerator)
Web Slice Format Specification introduced with Internet Explorer 8
XML Search Suggestions Format Specification
Virtualization Specifications
Virtual Hard Disk (VHD) Image Format Specification
Microsoft Application Virtualization File Format Specification v1
Hyper-V Functional Specification
Security
RFC 4406 – Sender ID: Authenticating E-Mail
RFC 4408 – Sender Policy Framework: Authorizing Use of Domains in “Mail From”
RFC 4407 – Purported Responsible Address in E-Mail Messages
RFC 4405 – SMTP Service Extension for Indicating the Responsible Submitter of an E-Mail Message
RFC 7208 – Sender Policy Framework (SPF) for Authorizing Use of Domains in Email
U-Prove Cryptographic Specification V1.0
U-Prove Technology Integration into the Identity Metasystem V1.0
Office file formats
XML file formats
Office 2003 XML Reference Schemas
Office Open XML 1.0 – Ecma-376
Office Open XML ISO/IEC 29500:2008
OpenDocument Format for Office Applications v1.0 OASIS
OpenDocument Format for Office Applications v1.0 ISO/IEC 26300:2006
OpenDocument Format for Office Applications v1.1 OASIS
Binary file formats
Word 97-2007 Binary File Format (.doc) Specification
PowerPoint 97-2007 Binary File Format (.ppt) Specification
Excel 97-2007 Binary File Format (.xls) Specification
Excel 2007 Binary File Format (.xlsb) Specification
Office Drawing 97-2007 Binary Format Specification
Structure specifications
[MS-DOC]: Word Binary File Format (.doc) Structure Specification
[MS-PPT]: PowerPoint Binary File Format (.ppt) Structure Specification
[MS-XLS]: Excel Binary File Format (.xls) Structure Specification
[MS-XLSB]: Excel Binary File Format (.xlsb) Structure Specification
[MS-ODRAW]: Office Drawing Binary File Format Structure Specification
[MS-CTDOC]: Word Custom Toolbar Binary File Format Structure Specification
[MS-CTXLS]: Excel Custom Toolbar Binary File Format Structure Specification
[MS-OFORMS]: Office Forms Binary File Format Structure Specification
[MS-OGRAPH]: Office Graph Binary File Format Structure Specification
[MS-OSHARED]: Office Common Data Types and Objects Structure Specification
[MS-OVBA]: Office VBA File Format Structure Specification
[MS-OFFCRYPTO]: Office Document Cryptography Structure Specification
Windows compound formats
[MS-CFB] Windows Compound Binary File Format Specification
Graphics formats
Windows Metafile Format (.wmf) Specification
Ink Serialized Format (ISF) Specification
JPEG XR (.jxr) Format
Microsoft computer languages
[MS-XAML]: XAML Object Mapping Specification 2006 (Draft v0.1)
[MS-XAML]: XAML Object Mapping Specification 2006 (v1.0)
[MS-WPFXV]: WPF XAML Vocabulary Specification 2006 (Draft v0.1)
[MS-WPFXV]: WPF XAML Vocabulary Specification 2006 (v1.0)
[MS-SLXV]: Silverlight XAML Vocabulary Specification 2008 (Draft v0.9)
Robotics
Decentralized Software Services Protocol – DSSP/1.0
Synchronization
FeedSync v1.0, v1.0.1
Windows Rally Technologies
Windows Connect Now – UFD and Windows Vista
Windows Connect Now – UFD for Windows XP
Published protocols
In Microsoft's list of covered protocols there are many third-party protocols which Microsoft did not create but for which they imply they have patents which are necessary for implementation:
AppleTalk
[MC-BUP]: Background Intelligent Transfer Service (BITS) Upload Protocol Specification
[MC-CCFG]: Server Cluster: Configuration (ClusCfg) Protocol Specification
[MC-COMQC]: Component Object Model Plus (COM+) Queued Components Protocol Specification
[MC-FPSEWM]: FrontPage Server Extensions: Website Management Specification
[MC-SMP]: Session Multiplex Protocol Specification
[MC-SQLR]: SQL Server Resolution Protocol Specification
1394 Serial Bus Protocol 2
IBM NetBIOS Extended User Interface (NetBEUI) v 3.0
IEC 61883-1
IEEE 1284 – Interface - Parallel
IEEE 802.1x - 2004
Infrared Data Association (IrDA) Published Standards
Intel Preboot Execution Environment (PXE)
Novell Internetwork Packet Exchange (IPX)
Novell Sequenced Packet Exchange (SPX)
Novell Service Advertising Protocol (SAP)
RFC 1001 and RFC 1002 – NetBIOS over TCP (NETBT)
RFC 1055 – Serial Line Internet Protocol (SLIP)
RFC 1058, RFC 1723, and RFC 2453 – Routing Information Protocol 1.0, 2.0 (RIP)
RFC 1112, RFC 2236, and RFC 3376 – Internet Group Management Protocol (IGMP) v1, v2, and v3
RFC 1155, RFC 1157, RFC 1213, RFC 1289, RFC 1901, RFC 1902, RFC 1903, RFC 1904, RFC 1905, RFC 1906, RFC 1907, and RFC 1908: Simple Network Management Protocol v2 (SNMP)
RFC 1179 – Line Printer Daemon (LPD)
RFC 1191, RFC 1323, RFC 2018, and RFC 2581 – TCP/IP Extensions
RFC 1256 – ICMP Router Discovery Messages
RFC 1258 and RFC 1282 – Remote LOGIN (rlogin)
RFC 1332 and RFC 1877 – Internet Protocol Control Protocol (IPCP)
RFC 1334 – Password Authentication Protocol (PAP)
RFC 1393 – Traceroute
RFC 1436 – Internet Gopher
RFC 1483, RFC 1755, and RFC 2225 – Internet Protocol over Asynchronous Transfer Mode (IP over ATM)
RFC 1510 and RFC 1964 – Kerberos Network Authentication Service (v5)
RFC 1552 – PPP Internetwork Packet Exchange Control Protocol (IPXCP)
RFC 1661 – Point-to-Point Protocol (PPP)
RFC 1739 Section 2.2 – Packet Internet Groper (ping)
RFC 1889 and RFC 3550 – Real-Time Transport Protocol (RTP)
RFC 1939 and RFC 1734 – Post Office Protocol, v3 (POP3)
RFC 1962 – Compression Control Protocol (CCP)
RFC 1990 – Multilink Protocol (MP)
RFC 1994 – MD5 Challenge Handshake Authentication Protocol (MD5-CHAP)
RFC 2097 – NetBIOS Frames Control Protocol (NBFCP)
RFC 2118 – Microsoft Point-to-Point Compression (MPPC)
RFC 2125 – Bandwidth Allocation Protocol (BAP)
RFC 2131, RFC 2132, and RFC 3361 – Dynamic Host Configuration Protocol (DHCP)
RFC 2205, RFC 2209, and RFC 2210 – Resource Reservation Setup (RSVP)
RFC 2222 – Simple Authentication and Security Layer (SASL)
RFC 2225 – Asynchronous Transfer Mode
Server Message Block
Sun Microsystems Remote Procedure Call (SunRPC)
T.120
Tabular Data Stream (TDS) v7.1, 7.2, 7.3
Universal Plug and Play (UPnP)
Universal Serial Bus (USB) Revision 2.0
See also
Microsoft
Glossary of patent law terms
References
External links
Open Specification Promise — Microsoft page describing the OSP and listing the specifications covered by it.
Analysis of OSP by standards lawyer Andy Updegrove
Analysis of OSP by Software Freedom Law Center. Rebuttal by Gray Knowlton, group product manager for Microsoft Office.
MSDN Library: Open Specifications — Documentation for the covered specifications.
Microsoft initiatives
Patent law | Operating System (OS) | 1,174 |
Serial computer
A serial computer is a computer typified by a bit-serial architecture, i.e., one that internally operates on one bit or digit per clock cycle. Machines with serial main storage devices, such as acoustic or magnetostrictive delay lines and rotating magnetic devices, were usually serial computers.
Serial computers require much less hardware than their parallel counterparts, but are much slower. Modern variants of the serial computer are available as soft microprocessors, which can serve niche purposes where the size of the CPU is the main constraint.
The first computer that was not serial (the first parallel computer) was the Whirlwind in 1951.
A serial computer is not necessarily the same as a computer with a 1-bit architecture, which is a subset of the serial computer class. 1-bit computer instructions operate on data consisting of single bits, whereas a serial computer can operate on N-bit data widths, but does so a single bit at a time.
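The contrast can be sketched in C (illustrative only; no historical machine is being modeled): a bit-serial datapath reuses a single full adder and a carry flip-flop, producing an N-bit result one bit per cycle, where a parallel machine would produce all result bits at once.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration: add two 16-bit words one bit per "cycle",
 * the way a bit-serial ALU would, using a single full adder and a
 * carry flip-flop. A parallel adder would compute all bits at once. */
static uint16_t serial_add(uint16_t a, uint16_t b)
{
    uint16_t sum = 0;
    unsigned carry = 0;                      /* the carry flip-flop */
    for (int cycle = 0; cycle < 16; cycle++) {
        unsigned abit = (a >> cycle) & 1;    /* operand bits arrive LSB first */
        unsigned bbit = (b >> cycle) & 1;
        unsigned s = abit ^ bbit ^ carry;    /* one full-adder stage */
        carry = (abit & bbit) | (carry & (abit ^ bbit));
        sum |= (uint16_t)(s << cycle);       /* result bit shifted out serially */
    }
    return sum;
}

int main(void)
{
    printf("%u\n", serial_add(12345, 6789)); /* prints 19134 */
    return 0;
}
```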
Serial machines
EDVAC 1949
BINAC 1949
SEAC 1950
UNIVAC I 1951
Elliott Brothers Elliott 153 1954
Bendix G-15 1956
LGP-30 1956
Elliott Brothers Elliott 803 1958
ZEBRA 1958
D-17B guidance computer 1962
PDP-8/S 1966
General Electric GE-PAC 4040 process control computer
Datapoint 2200 1971
F14 CADC 1970: transferred all data serially, but internally operated on many bits in parallel.
HP-35 1972
Massively parallel
Most of the early massive parallel processing machines were built out of individual serial processors, including:
ICL Distributed Array Processor 1979
Goodyear MPP 1983
Connection Machine CM-1 1985
Connection Machine CM-2 1987
MasPar MP-1 1990 (32-bit architecture, internally processed 4 bits at a time)
VIRAM1 computational RAM 2003
See also
1-bit computing
References
Classes of computers
Serial computers | Operating System (OS) | 1,175 |
Web operations
Web operations (WebOps) is a domain of expertise within IT systems management that involves the deployment, operation, maintenance, tuning, and repair of web-based applications and systems.
Historically, operations was seen as a late phase of the Waterfall model development process. After engineering had built a software product, and QA had verified it as correct, it would be handed to a support staff to operate the working software. Such a view assumed that software was mostly immutable in production and that usage would be mostly stable. Increasingly, "a web application involves many specialists, but it takes people in web ops to ensure that everything works together throughout an application's lifetime." The role is gaining respect as a distinct specialty among developers and managers, and is considered by many to be a subset of the larger DevOps movement.
With the rise of web technologies since mid-1995, specialists have emerged that understand the complexities of running a web application. Earlier examples of IT operations teams exist, such as the Network Operations Center (NOC) and the Database Administration (DBA) function.
WebOps vs DevOps
Web applications are unique in many ways, presenting challenges that other software types do not have to deal with:
Their use by a distributed, often uncontrolled, user base.
The many independent networks between end users and the data center from which content is served.
The way in which web pages are delivered as atomic transactions, requiring additional technologies (such as HTTP cookies) to associate sequences of pages into a user interaction.
The three-tiered model of web, application, and database components (such as LAMP environments consisting of Linux, Apache, MySQL and either Perl or PHP).
The requirement that you must often import the application's database and uploaded files (including potentially sensitive user data) to properly develop or test the application (such as when building a content management system, using a CMS such as Drupal or WordPress, or using web frameworks like Django).
In this sense, WebOps simply refers to DevOps for web applications.
Responsibilities
Web operations teams are tasked with a variety of responsibilities, including:
The deployment of web applications
The monitoring, error isolation, escalation, and repair of problems
Performing performance management, availability reporting, and other administration
Configuring load-balancing and working with content delivery networks to improve the reliability and reduce the latency of the system.
Measuring the impact of changes to content, applications, networks, and infrastructure
Typically, web operations personnel are familiar with the TCP/IP stack, the HTTP protocol, HTML page markup, and rich Internet application (RIA) techniques such as AJAX.
References
Computer occupations
Information technology management | Operating System (OS) | 1,176 |
Apple File System
Apple File System (APFS) is a proprietary file system developed and deployed by Apple Inc. for macOS Sierra (10.12.4) and later, iOS 10.3 and later, tvOS 10.2 and later, watchOS 3.2 and later, and all versions of iPadOS. It aims to fix core problems of HFS+ (also called Mac OS Extended), APFS's predecessor on these operating systems. APFS is optimized for solid-state drive storage and supports encryption, snapshots, and increased data integrity, among other capabilities.
History
Apple File System was announced at Apple's developers conference (WWDC) in June 2016 as a replacement for HFS+, which had been in use since 1998. APFS was released for 64-bit iOS devices on March 27, 2017, with the release of iOS 10.3, and for macOS devices on September 25, 2017, with the release of macOS 10.13.
Apple released a partial specification for APFS in September 2018 which supported read-only access to Apple File Systems on unencrypted, non-Fusion storage devices. The specification for software encryption was documented later.
Design
The file system can be used on devices with relatively small or large amounts of storage. It uses 64-bit inode numbers, and allows for more secure storage. The APFS code, like the HFS+ code, uses the TRIM command, for better space management and performance. It may increase read-write speeds on iOS and macOS, as well as space on iOS devices, due to the way APFS calculates available data.
Partition scheme
APFS uses the GPT partition scheme. Within the GPT scheme are one or more APFS containers (each identified by a dedicated partition type GUID). Within each container there are one or more APFS volumes, all of which share the allocated space of the container, and each volume may have an APFS volume role. macOS Catalina (macOS 10.15) introduced the APFS volume group, a group of volumes that Finder displays as one volume. APFS firmlinks lie between hard links and soft links and link between volumes.
In macOS Catalina the System volume (usually named "Macintosh HD") became read-only, and in macOS Big Sur (macOS 11) it became a signed system volume (SSV) of which only volume snapshots are mounted. The Data volume (usually named "Macintosh HD - Data") is used as an overlay or shadow of the System volume, and both the System and Data volumes are part of the same volume group and shown as one in Finder.
Clones
Clones allow the operating system to make efficient file copies on the same volume without occupying additional storage space. Changes to a cloned file are saved as delta extents, reducing storage space required for document revisions and copies. There is, however, no interface to mark two copies of the same file as clones of the other, or for other types of data deduplication.
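On macOS, clones can be requested from user space; the following minimal sketch assumes the clonefile(2) interface available on APFS-capable systems (macOS 10.12 and later) and uses hypothetical file names.

```c
/* Minimal sketch for macOS: create an APFS clone of a file.
 * Assumes <sys/clonefile.h> is available; the file names are hypothetical.
 * The clone shares extents with the original until either copy is modified
 * (copy-on-write), so no additional data blocks are consumed up front. */
#include <sys/clonefile.h>
#include <stdio.h>

int main(void)
{
    /* 0 requests default behavior; flags such as CLONE_NOFOLLOW are optional. */
    if (clonefile("original.dat", "clone.dat", 0) != 0) {
        perror("clonefile");   /* fails with ENOTSUP on non-APFS volumes */
        return 1;
    }
    return 0;
}
```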
Snapshots
APFS volumes support snapshots for creating a point-in-time, read-only instance of the file system.
Encryption
Apple File System natively supports full disk encryption, and file encryption with the following options:
no encryption
single-key encryption
multi-key encryption, where each file is encrypted with a separate key, and metadata is encrypted with a different key.
Increased maximum number of files
APFS supports 64-bit inode numbers, supporting over 9 quintillion files (2^63) on a single volume.
Data integrity
Apple File System uses checksums to ensure data integrity for metadata.
Crash protection
Apple File System is designed to avoid metadata corruption caused by system crashes. Instead of overwriting existing metadata records in place, it writes entirely new records, points to the new ones and then releases the old ones, an approach known as redirect-on-write. This avoids corrupted records containing partial old and partial new data caused by a crash that occurs during an update. It also avoids having to write the change twice, as happens with an HFS+ journaled file system, where changes are written first to the journal and then to the catalog file.
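The idea can be sketched in a few lines of illustrative C (a conceptual analogy, not APFS's on-disk logic): the new record is written to fresh space, a single pointer switch acts as the commit point, and only then is the old record released.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative only: redirect-on-write update of a metadata record.
 * Instead of overwriting the record in place, a new copy is written
 * elsewhere, the reference is switched, and the old copy is released. */
struct record { char data[64]; };

static void update_record(struct record **slot, const char *new_data)
{
    struct record *fresh = malloc(sizeof *fresh);   /* new location */
    if (!fresh) return;
    memset(fresh, 0, sizeof *fresh);
    strncpy(fresh->data, new_data, sizeof fresh->data - 1);

    struct record *old = *slot;
    *slot = fresh;      /* the single "commit point": switch the pointer */
    free(old);          /* release the old copy only after the switch */
}

int main(void)
{
    struct record *rec = calloc(1, sizeof *rec);
    update_record(&rec, "new contents");
    free(rec);
    return 0;
}
```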
Compression
APFS supports transparent compression on individual files using Deflate (Zlib), LZVN (libFastCompression), and LZFSE. All three are Lempel-Ziv-type algorithms. This feature is inherited from HFS+, and is implemented with the same AppleFSCompression / decmpfs system using resource forks or extended attributes. As with HFS+, the transparency is broken for tools that do not use decmpfs-wrapped routines.
Space sharing
APFS adds the ability to have multiple logical drives (referred to as volumes) in the same container where free space is available to all volumes in that container (block device).
Limitations
While APFS includes numerous improvements relative to its predecessor, HFS+, a number of limitations have been noted.
Limited integrity checks for user data
APFS does not provide checksums for user data. It also does not take advantage of byte-addressable non-volatile random-access memory.
Performance on hard disk drives
Enumerating files, and any inode metadata in general, is much slower on APFS when it is located on a hard disk drive. This is because instead of storing metadata at a fixed location like HFS+ does, APFS stores them alongside the actual file data. This fragmentation of metadata means more seeks are performed when listing files, acceptable for SSDs but not HDDs.
Compatibility with Time Machine prior to macOS 11
Unlike HFS+, APFS does not support hard links to directories. Since the version of the Time Machine backup software included in Mac OS X 10.5 (Leopard) through macOS 10.15 (Catalina) relied on hard links to directories, APFS was initially not a supported option for its backup volumes. This limitation was overcome starting in macOS 11 Big Sur, wherein APFS is now the default file system for new Time Machine backups (existing HFS+-formatted backup drives are also still supported). macOS Big Sur's implementation of Time Machine in conjunction with APFS-formatted drives enables "faster, more compact, and more reliable backups" than were possible with HFS+-formatted backup drives.
Security issues
In March 2018, the APFS driver in High Sierra was found to have a bug that causes the disk encryption password to be logged in plaintext.
In January 2021, the APFS driver in iOS < 14.4, macOS < 11.2, watchOS < 7.3, and tvOS < 14.4 was found to have a bug that allowed a local user to read arbitrary files, regardless of their permissions.
Support
macOS
Limited, experimental support for APFS was first introduced in macOS Sierra 10.12.4. Since macOS 10.13 High Sierra, all devices with flash storage are automatically converted to APFS. As of macOS 10.14 Mojave, Fusion Drives and hard disk drives are also upgraded on installation. The primary user interface to upgrade does not present an option to opt out of this conversion, and devices formatted with the High Sierra version of APFS will not be readable in previous versions of macOS. Users can disable APFS conversion by using the installer's startosinstall utility on the command line and passing --converttoapfs NO.
FileVault volumes are not converted to APFS as of macOS Big Sur 11.2.1. Instead, macOS formats external FileVault drives as Core Storage logical volumes formatted with Mac OS Extended (Journaled); these drives can optionally be encrypted.
An experimental version of APFS, with some limitations, is available in macOS Sierra through the command line diskutil utility. Among these limitations, it does not perform Unicode normalization while HFS+ does, leading to problems with languages other than English. Drives formatted with Sierra’s version of APFS may also not be compatible with future versions of macOS or the final version of APFS, and the Sierra version of APFS cannot be used with Time Machine, FileVault volumes, or Fusion Drives.
iOS, tvOS, and watchOS
iOS 10.3, tvOS 10.2, and watchOS 3.2 convert the existing HFSX file system to APFS on compatible devices.
Third-party utilities
Despite the ubiquity of APFS volumes in today's Macs and the format's 2016 introduction, third-party repair utilities continue to have notable limitations in supporting APFS volumes, due to Apple's delayed release of complete documentation. According to Alsoft, the maker of DiskWarrior, Apple's 2018 release of partial APFS format documentation has delayed the creation of a version of DiskWarrior that can safely rebuild APFS disks. Competing products, including MicroMat's TechTool and Prosoft's Drive Genius, are expected to increase APFS support as well.
Paragon Software Group has published a software development kit under the 4-Clause BSD License that supports read-only access of APFS drives. An independent read-only open source implementation by Joachim Metz, libfsapfs, is released under GNU Lesser General Public License v3. It has been packaged into Debian and Ubuntu software repositories. Both are command-line tools that do not expose a normal filesystem driver interface. There is a Filesystem in Userspace (FUSE) driver for Linux called apfs-fuse with read-only access. An "APFS for Linux" project is working to integrate APFS support into the Linux kernel.
See also
Comparison of file systems
References
External links
Apple Developer: Apple File System Guide
Apple Developer: Apple File System Reference
WWDC 2016: Introduction of APFS by Apple software engineers Dominic Giampaolo and Eric Tamura
Detailed Overview of APFS by independent file system developer Adam Leventhal
2017 software
Apple Inc. file systems
Computer file systems
Disk file systems
Flash file systems
IOS
MacOS | Operating System (OS) | 1,177 |
Atari Pascal
The Atari Pascal Language System (usually shortened to Atari Pascal) is a version of the Pascal programming language released by Atari, Inc. for the Atari 8-bit family of home computers in March 1982. Atari Pascal was published through the Atari Program Exchange as unsupported software instead of in Atari's official product line. It required two disk drives, which greatly limited its potential audience. It included a 161-page manual.
Development
Atari Pascal was developed by MT Microsystems, which was owned by Digital Research. It is similar to MT/PASCAL+ from the same company. The compiler produces code for a virtual machine, as with UCSD Pascal, instead of generating machine code, but the resulting programs are as much as seven times faster than Apple Pascal. MT Microsystems wrote Atari Pascal with a planned "super Atari" 8-bit model in mind, one with 128K of RAM and a dual-floppy drive. This machine never materialized, but the software was released because of pressure within Atari, though only through the Atari Program Exchange.
References
1982 software
Atari 8-bit family software
Pascal (programming language) compilers
Atari Program Exchange software | Operating System (OS) | 1,178 |
Gary Kildall
Gary Arlen Kildall (May 19, 1942 – July 11, 1994) was an American computer scientist and microcomputer entrepreneur.
During the 1970s, Kildall created the CP/M operating system among other operating systems and programming tools, and subsequently founded Digital Research, Inc. (or "DRI") to market and sell his software products. Kildall was among the earliest individuals to recognize microprocessors as fully capable computers (rather than simply as equipment controllers), and to organize a company around this concept. Due to his accomplishments during this era, Kildall is considered a pioneer of the personal computer revolution.
During the 1980s, Kildall also appeared on PBS as co-host (with Stewart Cheifet) of Computer Chronicles, a weekly informational program which covered the latest developments in personal computing.
Although Kildall's career in computing spanned more than two decades, he is mainly remembered in connection with his development of the CP/M operating system, an early multi-platform microcomputer OS that has many parallels to the later MS-DOS used on the IBM PC.
Early life
Gary Kildall was born and grew up in Seattle, Washington, where his family operated a seamanship school. His father, Joseph Kildall, was a captain of Norwegian heritage. His mother Emma was of half Swedish descent, as Kildall's grandmother was born in Långbäck, Sweden, in Skellefteå Municipality, but emigrated to Canada at 23 years of age.
A self-described "greaser" during high school, Kildall later attended the University of Washington (UW), hoping to become a mathematics teacher. During his studies, Kildall became increasingly interested in computer technology. After receiving his degree, he fulfilled a draft obligation to the United States Navy by teaching at the Naval Postgraduate School (NPS) in Monterey, California. Being within an hour's drive of Silicon Valley, Kildall heard about the first commercially available microprocessor, the Intel 4004. He bought one of the processors and began writing experimental programs for it. To learn more about the processors, he worked at Intel as a consultant on his days off.
Kildall briefly returned to UW and finished his doctorate in computer science in 1972, then resumed teaching at NPS. He published a paper that introduced the theory of data-flow analysis used today in optimizing compilers (sometimes known as Kildall's method), and he continued to experiment with microcomputers and the emerging technology of floppy disks. Intel lent him systems using the 8008 and 8080 processors, and in 1973, he developed the first high-level programming language for microprocessors, called PL/M. For Intel he also wrote an 8080 instruction set simulator named INTERP/80. He created CP/M the same year to enable the 8080 to control a floppy drive, combining for the first time all the essential components of a computer at the microcomputer scale. He demonstrated CP/M to Intel, but Intel had little interest and chose to market PL/M instead.
Business career
CP/M
Kildall and his wife Dorothy established a company, originally called "Intergalactic Digital Research" (later renamed as Digital Research, Inc.), to market CP/M through advertisements in hobbyist magazines. Digital Research licensed CP/M for the IMSAI 8080, a popular clone of the Altair 8800. As more manufacturers licensed CP/M, it became a de facto standard and had to support an increasing number of hardware variations. In response, Kildall pioneered the concept of a BIOS, a set of simple programs stored in the computer hardware (ROM or EPROM chip) that enabled CP/M to run on different systems without modification.
CP/M's quick success took Kildall by surprise, and he was slow to update it for high-density floppy disks and hard disk drives. After hardware manufacturers talked about creating a rival operating system, Kildall started a rush project to develop CP/M 2. By 1981, at the peak of its popularity, CP/M ran on a wide variety of computer models, and DRI had millions of dollars in yearly revenues.
IBM dealings
IBM approached Digital Research in 1980, at Bill Gates' suggestion, to negotiate the purchase of a forthcoming version of CP/M called CP/M-86 for the IBM PC. Gary had left the negotiations to his wife, Dorothy, as he usually did, while he and Tom Rolander, a colleague and the developer of the MP/M operating system, used Gary's private airplane to deliver software to manufacturer Bill Godbout. Before the IBM representatives would explain the purpose of their visit, they insisted that Dorothy sign a non-disclosure agreement. On the advice of DRI attorney Gerry Davis, Dorothy refused to sign the agreement without Gary's approval. Gary returned in the afternoon and tried to move the discussion with IBM forward, and accounts disagree on whether he signed the non-disclosure agreement, as well as whether he ever met with the IBM representatives.
Various reasons have been given for the two companies failing to reach an agreement. DRI, which had only a few products, might have been unwilling to sell its main product to IBM for a one-time payment rather than its usual royalty-based plan. Dorothy might have believed that the company could not deliver CP/M-86 on IBM's proposed schedule, as the company was busy developing an implementation of the PL/I programming language for Data General. Also possible, the IBM representatives might have been annoyed that DRI had spent hours on what they considered a routine formality. According to Kildall, the IBM representatives took the same flight to Florida that night that he and Dorothy took for their vacation, and they negotiated further on the flight, reaching a handshake agreement. IBM lead negotiator Jack Sams insisted that he never met Gary, and one IBM colleague has confirmed that Sams said so at the time. He accepted that someone else in his group might have been on the same flight, and noted that he flew back to Seattle to talk with Microsoft again.
Sams related the story to Gates, who had already agreed to provide a BASIC interpreter and several other programs for the PC. Gates' impression of the story was that Gary capriciously "went flying", as he would later tell reporters. Sams left Gates with the task of finding a usable operating system, and a few weeks later he proposed using the operating system 86-DOS—an independently developed operating system that implemented Kildall's CP/M API—from Seattle Computer Products (SCP). Paul Allen negotiated a licensing deal with SCP. Allen had 86-DOS adapted for IBM's hardware, and IBM shipped it as IBM PC DOS.
Kildall obtained a copy of PC DOS, examined it, and concluded that it infringed on CP/M. When he asked Gerry Davis what legal options were available, Davis told him that intellectual property law for software was not clear enough to sue. Instead, Kildall only threatened IBM with legal action, and IBM responded with a proposal to offer CP/M-86 as an option for the PC in return for a release of liability. Kildall accepted, believing that IBM's new system (like its previous personal computers) would not be a significant commercial success. When the IBM PC was introduced, IBM sold its operating system as an unbundled option. One of the operating system options was PC DOS. PC DOS was seen as a practically necessary option; most software titles required it, and without it the IBM PC was limited to its built-in Cassette BASIC. CP/M-86 shipped a few months later at six times the price of PC DOS, sold poorly against DOS, and enjoyed far less software support.
Later work
With the loss of the IBM deal, Gary and Dorothy found themselves under pressure to bring in more experienced management, and Gary's influence over the company waned. He worked in various experimental and research projects, such as a version of CP/M with multitasking (MP/M) and an implementation of the Logo programming language. He hoped that Logo, an educational dialect of LISP, would supplant BASIC in education, but it did not. After seeing a demonstration of the Apple Lisa, Kildall oversaw the creation of DRI's own graphical user interface, called GEM. Novell acquired DRI in 1991 in a deal that netted millions for Kildall.
Kildall resigned as CEO of Digital Research on 28 June 1985, but remained chairman of the board.
Kildall also pursued computing-related projects outside DRI. During the seven years from 1983 to 1990 he co-hosted a public television program on the side, called Computer Chronicles, that followed trends in personal computing.
In 1984 he started another company, Activenture, which adapted optical disc technology for computer use. In early 1985 it was renamed KnowledgeSet and released the first computer encyclopedia in June 1985, a CD-ROM version of Grolier's Academic American Encyclopedia named The Electronic Encyclopedia, later acquired by Banta Corporation. Kildall's final business venture, known as Prometheus Light and Sound (PLS) and based in Austin, Texas, developed a modular PBX communication system that integrated land-line telephones with mobile phones (called "Intelliphone") to reduce the then-high online costs and to remotely connect with home appliances. It included a UUCP-based store and forward system to exchange emails and files between the various nodes and was planned to include TCP/IP support at a later point in time.
Personal life
Kildall's colleagues recall him as creative, easygoing, and adventurous. In addition to flying, he loved sports cars, auto racing, and boating, and had a lifelong love of the sea.
Although Kildall preferred to leave the IBM affair in the past and to be known for his work before and afterward, he continually faced comparisons between himself and Bill Gates, as well as fading memories of his contributions. A legend grew around the fateful IBM-DRI meeting, encouraged by Gates and various journalists, suggesting that Kildall had irresponsibly taken the day off for a recreational flight.
In later years, Kildall privately expressed bitter feelings about being overshadowed by Microsoft, and began suffering from alcoholism.
Selling DRI to Novell had made Kildall a wealthy man, and he moved to the West Lake Hills suburb of Austin. His Austin house was a lakeside property, with stalls for several sports cars, and a video studio in the basement. Kildall owned and flew his own Learjet and had at least one boat on the lake. While in Austin he also participated in volunteer efforts to assist children with HIV/AIDS. He also owned a mansion with a panoramic ocean view in Pebble Beach, California, near the headquarters of DRI.
Computer Connections
In 1992, Kildall was invited to the University of Washington computer science program's 25th anniversary event. As a distinguished graduate of the program, Kildall was disappointed when asked to attend simply as an audience member. He also took offense at the decision to give the keynote speech to Bill Gates, a Harvard dropout who had donated to UW, but had never attended.
In response, Kildall began writing a memoir, entitled Computer Connections: People, Places, and Events in the Evolution of the Personal Computer Industry. The memoir, which Kildall sought to publish, expressed his frustration that people did not seem to value elegance in computer software.
Writing about Bill Gates, Kildall described him as "more of an opportunist than a technical type, and severely opinionated, even when the opinion he holds is absurd."
In an appendix, he called DOS "plain and simple theft" because its first 26 system calls worked the same as CP/M's. He accused IBM of contriving the price difference between PC DOS and CP/M-86 in order to marginalize CP/M.
Kildall had completed a rough draft of the manuscript by the end of 1993, but the full text remains unpublished. Journalist Harold Evans used the memoir as a primary source for a chapter about Kildall in the 2004 book They Made America, concluding that Microsoft had robbed Kildall of his inventions. IBM veterans from the PC project disputed the book's description of events, and Microsoft described it as "one-sided and inaccurate."
In August 2016, Kildall's family made the first seven chapters of Computer Connections available as a free public download.
Death
On July 8, 1994, Kildall sustained a head injury at the Franklin Street Bar & Grill, a biker bar in Monterey, California. The exact circumstances of the injury are unclear. Various sources have claimed he fell from a chair, fell down steps, or was assaulted because he had entered the establishment wearing Harley-Davidson leathers. Harold Evans, in They Made America, states that Kildall "stumbled and hit his head" inside the premises, and "was found on the floor."
Following the injury, Kildall was discharged from the hospital twice. He was pronounced dead at the Community Hospital of the Monterey Peninsula, on July 11, 1994.
Kildall's exact cause of death remains unclear. An autopsy, conducted on July 12, did not conclusively determine the cause of death. Evans states that Kildall's head injury triggered a cerebral hemorrhage, causing a blood clot to form inside the skull. A CP/M Usenet FAQ states that Kildall was concussed due to his injury, and died of a heart attack; the connection between the two is unclear.
Kildall's body was cremated. His remains were buried in Evergreen Washelli Memorial Park, in north Seattle.
Recognition
Following the announcement of Kildall's death, Bill Gates commented that he was "one of the original pioneers of the PC revolution" and "a very creative computer scientist who did excellent work. Although we were competitors, I always had tremendous respect for his contributions to the PC industry. His untimely death was very unfortunate and his work will be missed."
In March 1995, Kildall was posthumously honored by the Software Publishers Association (SPA) for his contributions to the microcomputer industry:
The first programming language and first compiler specifically for microprocessors: PL/M. (1973)
The first microprocessor disk operating system, which eventually sold a quarter of a million copies: CP/M. (1974)
The first successful open system architecture by segregating system-specific hardware interfaces in a set of BIOS routines. (1975)
Creation of the first diskette track buffering schemes, read-ahead algorithms, file directory caches, and RAM drive emulators.
Introduction of operating systems with preemptive multitasking and windowing capabilities and menu-driven user interfaces (with Digital Research): MP/M, Concurrent CP/M, Concurrent DOS, DOS Plus, GEM.
Introduction of a binary recompiler: XLT86. (1981)
The first computer interface for video disks to allow automatic nonlinear playback, presaging today's interactive multimedia. (1984, with Activenture)
The file system and data structures for the first consumer CD-ROM. (1985, with KnowledgeSet)
In April 2014, the city of Pacific Grove installed a commemorative plaque outside Kildall's former residence, which also served as the early headquarters of Digital Research.
See also
History of personal computers
John Q. Torode
References
Further reading
External links
1942 births
1994 deaths
American computer programmers
American computer scientists
American computer businesspeople
American technology company founders
American technology chief executives
Digital Research people
Digital Research employees
CP/M people
Naval Postgraduate School faculty
Scientists from Seattle
University of Washington alumni
American people of Norwegian descent
Accidental deaths from falls
20th-century American businesspeople
Death conspiracy theories
People from Pebble Beach, California | Operating System (OS) | 1,179 |
Global Resource Serialization
Global Resource Serialization (GRS) is the component within the IBM z/OS operating system responsible for enabling fair access to serially reusable computing resources, such as data sets and tape drives, or to virtual resources such as lists, queues, and control blocks. Programs can request exclusive access to a resource (meaning that the requesting program, and all subsequent requesting programs, are blocked until that program is given access to the resource), usually when a program needs to update the resource, or shared access (meaning that multiple programs can be given access to the resource at the same time), usually when a program only needs to query the state of the resource. GRS manages all requests in FIFO (first in/first out) order.
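The access model can be illustrated with a small hypothetical sketch (not IBM's ENQ/DEQ interface): requests for one resource are granted strictly in arrival order, consecutive shared requests at the head of the queue may hold the resource together, and an exclusive request is granted only once every earlier request has been dealt with.

```c
#include <stdio.h>

/* Hypothetical illustration of GRS-style FIFO granting for one resource.
 * Requests are granted strictly in arrival order: consecutive SHARED
 * requests at the head of the queue may hold the resource together,
 * while an EXCLUSIVE request blocks everything queued behind it. */
enum access { SHARED, EXCLUSIVE };

struct request { const char *job; enum access mode; };

static void grant_in_fifo_order(struct request q[], int n)
{
    int i = 0;
    while (i < n) {
        if (q[i].mode == EXCLUSIVE) {
            printf("grant EXCLUSIVE to %s (all later requesters wait)\n", q[i].job);
            i++;
        } else {
            /* grant every shared requester queued ahead of the next exclusive one */
            while (i < n && q[i].mode == SHARED) {
                printf("grant SHARED    to %s\n", q[i].job);
                i++;
            }
        }
    }
}

int main(void)
{
    struct request q[] = {
        { "JOBA", SHARED }, { "JOBB", SHARED },
        { "JOBC", EXCLUSIVE }, { "JOBD", SHARED },
    };
    grant_in_fifo_order(q, 4);
    return 0;
}
```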
Scoping
GRS manages resources at three different levels of scoping:
STEP - this level is for resources that exist within a single MVS address space. Only threads (tasks) within that address space can request access to the resource.
SYSTEM - this level is for resources that exist within a single MVS instance. Any thread running on the system can request access to the resource.
SYSTEMS - also known as GLOBAL, these resources are accessible by multiple MVS instances. Any thread running on a system in the GRS complex can request access to the resource.
Clustering
In order for GRS to serialize resources between multiple systems, the systems must be clustered. There are several options to enable this clustering:
GRS Ring - each of the systems (LPARs) is connected with channel-to-channel adapters (CTCAs) in a ring configuration. The GRS software sends messages around the ring to ensure the integrity of the complex and to arbitrate correct succession of ownership.
Basic Sysplex - each of the systems in the sysplex has complete connectivity to every other system via CTCAs or ESCON CTCAs, managed by the XCF (Cross System Coupling Facility) component. The GRS component utilizes the Messaging and Group Services provided by XCF to replace and augment the function through the GRS managed CTCAs.
GRS Star (Parallel Sysplex) - Rather than using a message passing protocol to manage resource ownership succession, GRS uses the locking services provided by the XES (Cross System Extended Services) component of MVS. Use of locking services requires a lock structure (called ISGLOCK) to be created in a Coupling Facility (CF).
Similar
CA, Inc. licenses a product called "Multi-Image Manager" (CA-MIM) which contains a component called "Multi-Image Integrity" (MII) which can be used to implement similar functions to GRS.
References
IBM mainframe operating systems | Operating System (OS) | 1,180 |
Electronic Document System
The Electronic Document System (EDS) was an early hypertext system – also known as the Interactive Graphical Documents (IGD) hypermedia system – focused on creation of interactive documents such as equipment repair manuals or computer-aided instruction texts with embedded links and graphics. EDS was a 1978–1981 research project at Brown University by Steven Feiner, Sandor Nagy and Andries van Dam.
EDS used a dedicated Ramtech raster display and VAX-11/780 computer to create and navigate a network of graphic pages containing interactive graphic buttons. Graphic buttons had programmed behaviors such as invoking an animation, linking to another page, or exposing an additional level of detail.
The system had three automatically created navigation aids:
a timeline showing thumbnail images of pages traversed;
a 'neighbors' display showing thumbnails of all pages linking to the current page on the left, and all pages reachable from the current page on the right;
a visual display of thumbnail page images arranged by page keyword, color coded by chapter.
Unlike most hypertext systems, EDS incorporated state variables associated with each page. For example, clicking a button indicating a particular hardware fault might set a state variable that would expose a new set of buttons with links to a relevant choice of diagnostic pages. The EDS model prefigured graphic hypertext systems such as Apple's HyperCard.
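The mechanism can be modeled with a small illustrative data structure (hypothetical names; EDS itself ran on dedicated VAX and raster-display hardware): a button is displayed only when its guard state variable is set, so activating one button can expose further links.

```c
#include <stdio.h>
#include <string.h>

/* Purely illustrative model (hypothetical names) of EDS-style conditional
 * buttons: a button is displayed only when its guard state variable is set,
 * so clicking one button can expose additional links on a page. */
struct button { const char *label; const char *guard; };

static int fault_detected = 0;   /* a page-level state variable */

static int is_visible(const struct button *b)
{
    if (b->guard == NULL) return 1;                 /* unguarded buttons always show */
    if (strcmp(b->guard, "fault_detected") == 0) return fault_detected;
    return 0;
}

int main(void)
{
    struct button diagnostics = { "Open diagnostic pages", "fault_detected" };

    printf("visible before: %d\n", is_visible(&diagnostics));  /* 0 */
    fault_detected = 1;   /* e.g. the user clicked a "report fault" button */
    printf("visible after:  %d\n", is_visible(&diagnostics));  /* 1 */
    return 0;
}
```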
References
van Dam, Andries. (1988, July). Hypertext '87 keynote address. Communications of the ACM, 31, 887–895.
Feiner, Steven; Nagy, Sandor; van Dam, Andries. (1981). An integrated system for creating and presenting complex computer-based documents. SIGGRAPH Proceedings of the 8th annual conference on Computer graphics and interactive techniques, Dallas Texas.
Brown University Department of Computer Science. (2019, 23 May). A Half-Century of Hypertext at Brown: A Symposium.
Hypertext
Brown University
History of human–computer interaction | Operating System (OS) | 1,181 |
WinNuke
In computer security, WinNuke is an example of a Nuke remote denial-of-service attack (DoS) that affected the Microsoft Windows 95, Microsoft Windows NT and Microsoft Windows 3.1x computer operating systems. The exploit sent a string of out-of-band data (OOB data) to the target computer on TCP port 139 (NetBIOS), causing it to lock up and display a Blue Screen of Death. This does not damage or change the data on the computer's hard disk, but any unsaved data would be lost.
Details
The so-called OOB data simply means that the malicious TCP packet contained an urgent pointer (URG). The urgent pointer is a rarely used field in the TCP header, used to indicate that some of the data in the TCP stream should be processed quickly by the recipient. Affected operating systems did not handle the urgent pointer field correctly.
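For illustration, urgent ("out-of-band") data is sent through the ordinary Berkeley sockets API by passing the MSG_OOB flag to send(), which causes the outgoing segment to carry the URG flag and an urgent pointer. The sketch below shows only that generic mechanism on an already-connected socket and is not a reproduction of the exploit.

```c
#include <sys/types.h>
#include <sys/socket.h>

/* Generic illustration of TCP urgent ("out-of-band") data: the MSG_OOB flag
 * asks the stack to set the URG flag and urgent pointer in the outgoing
 * segment. `sock` is assumed to be an already-connected TCP socket. */
ssize_t send_urgent_byte(int sock)
{
    char byte = '!';
    return send(sock, &byte, 1, MSG_OOB);
}
```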
A person under the screen-name "_eci" published C source code for the exploit on May 9, 1997. With the source code being widely used and distributed, Microsoft was forced to create security patches, which were released a few weeks later. For a time, numerous flavors of this exploit appeared, going by such names as fedup, gimp, killme, killwin, knewkem, liquidnuke, mnuke, netnuke, muerte, nuke, nukeattack, nuker102, pnewq, project1, simportnuke, sprite, sprite32, vconnect, vzmnuker, wingenocide, winnukeit, winnuker02, winnukev95, wnuke3269, wnuke4, and wnuke95.
A company called SemiSoft Solutions from New Zealand created a small program, called AntiNuke, that blocks WinNuke without having to install the official patch.
Years later, a second incarnation of WinNuke that uses another, similar exploit was found.
See also
Ping of death
References
External links
WinNuke Relief Page
Denial-of-service attacks
1997 in computing | Operating System (OS) | 1,182 |
CrossDOS
CrossDOS is a file system handler for accessing FAT formatted media on Amiga computers. It was bundled with AmigaOS 2.1 and later. Its function was to allow working with disks formatted for PCs and Atari STs (and others). In the 1990s it became a commonly used method of file exchange between Amiga systems and other platforms.
CrossDOS supported both double density (720 KB) and high density (1.44 MB) floppy disks on compatible disk drives. As with AmigaDOS disk handling, it allowed automatic disk-change detection for FAT formatted floppy disks. The file system was also used with hard disks and other media for which CrossDOS provided hard disk configuration software. However, the versions of CrossDOS bundled with AmigaOS did not support long filenames, an extension to FAT that was introduced with Microsoft's Windows 95.
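To give a sense of what a FAT file system handler such as CrossDOS must interpret, the following sketch (hypothetical image file name; standard FAT boot-sector offsets) reads a few BIOS Parameter Block fields from a floppy image.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch: read a handful of BIOS Parameter Block fields from
 * the boot sector of a FAT-formatted floppy image ("floppy.img" is a
 * hypothetical file name). Multi-byte fields are little-endian. */
static unsigned rd16(const uint8_t *p) { return p[0] | (p[1] << 8); }

int main(void)
{
    uint8_t boot[512];
    FILE *f = fopen("floppy.img", "rb");
    if (!f) { perror("floppy.img"); return 1; }
    if (fread(boot, 1, sizeof boot, f) != sizeof boot) {
        perror("read");
        fclose(f);
        return 1;
    }
    fclose(f);

    printf("bytes per sector:    %u\n", rd16(boot + 11));
    printf("sectors per cluster: %u\n", boot[13]);
    printf("root dir entries:    %u\n", rd16(boot + 17));
    printf("total sectors:       %u\n", rd16(boot + 19)); /* 1440 for 720 KB, 2880 for 1.44 MB */
    return 0;
}
```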
History
CrossDOS was originally developed as a stand-alone commercial product by Consultron, which was available for AmigaOS 1.2 and 1.3. In 1992 Commodore included a version of CrossDOS with AmigaOS 2.1 (and with later versions), so that users could work with PC formatted disks. In fact, the bundled version will also work with version 2.0 of AmigaOS. The bundled CrossDOS replaced an obscure tool in earlier versions of AmigaOS that could access FAT formatted disks on a secondary floppy disk drive only (this tool was not a complete file system but a user program to read files from a FAT formatted disk). Development of CrossDOS continued after being bundled with the OS. CrossDOS 7 was the last version released and included support for long filenames and other features not available in the bundled version.
References
See also
Amiga Fast File System
Amiga Old File System
File Allocation Table
List of file systems
Comparison of file systems
AmigaOS
Amiga software
Disk file systems | Operating System (OS) | 1,183 |
Performance Monitor
Performance Monitor (known as System Monitor in Windows 9x, Windows 2000 and Windows XP) is a system monitoring program introduced in Windows NT 3.1. It monitors various activities on a computer such as CPU or memory usage. This type of application may be used to determine the cause of problems on a local or remote computer by measuring the performance of hardware, software services, and applications.
In Windows 9x, System Monitor is not installed automatically during Windows setup, but can be installed manually using the Add/Remove Programs applet, located in the Control Panel. It has few counters available and offers little in the way of customization. In contrast, the Windows NT Performance Monitor is available out-of-the-box and has over 350 performance measurement criteria (called "counters") available. Performance Monitor can display information as a graph, a bar chart, or numeric values and can update information using a range of time intervals. The categories of information that can be monitored depend on which networking services are installed, but they always include file system, kernel, and memory manager. Other possible categories include Microsoft Network Client, Microsoft Network Server, and protocol categories.
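Counters can also be read programmatically through the Performance Data Helper (PDH) API. The sketch below assumes a Windows build environment linked against pdh.lib and samples the classic "% Processor Time" counter; rate counters such as this one require two collections.

```c
/* Minimal sketch (assumes Windows and linking with pdh.lib): query one
 * performance counter through the Performance Data Helper (PDH) API. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS) return 1;
    PdhAddCounter(query, TEXT("\\Processor(_Total)\\% Processor Time"), 0, &counter);

    PdhCollectQueryData(query);          /* first sample */
    Sleep(1000);
    PdhCollectQueryData(query);          /* second sample, one second later */

    PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);
    printf("CPU usage: %.1f%%\n", value.doubleValue);

    PdhCloseQuery(query);
    return 0;
}
```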
In Windows 2000, the System Monitor of Windows 9x and the Performance Monitor of Windows NT 4 and earlier, as well as another program called Network Monitor, were merged into a Microsoft Management Console (MMC) plug-in called Performance, which consisted of two parts: "System Monitor" and "Performance Logs and Alerts". The "System Monitor" naming was kept in Windows XP. Some third-party publications referred to it as "Performance Monitor" however, even in Windows 2000 or XP contexts.
The name displayed inside the MMC plug-in was changed back to "Performance Monitor" in Windows Vista, although it was also bundled with a Reliability Monitor and with a new performance summary feature called Resource Overview. In Windows 7, the resource overview feature was split to a stand-alone Resource Monitor application, with the landing page for the Performance Monitor in Windows 7 containing a pointer to the (new) Resource Monitor; Windows 7 also moved the Reliability Monitor to the Action Center. A new feature added to the Performance Monitor in Windows Vista is Data Collector Set, which allows sets of accounting parameters to be easily manipulated as a group.
See also
Windows Task Manager
Process Explorer
References
Windows components | Operating System (OS) | 1,184 |
Input/Output Configuration Program
The Input/Output Configuration Program is a program on IBM mainframes.
History
In the original S/360 and S/370 architectures, each processor had its own set of I/O channels and addressed I/O devices with a 12-bit cuu address, containing a 4-bit channel number and an 8-bit unit (device) number to be sent on the channel bus in order to select the device; the operating system had to be configured to reflect the processor and cuu address for each device. The operating system had logic to queue pending I/O on each channel and to handle selection of alternate channels. Initiating an I/O to a channel on a different processor required causing a shoulder tap interrupt on the other processor so that it could initiate the I/O.
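For illustration, the split of a 12-bit cuu address into its channel and unit parts can be expressed as a small sketch (hypothetical helper, not IBM code).

```c
#include <stdio.h>

/* Illustrative helper (not IBM code): split a 12-bit S/360-S/370 "cuu"
 * device address into its 4-bit channel number and 8-bit unit number. */
int main(void)
{
    unsigned cuu = 0x190;                 /* e.g. channel 1, unit 0x90 */
    unsigned channel = (cuu >> 8) & 0xF;  /* high 4 bits */
    unsigned unit    = cuu & 0xFF;        /* low 8 bits */

    printf("cuu %03X -> channel %X, unit %02X\n", cuu, channel, unit);
    return 0;
}
```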
Starting with the IBM 3081 and IBM 4381 in S/370-Extended Architecture mode, IBM changed the I/O architecture to allow the Channel Subsystem to handle the channel scheduling that the operating system had to handle in S/370 mode. The new I/O architecture used a 16-bit Channel Path Id (CHPID); the Channel Subsystem was responsible for mapping the CHPID to the channel and device numbers, for queuing I/O requests and for selecting from the available paths. The installation was responsible for defining the Input/Output Configuration Data Sets (IOCDS's), and the operator could select a specific IOCDS as part of a power on reset (POR). Input/Output Configuration Program (IOCP) is a program for IBM mainframes that compiles a description of the Channel Subsystem and LPAR configuration, optionally loading it into an Input/Output Configuration Data Set (IOCDS); it recognizes the syntax of MVS Configuration Program (MVSCP) input, and there is no need to maintain separate input files.
The direct use of IOCP and MVSCP has been mostly supplanted by Hardware Configuration Definition (HCD).
See also
Channel I/O
Computer types
History of computing
Mainframe computer
Timeline of computing
References
External links
Input/Output Configuration Program User's Guide and ESCON Channel-to-Channel Reference, GC38-0401-00
z/OS V1R1.0 HCD Planning, GA22-7525-00
IBM mainframe operating systems
IBM mainframe technology | Operating System (OS) | 1,185 |
Systime Computers
Systime Computers Ltd was a British computer manufacturer and systems integrator of the 1970s and 1980s. During the late 1970s and early 1980s, Systime became the second largest British manufacturer of computers, specializing in the minicomputer market.
The company was based in Leeds, England, and founded in 1973. Its success was based on selling systems built around OEM components from Digital Equipment Corporation (DEC), and it grew to have over 1,300 employees with turnover peaking around £60 million.
Systime was unusual among systems integrators in that it actually manufactured the hardware it sold to customers.
A portion of Systime was purchased in 1983 by Control Data Corporation and the company's founder departed. Systime Computers then went through a period of sharp decline, in part due to lawsuits from DEC for intellectual property infringement, and even more so due to charges of violating Cold War-era U.S. export restrictions regarding indirect sales to Eastern Bloc countries.
In 1985, what was left of Systime was fully acquired by Control Data Corporation, and a year later the DEC-related services part of that subsidiary was bought by DEC. Systime then focused on selling products built by its own engineers. The Systime–Control Data arrangement did not prosper, and in 1989 Control Data split Systime into four companies, each sold to a management buyout.
Origins of company
John Gow was a mechanical engineering graduate of the University of Leeds who had gone into computer programming and then became a software support manager at a Lancashire office of the British subsidiary of Digital Equipment Corporation (DEC). He also did some hardware sales work and realised that few of the customers to whom he was selling actually understood the capabilities of the computers they were buying. In 1972, Gow, then 27 years old, and three others set up a partnership on their own, labouring in Gow's bungalow workshop.
Systime Computers Ltd was created the following year, being incorporated in October 1973. Gow and the three others moved their work into the canteen of an abandoned mill in Leeds.
Due to inadequate capitalisation – £2,800, in a field in which the minicomputers they would be selling cost £60,000 each – the new company had a shaky start and came close to going under right away. The key turning point was engaging with Leeds-based jukebox firm Musichire, which had purchased a computer from DEC but were struggling with it. Systime came in on a consulting basis and sold Musichire both software and new hardware. John Parkinson, financial director of Musichire, was sufficiently impressed with Gow's sales abilities that, in 1974, Musichire took a financial stake in Systime. Parkinson subsequently became chair of the board of directors of Systime.
Period of rapidly increasing growth
Gow emphasized that Systime would provide not just hardware but also software applications, systems engineering, and support. By 1975, Systime had £2.75 million in turnover and profits of £300,000 and was already opening offices and subsidiaries overseas.
Musichire's stake impeded the company's ability to grow. Gow engaged with financiers but did not like them, and did not want to accept investment from either the Industrial and Commercial Finance Corporation or merchant banks, fearing they would demand too much control over the company's direction.
However, in 1977 Gow arranged for investment firm Ivory and Sime to buy out Musichire's share. Around the same time,
the National Enterprise Board (NEB) convinced Gow to sign up with them; they invested £500,000 in Systime in return for a 26 percent stake in the young firm (which would in time grow to nearly 30 percent). The NEB also facilitated Systime's participation in a new marketing effort within an NEB subsidiary known as Insac Data Systems, which would promote exports of British technology products.
Systime's business model was selling products centred around computers originally built by DEC in the United States. They would take actual DEC components and put them together with items such as power supplies and storage cables that they built themselves or obtained from other industry sources. To this base of equipment, Systime added peripherals and software from other vendors and then added some of its own application software. This allowed Systime to provide full solutions to growing customers, such as Gordon Spice Cash and Carry, that were first embracing computerised line-of-business systems during the 1970s.
Accordingly, the Systime product lines were based around the minicomputers they produced, the most popular of which were the Systime 1000, Systime 3000, and Systime 5000, all based on different models of the DEC 16-bit PDP-11 minicomputer (roughly, the PDP-11/04, /60, and /34 respectively). The PDP-11-based Systime systems would typically run the DEC's RSTS/E operating system. These systems had many kinds of users; for instance, a botany group at the University of Reading used a Systime 5000.
Systime's use of the PDP-11 coincided with an upsurge in the popularity of that model within the computer-using community, one that DEC had not fully anticipated, leading to wait times of up to three years for systems or components. As a result, Systime began manufacturing its own DEC-compatible memory boards and storage devices.
Later, the Systime 8000 series came out, which were based upon the DEC 32-bit VAX-11 supermini. The 8000 series had names that indicated the DEC model they were derived from, so the Systime 8750 was equivalent to the VAX-11/750 and the Systime 8780 was equivalent to the VAX-11/780. The Systime 8000 series systems could run DEC's VMS operating system, but many of them were instead running one variant or another of Unix. This was another successful product; by the mid-1980s around one-third of all VAXen in the United Kingdom were Systime-based systems.
A pure software product was Systel, the Systime Teleprocessing System, which acted as a transaction processing system with data dictionary-based programming assist features. As such it was a competitor to products such as TAPS from Informatics General on the PDP-11, but in 1980–81 Systime saw an opening on the VAX-11 where there were no rival teleprocessing monitors yet. Systel development was half-funded by the Insac arrangement and that entity received royalties on Systel sales. Systime had some success with Systel in the United Kingdom and Holland and made a push to sell it in the United States as well.
Ian Fallows was technical director of the company during the 1970s. Systime was rapidly hiring not just hardware engineers but also software engineers to work on operating systems, controllers, and telecommunications and networking components.
In 1980, Systime had turnover of £24.6 million and a profit of £1.6 million. Those figures increased to £32.1 million and £2.2 million in 1981, respectively. By then, Systime had some 1,150 employees and eleven offices around the United Kingdom. Systime was one of four companies short-listed for the Institute of Directors' annual Business Enterprise Award for 1981. It was an unusual case of a British company succeeding in making minicomputers, a market dominated by American firms. Despite its successes and fast growth, Systime was little known to the general public.
New facility and changes of management
In September 1981, Gow announced an ambitious three-year, £46 million expansion plan for Systime, including the building of a second large facility in Leeds, with some of the funding to come from the European Investment Bank and various government grants. The second facility was to enter the microcomputer business for small businesses and, in a first for Systime, would not rely upon DEC components. This reflected that Systime was in the process of manufacturing not just minicomputers but also desktop systems, as well as terminals and printers, most of which were targeted to the Western European market. Systime also ran a service bureau that offered the creation of application software and sold maintenance contracts on a third-party basis. In all, Systime's plans anticipated a doubling of its employee count.
By 1983, Systime was considered, as The Times wrote, "one of the largest and fastest-growing British computer companies". It was the second largest computer manufacturer based in Britain, behind only the mainframe-oriented International Computers Limited (ICL). Ian McNeill was technical director of the company during this period. In addition, Systime was considered an exemplar of new industrial potential in Northern England, and the company was often visited by government ministers as a result.
However, the switch from the National Enterprise Board to the successor British Technology Group (BTG) left Systime with uncertain funding while it was in the process of its big expansion; as Gow subsequently said, "we were sailing along and suddenly started to get really tight on cash. We'd outgrown our resources." Gow had previously considered organising a flotation but no longer had time to do so; instead he sought investment from other British companies, but they all wanted to stage a full acquisition. In particular, meetings in January 1983 with two large British technology companies, Ferranti and Standard Telephones and Cables (STC), did not achieve fruition.
Instead, in March 1983, it was announced that Control Data Corporation was buying 38 percent of Systime for £8 million, with another 25 percent to be controlled by Ivory and Sime. At the same time, BTG reduced its investment down to 12 percent. The two companies had had existing business dealings, as Systime bought many Control Data peripheral devices to include in its full systems. The recapitalisation of Systime was completed in June 1983. At this point, Parkinson departed as chair of Systime and retired from the industry altogether for a while.
The new facility, built for £20 million in a nearby area of Beeston, Leeds,
had begun operations in October 1982, with computer production taking place there. The facility was formally opened on 27 June 1983 by Princess Anne. The large building featured what one newspaper termed "a distinctive reflective glass front"; more popularly it became known as the "Glass Palace".
At its peak, Systime had some 1,370 employees and turnover of £60 million.
Systime was growing at a 30 percent annual rate during the early-mid 1980s and the strain on its finances was considerable.
Systime attempted to gain greater public visibility during this period. They became the shirt sponsor of Leeds United F.C. for the 1983–84 season, only the second such sponsor in the club's history, and they sponsored a Tyrrell 012 car during the 1984 Formula One World Championship season, with drivers such as Stefan Bellof, Mike Thackwell, and Stefan Johansson.
Relations between Gow and Control Data management did not work out, with the two parties clashing on fundamental decisions. Accordingly, Gow departed Systime in December 1983. He was replaced as managing director by Rod Attwooll, formerly head of the UK division of Texas Instruments. Gow subsequently started his own firm, WGK Electronics, hoping to succeed in largely untapped third-world markets.
Legal actions filed by DEC
In putting together such PDP-11- and VAX-based systems, Systime was inevitably a rival of DEC UK, Digital's United Kingdom subsidiary, which sought to sell those systems themselves. Indeed, Systime had discovered in 1979 that it could acquire the same components for 25 percent less in the United States than it could through DEC UK, and it filed anti-trust action in the United States to force DEC to sell it components at American prices.
In 1983, DEC UK sued Systime, claiming that Systime was engaging in practices that violated the license for the VAX/VMS operating system. The lawsuit was settled later in 1983 via Systime making a $5.5 million payment to DEC.
Then in June 1984, DEC sued Systime in the High Court of Justice, saying it had found evidence that Systime, in its layouts and connection schematics for the manufacture of seven different printed-circuit boards for VAX-associated disk drives and controllers, had infringed upon DEC copyrights. The claimed violations had taken place prior to 1983. The suit asked for £5 million in damages and came after a year of negotiations between the two companies had been unable to arrive at an out-of-court agreement. In July 1985, Systime counterfiled in the European Common Market, claiming that DEC's filing represented an attempted act of unfair competition in trying to limit Systime's ability to compete with DEC in the Western European market. The counterfiling also alleged that DEC had in fact infringed upon Systime's copyrights in the printed circuit board matter.
Charges of violating export control restrictions
In the early-mid-1980s Systime Computers
ran afoul of the Coordinating Committee for Multilateral Export Controls (COCOM), which regulated what goods and technologies Western countries could sell to countries within the Eastern Bloc. British and other European companies protested that many of the computer components prohibited by COCOM were widely available in Asian markets anyway, but the regulations remained in effect. So in order to sell computers outside Britain, Systime not only had to obtain an export license from the UK Department of Trade and Industry, but, because American products were involved due to the DEC components in Systime computers, it also had to obtain an export license from the United States Department of Commerce.
In 1982, Systime voluntarily acknowledged that it had sold some systems to Eastern Bloc countries without that necessary US export license and agreed to pay a fine to the Department of Commerce.
However, lawyers for DEC UK pressed further with more serious charges, saying that Systime had not disclosed that it had shipped 400 DEC-based minicomputers to Switzerland, disguised as jukeboxes, which were in fact then headed for the Eastern Bloc.
Thus, Systime came under investigation by the United States Department of Commerce for irregularities in the export of computers from the United Kingdom during the 1980–83 period. In particular, while Systime was not accused of directly trading with the Soviet Union, it was said to have traded with non-aligned countries, including Switzerland, Libya, Syria, Zimbabwe, Pakistan, India, and Malaysia, without having the requisite US export license.
Systime denied that its actions were in violation of any British law. Some British officials felt that the export regulations were partly an effort to prevent British firms from gaining a foothold in the burgeoning computer market.
In particular, Member of Parliament Michael Meadowcroft, representing the Leeds West constituency, tried to get the British government to intercede against the American action.
The directors of Systime said that the export accusation was an underhanded way of those envious of the company's successes to target it. Indeed, they alleged, and MP Meadowcroft related in an address to the House of Commons, that DEC had created a "Kill Systime" campaign. Elements of this campaign, by this telling, included hiring of private detectives, surveillance of employees, burglary, bribery, destruction of documents, and spreading of false rumours. In any case, some of these allegations had been made at the time of the January 1983 meeting between Systime and Ferranti and STC, and these claims played a role in preventing a British-based financial rescue of Systime at the time.
The US action against Systime would involve a $400,000 fine against it, to be accompanied by a prohibition against the company using American goods.
This had a devastating effect upon Systime, in particular as corporate investors were no longer willing to put monies into the company.
Some 1,000 jobs were lost at the Leeds factory, leaving only around 200 employees. Systime directors would put the overall cost to capital and profits at £110 million.
As author George E. Shambaugh relates, "By 1985, in the aftermath of U.S. sanctions, the company was virtually destroyed."
A search by Systime for additional sources of UK funding having failed,
in April 1985, Control Data Corporation acquired the balance of Systime that it did not already own.
The purchase was also seen as a defensive measure against the still-ongoing U.S. Commerce Department investigation.
Then in February 1986, DEC bought 50 percent of the Systime subsidiary from Control Data (which itself was experiencing financial struggles), taking over the Customer Services Division and all the services contracts for DEC hardware. That division had 250 employees and sales and field service contracts representing some 2,500 user accounts. As part of the deal, DEC dropped its £5 million copyright infringement lawsuit against Systime regarding the printed-circuit memory boards.
The DEC deal took away Systime's most lucrative business. MP Meadowcroft protested the action, accusing DEC of having "improperly colonized" Systime. In addition, during 1986 the Systel transaction processing monitor product was split off into the new firm Performance Software Ltd, via a management buyout. As author Kevin Cahill wrote, Systime had become "dismembered".
By 21 April 1986, Systime's tale was the lead story on the front page of The Guardian newspaper.
The directors of Systime filed an action with the European Commission saying that the US actions were a breach of European Union laws protecting free trade among member nations, but the damage was done.
The British government did eventually file a protest against the United States based on the allegation that the latter had used the Central Intelligence Agency to illicitly gain information about British companies. However the British government did not intercede in any way that forestalled the damage done to Systime, and Meadowcroft's efforts had come to naught.
The US–UK trade issues were by no means limited to Systime; smaller firms that could not afford the bureaucratic approval process of an export license were affected, as were much larger enterprises such as IBM and Toshiba. An investigation in 1985 conducted by Datamation magazine showed that there was an extensive grey market for computers, especially DEC equipment such as the VAX-11, and that Systime was but one of several sources for such products. Despite COCOM-based efforts to curtail such trade with the Eastern Bloc, it only grew more vigorous. The whole matter generated considerable debate during the second Thatcher ministry and a February 1987 editorial from The Guardian, one that mentioned Systime, emphasized the broader importance of the issue and criticised the prime minister for failing to fully take a stand against the Americans on behalf of British technology interests. A September 1987 account in the New Scientist also mentioned Systime as the worst hit and criticised COCOM as being antiquated.
The exports control issue was not the only factor that led to the collapse of Systime. Primary among the other causes was the company engaging in an overly aggressive expansion without having sufficient funding in place for it. Nevertheless, the role of the exports issue was critical.
Further decline and initiatives in software
During 1986, with business in rapid decline and the company having lost £3.4 million the previous year, Systime moved to a smaller facility in the Leeds Business Park off Bruntcliffe Lane in the Morley area of Leeds. The "glass palace", officially opened just two and a half years earlier, was put up for sale. (An April 1988 piece in The Times pointed to Systime as a cautionary tale for those seeing trends of economic rebirth in the North.)
Now, what remained of Systime – "a mere shadow of its former self", as Computergram International described it – decided to focus on Unix-based initiatives among its hardware and software offerings.
During
1987, Systime announced its Series 3 computers, based on the Intel 80386 and running flavours of Unix, as well as an OEM agreement with Computer Consoles Inc. to resell that company's Power632S line of Intel-based systems.
These joined the Intel-based Series 2 systems that Systime also offered.
In addition, Systime forged an OEM agreement with Altos Computer Systems for that company's 80386-based Series 2000 systems, to further complement the Intel product line at different price points and numbers of supported users.
Finally, Systime also formed an agreement with Parallel Computers, Inc. to resell that US-based company's fault-tolerant systems.
In terms of software offerings, Systime
tried to ease migration for DEC PDP-11 users by offering its own Trans-Basic translator, which converted the BASIC programming language from a dialect used on RSTS to one used on Unix. A similar tool allowed users of the COBOL programming language on ICL or Wang Laboratories systems to migrate to Unix-based compilation and deployment.
As 1987 became 1988, Systime announced a strategic direction that embraced innovation in software over in-house production of hardware systems.
Primary among these was a new product line called Visionware, the first piece of which was PC-Connect, a terminal emulator for Microsoft Windows built from components running on both Unix and Windows, supporting cut-and-paste between Windows, graphical Unix-based X Window, and Unix character-mode applications. PC-Connect was a released product from Systime by 1987, and was further emphasized in 1988. It found early customer use among Systime partners Altos Computer and Computer Consoles, as well as at the UK government's Central Computer and Telecommunications Agency and Manpower Services Commission. Ongoing work on the X Window aspects of it was done in collaboration with Cambridge-based IXI Limited. Several other Windows–Unix connectivity products were also under development as part of the Visionware line.
Dissolution of company and legacy
On 2 June 1989, as Computergram International wrote, "Control Data Corp finally got shot of its troublesome UK Systime Ltd business ... and the solution for the once-substantial Leeds systems integrator is dismemberment by management buy-out." Four separate companies were formed from what had been Systime; the largest of these was Computer Service Technology Ltd, which gained the support and distribution rights for the rebadged Altos Computer Systems and Computer Consoles systems. The other three were Visionware Ltd, which gained the rights to PC-Connect and the Visionware technologies; Manufacturing Solutions Group Ltd, which gained the Sysimp Unix package for manufacturing control; and Streetwise Ltd, which gained Unix-based software for the back-office side for retail point-of-sale systems. Some venture capital monies were involved in support of the management buyouts. In addition, two financial packaged products were sold to other companies within Leeds.
Computer Service Technology carried on in Leeds into the mid-1990s, with activities such as becoming a UK distributor for Wyse Technology as well as, under the names CST Distribution and CST Group Ltd, continuing to sell what were now known as Acer Altos systems. After a 1996 merger CST became part of Sphinx CST, which in turn in 2010 became part of Arrow ECS.
Visionware succeeded to the point where it was acquired by the Santa Cruz Operation in 1994. It later became part of Tarantella, Inc., which was then acquired by Sun Microsystems and subsequently became part of the Oracle Secure Global Desktop. Noted British entrepreneur Peter Wilkinson, who later co-founded Planet Online and a number of other Internet-related firms, began his career at Systime. Many of his later efforts were based in Leeds and included former Systime employees.
Control Data kept the Systime UK name after the breakup, and that name was listed on subsequent company reports although essentially inactive. The last paper vestige of Systime Computers Ltd was not formally removed from the books at Companies House until 2015.
However, the Systime name was kept alive in a different venue. Systime had started a branch company in India in 1979 to do outsourcing work; this Indian entity was then acquired as a wholly owned subsidiary by the India-based CMS Group in 1984, during the time of turbulence for Systime UK. Doing business under the name Systime, and with essentially the same logo as Systime UK had had, this firm became substantial in size and a power in the global software projects field with offices around the world. It continued under the name and in the IT services business until it was acquired by KPIT Cummins in 2011; it kept the Systime name for a couple more years as a subsidiary; during 2014, the name effectively went out of use.
The Systime "Glass Palace" was bought and refurbished as the Arlington Business Centre, opened in 1988, and eventually became part of the White Rose Office Park.
Leeds Industrial Museum (part of Leeds Museums & Galleries) holds examples of Systime computers in its collections, and an example was displayed in its exhibition Leeds to Innovation in 2019.
References
External links
Grace's Guide entry on Systime Computers
Images of Systime Computers
Exchange concerning Systime Computers between MP Michael Meadowcroft (Leeds, West) and Minister for Information Technology Geoffrey Pattie in the House of Commons, 25 February 1986
Systime alumni site
Defunct companies based in Leeds
Defunct manufacturing companies of England
Defunct computer hardware companies
Defunct computer companies of the United Kingdom
Computer companies established in 1973
Technology companies established in 1973
Manufacturing companies established in 1973
Computer companies disestablished in 1989
Manufacturing companies disestablished in 1989
Software companies of England
1973 establishments in England
1989 disestablishments in England
British companies disestablished in 1989
British companies established in 1973 | Operating System (OS) | 1,186 |
DX10
DX10 was a general-purpose, international, multitasking operating system designed to operate with the Texas Instruments 990/10, 990/10A and 990/12 minicomputers using the memory mapping feature.
The Disk Executive Operating System (DX10)
DX10 was a versatile disk-based operating system capable of supporting a wide
range of commercial and industrial applications.
DX10 was also a multiterminal system capable of making each of several users
appear to have exclusive control of the system.
DX10 was an international operating system designed to meet the commercial
requirements of the United States, most European countries, and Japan.
DX10 supported several models of video display terminals (VDTs), most of which
permit users to enter, view, and process data in their own language.
DX10 Capabilities
DX10 required a basic hardware configuration, but allowed additional members of an extensive group of peripherals to be included in the configuration.
During system generation, the user could configure DX10 to support peripheral devices that were not members of the 990 family and devices that required realtime support.
This capability required that the user also provide software control for these devices.
The user communicated with DX10 through the System Command Interpreter (SCI).
SCI was designed to provide simple, convenient interaction between the user and DX10 in a conversational format.
Through SCI the user had access to complete control of DX10.
SCI was flexible in its mode of communication: while it was convenient for interactive use through a data terminal, it could also be accessed in batch mode.
DX10 was capable of extensive file management.
The built-in file structures included key indexed files, relative record files, and sequential files.
A group of file control utilities existed for copying and modifying files and controlling file parameters.
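The three built-in file structures differ mainly in how records are located: sequential files are read in order, relative record files are addressed by record number, and key indexed files are looked up by key. The following Python sketch illustrates the three access patterns conceptually; it is not a representation of DX10's actual on-disk formats or utilities.

```python
# Illustrative models of the three DX10 file organizations (conceptual only).

# Sequential file: records are processed in the order they were written.
sequential = ["REC-A", "REC-B", "REC-C"]
for record in sequential:
    pass  # each record is handled in turn; no random access

# Relative record file: records are addressed directly by record number.
relative = {1: "REC-A", 2: "REC-B", 7: "REC-C"}   # gaps between numbers allowed
print(relative[7])            # fetch record number 7 without scanning

# Key indexed file: records are located through a key rather than a position.
key_indexed = {"SMITH": {"balance": 120}, "JONES": {"balance": 45}}
print(key_indexed["JONES"])   # fetch the record whose key is "JONES"
```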
DX10 Features
DX10 offered a number of features that provided convenient use of the minicomputer's capabilities:
Easy system generation for systems with custom device configurations. With proper preparation, peripheral devices that are not part of the 990 computer family can be interfaced through DX10.
A macro assembler for translating assembly language programs into executable machine code.
A text editor for entering source code or data into accessible files.
Support of high-level languages, including Fortran, COBOL, Pascal, RPG II, and BASIC.
A link editor and extended debugging facilities are provided to further support program development.
References
External links
Dave Pitts' TI 990 page — Includes a simulator and DX10 Operating System images.
Proprietary operating systems
Texas Instruments | Operating System (OS) | 1,187 |
Pecom 32
Pecom 32 is an educational and/or home computer developed by Elektronska Industrija Niš of Serbia in 1985.
Specifications
CPU: CDP 1802B 5V7 running at 5 MHz
ROM: 16 KB, with optional 16 KB upgrade containing enhanced editor and assembler
Primary memory: 36 KB (32 KB available to user)
Secondary storage: cassette tape
Display: 8 colours; text mode of 24 lines of 40 characters; pseudo-graphics mode using user-defined characters
Sound: (probably) AY-3-8912
I/O ports: cassette tape storage, composite and RF video, RS-232 and expansion connector
See also
Pecom 64
External links
Old-Computers.com
Retrospec.sgn.net - games in audio format
Home computers
EI Niš | Operating System (OS) | 1,188 |
Retail software
Retail software is computer software typically installed on PC-type computers or, more recently (since about 2005), delivered via the Internet (also known as cloud-based software). Traditionally this software was delivered on physical data storage media sold to the end consumer, but very few companies still provide their software on physical media. The software is typically sold under restricted licenses (e.g. EULAs) or, in the case of cloud-based software, as a Software-as-a-Service (SaaS) model.
Types
Cloud-based software: this is software that is not installed on a user's device but delivered on demand via the Internet to the end user's device(s), either through web-based apps or native apps (iOS and Android). Most new software companies provide both, or a combination of web and native apps, which may provide different functionality depending on the actual user in a client company.
OEM Pack - This is a licensed copy of software given by the software manufacturer to a computer manufacturer to pre-install on a computer being sold to a customer. A backup copy may or may not be provided on a CD to the end user along with the computer.
Box Pack - This is a licensed copy of the software that an end user buys off the shelf from any authorized retail outlet. It may sometimes be priced higher than the OEM version, as additional software is generally included along with the main software in the pack.
Paper License - This is a scheme provided by the software manufacturer to companies or businesses that require many copies of particular software to be installed on multiple computers within the organization (Volume license key). Say, for example, a company requires installing software on 50 computers in its office. Instead of buying 50 CDs and managing those 50 individually, the company can buy one copy of the software and request the software vendor to issue a paper license authorizing them to use it on 50 computers. The software vendor then charges them accordingly. This method is also much cheaper than buying 50 individual packs.
History
An important historical event that led to the expansion of the market for retail software was the Open Letter to Hobbyists by Bill Gates in 1976.
Until the 2000s, with the emergence of the Internet, retail software represented the vast majority of all end-consumer software and was referred to as shrinkware because the software almost always shipped in a shrink-wrapped box.
The most famous examples of retail software are the products offered on the IBM PC and clones in the 1980s and 1990s, including famous programs like Lotus 1-2-3, WordPerfect and the various parts that make up Microsoft Office. Microsoft Windows is also shrinkware, but is most often pre-installed on the computer.
The rise of the Internet and software licensing schemes has dramatically changed the retail software market, for example through digital distribution. Users can find shareware, freeware and free software products or use Web services as easily as retail. Producers of proprietary software have shifted to providing much of their software and services via the Internet, including Google, Microsoft, Yahoo!, and Apple Inc. Software is also becoming available as part of integrated devices.
In 2011 Apple declared the discontinuation of many of its boxed retail software products.
See also
Proprietary software
List of proprietary software for Linux
References
Software distribution
Service retailing | Operating System (OS) | 1,189 |
Opportunity (rover)
Opportunity, also known as MER-B (Mars Exploration Rover – B) or MER-1, and nicknamed Oppy, is a robotic rover that was active on Mars from 2004 until mid-2018. Opportunity was operational on Mars for 5,111 sols (about 5,250 days, or more than 14 years). Launched on July 7, 2003, as part of NASA's Mars Exploration Rover program, it landed in Meridiani Planum on January 25, 2004, three weeks after its twin Spirit (MER-A) touched down on the other side of the planet. With a planned 90-sol duration of activity (slightly less than 92.5 Earth days), Spirit functioned until it got stuck in 2009 and ceased communications in 2010, while Opportunity was able to stay operational for 5,111 sols after landing, maintaining its power and key systems through continual recharging of its batteries using solar power, and hibernating during events such as dust storms to save power. This careful operation allowed Opportunity to operate for 57 times its designed lifespan, exceeding the initial plan by more than 14 years (in Earth time). By June 10, 2018, when it last contacted NASA, the rover had traveled a distance of 45.16 kilometers (28.06 miles).
Mission highlights included the initial 90-sol mission, finding meteorites such as Heat Shield Rock (Meridiani Planum meteorite), and over two years of exploring and studying Victoria crater. The rover survived moderate dust storms and in 2011 reached Endeavour crater, which has been described as a "second landing site." The Opportunity mission is considered one of NASA's most successful ventures.
Due to the planetary 2018 dust storm on Mars, Opportunity ceased communications on June 10 and entered hibernation on June 12, 2018. It was hoped it would reboot once the weather cleared, but it did not, suggesting either a catastrophic failure or that a layer of dust had covered its solar panels. NASA hoped to re-establish contact with the rover, citing recurring windy period, forecast for November 2018 to January 2019, that could potentially clean off its solar panels. On February 13, 2019, NASA officials declared that the Opportunity mission was complete, after the spacecraft had failed to respond to over 1,000 signals sent since August 2018.
Mission overview
Collectively, the Opportunity and Spirit rovers were part of the Mars Exploration Rover program in the long-term Mars Exploration Program. The Mars Exploration Program's four principal goals were to determine if the potential for life exists on Mars (in particular, whether recoverable water may be found on Mars), to characterize the Mars climate and its geology, and then to prepare for a potential human mission to Mars. The Mars Exploration Rovers were to travel across the Martian surface and perform periodic geologic analyses to determine if water ever existed on Mars as well as the types of minerals available, as well as to corroborate data taken by the Mars Reconnaissance Orbiter (MRO).
Spirit and Opportunity were launched a month apart, on June 10 and July 7, 2003, and both reached the Martian surface by January 2004. Both rovers were designed with an expected lifetime of 90 sols (92 Earth days), but each lasted much longer than expected. The Spirit mission lasted 20 times longer than its expected lifetime, and was declared ended on May 25, 2011, after the rover got stuck in soft sand and expended its power reserves trying to free itself. Opportunity lasted 55 times longer than its 90-sol planned lifetime, operating from its landing in January 2004 until the mission was declared ended in February 2019. An archive of weekly updates on the rover's status can be found at the Opportunity Update Archive.
From its initial landing, by chance, into an impact crater amidst an otherwise generally flat plain, Opportunity successfully investigated regolith and rock samples and took panoramic photos of its landing site. Its sampling allowed NASA scientists to make hypotheses concerning the presence of hematite and past presence of water on the surface of Mars. Following this, it was directed to travel across the surface of Mars to investigate another crater site, Endurance crater, which it investigated from June to December 2004. Subsequently, Opportunity examined the impact site of its own heat shield and discovered an intact meteorite, now known as Heat Shield Rock, on the surface of Mars.
From late April to early June 2005, Opportunity was perilously lodged in a sand dune, with several wheels buried in the sand. Over a six-week period, Earth-based physical simulations were performed to decide how best to extract the rover from its position without risking its permanent immobilization. Successful maneuvering a few centimeters at a time eventually freed the rover, which resumed its travels.
Opportunity was directed to proceed in a southerly direction to Erebus crater, a large, shallow, partially buried crater and a stopover on the way south towards Victoria crater, between October 2005 and March 2006. It experienced some mechanical problems with its robotic arm.
In late September 2006, Opportunity reached Victoria crater and explored along the rim in a clockwise direction. In June 2007 it returned to Duck Bay, its original arrival point at Victoria crater; in September 2007 it entered the crater to begin a detailed study. In August 2008, Opportunity left Victoria crater for Endeavour crater, which it reached on August 9, 2011.
Here at the rim of the Endeavour crater, the rover moved around a geographic feature named Cape York. The Mars Reconnaissance Orbiter had detected phyllosilicates there, and the rover analyzed the rocks with its instruments to check this sighting on the ground. This structure was analyzed in depth until summer 2013. In May 2013 the rover was heading south to a hill named Solander Point.
Opportunity's total odometry by June 10, 2018 (sol 5111), was 45.16 kilometers (28.06 miles), and the last measured atmospheric opacity (tau) was 10.8. Since January 2013, the solar array dust factor (one of the determinants of solar power production) varied from a relatively dusty 0.467 on December 5, 2013 (sol 3507), to a relatively clean 0.964 on May 13, 2014 (sol 3662).
In December 2014, NASA reported that Opportunity was suffering from "amnesia" events in which the rover failed to write data, e.g. telemetry information, to non-volatile memory. The hardware failure was believed to be due to an age-related fault in one of the rover's seven memory banks. As a result, NASA had aimed to force the rover's software to ignore the failed memory bank; amnesia events continued to occur, however, which eventually resulted in vehicle resets. In light of this, on Sol 4027 (May 23, 2015), the rover was configured to operate in RAM-only mode, completely avoiding the use of non-volatile memory for storage.
End of mission
(Figure: graph of atmospheric opacity and Opportunity's energy reserve.)
In early June 2018, a large planetary-scale dust storm developed, and within a few days the rover's solar panels were not generating enough power to maintain communications, with the last contact on June 10, 2018. NASA stated that they did not expect to resume communication until after the storm subsided, but the rover kept silent even after the storm ended in early October, suggesting either a catastrophic failure or a layer of dust covering its solar panels. The team remained hopeful that a windy period between November 2018 and January 2019 might clear the dust from its solar panels, as had happened before. Wind was detected nearby on January 8, and on January 26 the mission team announced a plan to begin broadcasting a new set of commands to the rover in case its radio receiver failed.
On February 12, 2019, past and present members of the mission team gathered in JPL's Space Flight Operations Facility to watch final commands being transmitted to Opportunity via the 70 meter dish of the Goldstone Deep Space Communications Complex in California. Following 25 minutes of transmission of the final 4 sets of commands, communication attempts with the rover were handed off to Canberra, Australia.
More than 835 recovery commands were transmitted between the loss of signal in June 2018 and the end of January 2019, and over 1,000 recovery commands had been transmitted by February 13, 2019. NASA officials held a press conference on February 13 to declare an official end to the mission. NASA associate administrator Thomas Zurbuchen said, "It is therefore that I am standing here with a deep sense of appreciation and gratitude that I declare the Opportunity mission is complete." As NASA ended their attempts to contact the rover, the last data sent was the song "I'll Be Seeing You" performed by Billie Holiday. Assets that had been needed to support Opportunity were transitioned to support the Mars rovers Curiosity and Perseverance.
The final communication from the rover came on June 10, 2018 (sol 5111) from Perseverance Valley, and indicated a solar array energy production of 22 Watt-hours for the sol, and the highest atmospheric opacity (tau) ever measured on Mars: 10.8.
Objectives
The scientific objectives of the Mars Exploration Rover mission were to:
Search for and characterize a variety of rocks and regolith that hold clues to past water activity. In particular, samples sought include those that have minerals deposited by water-related processes such as precipitation, evaporation, sedimentary cementation or hydrothermal activity.
Determine the distribution and composition of minerals, rocks, and regolith surrounding the landing sites.
Determine what geologic processes have shaped the local terrain and influenced the chemistry. Such processes could include water or wind erosion, sedimentation, hydrothermal mechanisms, volcanism, and cratering.
Perform calibration and validation of surface observations made by Mars Reconnaissance Orbiter instruments. This will help determine the accuracy and effectiveness of various instruments that survey Martian geology from orbit.
Search for iron-containing minerals, identify and quantify relative amounts of specific mineral types that contain water or were formed in water, such as iron-bearing carbonates.
Characterize the mineralogy and textures of rocks and regolith and determine the processes that created them.
Search for geological clues to the environmental conditions that existed when liquid water was present.
Assess whether those environments were conducive to life.
During the next two decades, NASA will continue to conduct missions with other spacecraft to address whether life ever arose on Mars. The search begins with determining whether the Martian environment was ever suitable for life. Life, as we understand it, requires water, so the history of water on Mars is critical to finding out if the Martian environment was ever conducive to life. Although the Mars Exploration Rovers did not have the ability to detect life directly, they offered very important information on the habitability of the environment in the planet's history.
Design and construction
Spirit and Opportunity are twin rovers; each is a six-wheeled, solar-powered robot standing 1.5 meters (4.9 ft) high, 2.3 m (7.5 ft) wide, and 1.6 m (5.2 ft) long, and weighing about 180 kilograms (400 lb). Six wheels on a rocker-bogie system enable mobility. Each wheel has its own motor, the vehicle is steered at front and rear, and it was designed to operate safely at tilts of up to 30 degrees. Maximum speed is 5 centimeters per second (2 in/s), although average speed was about a sixth of this. Both Spirit and Opportunity have pieces of the fallen World Trade Center's metal on them that were "turned into shields to protect cables on the drilling mechanisms".
Solar arrays generate about 140 watts for up to four hours per sol, while rechargeable lithium-ion batteries store energy for use at night. Opportunity's onboard computer uses a 20 MHz RAD6000 CPU with 128 MB of DRAM, 3 MB of EEPROM, and 256 MB of flash memory. The rover's operating temperature ranges from −40 to +40 °C (−40 to 104 °F); radioisotope heaters provide a base level of heating, assisted by electrical heaters when necessary. A gold film and a layer of silica aerogel provide insulation.
Communications depend on an omnidirectional low-gain antenna communicating at a low data rate and a steerable high-gain antenna, both in direct contact with Earth. A low gain antenna is also used to relay data to spacecraft orbiting Mars.
Fixed science/engineering instruments included:
Panoramic Camera (Pancam) – examines the texture, color, mineralogy, and structure of the local terrain.
Navigation Camera (Navcam) – monochrome with a wider field of view but lower resolution, for navigation and driving.
Miniature Thermal Emission Spectrometer (Mini-TES) – identifies promising rocks and regolith for closer examination, and determines the processes that formed them.
Hazcams, two B&W cameras with a 120-degree field of view, that provide additional data about the rover's surroundings.
The rover arm holds the following instruments:
Mössbauer spectrometer (MB) MIMOS II – used for close-up investigations of the mineralogy of iron-bearing rocks and regolith.
Alpha particle X-ray spectrometer (APXS) – close-up analysis of the abundances of elements that make up rocks and regolith.
Magnets – for collecting magnetic dust particles
Microscopic Imager (MI) – obtains close-up, high-resolution images of rocks and regolith.
Rock Abrasion Tool (RAT) – exposes fresh material for examination by instruments on board.
The cameras produce 1024-pixel by 1024-pixel images; the data are compressed with ICER, stored, and transmitted later.
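To get a sense of why on-board compression matters, one can compare the raw size of a single frame with a typical compressed version. The bit depth and compression ratio in the sketch below are assumptions chosen for illustration, not published mission figures.

```python
# Rough size of one 1024 x 1024 camera frame, before and after compression.
# The 12-bit sample depth and 4:1 compression ratio are illustrative
# assumptions, not official mission parameters.
width, height = 1024, 1024
bits_per_pixel = 12
compression_ratio = 4

raw_bits = width * height * bits_per_pixel
raw_megabytes = raw_bits / 8 / 1_000_000
compressed_megabytes = raw_megabytes / compression_ratio

print(f"raw frame:        {raw_megabytes:.2f} MB")        # ~1.57 MB
print(f"compressed frame: {compressed_megabytes:.2f} MB")  # ~0.39 MB
```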
The rover's name was chosen through a NASA-sponsored student essay competition. Opportunity was 'driven' by several operators throughout its mission, including JPL roboticist Vandi Verma, who also co-wrote the PLEXIL command language used in its software.
Power
The rover uses a combination of solar cells and a rechargeable chemical battery. This class of rover has two rechargeable lithium batteries, each composed of 8 cells with 8 amp-hour capacity. At the start of the mission the solar panels could provide up to around 900 watt-hours (Wh) to recharge the battery and power system in one Sol, but this could vary due to a variety of factors. In Eagle crater the cells were producing about 840 Wh, but by Sol 319 in December 2004, it had dropped to 730 Wh.
Like Earth, Mars has seasonal variations that reduce sunlight during winter. However, since the Martian year is longer than Earth's, the seasons cycle roughly once every two Earth years. By 2016, MER-B had endured seven Martian winters, during which power levels drop and the rover must avoid activities that use a lot of power. During its first winter, power levels dropped to under 300 Wh per day for two months, but some later winters were not as bad.
Another factor that can reduce received power is dust in the atmosphere, especially dust storms. Dust storms have occurred quite frequently when Mars is closest to the Sun. Global dust storms in 2007 reduced power levels for Opportunity and Spirit so much they could only run for a few minutes each day. Due to the 2018 dust storms on Mars, Opportunity entered hibernation mode on June 12, but it remained silent after the storm subsided in early October.
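The interplay of array output, dust factor and atmospheric opacity described above can be made concrete with a toy energy estimate. This is a deliberately simplified illustration: the exponential attenuation model, its coefficient, and the numbers plugged in are assumptions for the sketch, not NASA's actual power model or mission data.

```python
import math

def daily_energy_wh(clean_output_wh: float, dust_factor: float, tau: float,
                    extinction_coeff: float = 0.4) -> float:
    """Very rough estimate of solar-array energy per sol.

    clean_output_wh: energy a clean array would collect on a clear sol
    dust_factor:     fraction of sunlight reaching the cells through settled dust
    tau:             atmospheric opacity (higher means a dustier sky)
    The attenuation model and its coefficient are illustrative assumptions.
    """
    return clean_output_wh * dust_factor * math.exp(-extinction_coeff * tau)

# Early mission: nearly clean array under a clear sky -> hundreds of Wh per sol.
print(round(daily_energy_wh(900, dust_factor=0.95, tau=0.5)))

# A severe dust storm: dusty array and an extremely opaque sky (tau ~ 10.8)
# -> only a few Wh per sol, in line with the collapse described above.
print(round(daily_energy_wh(900, dust_factor=0.6, tau=10.8)))
```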
Examples
Examples of watt-hours per sol collected by the rover:
Launch
Opportunity's launch was managed by NASA's Launch Services Program. This was the first launch of the Delta II Heavy. The launch period went from June 25 to July 15, 2003. The first launch attempt occurred on June 28, 2003, but the spacecraft launched nine days later on July 7, 2003, due to delays for range safety and winds, then later to replace items on the rocket (insulation and a battery). Each day had two instantaneous launch opportunities. On the day of launch, the launch was delayed to the second opportunity (11:18 p.m. EDT) in order to fix a valve.
Landing
On January 25, 2004 (GMT; January 24, 2004 PST), the airbag-protected landing craft settled onto the surface of Mars in Eagle crater.
Heat shield impact site
In late December 2004, Opportunity reached the impact site of its heat shield, and took a panorama around Sol 325.
Scientific findings
Opportunity has provided substantial evidence in support of the mission's primary scientific goals: to search for and characterize a wide range of rocks and regolith that hold clues to past water activity on Mars. In addition to investigating water, Opportunity has also obtained astronomical observations and atmospheric data.
Honors
Honoring Opportunity's great contribution to the exploration of Mars, an asteroid was named Opportunity: 39382 Opportunity. The name was proposed by Ingrid van Houten-Groeneveld who, along with Cornelis Johannes van Houten and Tom Gehrels, discovered the asteroid on September 24, 1960. Opportunity's lander is Challenger Memorial Station.
On July 28, 2014, it was announced that Opportunity, having traversed over 40 kilometers (25 mi), had become the rover achieving the longest off-world distance, surpassing the previous record of 39 kilometers (24 mi) set on the Moon by Lunokhod 2.
On March 24, 2015, NASA celebrated Opportunity having traveled the distance of a marathon race, 42.195 kilometers (26.219 miles), from the start of Opportunity's landing and traveling on Mars.
Superlatives
Steepest slope
Highest elevation
On Sol 3894 (January 6, 2015), Opportunity reached the summit of "Cape Tribulation," which is about 135 meters (440 ft) above "Botany Bay" level and the highest point yet reached by the rover on the western rim of Endeavour Crater, according to NASA.
Driving distance
Longest traverse
Opportunity held the record for the longest distance (220 meters) traversed in a single sol by any rover until February 4, 2022, when Perseverance traversed 245 meters in one sol.
Images
The rover could take pictures with its different cameras, but only the PanCam camera had the ability to photograph a scene with different color filters. The panorama views are usually built up from PanCam images. By February 3, 2018, Opportunity had returned 224,642 pictures.
Views
Panoramas
A selection of panoramas from the mission:
Close-up images
From orbit
Area maps
Traverse maps
An example of a rover traverse map features a line showing the path of the rover and mission sols, which are Mars days counted from its landing and typical of Mars surface mission time reporting. Topographic lines and various feature names are also common.
Legacy
With word on February 12, 2019, that NASA was likely to conclude the Opportunity mission, many media outlets and commentators issued statements praising the mission's success and stating their goodbyes to the rover. One journalist, Jacob Margolis, tweeted his translation of the last data transmission sent by Opportunity on June 10, 2018, as "My battery is low and it's getting dark." The phrase struck a chord with the public, inspiring a period of mourning, artwork, and tributes to the memory of Opportunity.
When the quote became widely reported, some news reports mistakenly asserted that the rover sent that English message, inundating NASA with additional questions. Margolis wrote a clarifying article on February 16, making it clear he had taken statements from NASA officials who were interpreting the data sent by Opportunity, both on the state of its low power and Mars's high atmospheric opacity, and rephrased them in a poetic manner, never to imply the rover had sent the specific words.
Film adaptation
Amazon Studios announced in March 2021 that it was developing a documentary Good Night Oppy based on the rover and its prolonged mission. The documentary will be directed by Ryan White, and will include support from JPL and Industrial Light & Magic.
See also
List of surface features of Mars visited by Spirit and Opportunity
Perseverance (rover)
Zhurong (rover)
References
External links
NASA links
NASA/JPL Mission page
Sunrise on Mars – video (02:10) (NASA; November 7, 2018)
End of Opportunity Mission (February 13, 2019; videos) ‒ (3:52) overview ‒ (59:47) final panel
MSSS and WUSTL links
Finding Opportunity: high-resolution images of landing site (Mars Global Surveyor – Mars Orbiter Camera)
MER Analyst's Notebook, Interactive access to mission data and documentation
Other links
Archive of MER progress reports by A.J.S. Rayl at planetary.org
2003 robots
Margaritifer Sinus quadrangle
Mars rovers
Missions to Mars
Robots of the United States
Six-wheeled robots
Solar-powered robots
Space probes decommissioned in 2019
Space probes launched in 2003
Spacecraft launched by Delta II rockets
Derelict landers (spacecraft)
Soft landings on Mars
2004 on Mars | Operating System (OS) | 1,190 |
HP 9000
HP 9000 is a line of workstation and server computer systems produced by the Hewlett-Packard (HP) Company. The native operating system for almost all HP 9000 systems is HP-UX, which is based on UNIX System V.
The HP 9000 brand was introduced in 1984 to encompass several extant technical workstation models launched formerly in the early 1980s. Most of these were based on the Motorola 68000 series, but there were also entries based on HP's own FOCUS designs. From the mid-1980s, the line was transitioned to HP's new PA-RISC architecture. Finally, in the 2000s, systems using the IA-64 were added.
The HP 9000 line was discontinued in 2008, being superseded by Itanium-based HPE Integrity Servers running HP-UX.
History
The first HP 9000 models comprised the HP 9000 Series 200 and Series 500 ranges. These were rebadged existing models, the Series 200 including various Motorola 68000 (68k) based workstations such as the HP 9826 and HP 9836, and the Series 500 using HP's FOCUS microprocessor architecture introduced in the HP 9020 workstation. These were followed by the HP 9000 Series 300 and Series 400 workstations which also used 68k-series microprocessors. From the mid-1980s onward, HP began changing to its own microprocessors based on its proprietary PA-RISC instruction set architecture (ISA), for the Series 600, 700, 800, and later lines. More recent models use either the PA-RISC or its successor, the HP–Intel IA-64 ISA.
All of the HP 9000 line run various versions of the HP-UX operating system, except earlier Series 200 models, which ran standalone applications or the Basic Workstation / Pascal 3.1 Workstation operating systems. HP released the Series 400, also known as the Apollo 400, after acquiring Apollo Computer in 1989. These models had the ability to run either HP-UX or Apollo's Domain/OS.
From the early 1990s onward, HP replaced the HP 9000 Series numbers with an alphabetical Class nomenclature. In 2001, HP again changed the naming scheme for their HP 9000 servers. The A-class systems were renamed as the rp2400s, the L-class became the rp5400s, and the N-class the rp7400s. The rp prefix signified a PA-RISC architecture, while rx was used for IA-64-based systems, later rebranded HPE Integrity Servers.
On 30 April 2008, HP announced end of sales for the HP 9000. The last order date for HP 9000 systems was 31 December 2008 and the last ship date was 1 April 2009. The last order date for new HP 9000 options was December 31, 2009, with a last ship date of 1 April 2010. HP intends to support these systems through to 2013, with possible extensions.
The end of life for the HP 9000 also marked the end of an era, as it essentially marked HP's withdrawal from the Unix workstation market: the HP 9000 workstations reached end of life, and there are no HP Integrity workstations, so no solution remained that targeted HP-UX at the desktop. When the move from PA-RISC (9000) to Itanium (Integrity) was announced, Integrity workstations running either HP-UX or Windows were initially announced and offered, but they were moved to end of sales life relatively quickly, with no replacement (arguably because x86-64 made IA-64 uncompetitive on the desktop, and HP-UX does not support x86-64); HP offered desktop Linux as an alternative, though not a fully compatible, solution.
Workstation models
Prior to January 1985 (see also HP 9800 series):
Series 200 16 (HP 9816), 20 (HP 9920), 26 (HP 9826), 36 (HP 9836)
Series 500 20 (HP 9020), 30 (HP 9030), 40 (HP 9040)
After 1985:
Series 200 216 (HP 9816), 217 (HP 9817), 220 (HP 9920), 226 (HP 9826), 236 (HP 9836), 237 (HP 9837)
Series 300 310, 318, 319, 320, 322, 330, 332, 340, 345, 350, 360, 362, 370, 375, 380, 382, 385
Series 400 (HP Apollo 9000 Series 400) 400dl, 400s, 400t, 425dl, 425e, 425s, 425t, 433dl, 433s, 433t
Series 500 520 (HP 9020), 530 (HP 9030), 540 (HP 9040), 550, 560
Series 600 635SV, 645SV
Series 700 705, 710, 712, 715, 720, 725, 730, 735, 742, 743, 744, 745, 747, 748, 750, 755
B-class B132L, B160L, B132L+, B180L, B1000, B2000, B2600
C-class C100, C110, C132L, C160, C160L, C180, C180L, C180XP, C200, C240, C360, C3000, C3600, C3650, C3700, C3750, C8000
J-class J200, J210, J210XC, J280, J282, J2240, J5000, J5600, J6000, J6700, J6750, J7000
Series 200
The Series 200 workstations originated before there were any "Series" at HP. The first model was the HP 9826A, followed by the HP 9836A. Later, a color version of the 9836 (9836C) was introduced. There was also a rack-mount version, the HP 9920A. These were all based on the Motorola 68000 chip. There were 'S' versions of the models that included bundled memory. When HP-UX was included as an OS, there was a 'U' version of the 9836s and 9920 that used the 68012 processor. The model numbers included the letter 'U' (9836U, 9836CU, and 9920U). Later versions of the Series 200s included the 9816, 9817, and 9837. These systems were soon renamed as the HP Series 200 line, before being renamed again as part of the HP 9000 family, becoming the HP 9000 Series 200.
There was also a "portable" version of the Series 200 called the Integral. The official model was the HP9807. This machine was about the size of a portable sewing machine, contained a MC68000 processor, ROM based HP-UX, 3½ inch floppy disk drive, inkjet printer, a keyboard, mouse, and an electroluminescent display similar to the early GRiD Compass computers. It was not battery powered, and unlike the other Series 200's that were manufactured in Fort Collins, Colorado, it was made in Corvallis, Oregon.
Series 300/400
The Series 300 workstations were based around Motorola 68000-series processors, ranging from the 68010 (Model 310, introduced 1985) to the Motorola 68040 (Model 38x, introduced 1991). The Series 400 (introduced 1990) were intended to supersede the Apollo/Domain workstations and were also based on the 68030/040. They were branded "HP Apollo" and added Apollo Domain/OS compatibility. The suffixes 's' and 't' used on the Series 400 stood for "side" (as in deskside) and "top" (as in desktop) models. The last two digits of a Series 400 model number originally gave the clock frequency of the processor in MHz (e.g., the 433 ran at 33 MHz). At introduction, the Series 400 had a socket for the MC68040, but since that processor was not available at the time, an emulator card with an MC68030 and additional circuitry was installed. Customers who purchased these systems were given a guaranteed upgrade price of US$5,000 to the MC68040 when it became available. The Series 300 and 400 shared the same I/O interface as the Series 200. The 32-bit DIO-II bus is rated at 6 MB/s.
Series 500
The Series 500s were based on the HP FOCUS microprocessor. They began as the HP 9020, HP 9030, and HP 9040, were renamed the HP Series 500 Model 20, 30, and 40 shortly after introduction, and later renamed again as the HP 9000 Model 520, 530 and 540. The 520 was a complete workstation with built-in keyboard, display, 5.25-inch floppy disk, and optional thermal printer and 5 MB hard disk. The 520 could run BASIC or HP-UX and there were three different models based on the displays attached (two color and one monochrome). The 530 was a rackmount version of the Series 500, could only run HP-UX, and used a serial interface console. The 540 was a 530 mounted inside a cabinet, similar to the disk drives offered then and included a serial multiplexer (MUX). Later models of the Series 500s were the 550 and 560, which had a completely different chassis and could be connected to graphics processors. The processors in the original Series 500s ran at 20 MHz, and could reach a benchmark speed of 1 million instructions per second (MIPS), equivalent to a VAX-11/780, then a common benchmark standard. They could be networked together and with 200 and 300 series using the Shared Resource Manager (SRM).
Because of their performance, the US government placed the 500 series on its export restricted list. The computers were only permitted to be sold in Western Europe, Canada, Australia, and New Zealand, with any other country needing written approval.
Series 700
The first workstations in the series, the Model 720, Model 730 and Model 750 systems were introduced on 26 March 1991 and were code-named "Snakes". The models used the PA-7000 microprocessor, with the Model 720 using a 50 MHz version and the Model 730 and Model 750 using a 66 MHz version. The PA-7000 is provided with 128 KB of instruction cache on the Model 720 and 730 and 256 KB on the Model 750. All models are provided with 256 KB of data cache. The Model 720 and Model 730 supported 16 to 64 MB of memory, while the Model 750 supported up to 192 MB. Onboard SCSI was provided by an NCR 53C700 SCSI controller. These systems could use both 2D and 3D graphics options, with 2D options being the greyscale GRX and the color CRX. 3D options were the Personal VRX and the Turbo GRX.
In early January 1992, HP introduced the Model 705, code-named "Bushmaster Snake", and the Model 710, code-named "Bushmaster Junior". Both systems are low-end diskless workstations, with the Model 705 using a 32 MHz PA-7000 and the Model 710 using a 50 MHz version. At introduction, the Model 705 was priced at under US$5,000, and the Model 710 under US$10,000.
The first Series 700 workstations were superseded by the Model 715/33, 715/50, 725/50 low-end workstations and the Model 735/99, 735/125, 755/99 and 755/125 high-end workstations on 10 November 1992. The existing Model 715 and Model 725 were later updated with the introduction of the Model 715/75 and 725/75 in September 1993. The new models used a 75 MHz PA-7100.
Increasing integration led to the introduction of the Model 712/60 and Model 712/80i workstations on 18 January 1994. Code-named "Gecko", these models were intended to compete with entry-level workstations from Sun Microsystems and high-end personal computers. They used the PA-7100LC microprocessor operating at 60 and 80 MHz, respectively. The Model 712/80i was an integer-only model, with the floating-point unit disabled. Both supported 16 to 128 MB of memory.
The Model 715/64, 715/80, 715/100 and 725/100 were introduced in May 1994, targeted at the 2D and 3D graphics market. These workstations use the PA-7100LC microprocessor and supported 32 to 128 MB of memory, except for the Model 725/100, which supported up to 512 MB.
The Model 712/100 (King Gecko), an entry-level workstation, and Model 715/100 XC, a mid-range workstation, were introduced in June 1995. The Model 712/100 is a Model 712 with a 100 MHz PA-7100LC and 256 KB of cache while the Model 715/100 XC is a Model 715/100 with 1 MB of cache.
The Model 712 and 715 workstations feature the Lasi ASIC, connected by the GSC bus. The Lasi ASIC provided an integrated NCR 53C710 SCSI controller, an Intel Apricot 10 Mbit Ethernet interface, CD-quality sound, PS/2 keyboard and mouse, a serial and a parallel port. All models, except for the 712 series machines also use the Wax ASIC to provide an EISA adapter, a second serial port and support for the HIL bus.
The SGC bus (System Graphics Connect), which is used in the earlier Series 700 workstations, has specifications similar to PCI, with a 32-bit/33 MHz interface and a typical bandwidth of about 100 MB/s.
VME Industrial Workstations
Models 742i, 743i, 744, 745/745i, 747i, 748i.
B, C, J class
The C100, C110, J200, J210 and J210XC use the PA-7200 processor, connected to the UTurn IOMMU via the Runway bus. The C100 and C110 are single processor systems, and the J200 and J210 are dual processor systems. The Uturn IOMMU has two GSC buses. These machines continue to use the Lasi and Wax ASICs.
The B132L (introduced 1996), B160L, B132L+, B180L, C132L, C160L and C180L workstations are based on the PA-7300LC processor, a development of the PA-7100LC with integrated cache and GSC bus controller. Standard graphics is the Visualize EG. These machines use the Dino GSC to PCI adapter which also provides the second serial port in place of Wax; they optionally have the Wax EISA adapter.
The C160, C180, C180-XP, J280 and J282 use the PA-8000 processor and are the first 64-bit HP workstations. They are based on the same Runway/GSC architecture as the earlier C and J class workstations.
The C200, C240 and J2240 offer increased speed with the PA-8200 processor and the C360 uses the PA-8500 processor.
The B1000, B2000, C3000, J5000 and J7000 were also based on the PA-8500 processor, but had a very different architecture. The U2/Uturn IOMMU and the GSC bus is gone, replaced with the Astro IOMMU, connected via Ropes to several Elroy PCI host adapters.
The B2600, C3600 and J5600 upgrade these machines with the PA-8600 processor. The J6000 is a rack-mountable workstation which can also be stood on its side in a tower configuration.
The C3650, C3700, C3750, J6700 and J6750 are PA-8700-based.
The C8000 uses the dual-core PA-8800 or PA-8900 processors, which uses the same bus as the McKinley and Madison Itanium processors and shares the same zx1 chipset. The Elroy PCI adapters have been replaced with Mercury PCI-X adapters and one Quicksilver AGP 8x adapter.
Server models
800 Series 807, 817, 822, 825, 827, 832, 835, 837, 840, 842, 845, 847, 850, 855, 857, 867, 877, 887, 897
1200 FT Series 1210, 1245, 1245 PLUS
A-class A180, A180C (Staccato), A400, A500
D-class D200, D210, D220, D230, D250, D260, D270, D280, D300, D310, D320, D330, D350, D360, D370, D380, D390
E-class E25, E35, E45, E55
F-class F10, F20, F30 (Nova)
G-class G30, G40, G50, G60, G70 (Nova / Nova64)
H-class H20, H30, H40, H50, H60, H70
I-class I30, I40, I50, I60, I70
K-class K100, K200, K210, K220, K250, K260, K360, K370, K380, K400, K410, K420, K450, K460, K570, K580
L-class L1000, L1500, L2000, L3000
N-class N4000
N-class N4004
N-class N4005
N-class N4006
R-class R380, R390
S-class rebadged Convex Exemplar SPP2000 (single-node)
T-class T500, T520, T600
V-class V2200, V2250, V2500, V2600
X-class rebadged Convex Exemplar SPP2000 (multi-node)
rp2400 rp2400 (A400), rp2405 (A400), rp2430 (A400), rp2450 (A500), rp2470 (A500) (former A-class)
rp3400 rp3410-2, rp3440-4 (1-2 PA-8800/8900 processors)
rp4400 rp4410-4, rp4440-8
rp5400 rp5400, rp5405, rp5430, rp5450, rp5470 (former L-class)
rp7400 rp7400 (former N-class)
rp7405 rp7405, rp7410, rp7420-16, rp7440-16
rp8400 rp8400, rp8410, rp8420-32, rp8440-32
HP 9000 Superdome SD-32, SD-64, SD-128 (PA-8900 processors)
D-class (Codename: Ultralight)
The D-class are entry-level and mid-range servers that succeeded the entry-level E-class servers and the mid-range G-, H-, and I-class servers. The first models were introduced in late January 1996, consisting of the Model D200, D210, D250, D310 and D350. The Model D200 is a uniprocessor with a 75 MHz PA-7100LC microprocessor, support for up to 512 MB of memory and five EISA/HP-HSC slots. The Model D210 is similar, but used a 100 MHz PA-7100LC. The Model D250 is a dual-processor model using the 100 MHz PA-7100LC; it supported up to 768 MB of memory and had five EISA/HP-HSC slots. The Model D310 is a uniprocessor with a 100 MHz PA-7100LC, up to 512 MB of memory and eight EISA/HP-HSC slots. The Model D350, a high-end dual-processor D-class system, had two 100 MHz PA-7100LCs, up to 768 MB of memory and eight EISA/HP-HSC slots.
In mid-September 1996, two new D-class servers were introduced to utilize the new 64-bit PA-8000 microprocessor, the Model D270 uniprocessor and the Model D370 dual-processor. Both were positioned as entry-level servers. They used the 160 MHz PA-8000 and supported 128 MB to 1.5 GB of memory.
In January 1997, the low-end Model D220, D230, D320 and D330 were introduced, using 132 and 160 MHz versions of the PA-7300LC microprocessor.
The D-class are tower servers with up to two microprocessors and are architecturally similar to the K-class. They sometimes masquerade as larger machines as HP shipped them mounted vertically inside a large cabinet containing a power supply and multiple disks with plenty of room for air to circulate.
R-class
The R-class is simply a D-class machine packaged in a rack-mount chassis. Unlike the D-class systems, it does not support hot-pluggable disks.
N-class
The N-class is a 10U rackmount server with up to eight CPUs and 12 PCI slots. It uses two Merced buses, one for every four processor slots. It is not a NUMA machine, having equal access to all memory slots. The I/O is unequal, though; having one Ike IOMMU per bus means that one set of CPUs is closer to one set of I/O slots than the other.
The N-class servers were marketed as "Itanium-ready", although when the Itanium shipped, no Itanium upgrade was made available for the N class. The N class did benefit from using the Merced bus, bridging the PA-8x00 microprocessors to it via a special adapter called DEW.
The N4000 was upgraded with newer processors throughout its life, with models called N4000-36, N4000-44 and N4000-55 indicating microprocessor clock frequencies of 360, 440, and 550 MHz, respectively. It was renamed to the rp7400 series in 2001.
L-class
The L-class servers are 7U rackmount machines with up to 4 CPUs (depending on model). They have 12 PCI slots, but only 7 slots are enabled in the entry-level L1000 system. Two of the PCI slots are occupied by factory integrated cards and cannot be utilized for I/O expansion by the end-user.
The L1000 and L2000 are similar to the A400 and A500, being based on an Astro/Elroy combination. They initially shipped with 360 MHz and 440 MHz PA-8500 and were upgraded with 540 MHz PA-8600.
The L3000 is similar to the N4000, being based on a DEW/Ike/Elroy combination. It shipped only with 550 MHz PA-8600 CPUs.
The L-class family was renamed to the rp5400 series in 2001.
A-class
The A180 and A180C were 32-bit, single-processor, 2U servers based on the PA-7300LC processor with the Lasi and Dino ASICs.
The A400 and A500 servers were 64-bit, single and dual-processor 2U servers based on the PA-8500 and later processors, using the Astro IOMMU and Elroy PCI adapters. The A400-36 and A500-36 machines used the PA-8500 processor running at 360 MHz; the A400-44 and A500-44 are clocked at 440 MHz. The A500-55 uses a PA-8600 processor running at 550 MHz and the A500-75 uses a PA-8700 processor running at 750 MHz.
The A-class was renamed to the rp2400 series in 2001.
S/X-class
The S- and X-class were Convex Exemplar SPP2000 supercomputers rebadged after HP's acquisition of Convex Computer in 1995. The S-class was a single-node SPP2000 with up to 16 processors, while the X-class name was used for multi-node configurations with up to 512 processors. These machines ran Convex's SPP-UX operating system.
V-class
The V-class servers were based on the multiprocessor technology from the S-class and X-class. The V2200 and V2250 support a maximum of 16 processors, and the V2500 and V2600 support a maximum of 32 processors. The V-class systems are physically large systems that need extensive cooling and three-phase electric power to operate. They provided a transitional platform between the T-class and the introduction of the Superdome.
Operating systems
Apart from HP-UX and Domain/OS (on the 400), many HP 9000s can also run the Linux operating system. Some PA-RISC-based models are able to run NeXTSTEP.
Berkeley Software Distribution (BSD) Unix was ported to the HP 9000 as HPBSD; the resulting support code was later added to 4.4BSD. Its modern variants NetBSD and OpenBSD also support various HP 9000 models, both Motorola 68k and PA-RISC based.
In the early 1990s, several Unix R&D systems were ported to the PA-RISC platform, including several attempts of OSF/1, various Mach ports and systems that combined parts of Mach with other systems (MkLinux, Mach 4/Lites). The origin of these ports were mostly either internal HP Labs projects or HP products, or academic research, mostly at the University of Utah.
One project conducted at HP Laboratories involved replacing core HP-UX functionality, specifically the virtual memory and process management subsystems, with Mach functionality from Mach 2.0 and 2.5. This effectively provided a vehicle to port Mach to the PA-RISC architecture, as opposed to starting with the Berkeley Software Distribution configured to use the Mach kernel infrastructure and porting this to PA-RISC, and thereby delivered a version of HP-UX 2.0 based on Mach, albeit with certain features missing from both Mach and HP-UX. The motivation for the project was to investigate performance issues with Mach related to the cache architecture of PA-RISC along with potential remedies for these issues.
See also
HP 3000
HPE Integrity Servers
HP Superdome
HP 9800 series, prior series of scientific computer workstations
HP 7935 disc drive
Notes
External links
HP 9000 evolution, HP 9000 evolution to HP Integrity
Official HP Mission-Critical Musings Blog
HP 9836 at old-computers.com
HP Computer Museum
OpenPA.net Information resource on HP PA-RISC-based computers, including HP 9000/700, 800 and later systems
Community site about HP 9000 workstations and servers, collecting information, part numbers, and documentation in PDF format.
9000
Computer workstations
Computer-related introductions in 1984
32-bit computers
64-bit computers | Operating System (OS) | 1,191 |
Sun-3
Sun-3 is a series of UNIX computer workstations and servers produced by Sun Microsystems, launched on September 9, 1985. The Sun-3 series are VMEbus-based systems similar to some of the earlier Sun-2 series, but using the Motorola 68020 microprocessor, in combination with the Motorola 68881 floating-point co-processor (optional on the Sun 3/50) and a proprietary Sun MMU. Sun-3 systems were supported in SunOS versions 3.0 to 4.1.1_U1 and also have current support in NetBSD and Linux.
Sun-3 models
Models are listed in approximately chronological order.
{| class="wikitable"
!Model
!Codename
!CPU board
!CPU MHz
!Max. RAM
!Chassis
|-
| 3/75
| Carrera
| Sun 3004
| 16.67 MHz
| 8 MB
| 2-slot VME (desktop)
|-
| 3/140
| Carrera
| Sun 3004
| 16.67 MHz
| 16 MB
| 3-slot VME (desktop/side)
|-
| 3/160
| Carrera
| Sun 3004
| 16.67 MHz
| 16 MB
| 12-slot VME (deskside)
|-
| 3/180
| Carrera
| Sun 3004
| 16.67 MHz
| 16 MB
| 12-slot VME (rackmount)
|-
| 3/150
| Carrera
| Sun 3004
| 16.67 MHz
| 16 MB
| 6-slot VME (deskside)
|-
| 3/50
| Model 25
| —
| 15.7 MHz
| 4 MB
| "wide Pizza-box" desktop
|-
| 3/110
| Prism
| —
| 16.67 MHz
| 12 MB
| 3-slot VME (desktop/side)
|-
| 3/260
| Sirius
| Sun 3200
| 25 MHz (CPU), 20 MHz (FPU)
| 32 MB
| 12-slot VME (deskside)
|-
| 3/280
| Sirius
| Sun 3200
| 25 MHz (CPU), 20 MHz (FPU)
| 32 MB
| 12-slot VME (rackmount)
|-
| 3/60
| Ferrari
| —
| 20 MHz
| 24 MB
| "wide Pizza-box" desktop
|-
| 3/E
| Polaris
| Sun 3/E
| 20 MHz
| 16 MB
| none (6U VME board)
|}
(Max. RAM sizes may be greater when third-party memory boards are used.)
Keyboard
The Sun Type 3 keyboard is split into three blocks:
special keys
main block
numeric pad
It shipped with Sun-3 systems.
Sun-3x
In 1989, coincident with the launch of the SPARCstation 1, Sun launched three new Sun-3 models, the 3/80, 3/470 and 3/480. Unlike previous Sun-3s, these use a Motorola 68030 processor, 68882 floating-point unit, and the 68030's integral MMU. This 68030-based architecture is called Sun-3x.
{| class="wikitable"
!Model
!Codename
!CPU board
!CPU MHz
!Max. RAM
!Chassis
|-
| 3/80
| Hydra
| -
| 20 MHz
| 16, 40 or 64 MB
| "Pizza-box" desktop
|-
| 3/460
| Pegasus
| Sun 3400
| 33 MHz
| 128 MB
| 12-slot VME (deskside, older design)
|-
| 3/470
| Pegasus
| Sun 3400
| 33 MHz
| 128 MB
| 12-slot VME (deskside, newer design)
|-
| 3/480
| Pegasus
| Sun 3400
| 33 MHz
| 128 MB
| 12-slot VME (rackmount)
|}
Sun 3/260s upgraded with Sun 3400 CPU boards are known as Sun 3/460s.
See also
Sun-1
Sun-2
Sun386i
Sun-4
SPARCstation
References
External links
Sun Microsystems
The Sun Hardware Reference, Part 1
Sun Field Engineer Handbook, 20th edition
Peter's Sun3 Zoo
Bruce Becker's Sun 3 archive
Obsolyte!—Fan site for old Unix Workstations, including Sun machines
68k architecture
Computer-related introductions in 1985
Sun servers
Sun workstations
32-bit computers | Operating System (OS) | 1,192 |
Features new to Windows 11
Windows 11, a major release of the Windows NT operating system and the successor to Windows 10, introduces new features compared to its predecessors. Some of these include a redesigned interface, new productivity and social features, and updates to security and accessibility, alongside improvements to performance.
Windows shell and user interface
Fluent Design System
Updates to the Fluent Design System, a design language introduced by Microsoft in 2017, are featured in Windows 11. According to Microsoft, the design of Windows 11 is "effortless, calm, personal, familiar, complete, and coherent." The redesign focuses on simplicity, ease of use, and flexibility, addressing some of the deficiencies of Windows 10. Most interfaces in Windows 11 are streamlined and feature rounded geometry, refreshed iconography, new typography, and a refreshed color palette. In addition, translucency and shadows are made more prevalent throughout the system. Windows 11 also introduces "Mica", a new opaque Material that is tinted with the color of the desktop wallpaper.
Start menu
The Start menu has been significantly redesigned in Windows 11, adhering to the principles of the updated Fluent Design System. The menu has now been moved to the center (but can be moved back to the left-hand corner), with the Live Tiles feature from Windows 8 being replaced by a set of pinned apps and a new cloud-powered "Recommended" section that shows recently opened files and documents from any location, including a PC, a smartphone, and OneDrive. The new Start menu also includes a search box.
Taskbar
The Taskbar has also been center-aligned, and now includes new animations for pinning, rearranging, minimizing and switching apps on the Taskbar. The buttons can still be moved to the left-hand corner as in Windows 10.
Notification Center & Quick Settings
The Action Center from Windows 10 has been replaced by a Notification Center and a Quick Settings menu, both accessible from the lower-right corner of the Taskbar. The Notification Center contains all the user's notifications and a full-month calendar, while the Quick Settings menu lets the user quickly manage common PC settings such as volume, brightness, Wi-Fi, Bluetooth and Focus Assist. Directly above the Quick Settings menu, the user can see media playback information when watching a video in, for example, YouTube, or when listening to music in apps like Spotify.
File Explorer
The File Explorer on Windows 11 has been refreshed with the Fluent Design System and the Ribbon interface has been replaced with a new command bar. It also introduces revamped context menus with rounded corners, larger text, and Acrylic. App developers will also be able to extend the new context menus.
Themes
In addition to the new default themes for both light and dark mode, Windows 11 includes four additional themes. It also adds new high-contrast themes for people with visual impairments.
Sounds
Windows 11 introduces a new set of system sounds. The sounds are also slightly different depending on whether the theme is set to light or dark mode. In addition, a new Windows startup sound replaces the one used since Windows Vista.
Widgets
Windows 11 adds a new taskbar flyout named "Widgets", which displays a panel with Microsoft Start, a news aggregator with personalized stories and content (expanding upon the "news and interests" panel introduced in later builds of Windows 10). The user can customize the panel by adding or removing widgets, rearranging, resizing, and personalizing the content.
Other UI improvements
Windows 11 updates several system dialog boxes such as the alert for when the battery is running low.
The taskbar previews have been updated to reflect Windows 11's new visual design.
The hidden icons flyout on the lower-right corner of the taskbar has also been redesigned to match Windows 11's visuals.
Multitasking
Snap layouts
Users can now hover over a window's maximize button to view available snap layouts, and then click a zone to snap the window. They will then be guided to snap windows to the rest of the zones within the layout using a guided snap assist. There is a set of four available snap layouts on smaller screens.
Snap groups
Snap groups are a way to easily switch back to a set of snapped windows.
Virtual desktops
Virtual desktops can be accessed via the Task View feature on the Taskbar. Users can reorder and customize the background for each of their desktops. They can also hover over the Task View button on the Taskbar to quickly access their desktops or to create a new one.
Docking
When the user undocks a laptop, the windows on the monitor will be minimized, and when the laptop is redocked to a monitor, Windows will put everything exactly where it was before.
Input
Touch keyboard
Windows 11 introduces thirteen new themes to customize the touch keyboard, including 3 hardware matching themes that match the Surface keyboard colors. It also adds a new theme engine that allows the user to create a custom theme using background images. In addition, Windows 11 adds the ability to resize the touch keyboard.
Voice typing
Windows 11 includes a new voice typing launcher to easily start voice typing in a selected field. It is turned off by default, but it can be turned on in the Settings and placed in any area of the screen.
Touch improvements
Windows 11 also features improvements to touch-based interactions. Tablet mode is removed; instead, Windows will automatically adapt when needed. New and improved gestures can be used on tablets and touchscreens. App windows now have larger touch targets, and will automatically arrange themselves in split view when the screen is rotated. Windows 11 is designed to work well on both desktops and tablets without combining the two experiences as Windows 8 and Windows 10 did.
Pen menu
For digital pen users, a new pen menu has been added, which is accessible by clicking the pen icon on the taskbar. By default, it contains two apps which can be customized by clicking the gear icon and selecting "Edit pen menu". In the flyout, users can add up to four of their favorite drawing or writing apps to the pen menu to open them quickly when using a pen.
Language and input switcher
A switcher that will show up next to the Quick Settings menu allows the user to switch languages and keyboard layouts. Users can press the Windows + Spacebar keyboard shortcut to toggle between input methods.
Display improvements
Dynamic refresh rate
Dynamic Refresh Rate allows the PC to automatically boost the refresh rate when scrolling or inking, and to lower it when possible to save battery power.
Other display improvements
Other display improvements coming with Windows 11 include Auto HDR, the ability to disable content adaptive brightness control (CABC), HDR support for color-managed apps, and HDR certification.
Development platform
Windows Subsystem for Android
Windows 11 will also allow users to install and run Android apps onto their device using the new Windows Subsystem for Android (WSA) and the Android Open Source Project (AOSP). This runs with Intel Bridge Technology, a runtime postcompiler that enables applications written for other architectures to run on x86. These apps can be obtained from within the Microsoft Store via the Amazon Appstore, or through any source.
Bundled software
Microsoft Store
The Microsoft Store, which serves as a unified storefront for apps and other content, is also redesigned in Windows 11. Microsoft now allows developers to distribute apps built with the Win32 API, progressive web applications, and other packaging technologies in the Microsoft Store, alongside the standard Universal Windows Platform apps. The new Microsoft Store will also enable users to install Android apps onto their devices via the Amazon Appstore. This feature requires a Microsoft account, an Amazon account, and a one-time installation of the Amazon Appstore client for Windows.
Microsoft Teams
The collaboration platform Microsoft Teams is directly integrated into Windows 11. Skype will no longer be bundled with the OS by default. Teams will appear as an icon in the Windows taskbar, letting users message and call their contacts instantly.
Settings
The Settings app, first introduced in Windows 8, has been redesigned in Windows 11 to be visually pleasing, easy to use and inclusive. It has a left-hand navigation pane that persists between pages, and it adds breadcrumbs as the user navigates deeper into the settings to help them know where they are and not get lost. The Settings app also includes brand new pages, with new controls at the top that highlight key information and frequently used settings for the user to adjust as they need. These new controls span several category pages like System, Bluetooth & devices, Personalization, Accounts and Windows Update. It also adds expandable boxes for pages with many settings.
Snipping Tool
In Windows 11, both the classic Snipping Tool and Snip & Sketch apps have been replaced by a new Snipping Tool app that combines the best of both in the next generation of screen capture for Windows. The Snipping Tool on Windows 11 includes a new user interface that builds on the classic app with extra features such as the Windows + Shift + S keyboard shortcut from Snip & Sketch and richer editing. Windows 11 also introduces a new Settings page for the Snipping Tool. In addition, the new Snipping Tool adds support for dark mode.
Calculator
The Calculator app has also been redesigned for Windows 11. Like the Snipping Tool, it includes a new app theme setting. The Calculator has been completely rewritten in C# and includes several new features.
Mail & Calendar
The Mail and Calendar apps have been updated with a new visual style. They include rounded corners and other adjustments to make them look and feel part of Windows 11. The Mail and Calendar apps can also reflect the Windows theme.
Clock
The Clock app is getting an updated look with support for Focus Sessions and Spotify integration on Windows 11. The Focus Sessions integration in Windows 11 will allow the user to pick a task from Microsoft To Do and play music in the background while they complete their work.
Photos
Windows 11 updates the Photos app with a new viewing experience, editing features, Fluent Design, WinUI controls, rounded corners, and more. The Photos app, which is set as the default image viewer in Windows 11, lets users browse their collection, albums, and folders. The Collection feature remains unchanged and shows the most recent photos and screenshots, organized by date. Albums are also generated automatically using Microsoft's AI technology, but users can always customize the experience with their own albums. The Photos app is also getting a floating menu with new editing controls and will let users compare up to four pictures at once.
Tips
Windows 11 introduces a refreshed Tips app with a new look and additional UI updates. It comes with over 100 new tips to get started with Windows 11 or to learn new things.
Paint
One of the oldest Windows apps, which remained unchanged since Windows 7, has been given an updated user interface with rounded corners and the Mica material for Windows 11. The most prominent change to Paint is a new simplified toolbar, a rounded color palette, and a new set of drop-down menus.
Other applications
Notepad and Voice Recorder also feature refreshed interfaces. These apps now feature designs adhering to the Fluent Design principles.
The Microsoft Office apps have been redesigned to align with Fluent Design.
Windows 11 is also getting a new Media Player app, which will act as a replacement for Windows 10's Groove Music app.
Multimedia and gaming
Xbox app
An updated Xbox app is bundled with Windows 11. Features such as Xbox Cloud Gaming and Xbox Game Pass are integrated directly into the app.
Other features
The Auto HDR and DirectStorage technologies introduced by the Xbox Series X and Series S will be integrated into Windows 11; the latter requires a graphics card supporting DirectX 12 Ultimate, and an NVMe solid-state drive.
System security and performance
Microsoft promoted performance improvements such as smaller update sizes, faster web browsing in "any browser", faster wake time from sleep mode, and faster Windows Hello authentication.
Security
As part of the minimum system requirements, Windows 11 only officially supports devices with a Trusted Platform Module 2.0 security coprocessor. According to Microsoft, the TPM 2.0 coprocessor is a "critical building block" for protection against firmware and hardware attacks. In addition, Microsoft now requires devices with Windows 11 to include virtualization-based security (VBS), hypervisor-protected code integrity (HVCI), and Secure Boot built-in and enabled by default. The operating system also features hardware-enforced stack protection for supported Intel and AMD processors for protection against zero-day exploits. Windows 11 Home requires an internet connection and a Microsoft account for first time setup.
See also
References
Windows 11
Software features
Microsoft lists
Computing-related lists | Operating System (OS) | 1,193 |
Corvus Systems
Corvus Systems was a computer technology company that offered, at various points in its history, computer hardware, software, and complete PC systems.
History
Corvus was founded by Michael D'Addio and Mark Hahn in 1979. This San Jose, Silicon Valley company pioneered in the early days of personal computers, producing the first hard disk drives, data backup, and networking devices, commonly for the Apple II series. The combination of disk storage, backup, and networking was very popular in primary and secondary education. A classroom would have a single drive and backup with a full classroom of Apple II computers networked together. Students would log in each time they used a computer and access their work via the Corvus Omninet network, which also supported email.
They went public in 1981 and were traded on the NASDAQ exchange. In 1985 Corvus acquired a company named Onyx & IMI. IMI (International Memories Incorporated) manufactured the hard disks used by Corvus.
The New York Times followed their financial fortunes. They were a modest success in the stock market during their first few years as a public company. The company's founders left Corvus in 1985 as the remaining board of directors made the decision to enter the PC clone market. D'Addio and Hahn went on to found Videonics in 1986, the same year Corvus discontinued hardware manufacturing.
In 1987, Corvus filed for Chapter 11. That same year two top executives left. Its demise was partially caused by Ethernet establishing itself over Omninet as the local area network standard for PCs, and partially by the decision to become a PC clone company in a crowded and unprofitable market space.
Disk drives and backup
The company modified the Apple II's DOS operating system to enable the use of Corvus's 10 MB Winchester-technology hard disk drives. Apple DOS was normally limited to 140 KB floppy disks. The Corvus disks not only increased the size of available storage but were also considerably faster than floppy disks. These disk drives were initially sold to software engineers inside Apple Computer.
The disk drives were manufactured by IMI (International Memories Incorporated) in Cupertino, California. Corvus provided the hardware and software to interface them to Apple IIs, Tandy TRS-80s, Atari 800s, and S-100 bus systems. Later, the DEC Rainbow, Corvus Concept, IBM PCs and Macs were added to the list. These 5 MB and 10 MB drives were twice the size of a shoebox and initially retailed for US$5000. Corvus sold many standalone drives, and their numbers increased as they became shared over Omninet. This allowed sharing a then-very-costly hard drive among multiple inexpensive Apple II computers. An entire office or classroom could thus share a single Omninet-connected Corvus drive.
Certain models of the drives offered a tape backup option called "Mirror" that made hard disk backups using a VCR, which was itself a relatively new technology. A standalone version of "Mirror" was also made available. Data was backed up at roughly one megabyte per minute, which resulted in five- or ten-minute backup times. Tapes could hold up to 73 MB. Even though Corvus held a patent on this technology, several other computer companies later used this technique.
A later version of tape backup for the Corvus Omninet was called The Bank, and was a standalone Omninet-connected device that used custom backup tape media very similar in shape and size to today's DLT tapes. Both the Corvus File Server and The Bank tape backup units were in white plastic housings roughly the size of two stacked reams of paper.
Networking
In 1980 Corvus came out with the first commercially successful local area network (LAN), called Omninet. Most Ethernet deployments of the time ran at 3 Mbit/s and cost one or two thousand dollars per computer. Ethernet also used a thick and heavy cable that felt like a lead pipe when bent, which was run in proximity to each computer, often in the ceiling plenum. The weight of the cable was such that injury to workers from ceiling failure and falling cables was a real danger. A transceiver unit was spliced or tapped into the cable for each computer, with an additional AUI cable running from the transceiver to the computer itself.
Corvus's Omninet ran at one megabit per second, used twisted pair cables and had a simple add-in card for each computer. The card cost $400 and could be installed by the end user. Cards and operating software were produced for both the Apple II and the IBM PC and XT. At the time, many networking experts said that twisted pair could never work because "the bits would leak off", but it eventually became the de facto standard for wired LANs.
Other Omninet devices included the "Utility Server" that was an Omninet connected device that allowed one Parallel printer and two Serial devices (usually printers) connected to it to be shared on an Omninet network. Internally the Utility Server was a single-board Z80 computer with 64 kB of RAM, and on startup the internal boot ROM retrieved its operating program from the File Server. The literature/documentation and software that shipped with the Utility Server included a memory map and I/O ports writeup. It was possible to replace the Utility Server's operating code file with a stand-alone copy of WordStar configured for the serial port, and to fetch and save its files on the file server. A dumb terminal connected to the first serial port then became an inexpensive diskless word processing station.
A single Omninet was limited to 64 devices, and the device address was set with a DIP switch. Device zero was the first file server, device one was the Mirror or The Bank tape backup, and the rest were user computers or Utility Servers. Systems with more than one file server had the servers at address zero and up, then the tape backup, then the user computers. No matter what the configuration, there could only be 64 devices.
Corvus Concept
In April 1982, Corvus launched a computer called the Corvus Concept. This was a Motorola 68000-based computer in a pizza-box case with a 15" full-page display mounted on its top, the first that could be rotated between landscape and portrait modes. Changing display orientation did not require rebooting the computer; switching was automatic and seamless, selected by a mercury switch inside the monitor shell. The screen resolution was 720×560 pixels. Positioned vertically, the monitor displayed 72 rows by 91 columns of text; positioned horizontally, it displayed 56 rows by 120 columns.
The first version of the Concept came with 256 kB standard, and expanding the RAM to its maximum supported capacity of 1MB cost $995 at the time. The Concept was capable of using more RAM, and a simple hack provided up to 4MB. The failure of the Concept was mostly related to its lack of compatibility with the IBM PC, introduced the previous August.
The Concept interface, though not a GUI, was a standardized text user interface that made heavy use of function keys. Application programs could contextually redefine these keys, and the current command performed by each key was displayed on a persistent status line at the bottom of the screen. The function keys were placed on the top row of the keyboard close to their onscreen representation. A crude "Paint" program was available for $395 that permitted a user to create simple bitmap graphics. These could be pasted into Corvus' word processing program called "Edword", which was quite powerful by the standards of the day; it was judged to be worth the cost of the system by itself.
The operating system, called CCOS, was prompt driven, communicating with the user in full sentences such as Copy which file(s)? when the "Copy file" function key was pressed. The user would respond by typing the path of the file to be copied. The OS would then prompt for a destination path. Wildcard pattern matching was supported using the * and ? characters. The OS supported pipes and "Exec files", which were similar to shell scripts.
Versions of the Concept running Unix were available; these configurations could not run standard Concept software. The UCSD p-System was available, and a Pascal compiler supporting most UCSD extensions was offered; FORTRAN was also standard. Built-in BASIC was also an option, enabling the computer to boot without a disk attached. A software CP/M emulator was available from Corvus, but it was of limited usefulness since it only emulated 8080 instructions and not the more common Z80-specific instructions. Wesleyan University ported the KERMIT file transfer protocol to the system.
The entire motherboard could slide out of the back of the cabinet for easy access to perform upgrades and repairs. The system was equipped with four 50-pin Apple II bus compatible slots for expansion cards. External 5.25" and 8" floppy disk drive peripherals (made by Fujitsu) were available for the Concept. The 8" drive had a formatted capacity of 250kB. The 5.25" drive was read-only, and disks held 140kB. The video card was integrated in the monitor's update circuitry. The system had a battery-backed hardware clock that stored the date and month, but not the year. There was a leap year switch that set February to have 29 days.
The system had a built in Omninet port on it. The system could boot from a locally connected floppy disk or Corvus Hard Drive or it could be booted over the Omninet network.
In 1984, the base 256K system cost $3995 with monitor and keyboard and bundled Edword word processor. The floppy drive cost an additional $750. Hard drives from 6MB ($2195) to 20MB ($3995) were also available (SCSI I on some). A software bundle containing ISYS integrated spreadsheet, graphing, word processing, and communication software cost $495. The hardware necessary for networking cost $495 per workstation. The Concept Unix workstation came with 512K and cost $4295 for the Concept Uniplex that can be expanded to two users and $5995 for the Concept Plus that can service eight users. The Concept was available as part of turnkey systems from OEMs, such as the Oklahoma Seismic Corporation Mira for oil well exploration, and the KeyText Systems BookWare for publishing.
References
External links
Collection of Corvus documentation
The Corvus Museum Website
1979 establishments in California
1987 disestablishments in California
American companies established in 1979
American companies disestablished in 1987
Computer companies established in 1979
Computer companies disestablished in 1987
Defunct computer companies of the United States
Defunct computer hardware companies
Manufacturing companies based in San Jose, California | Operating System (OS) | 1,194 |
Window (computing)
In computing, a window is a graphical control element. It consists of a visual area containing some of the graphical user interface of the program it belongs to and is framed by a window decoration. It usually has a rectangular shape that can overlap with the area of other windows. It displays the output of and may allow input to one or more processes.
Windows are primarily associated with graphical displays, where they can be manipulated with a pointer by employing some kind of pointing device. Text-only displays can also support windowing, as a way to maintain multiple independent display areas, such as multiple buffers in Emacs. Text windows are usually controlled by keyboard, though some also respond to the mouse.
A graphical user interface (GUI) using windows as one of its main "metaphors" is called a windowing system, whose main components are the display server and the window manager.
History
The idea was developed at the Stanford Research Institute (led by Douglas Engelbart). Their earliest systems supported multiple windows, but there was no obvious way to indicate boundaries between them (such as window borders, title bars, etc.).
Research continued at Xerox Corporation's Palo Alto Research Center / PARC (led by Alan Kay). They used overlapping windows.
During the 1980s the term "WIMP", which stands for window, icon, menu, pointer, was coined at PARC.
Apple had worked with PARC briefly at that time. Apple developed an interface based on PARC's interface. It was first used on Apple's Lisa and later Macintosh computers. Microsoft was developing Office applications for the Mac at that time. Some speculate that this gave them access to Apple's OS before it was released and thus influenced the design of the windowing system in what would eventually be called Microsoft Windows.
Properties
Windows are two-dimensional objects arranged on a plane called the desktop (following the desktop metaphor). In a modern full-featured windowing system they can be resized, moved, hidden, restored or closed.
Windows usually include other graphical objects, possibly including a menu-bar, toolbars, controls, icons and often a working area. In the working area, the document, image, folder contents or other main object is displayed. Around the working area, within the bounding window, there may be other smaller window areas, sometimes called panes or panels, showing relevant information or options. The working area of a single document interface holds only one main object. "Child windows" in multiple document interfaces, and tabs for example in many web browsers, can make several similar documents or main objects available within a single main application window. Some windows in Mac OS X have a feature called a drawer, which is a pane that slides out of the side of the window to show extra options.
Applications that can run either under a graphical user interface or in a text user interface may use different terminology. GNU Emacs uses the term 'window' to refer to an area within its display while a traditional window, such as controlled by an X11 window manager, is called a 'frame'.
Any window can be split into the window decoration and the window's content, although some systems purposely eschew window decoration as a form of minimalism.
Window decoration
The window decoration is a part of a window in most windowing systems.
A window decoration typically consists of a title bar, usually along the top of each window, and a minimal border around the other three sides. On Microsoft Windows this is called the "non-client area".
In the predominant layout for modern window decorations, the top bar contains the title of that window and buttons which perform windowing-related actions such as:
Close
Maximize
Minimize
Resize
Roll-up
The border exists primarily to allow the user to resize the window, but also to create a visual separation between the window's contents and the rest of the desktop environment.
Window decorations are considered important for the design of the look and feel of an operating system and some systems allow for customization of the colors, styles and animation effects used.
Window border
A window border is a window decoration component provided by some window managers that appears around the active window. Some window managers may also display a border around background windows. Typically window borders can be used to provide window motion, enabling the window to be moved or resized by using a drag action. Some window managers provide borders which are purely decorative and offer no window motion facility; these window managers do not allow windows to be resized by using a drag action on the border.
Title bar
The title bar is a graphical control element and part of the window decoration provided by some window managers. By convention it is located at the top of the window as a horizontal bar. The title bar is typically used to display the name of the application, or the name of the open document, and may provide title bar buttons for minimizing, maximizing, closing or rolling up application windows. Typically title bars can be used to provide window motion, enabling the window to be moved around the screen by using a drag action. Some window managers provide title bars which are purely decorative and offer no window motion facility; these window managers do not allow windows to be moved around the screen by using a drag action on the title bar.
Default title-bar text often incorporates the name of the application and/or of its developer. The name of the host running the application also appears frequently. Various methods (menu-selections, escape sequences, setup parameters, command-line options – depending on the computing environment) may exist to give the end-user some control of title-bar text. Document-oriented applications like a text editor may display the filename or path of the document being edited. Most web browsers will render the contents of the HTML element title in their title bar, sometimes pre- or postfixed by the application name. Google Chrome and some versions of Mozilla Firefox place their tabs in the title bar. This makes it unnecessary to use the main window for the tabs, but usually results in the title becoming truncated. An asterisk at its beginning may be used to signify unsaved changes.
The title bar often contains widgets for system commands relating to the window, such as a maximize, minimize, rollup and close buttons; and may include other content such as an application icon, a clock, etc.
In many graphical user interfaces, including the Mac OS and Microsoft Windows interfaces, the user may move a window by grabbing the title bar and dragging.
Titlebar buttons
Some window managers provide titlebar buttons which provide the facility to minimize, maximize, roll-up or close application windows. Some window managers may display the titlebar buttons in the taskbar or taskpanel, rather than in the titlebars.
The following buttons may appear in the titlebar:
Close
Maximize
Minimize
Resize
Roll-up (or WindowShade)
Note that a context menu may be available from some titlebar buttons or by right-clicking.
Titlebar icon
Some window managers display a small icon in the titlebar that may vary according to the application on which it appears. The titlebar icon may behave like a menu button, or may provide a context menu facility. OS X applications commonly have a proxy icon next to the window title that functions the same as the document's icon in the file manager.
Document status icon
Some window managers display an icon or symbol to indicate that the contents of the window have not been saved or confirmed in some way: Mac OS X displays a dot in the center of its close button; RISC OS appends an asterisk to the title.
Tiling window managers
Some tiling window managers provide title bars which are purely for informative purposes and offer no controls or menus. These window managers do not allow windows to be moved around the screen by using a drag action on the titlebar and may also serve the purpose of a status line from stacking window managers.
In popular operating systems
See also
Client-Side Decoration
Display server
Graphical user interface
Human interface guidelines
WIMP (computing)
Window manager
References
Graphical control elements
Graphical user interface elements | Operating System (OS) | 1,195 |
Webbook
Webbooks (a portmanteau of web and notebook computer) are a class of laptop computers such as the litl, Elonex and Coxion webbook computers.
The word may also refer to books that are available in HTML on the web, and to the NIST Chemistry WebBook, a scientific database of chemical properties maintained by the National Institute of Standards and Technology.
The word may also refer to The WebBook Company of Birmingham, Michigan, which planned to deliver a Net computer based on the PSC1000 RISC processor (then and now also known as the ShBoom) designed by Charles H. Moore.
Legal issues
was filed by Robert & Colleen Kell of Austin, Texas on 18 November 2008. The application was deemed abandoned on Aug. 23, 2009.
See also
Netbook
Tablet computer
References
Mobile computers
Electronic publishing | Operating System (OS) | 1,196 |
Obliq
Obliq is an interpreted, object-oriented programming language designed to make distributed, and locally multithreaded, computing simpler and easier to program, while providing program safety and an implicit type system. The interpreter is written in Modula-3, and provides Obliq with full access to Modula-3's network objects abilities. A type inference algorithm for record concatenation, subtyping, and recursive types has been developed for Obliq; the underlying inference problem has been proved to be NP-complete, and complexity bounds for it, as well as for its best known implementation, have been established.
Obliq's syntax is very similar to Modula-3, the biggest difference being that Obliq has no need of explicitly typed variables (i.e., a variable can hold any value allowed by the type checker; if an expression is given a value it does not accept, an execution error is reported), although explicit type declarations are allowed and are ignored by the interpreter. The basic data types in the language include booleans, integers, reals, characters, strings, and arrays. Obliq supports the usual set of sequential control structures (conditional, iteration, and exception handling forms), and special control forms for concurrency (mutexes and guarded statements). Further, Obliq's objects can be cloned and copied remotely by any machine in a distributed network of objects, safely and transparently.
Obliq's large standard library provides strong support for mathematical operations, input/output (I/O), persistence, thread control, graphics, and animation. Distributed computing is object-based: objects hold a state, which is local to one process. Scope of objects and other variables is purely lexical. Objects can call methods of other objects, even if those objects are on another machine on the network. Obliq objects are simply collections of named fields (similar to slots in Self and Smalltalk), and support inheritance by delegation (like Self).
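The object model described above can be illustrated with a minimal sketch in Obliq-style syntax, based on published descriptions of the language rather than a particular implementation, so exact syntactic details may vary; the names o and o2 are purely illustrative. An object is written as a collection of named fields, a method receives the object itself as its first parameter, and clone produces a copy whose fields can be updated independently:

    let o = { x => 3,
              m => meth(s, y) s.x + y end };   (* an object with a value field and a method *)
    o.m(4);                                    (* method invocation; yields 7 *)
    let o2 = clone(o);                         (* make a copy of the object *)
    o2.x := 10;                                (* update the field on the clone *)
    o2.m(4);                                   (* yields 14, while o.x is still 3 *)

Because the method's first parameter (here s) is bound to the object on which it is invoked, the clone's method sees the clone's own updated field; in a distributed setting the same selection, invocation and cloning operations apply transparently to objects held by other processes, as noted above.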
The common uses of Obliq involve programming over networks, 3D animation, and distributed computing, as occurs over a local area network (LAN) such as Ethernet. Obliq is included free with the Digital Equipment Corporation (DEC) Modula-3 distribution, but other free versions exist elsewhere including precompiled binaries for several operating systems.
Projects using Obliq
The Collaborative Active Textbooks (CAT), developed using Obliq applets and the Zeus algorithm animation system (written in Modula-3).
Obliq applets (Oblets): Obliq applications embedded in web pages, run by a special web browser (written in Modula-3).
References
External links
Luca Cardelli's Obliq Quick Start page (archived on 2008-10-17)
Modula programming language family
Oberon programming language family
Prototype-based programming languages
Procedural programming languages
Systems programming languages
Programming languages created in 1993
Dynamically typed programming languages | Operating System (OS) | 1,197 |
PC-based IBM mainframe-compatible systems
Since the rise of the personal computer in the 1980s, IBM and other vendors have created PC-based IBM-compatible mainframes which are compatible with the larger IBM mainframe computers. For a period of time PC-based mainframe-compatible systems had a lower price and did not require as much electricity or floor space. However, they sacrificed performance and were not as dependable as mainframe-class hardware. These products have been popular with mainframe developers, in education and training settings, for very small companies with non-critical processing, and in certain disaster relief roles (such as field insurance adjustment systems for hurricane relief).
Background
Up until the mid-1990s, mainframes were very large machines that often occupied entire rooms. The rooms were often air conditioned and had special power arrangements to accommodate the three-phase electric power required by the machines. Modern mainframes are now physically comparatively small and require little or no special building arrangements.
System/370
IBM had demonstrated use of a mainframe instruction set in their first desktop computer—the IBM 5100, released in 1975. This product used microcode to execute many of the System/370's processor instructions, so that it could run a slightly modified version of IBM's APL mainframe program interpreter.
In 1980 rumors spread of a new IBM personal computer, perhaps a miniaturized version of the 370. In 1981 the IBM Personal Computer appeared, but it was not based on the System 370 architecture. However, IBM did use their new PC platform to create some exotic combinations with additional hardware that could execute S/370 instructions locally.
Personal Computer XT/370
In October 1983, IBM announced the IBM Personal Computer XT/370. This was essentially a three-in-one product: it could run PC DOS locally, it could act as a 3270 terminal, and, most importantly relative to an IBM 3270 PC, it could execute S/370 instructions locally.
The XT/370 was an IBM Personal Computer XT (System Unit 5160) with three custom 8-bit cards. The processor card (370PC-P) contained two modified Motorola 68000 chips (which could emulate most S/370 fixed-point and other non-floating-point instructions) and an Intel 8087 coprocessor modified to emulate the S/370 floating-point instructions. The second card (370PC-M), which connected to the first with a unique card-back connector, contained 512 KiB of memory. The third card (PC3277-EM) was a 3270 terminal emulator required to download system software from the host mainframe. The XT/370 computer booted into DOS, then ran the VM/PC Control Program. The card's memory space added to the system memory, so the first 256 KiB of motherboard memory could be used to move data to the 512 KiB expansion card. The expansion memory was dual-ported and provided an additional 384 KiB to the XT machine, bringing the total RAM on the XT side to 640 KiB. The memory arbitrator could bank-switch the second 128 KiB bank on the card to other banks, allowing the XT's Intel 8088 processor to address all the RAM on the 370PC-M card. Besides the 416 KiB of usable RAM for S/370 applications, the XT/370 also supported up to 4 MB of virtual memory, using the hard drive as its paging device.
IBM claimed the XT/370 reached 0.1 MIPS (when the data fit in RAM). In 1984, the list price of an XT/370 in its typical configuration was approximately $12,000, which compared favorably with IBM's own mainframes on a $/MIPS basis; for example, an IBM 4341 delivered 1.2 MIPS for $500,000. While it theoretically reduced demand on customers' mainframes by offloading work onto the smaller computer, as customers purchased more XT/370s they likely increased the overall load on their mainframes, increasing IBM's mainframe sales.
Similarly to the mainframe version of VM/CMS, the VM/PC also created the illusion of virtual disks, but on the PC version these were maintained as PC DOS files, either on floppy or hard disk. For example, the CMS virtual disk belonging to user FRED at device address 101 was stored as the DOS file FRED.101. The CMS IMPORT and EXPORT commands allowed extraction of files from these virtual drives as well as ASCII/EBCDIC conversion.
The XT/370 came with an XT-style 83-key keyboard (10 function keys). Newer revisions of the XT/370 dropped the PC3277-EM in favor of the IBM 3278/79 boards. The XT/370 was among the XT systems that could use a second hard drive mounted in the 5161 expansion chassis.
BYTE in 1984 called the XT/370 "a qualified success". The magazine praised IBM for "fitting all of the 370's features into the XT", and hoped for technical improvements that "might result in an even better computer".
Personal Computer AT/370
In 1984, IBM introduced the IBM Personal Computer AT/370 with similar cards as for the XT/370 and updated software, supporting both larger hard disks and DMA transfers from the 3277 card to the AT/370 Processor card. The system was almost 60% faster than the XT/370. The AT/370 used different, 16-bit interface co-processing cards than the XT, called PC/370-P2 and PC/370-M2. The latter card still had only 512 KB for memory, out of which 480 KB were usable for programs in S/370 mode, while 32 KB were reserved for microcode storage. For the terminal emulation function, the AT/370 came with the same 3278/79 Emulation Adapter as the late-series XT/370. The AT/370 motherboard itself was equipped with 512 KB of RAM.
The AT/370 also ran VM/PC, but with PC DOS 3.0 instead of 2.10 that the XT version used. VM/PC version 2, launched in November 1985, improved performance by up to 50%; it allowed add-on memory (in addition to the disk) to be used as a page cache for VM.
A November 1985 Computerworld article noted that the machine was "slow selling".
IBM 7437 VM/SP Technical Workstation
In April 1988, IBM introduced a System/370 workstation that had been shipping to some customers since August 1987. Officially called the IBM 7437 VM/SP Technical Workstation (and later also known as the Personal System/370), it was a freestanding tower that connected to an MCA card installed in a PS/2 Model 60, 70, or 80. The 7437 tower contained the processor and 16 MB of main memory, and the PS/2 provided I/O and disk storage. The 7437 ran the IBM VM/SP operating system, and one IBM representative described the 7437 as "like a 9370 with a single terminal". It was intended for existing S/370 users, and its November 1988 list price was $18,100 for a minimum 25-unit order. One of its intended roles was to provide a single-user S/370-compatible computer that could run computer-aided design and engineering applications that originated on IBM mainframes, such as CADAM and CATIA. Graphics support was provided by an IBM 5080 graphics system, a floor-standing tower. The 5080 was connected to the 7437 through the PS/2 via a cable and MCA adapter.
Personal/370
Later, IBM introduced the Personal/370 (also known as the P/370), a single-slot 32-bit MCA card that can be added to a PS/2 or RS/6000 computer to run System/370 operating systems (such as MUSIC/SP, VM, and VSE) in parallel with OS/2 (on a PS/2) or AIX (on an RS/6000), supporting multiple concurrent users. It is a complete implementation of the S/370 processor, including an FPU co-processor and 16 MB of memory. Management and standard I/O channels are provided via the host OS and hardware. An additional 370 channel card can be added to provide mainframe-specific I/O such as 3270 local control units, 3400/3480 tape drives, or 7171 protocol converters.
Although a single-card product, the P/370 ran three times faster than the 7437, attaining 3.5 MIPS, on par with a low-end IBM 4381. A subsequent book (by the same author) claims 4.1 MIPS for the P/370.
The Personal/370 was available as early as November 1989 although on a "special bid basis".
System/390
In 1995 IBM introduced a card, the "Enhanced S/390 MicroProcessor Complex", which supported IBM ESA/390 architecture on a PC-based system. IBM's PC-related products evolved to support it as well, employing the card (IBM part number 8640-PB0) in the IBM PC Server 330 in 1998 and in the IBM PC Server 500 models.
S/390 Processor Card
An important goal in the design of the S/390 Processor Card was complete compatibility with existing mainframe operating systems and software. The processor implements all of the ESA/390 and XA instructions which prevents the need for instruction translation. There are three generations of the card:
The original S/390 Processor Card incorporated 32MB of dedicated memory, with optional 32MB or 96MB daughter cards, for a combined total of 64MB or 128MB of RAM. The processor was officially rated at 4.5 MIPS. It was built to plug into a MicroChannel host system.
The second version was built for a PCI host system. It included 128 MB of dedicated memory as standard, and was still rated at 4.5 MIPS.
The third version, referred to as a P/390E card (for Enhanced), included 256 MB of dedicated memory and was rated at 7 MIPS. It, too, was built for a PCI host system. There was an extremely rare (possibly only ever released as pre-production samples) 1 GB memory version of the P/390E card.
R/390
R/390 was the designation used for the expansion card used in an IBM RS/6000 server. The original R/390 featured a 67 or 77 MHz POWER2 processor and 32 to 512 MB of RAM, depending on the configuration. The MCA P/390 expansion card can be installed in any MCA RS/6000 system, while the PCI P/390 card can be installed in a number of early PCI RS/6000s; all such configurations are referred to as an R/390. R/390 servers need to run AIX version 4 as the host operating system.
P/390
P/390 was the designation used for the expansion card used in an IBM PC Server and was less expensive than the R/390. The original P/390 server was housed in an IBM PC Server 500 and featured a 90 MHz Intel Pentium processor for running OS/2. The model was revised in mid-1996 and rebranded as the PC Server 520, which featured a 133 MHz Intel Pentium processor. Both models came standard with 32 MB of RAM and were expandable to 256 MB. The PC Server 500 featured eight MCA expansion slots while the PC Server 520 added two PCI expansion slots and removed two MCA slots.
S/390 Integrated Server
The S/390 Integrated Server (also known as the S/390 IS) is a mainframe housed in a comparatively small case (H × W × D: 82 × 52 × 111 cm). It became available in November 1998. It is intended for customers who do not require the I/O bandwidth and performance of the S/390 Multiprise 3000 (which has the same size). Only 256 MB of ECC memory and a single CMOS main processor (performance about 8 MIPS) are used; the S/390 CPU used in the Integrated Server is in fact the P/390E card. A Pentium II is used as the IOSP (I/O Service Processor). It supports up to four ESCON and up to four parallel channels. Standard PCI and ISA slots are present. A maximum of 255 GB of internal hard disk storage is supported (16 × 18 GB drives, with two drives used for redundancy). The supported operating systems are OS/390, MVS/ESA, VM/ESA and VSE/ESA.
Fujitsu PC-based systems
Fujitsu offers two Intel 64-based systems that make up the lower end of Fujitsu's S/390-based BS2000 mainframe product line. The SQ100 is the slower configuration, using dual-core 2.93 GHz Intel Xeon E7220 processors, and is capable of up to 200 RPF of performance. The SQ200 was introduced more recently, uses six-core 2.66 GHz Xeon X7542 processors, and has performance of up to 700 RPF. All Intel 64-based BS2000 mainframes can run Linux or Windows in separate partitions. Fujitsu also continues to make custom S/390-native processors and mainframe hardware for the high end of its BS2000 line.
z/Architecture and today
Since the late 1990s, PC processors have become fast enough to perform mainframe emulation without the need for a coprocessor card. There are currently several personal computer emulators available that support System/390 and z/Architecture.
FLEX-ES by Fundamental Software emulates both System/390 (ESA/390) and z/Architecture, and was claimed to be one of the most popular PC-based IBM-compatible mainframe products (as of 2006). While FLEX-ES is capable of running on most PC hardware, the licensing agreement requires that FLEX-ES run on the machine with which it was sold; in the past this included Compaq ProLiant and HP servers, but today it is nearly always an approved IBM xSeries server or a ThinkPad laptop.
Hercules, an open source emulator for the System/370, System/390, and z/Architecture instruction sets. It does however require a complete operating system in order to execute application programs. While IBM does not license its current operating systems to run on Hercules, earlier System/370 operating systems are in the public domain and can be legally run on Hercules.
zPDT (System z Personal Development Tool), an IBM offering allowing IBM PartnerWorld Independent Software Vendors (ISVs) to legally run z/OS 1.6 (or higher), DB2 V8 (or higher), z/TPF, or z/VSE 4.1 (or higher) on PC-based machines, using emulation that runs under Linux.
IBM ZD&T (Z Development and Test Environment), an IBM offering that provides an x86-based environment which emulates Z hardware and runs genuine z/OS software, offering application portability and compatibility. IBM Z Development and Test Environment can be used for education, demonstration, and the development and testing of applications that include mainframe components.
Z390 and zCOBOL form a portable macro assembler and COBOL compiler, linker, and emulator toolkit that provides a way to develop, test, and deploy mainframe-compatible assembler and COBOL programs using any computer that supports a J2SE 1.6.0+ runtime.
See also
List of IBM products
References
External links
P/390 and R/390 with OS/390: An Introduction (IBM Redbook)
P/390, R/390, S/390 Integrated Server: OS/390 New User's Cookbook (IBM Redbook)
S/390 Integrated Server - Hardware Announcement; September 8, 1998
VM/ESA Performance on P/390 and R/390 PC Server 520 and RS/6000 591
Detail pictures of a PC Server 500, on the private website of Alfred Arnold
Detail pictures of a S/390 IS (incl. screenshot of console), on the private website of Michael J. Ross
P/390 Information at '9595 Ardent Tool of Capitalism'
IBM PC Server System/390 FAQ at '9595 Ardent Tool of Capitalism'
zPDT: Introduction and Reference. (IBM Redbook)
zPDT: User's guide (IBM Manual)
zPDT worldwide distributor
zPDT for Rational Developer for System z Unit Test
Micro/370 - the chips used in the XT/370
A performance evaluation of the IBM 370/XT personal computer, NASA
IBM PC compatibles
Mainframe-compatible systems
IBM System/360 mainframe line | Operating System (OS) | 1,198 |
Applesoft BASIC
Applesoft BASIC is a dialect of Microsoft BASIC, developed by Marc McDonald and Ric Weiland, supplied with the Apple II series of computers. It supersedes Integer BASIC and is the BASIC in ROM in all Apple II series computers after the original Apple II model. It is also referred to as FP BASIC (from floating point) because of the Apple DOS command used to invoke it, instead of INT for Integer BASIC.
Applesoft BASIC was supplied by Microsoft and its name is derived from the names of both Apple and Microsoft. Apple employees, including Randy Wigginton, adapted Microsoft's interpreter for the Apple II and added several features. The first version of Applesoft was released in 1977 on cassette tape and lacked proper support for high-resolution graphics. Applesoft II, which was made available on cassette and disk and in the ROM of the Apple II Plus and subsequent models, was released in 1978. It is this latter version, which has some syntax differences and support for the Apple II high-resolution graphics modes, that is usually synonymous with the term "Applesoft."
A compiler for Applesoft BASIC, TASC (The Applesoft Compiler), was released by Microsoft in 1981.
History
When Steve Wozniak wrote Integer BASIC for the Apple II, he did not implement support for floating point math because he was primarily interested in writing games, a task for which integers alone were sufficient. In 1976, Microsoft had developed Microsoft BASIC for the MOS Technology 6502, but at the time there was no production computer that used it. Upon learning that Apple had a 6502 machine, Microsoft asked if the company were interested in licensing BASIC, but Steve Jobs replied that Apple already had one.
The Apple II was unveiled to the public at the West Coast Computer Faire in April 1977 and became available for sale in June. One of the most common customer complaints about the computer was BASIC's lack of floating-point math. Making things more problematic was that the rival Commodore PET personal computer had a floating point-capable BASIC interpreter from the beginning. As Wozniak—the only person who understood Integer BASIC well enough to add floating point features—was busy with the Disk II drive and controller and with Apple DOS, Apple turned to Microsoft.
Apple reportedly obtained an eight-year license for Applesoft BASIC from Microsoft for a flat fee of $31,000, renewing it in 1985 through an arrangement that gave Microsoft the rights and source code for Apple's Macintosh version of BASIC. Applesoft was designed to be backwards-compatible with Integer BASIC and uses the core of Microsoft's 6502 BASIC implementation, which includes using the GET command for detecting key presses and not requiring any spaces on program lines. While Applesoft BASIC is slower than Integer BASIC, it has many features that the older BASIC lacks:
Atomic strings: A string is no longer an array of characters (as in Integer BASIC and C); it is instead a garbage-collected object (as in Scheme and Java). This allows for string arrays; dimensioning a string array with subscript 10, for example, creates an array of eleven string variables numbered 0–10.
Multidimensional arrays (numbers or strings)
Single-precision floating point variables with an 8-bit exponent and a 31-bit significand and improved math capabilities, including trigonometry and logarithmic functions
Commands for high-resolution graphics
DATA statements, with READ and RESTORE commands, for representing numerical and string values in quantity
CHR$, STR$, and VAL functions for converting between string and numeric types (both languages did have the ASC function)
User-defined functions: simple one-line functions written in BASIC, with a single parameter
Error-trapping: allowing BASIC programs to handle unexpected errors via subroutine written in BASIC
Conversely, Applesoft lacks the MOD (remainder) operator from Integer BASIC.
Adapting BASIC for the Apple II was a tedious job as Apple received a source listing for Microsoft 6502 BASIC which proved to be buggy and also required the addition of Integer BASIC commands. Since Apple had no 6502 assembler on hand, the development team was forced to send the source code over the phone lines to Call Computer, an outfit that offered compiler services. This was an extremely tedious, slow process and after Call Computer lost the source code due to an equipment malfunction, one of the programmers, Cliff Huston, used his own IMSAI 8080 computer to cross assemble the BASIC source.
Features
Applesoft is similar to Commodore's BASIC 2.0 aside from features inherited from Integer BASIC. There are a few minor differences such as Applesoft's lack of bitwise operators; otherwise most BASIC programs that do not use hardware-dependent features will run on both BASICs.
The PR# statement redirects output to an expansion card, and IN# redirects input from an expansion card. The slot number of the card is specified after the PR# or IN# within the statement. The computer locks up if there is no card present in the slot. PR#0 restores output to the 40-column screen, and IN#0 restores input from the keyboard.
The PR# statement can be used to redirect output to the printer (PR#x, where x is the slot number containing the printer port card). To send a BASIC program listing to the printer, the user types PR#x followed by LIST.
PR#6 causes Applesoft to boot the disk drives (although the Disk II controller can be in any slot, it is usually in slot 6). PR#3 switches to 80 column text mode if an 80 column card is present.
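A minimal sketch of output redirection, assuming a printer interface card in slot 1 and ignoring DOS conventions for issuing the command from within a program:

10 PR# 1 : REM  SEND OUTPUT TO THE CARD IN SLOT 1 (SLOT NUMBER IS AN ASSUMPTION)
20 PRINT "THIS LINE GOES TO THE PRINTER"
30 PR# 0 : REM  RESTORE OUTPUT TO THE 40-COLUMN SCREEN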
As with Commodore BASIC, numeric variables are stored as 40-bit floating point; each variable requires five bytes of memory. The programmer may designate variables as integer by following them with a percent sign, in which case they use two bytes and are limited to a range of -32768 to 32767; however BASIC internally converts them back to floating point, while each percent sign also takes an additional byte of program code, so in practice this feature is only useful for reducing the memory usage of large array variables.
The RND function generates a pseudorandom fractional number between 0 and 1. RND(0) returns the most recently generated random number. RND with a negative argument jumps to a point in the sequence determined by the particular negative number used. RND with any positive value generates the next number in the sequence, independent of the actual value given. Locations $4E and $4F, whose values the system cycles continuously while waiting for user keystrokes, can be PEEKed to provide truly random values to use (negated) as a seed for RND. For example, after keyboard input, RND can be seeded with the negative of the value read from these two locations, as in the sketch below.
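A small sketch combining the two points above: the percent sign marks an integer variable, and RND is seeded from the key-wait counter at $4E-$4F (decimal locations 78 and 79):

10 INPUT "PRESS RETURN TO ROLL";A$
20 X = RND ( - (PEEK (78) + 256 * PEEK (79))) : REM  SEED RND FROM $4E-$4F
30 D% = INT (RND (1) * 6) + 1 : REM  INTEGER VARIABLE HOLDING A ROLL FROM 1 TO 6
40 PRINT "YOU ROLLED ";D%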
Like other implementations of Microsoft BASIC, Applesoft discards spaces (outside of strings and comments) on program lines. LIST adds spaces when displaying code for the sake of readability. Since LIST adds a space before and after every tokenized keyword, it often produces two spaces in a row where one would suffice for readability.
The default prompt for INPUT is a question mark. PRINT does not add a leading space in front of numbers.
Coleco claimed that its Adam home computer's SmartBASIC was source-code compatible with Applesoft. Microsoft licensed a BASIC compatible with Applesoft to VTech for its Laser 128 clone.
Limitations
Through several early models of the Apple II, Applesoft BASIC did not support the use of lowercase letters in programs, except in strings. PRINT is a valid command but print and Print result in a syntax error.
Applesoft lacks several commands and functions common to most of the non-6502 Microsoft BASIC interpreters, such as:
INSTR (search for a substring in a string)
PRINT USING (format numbers in printed output)
INKEY$ (check for a keypress without stopping the program; a PEEK of location $C000 achieves the same effect, as shown in the sketch after this list)
LPRINT (output to a printer instead of the screen)
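A minimal sketch of the keyboard-polling technique mentioned above; $C000 is decimal -16384 and the keyboard strobe at $C010 is -16368:

10 K = PEEK ( - 16384) : REM  READ THE KEYBOARD LATCH AT $C000
20 IF K < 128 THEN 10 : REM  HIGH BIT CLEAR MEANS NO KEY IS WAITING
30 POKE  - 16368,0 : REM  CLEAR THE KEYBOARD STROBE AT $C010
40 PRINT "KEY CODE ";K - 128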
Applesoft does not have commands for file or disk handling, other than to save and load programs via cassette tape. The Apple II disk operating system, known simply as DOS, augments the language to provide such abilities.
Only the first two letters of variable names are significant. For example, "LOW" and "LOSS" are treated as the same variable, and attempting to assign a value to "LOSS" overwrites any value assigned to "LOW". A programmer also has to avoid consecutive letters that are Applesoft commands or operations. The name "SCORE" for a variable is interpreted as containing the OR Boolean operator, rendered as SC OR E. "BACKGROUND" contains GR, the command to invoke the low-resolution graphics mode, and results in a syntax error.
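A short sketch of both pitfalls; the variable names are arbitrary:

10 LOW = 1 : LOSS = 2
20 PRINT LOW : REM  PRINTS 2, SINCE ONLY THE FIRST TWO LETTERS "LO" ARE SIGNIFICANT
30 REM  A LINE SUCH AS SCORE = 5 FAILS, BECAUSE APPLESOFT TOKENIZES IT AS SC OR E = 5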
Sound and graphics
The only sound support is the option to PRINT an ASCII bell character to sound the system alert beep, and a PEEK command to click the speaker. The language is not fast enough to produce more than a baritone buzz from repeated clicks. Programs can, however, store a machine-language routine to be called to generate electronic musical tones spanning several octaves.
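A minimal sketch of both techniques; -16336 is the speaker soft switch at $C030:

10 PRINT CHR$ (7) : REM  SOUND THE SYSTEM ALERT BEEP
20 FOR I = 1 TO 200 : X = PEEK ( - 16336) : NEXT : REM  REPEATED CLICKS GIVE A LOW BUZZ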
Applesoft supports the low resolution (lores) graphics display, where 40 color pixels horizontally, and up to 48 vertically, can be displayed in 16 colors, and the 280 by 192 high resolution (hires) mode. There are commands to plot pixels and draw horizontal and vertical lines in lores. Hires allows drawing arbitrary lines. Vector-based shape tables can be used to draw objects in high-resolution graphic modes. They consist of horizontal and vertical lines, and entire shapes can be scaled to larger sizes and rotated to any angle. No provision exists for mixing text and graphics, except for the Apple's four lines of text at the bottom of a graphic display.
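A brief sketch of the two modes; the color numbers (9 for orange in lores, 3 for white in hires) follow the standard Applesoft palette:

10 GR : COLOR= 9 : REM  LOW-RESOLUTION MODE, ORANGE
20 PLOT 20,20 : HLIN 0,39 AT 39 : VLIN 0,39 AT 5
30 GET A$ : REM  WAIT FOR A KEYPRESS BEFORE SWITCHING MODES
40 HGR : HCOLOR= 3 : REM  HIGH-RESOLUTION MODE, WHITE
50 HPLOT 0,0 TO 279,159 : REM  A DIAGONAL LINE ACROSS THE HIRES SCREEN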
Beginning with the Apple IIe, a "double-high resolution" mode became available on machines with 128k of memory. This mode essentially duplicates the resolution of the original hires mode, but including all 16 colors of the lores palette. Applesoft does not provide direct support for this mode. Apple IIgs-specific modes are likewise not supported.
Extensions
Applesoft BASIC can be extended by two means: the ampersand (&) command and the USR() function. These are two features that call low-level machine-language routines stored in memory, which is useful for routines that need to be fast or require direct access to arbitrary functions or data in memory. The USR() function takes one numerical argument, and can be programmed to derive and return a calculated function value, to be used in a numerical expression. "&" is effectively a shorthand for CALL, with an address that is predefined.
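A hedged sketch of hooking the ampersand vector at $3F5 (decimal 1013); it assumes a machine-language routine has already been placed at $0300, which is not shown here:

10 REM  POINT THE & VECTOR AT A ROUTINE AT $0300 (ROUTINE ASSUMED TO BE IN MEMORY ALREADY)
20 POKE 1013,76 : POKE 1014,0 : POKE 1015,3 : REM  76 = JMP OPCODE, THEN LOW AND HIGH ADDRESS BYTES
30 & : REM  TRANSFERS CONTROL TO THE ROUTINE AT $0300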
Bugs
A deficiency with error-trapping via ONERR means that the system stack is not reset if an error-handling routine does not invoke RESUME, potentially leading to a crash. The built-in pseudorandom number generator function RND is capable of producing a predictable series of outputs due to the manner in which the generator is seeded when first powering on. This behavior is contrary to how Apple's documentation describes the function.
Performance
Wozniak originally referred to his Integer BASIC as "Game BASIC" (having written it so he could implement a Breakout clone for his new computer). Few action games were written in Applesoft BASIC, in large part because the use of floating-point numbers for all math operations degrades performance.
Applesoft BASIC programs are stored as a linked list of lines; a GOTO or GOSUB therefore takes linear time, since the interpreter searches the program for the target line. Some programs place their subroutines at the top to reduce the time spent finding them.
Unlike Integer BASIC, Applesoft does not convert literal numbers (like 100) in the source code to binary when a line is entered. Rather, the ASCII string is converted whenever the line is executed. Since variable lookup is often faster than this conversion, it can be faster to store numeric constants used inside loops in variables before the loop is entered.
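A small sketch of the constant-in-variable idiom; exact timings vary by machine:

10 C = 100 : REM  CONVERT THE CONSTANT TO FLOATING POINT ONCE, BEFORE THE LOOP
20 FOR I = 1 TO 1000 : X = X + C : NEXT : REM  FASTER THAN WRITING X = X + 100 INSIDE THE LOOP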
Sample code
Hello World in Applesoft BASIC can be entered as the following:
10TEXT:HOME
20?"HELLO WORLD"
Multiple commands can be included on the same line of code if separated by a colon (:). The ? can be used in Applesoft BASIC (and almost all versions of Microsoft BASIC) as a shortcut for "PRINT", though spelling out the word is not only acceptable but canonical: Applesoft converts "?" in entered programs to the same token as "PRINT" (so no memory is actually saved by using "?"), and either form appears as "PRINT" when a program is listed. The program above appears in a LIST command as:
10 TEXT : HOME
20 PRINT "HELLO WORLD"
When Applesoft II BASIC was initially released in mid-1978, it came on cassette tape and could be loaded into memory via the Apple II's machine language monitor. When the enhanced Apple II+ replaced the original II in 1979, Applesoft was now included in ROM and automatically started on power-up if no bootable floppy disk was present. Conversely, Integer BASIC was now removed from ROM and turned into an executable file on the DOS 3.3 disk.
Early evolution
The original Applesoft, stored in RAM as documented in its Reference Manual of November 1977, has smaller interpreter code than the later Applesoft II, occupying about 8.5 KB of memory instead of Applesoft II's 10 KB. Consequently, it lacks a number of command features developed for the later, mainstream version:
All commands supporting Apple's "high resolution" graphics (9 total)
Error-trapping with ONERR...GOTO and RESUME
Machine-routine shorthand call "&"
Screen-clearing HOME (a call to a system ROM routine)
Text-output control NORMAL, INVERSE, FLASH and SPEED=
The print-space function SPC() is listed among reserved words in the manual, but is not otherwise documented (the TAB() print-function is documented)
Cassette tape storage of numerical arrays: STORE and RECALL
Device response: WAIT
as well as several the later version would have, that had already been present in Apple's Integer BASIC:
Program-line deletion: DEL
Machine-routine access: CALL
Peripheral device access: IN# and PR# (although IN without "#" is listed among reserved words)
Memory range control: HIMEM: and LOMEM:
Execution tracking for debugging: TRACE and NOTRACE
Screen-positioning: HTAB and VTAB
Subroutine aborting POP
Functions PDL() to read the analog controllers, and SCRN() to read the low-resolution graphics screen (both accessing system ROM routines)
In addition, its low-resolution graphics commands have different names from their Integer BASIC/Applesoft II counterparts. All command names are of the form PLTx such that GR, COLOR=, PLOT, HLIN and VLIN are called PLTG, PLTC, PLTP, PLTH, and PLTV, respectively. The command for returning to text mode, known as TEXT in other versions, is simply TEX, and carries the proviso that it has to be the last statement in a program line.
Applesoft BASIC 1.x was closer to Microsoft's original 6502 BASIC code than the later Applesoft II; it retained the Memory Size? prompt and displayed a Microsoft copyright notice. To maintain consistency with Integer BASIC, the "Ok" prompt from Microsoft's code was replaced by a ] character. Applesoft 1.x also asked the user upon loading whether they wished to disable the REM statement and the LET keyword in assignment statements in exchange for lores graphics commands.
The USR() function is also defined differently, serving as a stand-in for the absent CALL command. Its argument is not for passing a numerical value to the machine-language routine, but is instead the call-address of the routine itself; there is no "hook" to pre-define the address. All of several examples in the manual use the function only to access "system monitor ROM" routines, or short user-routines to manipulate the ROM routines. No mention is made of any code to calculate the value returned by the function itself; the function is always shown being assigned to "dummy" variables, which, without action to set a value by user-code, just receive a meaningless value handed back to them. Even accessed ROM routines that return values (in examples, those that provide the service of PDL() and SCRN() functions) merely have their values stored, by user-routines, in locations that are separately PEEKed in a subsequent statement.
Unlike in Integer BASIC and Applesoft II, the Boolean operators AND, OR and NOT perform bitwise operations on 16-bit integer values. If they are given values outside that range, an error results.
The terms OUT and PLT (and the aforementioned IN) appear in the list of reserved words, but are not explained anywhere in the manual.
See also
ALF's Formula Transfer Link, speed enhancement for Applesoft BASIC
Chinese BASIC, a Chinese-localized version of Applesoft BASIC
Apple III BASICs from Apple and Microsoft
References
External links
Disassembled ROM
AppleSoft BASIC in JavaScript
Apple II software
BASIC interpreters
Discontinued Microsoft BASICs
BASIC programming language family
Microsoft programming languages | Operating System (OS) | 1,199 |