Point-of-sale malware
Point-of-sale malware (POS malware) is malicious software used by cybercriminals to target point of sale (POS) and payment terminals with the intent of obtaining credit card and debit card information, including a card's track 1 or track 2 data and even the CVV code, by intercepting transaction processing at the retail checkout system (a form of man-in-the-middle attack). The simplest, and most evasive, approach is RAM scraping: accessing the system's memory and exporting the copied information via a remote access trojan (RAT), which minimizes software or hardware tampering and potentially leaves no footprint. POS attacks may also include the use of various pieces of hardware: dongles, trojan card readers, and (wireless) data transmitters and receivers. Sitting at the gateway of transactions, POS malware enables attackers to steal payment data from thousands, even millions, of transactions, depending upon the target, the number of devices affected, and how long the attack goes undetected. The interception happens before or outside of the point where the card information is (usually) encrypted and sent to the payment processor for authorization.
List of POS RAM scraper malware variants
Rdasrv
Rdasrv was discovered in 2011 and installs itself on the Windows computer as a service called rdasrv.exe. It scans memory for track 1 and track 2 credit card data using Perl-compatible regular expressions; the matched data include the cardholder's name, account number, expiry date, CVV code and other discretionary information. Once scraped, the information is stored in data.txt or currentblock.txt and sent to the attacker.
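To make the technique concrete, the sketch below shows the kind of format check such regular expressions perform. It only tests whether a string follows the public ISO/IEC 7813 track 2 layout (start sentinel, primary account number, separator, expiry and service data, end sentinel), the same pattern that defensive data-loss-prevention scanners use; the function name and the synthetic test string are illustrative, and no memory scraping is shown.

#include <regex.h>
#include <stdio.h>

/* Illustrative only: recognize a string laid out like ISO/IEC 7813 track 2
 * data (";PAN=YYMM...?").  Defensive DLP scanners use patterns like this. */
static int looks_like_track2(const char *s)
{
    regex_t re;
    int hit;

    /* ';' + 13-19 digit PAN + '=' + expiry/service/discretionary digits + '?' */
    if (regcomp(&re, ";[0-9]{13,19}=[0-9]{4}[0-9]*\\?", REG_EXTENDED | REG_NOSUB) != 0)
        return 0;
    hit = (regexec(&re, s, 0, NULL, 0) == 0);
    regfree(&re);
    return hit;
}

int main(void)
{
    /* Synthetic, non-valid number used purely to exercise the pattern. */
    printf("%d\n", looks_like_track2(";1234567890123456=2512101000000000?"));
    return 0;
}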
Alina
Alina was discovered in October 2012 and installs itself on the PC automatically. It is embedded in an AutoIt script, which loads the malware into memory; it then scrapes credit card (CC) data from the POS software.
VSkimmer
VSkimmer scrapes information from the Windows system by detecting card readers attached to the machine, and then sends the captured data to the cybercriminal or to a control server.
Dexter
Dexter was discovered in December 2012. It steals system information along with track 1 and track 2 card details with the help of a keylogger installed on the computer.
BlackPOS
BlackPOS is spyware created to steal credit and debit card information from POS systems. It gets onto the PC using stealth-based methods and sends the stolen information to an external server.
Backoff
This memory-scraping malware targets the Track 2 data read from the card's magnetic stripe by the magnetic stripe reader and sends it to the attacker, who can use it to clone counterfeit cards.
FastPOS
FastPOS is POS malware discovered by Trend Micro researchers. It captures credit and debit card information from the point of sale system and sends the data to the cybercriminal immediately. The malware exfiltrates the track data using two components: a keylogger and a memory scraper.
PunkeyPOS Malware
PandaLabs discovered this malware, which infects point of sale systems to steal credit and debit card details. PunkeyPOS uses two components, a keylogger and a RAM scraper, to steal information at the point of sale terminal. Once the information is stolen, it is encrypted and sent to the cybercriminal's command and control (C&C) server.
Multigrain Malware
This variant of point of sale malware, which belongs to the NewPosThings family, was discovered by FireEye. It validates stolen retail customers' card numbers using the Luhn algorithm. To exfiltrate the stolen information, it first blocks the HTTP and FTP traffic that is typically monitored for data exfiltration.
CenterPOS Malware
CenterPOS is point of sale (POS) malware identified in September 2015 by FireEye researchers, alongside other malware families such as BlackPOS, NewPOSThings and Alina. It scrapes credit and debit card data and sends it in an HTTP POST request, encrypted with Triple DES.
MalumPOS Malware
MalumPOS is point of sale malware that records data from POS systems running the Oracle MICROS payment platform, which is deployed at roughly 333,000 sites worldwide. It is written in the Delphi programming language and steals credit and debit card details. The stolen data is then sent to the cybercriminal or sold on the black market.
See also
Point of sale
Cyber security standards
List of cyber attack threat trends
Cyber electronic warfare
Malware
References
Malware
Carding (fraud)
Retail point of sale systems
macOS Mojave
macOS Mojave (version 10.14) is the fifteenth major release of macOS, Apple Inc.'s desktop operating system for Macintosh computers. Mojave was announced at Apple's Worldwide Developers Conference on June 4, 2018, and was released to the public on September 24, 2018. The operating system's name refers to the Mojave Desert and is part of a series of California-themed names that began with OS X Mavericks. It succeeded macOS High Sierra and was followed by macOS Catalina.
macOS Mojave brings several iOS apps to the desktop operating system, including Apple News, Voice Memos, and Home. It also includes a much more comprehensive "dark mode", is the final version of macOS to support 32-bit application software, and is also the last version of macOS to support the iPhoto app, which had already been superseded in OS X Yosemite (10.10) by the newer Photos app.
Mojave was well received and was supplemented by point releases after launch.
Overview
macOS Mojave was announced on June 4, 2018, at Apple's annual Worldwide Developers Conference in San Jose, California. Apple pitched Mojave, named after the California desert, as adding "pro" features that would benefit all users. The developer preview of the operating system was released for developers the same day, followed by a public beta on June 26. The retail version of 10.14 was released on September 24. It was followed by several point updates and supplemental updates.
System requirements
Mojave requires a GPU that supports Metal, and the list of compatible systems is more restrictive than the previous version, macOS High Sierra. Compatible models are the following Macintosh computers running OS X Mountain Lion or later:
MacBook: Early 2015 or newer
MacBook Air: Mid 2012 or newer
MacBook Pro: Mid 2012 or newer, Retina display not needed
Mac Mini: Late 2012 or newer
iMac: Late 2012 or newer
iMac Pro: Late 2017
Mac Pro: Late 2013 or newer; Mid 2010 or Mid 2012 models require a Metal-capable GPU
macOS Mojave requires at least 2GB of RAM as well as 12.5GB of available disk space to upgrade from OS X El Capitan, macOS Sierra, or macOS High Sierra, or 18.5GB of disk space to upgrade from OS X Yosemite and earlier releases. Some features are not available on all compatible models.
Changes
System updates
macOS Mojave deprecates support for several legacy features of the OS. The graphics frameworks OpenGL and OpenCL are still supported by the operating system, but will no longer be maintained; developers are encouraged to use Apple's Metal library instead.
OpenGL is a cross-platform graphics framework designed to support a wide range of processors. Apple chose OpenGL in the late 1990s to build support for software graphics rendering into the Mac, after abandoning QuickDraw 3D. At the time, moving to OpenGL allowed Apple to take advantage of existing libraries that enabled hardware acceleration on a variety of different GPUs. As time went on, Apple shifted its efforts towards building its own hardware platforms for mobile and desktop use. Metal makes use of this homogenized hardware by abandoning the abstraction layer and running on the "bare metal". Metal reduces CPU load, shifting more tasks to the GPU. It reduces driver overhead and improves multithreading, allowing every CPU thread to send commands to the GPU.
macOS does not natively support Vulkan, the Khronos group's official successor to OpenGL. The MoltenVK library can be used as a bridge, translating most of the Vulkan 1.0 API into the Metal API.
Continuing the process started in macOS High Sierra (10.13), which issued warnings about compatibility with 32-bit applications, Mojave issues warnings when opening 32-bit apps that they will not be supported in future updates. In macOS Mojave 10.14, this alert appears once every 30 days when launching the app, as macOS 10.15 will not support 32-bit applications.
When Mojave is installed, it converts solid-state drives (SSDs), hard disk drives (HDDs), and Fusion Drives from HFS Plus to APFS. On Fusion Drives using APFS, files are moved to the SSD based on the file's frequency of use and its SSD performance profile. APFS also stores all metadata for a Fusion Drive's file system on the SSD.
New data protections require applications to get permission from the user before using the Mac camera and microphone or accessing system data like user Mail history and Messages database.
Removed features
Mojave removes integration with Facebook, Twitter, Vimeo, and Flickr, which was added in OS X Mountain Lion.
The only supported Nvidia graphics cards are the Quadro K5000 and GeForce GTX 680 Mac Edition.
Applications
Mojave features changes to existing applications as well as new ones. Finder now has a metadata preview accessed via View > Show Preview, and many other updates, including a Gallery View (replacing Cover Flow) that lets users browse through files visually. After a screenshot is taken, as with iOS, the image appears in the corner of the display. The screenshot software can now record video, choose where to save files, and be opened via Shift + Command + 5.
Safari's Tracking Prevention features now prevent social media "Like" or "Share" buttons and comment widgets from tracking users without permission. The browser also sends less information to web servers about the user's system, reducing the chance of being tracked based on system configuration. It can also automatically create, autofill, and store strong passwords when users create new online accounts; it also flags reused passwords so users can change them.
A new Screenshot app was added to macOS Mojave to replace the Grab app. Screenshot can capture a selected area, window or the entire screen, as well as screen record a selected area or the entire display. The Screenshot app is located in the /Applications/Utilities/ folder, as was the Grab app. Screenshot can also be accessed by pressing Shift + Command + 5.
FaceTime
macOS 10.14.1, released on October 30, 2018, adds Group FaceTime, which lets users chat with up to 32 people at the same time, using video or audio from an iPhone, iPad or Mac, or audio from Apple Watch. Participants can join in mid-conversation.
App Store
The Mac App Store was rewritten from the ground up and features a new interface and editorial content, similar to the iOS App Store. A new 'Discover' tab highlights new and updated apps; Create, Work, Play and Develop tabs help users find apps for a specific project or purpose.
iOS apps ported to macOS
Four new apps (News, Stocks, Voice Memos and Home) are ported to macOS Mojave from iOS, with Apple implementing a subset of UIKit on the desktop OS. Third-party developers would be able to port iOS applications to macOS in 2019.
With Home, Mac users can control their HomeKit-enabled accessories to do things like turn lights off and on or adjust thermostat settings. Voice Memos lets users record audio (e.g., personal notes, lectures, meetings, interviews, or song ideas), and access them from iPhone, iPad or Mac. Stocks delivers curated market news alongside a personalized watchlist, with quotes and charts.
Other applications found on macOS 10.14 Mojave
Adobe Flash Player (installer)
AirPort Utility
Archive Utility
Audio MIDI Setup
Automator
Bluetooth File Exchange
Books
Boot Camp Assistant
Calculator
Calendar
Chess
ColorSync Utility
Console
Contacts
Dictionary
Digital Color Meter
Disk Utility
DVD Player
Font Book
GarageBand (may not be pre-installed)
Grab (still might be pre-installed)
Grapher
iMovie (may not be pre-installed)
iTunes
Image Capture
Ink (can only be accessed by connecting a graphics tablet to the Mac)
Keychain Access
Keynote (may not be pre-installed)
Mail
Migration Assistant
Notes, version 4.6
Numbers (may not be pre-installed)
Pages (may not be pre-installed)
Photo Booth
Preview
QuickTime Player
Reminders
Screenshot (replaced Grab as of Mojave)
Script Editor
Siri
Stickies
System Information
Terminal
TextEdit
Time Machine
VoiceOver Utility
X11/XQuartz (may not be pre-installed)
User interface
Dark mode and accent colors
Mojave introduces "Dark Mode", a light-on-dark color scheme that darkens the user interface to make content stand out while the interface recedes. Users can choose dark or light mode when installing Mojave, or any time thereafter from System Preferences.
Apple's built-in apps support Dark Mode. App developers can implement Dark mode in their apps via a public API.
A limited dark mode that affected only the Dock, menu bar, and drop-down menus was previously introduced in OS X Yosemite.
Desktop
Stacks, a feature introduced in Mac OS X Leopard, now lets users group files on the desktop by attributes such as file kind, date last opened, date modified, date created, name and tags. This is accessed via View > Use Stacks.
macOS Mojave features a new Dynamic Desktop that automatically changes specially made desktop backgrounds (two of which are included) to match the time of day.
Dock
The Dock has a space for recently used apps that have not previously been added to the Dock.
Preferences
macOS update functionality has been moved back to System Preferences from the Mac App Store. In OS X Mountain Lion (10.8), system and app updates moved to the App Store from Software Update.
Reception
Mojave was generally well received by technology journalists and the press. The Verge's Jacob Kastrenakes considered Mojave a relatively minor update, but Kastrenakes and Jason Snell thought the release hinted at the future direction of macOS. In contrast, Ars Technica's Andrew Cunningham felt that "Mojave feels, if not totally transformative, at least more consequential than the last few macOS releases have felt." Cunningham highlighted productivity improvements and continued work on macOS's foundation.
TechCrunch's Brian Heater dubbed Mojave "arguably the most focused macOS release in recent memory", saying it played an important role in reassuring professional users that Apple was still committed to them.
Mojave's new features were generally praised. Critics welcomed the addition of Dark Mode.
Release history
References
External links
macOS Mojave – official site
macOS Mojave download page at Apple
X86-64 operating systems
2018 software
Computer-related introductions in 2018
System monitor
A system monitor is a hardware or software component used to monitor system resources and performance in a computer system.
Among the management issues regarding use of system monitoring tools are resource usage and privacy.
Overview
Software monitors are the more common form, sometimes running as part of a widget engine. These monitoring systems are often used to keep track of system resources, such as CPU usage and frequency, or the amount of free RAM. They are also used to display items such as free space on one or more hard drives, the temperature of the CPU and other important components, and networking information including the system IP address and current rates of upload and download. Other possible displays may include the date and time, system uptime, computer name, username, hard drive S.M.A.R.T. data, fan speeds, and the voltages being provided by the power supply.
Less common are hardware-based systems that monitor similar information. Customarily these occupy one or more drive bays on the front of the computer case, and either interface directly with the system hardware or connect to a software data-collection system via USB. With either approach to gathering data, the monitoring system displays information on a small LCD panel or on a series of small analog or LED numeric displays. Some hardware-based system monitors also allow direct control of fan speeds, allowing the user to quickly customize the cooling in the system.
A few very high-end models of hardware system monitor are designed to interface with only a specific model of motherboard. These systems directly utilize the sensors built into the system, providing more detailed and accurate information than less-expensive monitoring systems customarily provide.
Software monitoring
Software monitoring tools operate within the device they're monitoring.
Hardware monitoring
Unlike software monitoring tools, hardware measurement tools can either be located within the device being measured, or they can be attached to it and operated from an external location.
A hardware monitor is a common component of modern motherboards, which can either come as a separate chip, often interfaced through I2C or SMBus, or as part of a Super I/O solution, often interfaced through Low Pin Count (LPC). These devices make it possible to monitor temperature in the chassis, voltage supplied to the motherboard by the power supply unit and the speed of the computer fans that are connected directly to one of the fan headers on the motherboard. Many of these hardware monitors also have fan controlling capabilities. System monitoring software like SpeedFan on Windows, lm_sensors on Linux, envstat on NetBSD, and sysctl hw.sensors on OpenBSD and DragonFly can interface with these chips to relay this environmental sensor information to the user.
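As a concrete illustration of the software side of this interface, the hedged sketch below reads one temperature value through the Linux kernel's hwmon sysfs files, which expose the data that tools like lm_sensors report. The specific path (hwmon0/temp1_input) is an assumption; hwmon numbering varies per machine, and values are reported in millidegrees Celsius.

#include <stdio.h>

/* Minimal sketch: read one hwmon temperature sensor on Linux.
 * The path is an example and differs between machines. */
int main(void)
{
    FILE *f = fopen("/sys/class/hwmon/hwmon0/temp1_input", "r");
    long millideg;

    if (f == NULL) {
        perror("open hwmon sensor");
        return 1;
    }
    if (fscanf(f, "%ld", &millideg) == 1)
        printf("temp1: %.1f degC\n", millideg / 1000.0);   /* millidegrees to degrees */
    fclose(f);
    return 0;
}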
Privacy
When an individual user is measuring the performance of a single-user system, whether it is a standalone box or a virtual machine on a multi-user system, access does not impede the privacy of others. Privacy becomes an issue when someone other than the end-user, such as a system manager, has legitimate need to access data about other users.
Resource usage
When events occur faster than a monitor can record them, a workaround is needed, such as replacing event recording with simple counting.
Another consideration is not having a major impact on the CPU and storage available for useful work. While a hardware monitor will usually have less impact than a software monitor, there are data items, such as "some descriptive information, such as program names", that must involve software.
A further consideration is that a bug in this domain can have severe impact: an extreme case would be "cause the OS to crash."
List of software monitors
Single system:
Activity Monitor
AIDA64
CPU-Z
GPU-Z
Conky
htop
hw.sensors on OpenBSD and DragonFly BSD
iftop
iostat
KDE System Guard (KSysguard)
lm_sensors
monit
Monitorix
Motherboard Monitor
Netdata
nmon
ntop
Process Explorer
Resource Monitor (resmon)
Samurize
Sar in UNIX
SpeedFan
sysmon/envsys on NetBSD
systat
System Monitor (sysmon)
top
Vigilo NMS (Community Edition)
vmstat
Windows Desktop Gadgets
Windows Task Manager
Distributed:
Argus (monitoring software)
Collectd
Ganglia
GKrellM
HP SiteScope
monit (paid version M/monit)
NMIS
Munin
Nagios
NetCrunch
Opmantek
Pandora FMS
Performance Monitor (perfmon)
Prometheus (software)
symon
Vigilo NMS (Enterprise Editions)
Zenoss Core
See also
Application performance management (APM)
Application service management (ASM)
I2C and SMBus
Network monitoring
Mean time between failures (MTBF)
Intelligent Platform Management Interface (IPMI)
System profiler
Website monitoring
References
External links
Monitoring a Linux System with X11/Console/Web-Based Tools
Computer peripherals
Liquid crystal displays
System administration
Utility software types
Departmental boot image
A departmental boot image is a boot image for any computer that has been enhanced by adding some applications and passwords specific to a task or group or department in an organization. This has many of the advantages of a thin client strategy, but can be done on any operating system base as long as the boot device is large enough to accommodate the boot and applications together.
A typical departmental Windows XP boot image is usually so large that it requires a DVD to store, and may be too large for network booting. Accordingly, it is usually installed on a fixed or removable hard drive kept inside the machine, rather than installed over a network or from a ROM.
There are some boot image control complexity and total cost of operations advantages to using a departmental boot image instead of a common boot image for the entire organization, or a thin client:
all the capabilities of a full operating system are available, not just those of a thin client
applications with inflexible software licenses need not be paid for the entire organization, but can be paid only for the departments that actually use them and have them installed on their machines
applications that interact badly can be segmented so that an accounting program and an engineering program do not "clobber each other's libraries" or otherwise interfere as they would if both sets of applications were installed in one boot image
overall size of each boot image can be controlled to fit within network or removable disk limits
Disadvantages include the complexity of creating and managing several large boot images, and determining when a department needs to upgrade its applications. If each user is allowed to do this on their own, then discipline soon degrades and the shop will be no easier to manage than one consisting of one-off computers with their own quirks, frequently requiring re-imaging, whose issues are not really diagnosable or comparable to each other. Some experts believe that any departmental boot image regime degrades rather quickly to this state without extraordinary discipline and controls, and advocate thin clients to ensure such control.
It is however increasingly possible to restrict users from installing "their own" applications on a standard boot image, and to automatically re-install when a variant boot image is detected. While this would be draconian in a large organization with one boot image it is often quite acceptable when the boot image is maintained at a departmental level and users can request that it be upgraded with a minimum of bureaucracy and waiting.
Booting
Disk images
DOS/V
DOS/V is a Japanese computing initiative started in 1990 to allow DOS on IBM PC compatibles with VGA cards to handle double-byte (DBCS) Japanese text via software alone. It was initially developed from PC DOS by IBM for its PS/55 machines (a localized version of the PS/2), but IBM gave the source code of the drivers to Microsoft, so Microsoft licensed a DOS/V-compatible version of MS-DOS to other companies. Kanji fonts and other locale information are stored on the hard disk rather than on special chips as in the preceding AX architecture. As with AX, its great value for the Japanese computing industry was in allowing compatibility with foreign software. This had not been possible under NEC's proprietary PC-98 system, which was the market leader before DOS/V emerged. DOS/V stands for "Disk Operating System/VGA" (not "version 5"; DOS/V came out at approximately the same time as DOS 5). In Japan, IBM compatible PCs became popular along with DOS/V, so they are often referred to as "DOS/V machines" or "DOS/V pasocom" even though DOS/V operating systems are no longer common.
The promotion of DOS/V was done by IBM and its consortium called PC Open Architecture Developers' Group (OADG).
Digital Research released a Japanese DOS/V-compatible version of DR DOS 6.0 in 1992.
History
In the early 1980s, IBM Japan developed two x86-based personal computer lines for the Asia-Pacific region, the IBM 5550 and the IBM JX. The 5550 reads Kanji fonts from disk and draws text as graphic characters on a 1024×768 high-resolution monitor. The JX extends the IBM PCjr and IBM PC architecture, and supports English and Japanese versions of PC DOS with a 720×512 resolution monitor. Neither machine could break NEC's dominant PC-98 in the Japanese consumer market. Because the 5550 was expensive, it was mostly sold to large enterprises that used IBM mainframes. The JX used the 8088 processor instead of the faster 8086 because IBM decided a consumer-class JX must not surpass the business-class 5550; this damaged its reputation among buyers regardless of its actual speed. In addition, one software company said IBM was uncooperative in developing JX software. IBM Japan planned a 100% PC/AT compatible machine codenamed "JX2", but cancelled it in 1986.
Masahiko Hatori was a developer of the JX's DOS. Through the development of the JX, he learned the skills needed to localize an English computer into Japanese. In 1987, he started developing DOS/V during spare time at IBM's Yamato Development Laboratory. He thought the 480-line mode of VGA and a processor as fast as the 80386 would realize his idea, but this was expensive hardware as of 1987. In this era, Toshiba released the J-3100 laptop computer, and Microsoft introduced the AX architecture. IBM Japan did not join the AX consortium; his boss, Tsutomu Maruyama, thought IBM's headquarters would not allow it to adopt AX because they required IBM Japan to use the same standards as IBM offices worldwide. In October 1987, IBM Japan released the PS/55 Model 5535, a proprietary laptop using a special version of DOS. It was more expensive than the J-3100 because its LCD display used a non-standard 720×512 resolution. Hatori thought IBM needed to shift from its proprietary PCs to IBM PC compatibles. Maruyama and Nobuo Mii thought Japan's closed PC market needed to change and that this could not be done by IBM alone. In the summer of 1989, they decided to carry out the development of DOS/V, disclose the architecture of the PS/55, and found the PC Open Architecture Developers' Group (OADG).
The DOS/V development team designed the DOS/V to be simple for better scalability and compatibility with original PC DOS. They had difficulty reducing text drawing time. "A stopwatch was a necessity for DOS/V development", Hatori said.
IBM Japan announced the first version of DOS/V, IBM DOS J4.0/V, on 11 October 1990, and shipped it in November 1990. At the same time, IBM Japan released the PS/55 Model 5535-S, a laptop computer with VGA resolution. The announcement letter stated that DOS/V was designed for low-end PS/55 desktops and laptops, but users reported on bulletin board systems that they could run DOS/V on IBM PC clones. The development team unofficially confirmed these reports and fixed DOS/V's incompatibilities. This was kept secret inside the company because it could hurt PS/55 sales and would have met with opposition.
Maruyama and Mii had to convince IBM's branches to agree with the plan. In the beginning of December 1990, Maruyama went to IBM's Management Committee, and presented his plan "The low-end PC strategy in Japan". At the committee, a topic usually took 15 minutes, but his topic took an hour. The plan was finally approved by John Akers.
After the committee, Susumu Furukawa, the president of Microsoft Japan, was able to make an appointment with IBM Japan to share the source code of DOS/V. On 20 December 1990, IBM Japan announced that it had founded the OADG and that Microsoft would supply DOS/V to other PC manufacturers. From 1992 to 1994, many Japanese manufacturers began selling IBM PC clones with DOS/V. Some global PC manufacturers entered the Japanese market, Compaq in 1992 and Dell in 1993. Fujitsu released IBM PC clones (the FMV series) in October 1993, and about 200,000 units were shipped in 1994.
The initial goal of DOS/V was to enable Japanese software to run on laptop computers based on IBM's global standards rather than on the domestic computer architecture. As of 1989, VGA was not common, but the team expected LCD panels with VGA resolution to become affordable within a few years. DOS/V lacked a software library, so IBM Japan asked third-party companies to port their software to it. The PS/55 Model 5535-S was released as a laptop terminal for the corporate sector, so only a few major business applications had to be supplied for DOS/V.
In March 1991, IBM Japan released the PS/55note Model 5523-S, a lower-priced laptop computer. It was a strategically important product for popularizing DOS/V in the consumer market, and led to the success of subsequent consumer products such as the ThinkPad. However, DOS/V itself sold much better than the 5523-S, because advanced users purchased it to build a Japanese language environment on their IBM compatible PCs.
In 1992, IBM Japan released the PS/V (similar to the PS/ValuePoint) and the ThinkPad. They were based upon an architecture closer to PC compatibles, and were intended to compete with rivals in the consumer market. As of December 1992, the PS/V was the best-selling DOS/V computer. In January 1993, NEC released a new generation of the PC-98 to take back the initiative, advertising that the scrolling speed of the word processor Ichitaro on the PC-9801BX was faster than on the PS/V 2405-W. Yuzuru Takemura of IBM Japan said, "Let us suppose the movement towards Windows is inevitable. Processors and graphics cards will become faster and faster. If the PC-98 keeps its architecture, it will never beat our machine on speed. Windows is developed for the PC/AT architecture. Kanji glyphs are also supplied as a software font. The only thing IBM has to do is tune it up for the video card. On a different architecture, it will be hard to tune up Windows." In 1993, Microsoft Japan released the first retail versions of Windows (Windows 3.1) for both DOS/V and the PC-98. DOS/V contributed to the dawn of IBM PC clones in Japan, yet the PC-98 kept 50% of the market share until 1996; the situation was turned around by the release of Windows 95.
Drivers
Three device drivers enable DBCS code page support in DOS on IBM PC compatibles with VGA: the font driver, the display driver and the input assist subsystem driver. The font driver loads a complete set of glyphs from a font file into extended memory. The display driver sets the 640×480 graphics mode on the VGA and allocates about 20 KB of conventional memory for text, called the simulated video buffer. A DOS/V program writes character codes to the simulated video buffer through DOS output functions, or writes them directly and calls the driver's function to refresh the screen (a sketch of this write-and-refresh sequence follows the driver list below). The input assist subsystem driver communicates with optional input methods and enables text editing in the on-the-spot or below-the-spot styles. Without these drivers installed, DOS/V is equivalent to generic MS-DOS without DBCS code page support.
$FONT.SYS – Font driver
$DISP.SYS – Display driver
$IAS.SYS – Input assist subsystem (IAS) with front end processor (FEP) support driver
$PRN.SYS – Printer driver
$PRNUSER.SYS – Printer driver
$PRNESCP.SYS – Printer driver for Epson ESC/P J84
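The following sketch illustrates the direct-write path described above: store character and attribute bytes in the simulated video buffer, then ask the display driver to copy the updated cells to the real screen. It assumes a 16-bit DOS compiler such as Borland/Turbo C and the DESQview-compatible INT 10h calls (AH=FEh to get the shadow/simulated buffer address, AH=FFh to refresh from it) that DOS/V's $DISP.SYS is generally documented to honour; treat the exact function numbers and register usage as assumptions to verify against a DOS/V programming reference.

#include <dos.h>

/* Write a string into the simulated text buffer and ask the display driver
 * to refresh the real screen.  Sketch only; see the caveats above. */
void put_text(const char *s, int len)
{
    union REGS r;
    struct SREGS sr;
    char far *buf;
    int i;

    /* INT 10h AH=FEh: translate the assumed text segment (B800h) into the
     * address of the simulated (shadow) video buffer. */
    r.h.ah = 0xFE;
    r.x.di = 0x0000;
    segread(&sr);
    sr.es = 0xB800;
    int86x(0x10, &r, &r, &sr);
    buf = (char far *)MK_FP(sr.es, r.x.di);

    for (i = 0; i < len; i++) {
        buf[i * 2]     = s[i];    /* character cell */
        buf[i * 2 + 1] = 0x07;    /* attribute: light grey on black */
    }

    /* INT 10h AH=FFh: ask the driver to copy CX updated cells starting at
     * ES:DI from the simulated buffer to the actual screen. */
    r.h.ah = 0xFF;
    r.x.cx = (unsigned)len;
    r.x.di = 0x0000;
    int86x(0x10, &r, &r, &sr);
}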
Versions
In 1988, IBM Japan released a new model of the PS/55 which was based on the PS/2 with Japanese language support. It is equipped with a proprietary video card, the Display Adapter, which has a high resolution text mode and a Japanese character set stored in a ROM on the card. It supports Japanese DOS K3.3, PC DOS 3.3 (English) and OS/2.
IBM DOS J4.0 was released in 1989. It combines Japanese DOS and PC DOS, which runs Japanese DOS as the Japanese mode (PS/55 mode) and PC DOS as the English mode (PS/2 mode). Although it had two separated modes that needed a reboot to switch between them, IBM Japan called it bilingual. This version requires the PS/55 display adapter.
The first version of DOS/V, IBM DOS J4.0/V (J4.05/V), was released at the end of 1990. The term "DOS/V" quickly became known in the Japanese computer industry, but DOS/V itself did not spread quickly. As of 1991, some small companies sold American or Taiwanese computers in Japan, but DOS J4.0/V caused several issues on PC compatibles. Its EMS driver only supports IBM's Expanded Memory Adapter. The input method does not support the US keyboard or the Japanese AX keyboard, so some keys end up in the wrong place. PS/55 keyboards were available from IBM, but had to be used with an AT-to-PS/2 adapter because AX machines (and thus PC/AT clones) generally have the older 5-pin DIN connector. Scrolling text with the common Tseng Labs ET4000 graphics controller makes the screen unreadable; this issue can be fixed by the new /HS=LC switch of $DISP.SYS in DOS J4.07/V. "Some VGA clones did not correctly implement the CRTC address wraparound. Most likely those were Super VGAs with more video memory than the original VGA (i.e. more than 256 KB). Software relying on the address wraparound was very rare and therefore the functionality was not necessarily correctly implemented in hardware. On the other hand, the split screen technique was relatively well documented and well understood, and commercial software (especially games) sometimes used it. It was therefore likely to be tested and properly implemented in hardware."
IBM Japan released DOS J5.0/V in October 1991, and DOS J5.0 in December 1991. DOS J5.0 combines Japanese DOS and DOS/V. This is the last version developed for the PS/55 display adapter. DOS J5.02/V was released in March 1992. It added official support for the IBM PS/2 and the US English layout keyboard.
The development of MS-DOS 5.0/V was delayed because IBM and Microsoft disputed how to implement the API for input methods. It took a few months to reach an agreement that the OEM adaptation kit (OAK) of MS-DOS 5.0/V would provide both IAS (Input Assist Subsystem) and MKKC (Microsoft Kana-Kanji Conversion). Microsoft planned to add AX application support to DOS/V, but cancelled it because its beta release was strongly criticized by users for lacking compatibility. Some PC manufacturers could not wait for Microsoft's DOS/V: Toshiba developed a DOS/V emulator that could run DOS/V applications on a VGA-equipped J-3100 computer, while AST Research Japan and Sharp decided to bundle IBM DOS J5.0/V. Compaq developed its own DOS/V drivers and released its first DOS/V computers in April 1992.
On 10 December 1993, Microsoft Japan and IBM Japan released new versions of DOS/V, MS-DOS 6.2/V Upgrade and PC DOS J6.1/V. Although both were released at the same time, they were developed separately. MS-DOS 6.2/V Upgrade is the only Japanese version of MS-DOS released by Microsoft under its own brand for retail sale. Microsoft Japan continued selling it even after Microsoft released MS-DOS 6.22 to resolve the patent infringement issue surrounding DoubleSpace disk compression.
IBM Japan ended support for PC DOS 2000 on 31 January 2001, and Microsoft Japan ended support for MS-DOS on 31 December 2001.
Japanese versions of Windows 2000 and XP have a DOS/V environment in NTVDM. It was removed in Windows Vista.
PC DOS versions
PC DOS versions of DOS/V (J for Japanese, P for Chinese (PRC), T for Taiwanese, H for Korean (Hangul)):
IBM DOS J4.0/V "5605-PNA" (version 4.00 – 4.04 were not released for DOS/V)
IBM DOS J4.05/V for PS/55 (announced 1990-10-11, shipped 1990-11-05)
IBM DOS J4.06/V (1991-04)
IBM DOS J4.07/V (1991-07)
IBM DOS J5.0/V "5605-PJA" (1991-10), IBM DOS T5.0/V, IBM DOS H5.0/V
IBM DOS J5.02/V for PS/55 (1992-03)
IBM DOS J5.02A/V
IBM DOS J5.02B/V
IBM DOS J5.02C/V
IBM DOS J5.02D/V (1993-05)
Sony OADG DOS/V (includes IBM DOS J5.0/V and drivers for AX machines)
PC DOS J6.1/V "5605-PTA" (1993-12), PC DOS P6.1/V, PC DOS T6.10/V
PC DOS J6.10A/V (1994-03)
PC DOS J6.3/V "5605-PDA" (1994-05)
PC DOS J6.30A/V
PC DOS J6.30B/V
PC DOS J6.30C/V (1995-06)
PC DOS J7.0/V "5605-PPW" (1995-08), PC DOS P7/V, PC DOS T7/V, PC DOS H7/V
PC DOS J7.00A/V
PC DOS J7.00B/V
PC DOS J7.00C/V (1998-07)
PC DOS 2000 Japanese Edition "04L5610" (1998-07)
MS-DOS versions
MS-DOS versions of DOS/V:
Toshiba Nichi-Ei (日英; Japanese-English) MS-DOS 5.0
Compaq MS-DOS 5.0J/V (1992-04)
MS-DOS 5.0/V (OEM, generic MS-DOS 5.0/V)
MS-DOS 6.0/V
MS-DOS 6.2/V (Retail, 1993-12)
MS-DOS 6.22/V (1994-08)
Fujitsu Towns OS for FM Towns (only late issues had DOS/V compatibility added)
DR DOS versions
DR DOS versions of DOS/V:
DR DOS 6.0/V (Japanese) (1992-07), DR DOS 6.0/V (Korean)
ViewMAX 2 (Japanese) (1991–1992)
NetWare Lite 1.1J (Japanese) (1992–1997)
Novell DOS 7 (Japanese)?
Personal NetWare J 1.0 (Japanese) (1994–1995)
(DR-DOS 7.0x/V) (2001–2006) (an attempt to build a DR-DOS/V from existing components)
Extensions
IBM DOS/V Extension extends DOS/V drivers to set up a variety of text modes for certain video adapters. The High-quality Text Mode is the default 80 columns by 25 rows with 12×24 pixels large characters. The High-density Text Mode (Variable Text; V-Text) offers large text modes with various font sizes. DOS/V Extension V1.0 included drivers for VGA, XGA, PS/55 Display Adapter, SVGA (800×600) and ET4000 (1024×768). Some of its drivers were included in PC DOS J6.1/V and later.
IBM DOS/V Extension V1.0 (1993-01) includes V-Text support
IBM DOS/V Extension V2.0 "5605-PXB"
See also
Unicode
List of DOS commands
Kanji CP/M-86 (1984)
(A Japanese magazine on IBM clones)
Notes
References
Further reading
DOS on IBM PC compatibles
1990 software
AOKP
AOKP, short for Android Open Kang Project, is an open-source replacement distribution for smartphones and tablet computers based on the Android mobile operating system. The name is a play on the word kang (slang for stolen code) and AOSP (Android Open Source Project). The name was a joke, but it stuck. It was started as free and open-source software by Roman Birg based on the official releases of Android Open Source Project by Google, with added original and third-party code, features, and control.
Although only a portion of the total AOKP users elect to report their use of the firmware, as of September 2013, it is used by more than 3.5 million devices around the world.
Features
AOKP allows users to change many aspects of the OS including its appearance and its functions. It allows customizations normally not permitted by the factory firmware.
LED control: The color and pulsing of the notification LED can be custom set for various applications.
Navigation ring: Actions can be assigned to the navigation ring, to allow for quicker access applications.
Ribbon: Allows users to use swipe gestures anywhere and enables a system-wide custom application shortcuts and actions.
Vibration patterns: Users can build custom vibration patterns to be assigned to notifications from certain applications or calls from certain people.
Native theme support: Themes, downloaded from the Google Play Store or from other sources, can be applied to give a modified appearance to the device interface. AOKP now features Substratum support.
Customization of the hardware and software buttons, including track skip/flashlight while the screen is off, PIE control and the ROM's unique Fling navigation system
UI control, including colour strokes and background blue
Status bar customization, such as battery icon stylization and network activity
Power menu customization
Notification and quick settings configurations, such as how many toggles are displayed on the quick settings header at a time
Release versions
AOKP builds/releases are provided on a milestone and nightly schedule:
Milestones: Most stable builds which are usually released once a month. However, milestone builds have not been released for several years and the AOKP team appears to just release nightlies as of Nougat builds.
Nightlies: Automatic builds every 3 days with the latest code committed but may contain bugs
To be notified of new releases, users can install the AOKPush application, which uses the Google Cloud Messaging (GCM) service provided by Google to receive push notifications as soon as a build is complete and ready to download. With AOKPush, users also get the available test builds and occasional messages from the developer team. GCM is integrated into the Android framework, so the application does not wake up the device periodically to fetch data or use extra battery. Some devices also rely on AOKP to get the latest Android updates.
Firmware history and development
Not long after the introduction of the HTC Dream (named the "T-Mobile G1" in the United States) mobile phone in September 2008, a method was discovered to attain privileged control (termed "root access") within Android's Linux-based subsystem. Having root access, combined with the open source nature of the Android operating system, allowed the phone's stock firmware to be modified and re-installed onto the phone.
In the following years, several modified firmware releases for mobile devices were developed and distributed by Android enthusiasts. One, maintained by a developer named Roman Birg, quickly became popular among owners of several high-end Android devices. AOKP started in November 2011 and quickly grew in popularity, forming a small community of developers called the AOKP Team (also known as "Team Kang"). Within a few months, the number of devices and features supported by AOKP escalated, and AOKP quickly became the second most popular Android firmware distribution, after CyanogenMod.
AOKP is developed using a distributed revision control system with the official repositories hosted on GitHub like many other open source projects. New features or bug fix changes made by contributors are submitted using Google's source code review system, Gerrit. Contributions may be tested by anyone, voted up or down by registered users, and ultimately accepted into the code by AOKP developers.
In early 2020 AOKP Developers posted a blog outlining parity with LineageOS upstream. "Device support will be a bit different this time around. We can support any device that is getting Lineage 16.0 builds. We just need a maintainer to test builds and maintain a forum thread."
2011
AOKP Ice Cream Sandwich (ICS) Android 4.0.X
2012
AOKP Jelly Bean (JB) Android 4.1.X
2013
AOKP Jelly Bean (JB-MR1) Android 4.2.X
AOKP Jelly Bean (JB) Android 4.3.X
2014
AOKP KitKat Android 4.4.X
2014
AOKP Lollipop Android 5.0.x
2015
AOKP Marshmallow Android 6.0.1
2016
AOKP Nougat Android 7.0
AOKP Nougat Android 7.1.x
2017
AOKP Oreo Android 8.0
AOKP Oreo Android 8.1
2020
AOKP Pie Android 9.0
Supported devices
ASUS
Nexus 7 (2013) WiFi
Nexus 7 (GSM)
Nexus 7 (WiFi)
Asus ZenFone 2(ZE551ML)
BQ
Aquaris E5 4G
Elephone
P9000
HTC
One (Intl. / AT&T / T-Mobile) – Legacy Builds
One (Generic GSM / Sprint / Verizon)
One XL (AT&T)
Huawei
Ascend Mate 2 4G
Nexus 6P
Lenovo
Vibe K5 (A6020)
LG
G PAD 8.3
G2 (GSM – LTE / AT&T / Sprint / T-Mobile / Verizon)
Nexus 4
Nexus 5
Nitro HD (AT&T)
Optimus (LTE)
Spectrum (LTE)
Motorola
Droid 3 (XT862)
Droid 4 (XT894)
Droid Bionic (XT875)
Droid Razr (GSM / XT910 • VZW / XT912)
Moto X(T-Mobile / Verizon Dev Version)
Moto G4 Plus
Oppo
Find 5
N1
Samsung
Galaxy Nexus (GSM / Sprint / Verizon)
Galaxy Note 2 (GSM – LTE / AT&T / Sprint / T-Mobile / Verizon)
Galaxy Note 3 LTE (Unified)
Galaxy S2 (Intl. Exynos, Intl. Omap / T-Mobile)
Galaxy S3 (Intl. / AT&T / T-Mobile / US Cellular / Verizon)
Galaxy S3 LTE (Unified)
Galaxy S4 (C Spire / Cricket / C Spint / T-Mobile / US Cell / Verizon)
Galaxy S4 LTE (Unified)
Galaxy S4 Mini (GT-I9190 (3G) / GT-I9192 (DS) / GT-I9195 (LTE))
Galaxy S5 (GSM / Sprint / US Cell / Vodafone)
Nexus 10
Vibrant (T-Mobile)
Sony
Xperia SP
Xperia T
Xperia Tablet Z (LTE / WiFi)
Xperia V
Xperia Z
Xperia Z Ultra
Xperia Z1
Xperia Z1 Compact
Xperia Z2
Xperia ZL
Xperia ZR
OnePlus
One
2
3
X
YU
Yuphoria
Yureka / Yureka Plus
Xiaomi
Mi 3 , Mi4
Mi note 2
Redmi 1s
Redmi note 3
Redmi note 4
See also
List of custom Android firmware
Android rooting
Comparison of mobile operating systems
References
External links
Android (operating system) development software
Custom Android firmware
Mobile operating systems
Free mobile software
Free software
Mobile Linux
System Center Operations Manager
System Center Operations Manager (SCOM) is a cross-platform data center monitoring system for operating systems and hypervisors. It uses a single interface that shows state, health, and performance information of computer systems. It also provides alerts generated according to some availability, performance, configuration, or security situation being identified. It works with Microsoft Windows Server and Unix-based hosts.
History
The product began as a network management system called SeNTry ELM, which was developed by the British company Serverware Group plc. In June 1998 the intellectual property rights were bought by Mission Critical Software, Inc., which renamed the product Enterprise Event Manager. Mission Critical undertook a complete rewrite of the product, naming the new version OnePoint Operations Manager (OOM). Mission Critical Software merged with NetIQ in early 2000 and sold the rights to the product to Microsoft in October 2000, where it was renamed Microsoft Operations Manager (MOM). In 2003, Microsoft began work on the next version of MOM, called Microsoft Operations Manager 2005, which was released in August 2004. Service Pack 1 for MOM 2005 was released in July 2005 with support for Windows 2003 Service Pack 1 and SQL Server 2000 Service Pack 4; it was also required to support SQL Server 2005 for the operational and reporting database components. Development of the next version, codenamed "MOM V3" at the time, began in 2005. Microsoft renamed the product System Center Operations Manager and released System Center Operations Manager 2007 in March 2007. System Center Operations Manager 2007 was designed from a fresh code base, and although it shares similarities with Microsoft Operations Manager, it is not an upgrade from the previous versions.
2009
In May 2009, System Center Operations Manager 2007 received a so-called "R2" release; the main enhancement was cross-platform support for UNIX and Linux servers. Instead of publishing individual service packs, bug fixes to the product after System Center Operations Manager 2007 R2 were released in the form of cumulative updates (CUs).
Central concepts
The basic idea is to place a piece of software, an agent, on the computer to be monitored. The agent watches several sources on that computer, including the Windows Event Log, for specific events or alerts generated by the applications executing on the monitored computer. Upon alert occurrence and detection, the agent forwards the alert to a central SCOM server. This SCOM server application maintains a database that includes a history of alerts. The SCOM server applies filtering rules to alerts as they arrive; a rule can trigger some notification to a human, such as an e-mail or a pager message, generate a network support ticket, or trigger some other workflow intended to correct the cause of the alert in an appropriate manner.
SCOM uses the term management pack to refer to a set of filtering rules specific to some monitored application. While Microsoft and other software vendors make management packages available for their products, SCOM also provides for authoring custom management packs. While an administrator role is needed to install agents, configure monitored computers and create management packs, rights to simply view the list of recent alerts can be given to any valid user account.
Several SCOM servers can be aggregated together to monitor multiple networks across logical Windows domain and physical network boundaries. In previous versions of Operations Manager, a web service was employed to connect several separately-managed groups to a central location. As of Operations Manager 2007, a web service is no longer used. Rather, a direct TCP connection is used, making use of port 5723 for these communications.
Integration with Microsoft Azure
To monitor servers running on Microsoft's Azure cloud infrastructure, it is possible to enable Log Analytics data sources that collect and send their data to on-premises SCOM management servers.
In November 2020, Microsoft announced plans to make SCOM available as a fully cloud-managed instance in its Azure environment, under the codename "Aquila".
The Command Shell
Since Operations Manager 2007 the product includes an extensible command line interface called The Command Shell, which is a customized instance of the Windows PowerShell that provides interactive and script-based access to Operations Manager data and operations.
Management Pack
SCOM can be extended by importing management packs (MPs) which define how SCOM monitors systems. By default, SCOM only monitors basic OS-related services, but new MPs can be imported to monitor services such as SQL servers, SharePoint, Apache, Tomcat, VMware and SUSE Linux.
Many Microsoft products have MPs that are released with them, and many non-Microsoft software companies write MPs for their own products as well.
Whilst a fair amount of IT infrastructure is monitored using currently available MPs, new MPs can be created by end-users in order to monitor what is not already covered.
Management Pack creation is possible with the System Center Operations Manager 2007 R2 Resource Kit, Visual Studio with Authoring Extensions and Visio MP Designer.
Versions
See also
Microsoft System Center
SCOM Build Numbers
System Center Configuration Manager
System Center Data Protection Manager
System Center Virtual Machine Manager
Microsoft Servers
Oracle Enterprise Manager
IBM Director
References
Literature
External links
System Center Operations Manager
Microsoft Tech Net guide on MOM
Microsoft Operations Manager SDK (in MSDN)
Introducing System Center Operations Manager 2007 A tutorial by David Chappell, Chappell & Associates
Operations Manager 2007 R2 Management Pack Authoring Guide (from UK TechNet)
System Center Central (System Center community)
TechNet Ramp Up: Learn how to install, implement and administer Operations Manager 2007 R2.
Blog of Kevin Holman regarding SCOM
Windows Server System
Information technology management
Computer compatibility
A family of computer models is said to be compatible if certain software that runs on one of the models can also be run on all other models of the family. The computer models may differ in performance, reliability or some other characteristic. These differences may affect the outcome of the running of the software.
Software compatibility
Software compatibility can refer to the compatibility a particular piece of software has with a particular CPU architecture, such as Intel or PowerPC. Software compatibility can also refer to the ability of the software to run on a particular operating system. Compiled software is very rarely compatible with multiple different CPU architectures; normally, an application is compiled separately for each CPU architecture and operating system it is meant to support. Interpreted software, on the other hand, can normally run on many different CPU architectures and operating systems if the interpreter is available for that architecture or operating system. Software incompatibility often occurs with new software released for a newer version of an operating system, which cannot run on the older version because the older version lacks features and functionality that the software depends on.
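As a small illustration of why one binary rarely spans architectures while one source tree can, the hedged sketch below shows the common practice of recompiling the same C file per target and letting predefined compiler macros select platform-specific branches; the macro names are the usual ones for GCC, Clang and MSVC.

#include <stdio.h>

/* One source file, recompiled per target: predefined compiler macros
 * identify the CPU architecture and operating system at build time. */
int main(void)
{
#if defined(__x86_64__) || defined(_M_X64)
    const char *arch = "x86-64";
#elif defined(__aarch64__) || defined(_M_ARM64)
    const char *arch = "ARM64";
#else
    const char *arch = "other";
#endif

#if defined(_WIN32)
    const char *os = "Windows";
#elif defined(__APPLE__)
    const char *os = "macOS";
#elif defined(__linux__)
    const char *os = "Linux";
#else
    const char *os = "other";
#endif

    printf("built for %s on %s\n", arch, os);
    return 0;
}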
Hardware compatibility
Hardware compatibility can refer to the compatibility of computer hardware components with a particular CPU architecture, bus, motherboard or operating system. Hardware that is compatible may not always run at its highest stated performance, but it can nevertheless work with legacy components. An example is RAM chips, some of which can run at a lower (or sometimes higher) clock rate than rated. Hardware that was designed for one operating system may not work for another, if device or kernel drivers are unavailable. As an example, much of the hardware for macOS is proprietary hardware with drivers unavailable for use in operating systems such as Linux.
Free and open-source software
See also
Binary-code compatibility
Compatibility layer
Interchangeability
Forward compatibility
Backward compatibility
Cross-platform
Emulator
List of computer standards
Portability
Plug compatible
Hardware security
References
Interoperability
Computer hardware
Software
DOS Wedge
The DOS Wedge is a piece of Commodore 64 system software that was popular in its time. It was written by Bob Fairbairn, and was included by Commodore (CBM) on the 1541 disk drive Test/Demo Disk (filename: "DOS 5.1") and also packaged with the C64 Macro Assembler (filename: "DOS WEDGE64"). The DOS Wedge was referred to in the 1541 drive manual as DOS Support and on the software startup screen as DOS MANAGER. The original design was developed by Bill Seiler.
The Wedge made disk operations in BASIC 2.0 significantly easier by introducing several keyword shortcuts. The DOS Wedge became somewhat of a de facto standard, with third party vendors such as Epyx often incorporating identical commands into fastloader cartridges and other Commodore 64 expansion devices. COMPUTE!'s Gazette published several type-in variations on the DOS Wedge, including a C128 version in its February 1987 issue (see External links, below).
The original Commodore DOS Wedge was a 1-KB program written in MOS 6502 assembly language. It resided in the otherwise unused memory block $CC00–$CFFF (52224–53247) and worked by altering BASIC's "CHRGET" subroutine at $0073 (115) so that each character passing by the BASIC interpreter would be checked for wedge commands, and the associated "wedged-in" routines run if needed.
DOS Wedge functions
Any command that contains an @ symbol may substitute > instead, if desired.
See also
Comparison of computer shells
References
CBM Professional Computer Division (1982). Commodore 64 Macro Assembler Development System Manual. West Chester, PA: Commodore Business Machines. Chapter 5.0. Additional BASIC Disk Commands.
External links
Commodore DOS Wedges: An Overview - Jim Butterfield, COMPUTE!, October 1983.
DOS Wedge documentation (MS Word format)
Commodore 64 Macro Assembler Development System Manual
COMPUTE!'s Gazette February 1987 issue: "DOS Wedge 128" (Part A), (Part B)
Commodore Disk Loading Basics
Commodore 1541 Drive Manual (ZIPped text file)
Commodore 64 software
Command shells
Assembly language software
Netscape Portable Runtime
In computing, the Netscape Portable Runtime, or NSPR, is a platform abstraction library that makes all operating systems it supports appear the same to applications such as Mozilla-style web browsers. NSPR provides platform independence for non-GUI operating system facilities. These facilities include:
threads
thread synchronization
normal file and network I/O
interval timing and calendar time
basic memory management (malloc and free)
shared library linking.
Much of the library, and perhaps the overall thrust of it in the Gromit environment, provides the underpinnings of the Java virtual machine, more or less mapping the sys layer that Sun defines for the porting of the Java VM to various platforms. NSPR does go beyond that requirement in some areas, as it also functions as the platform-independent layer for most of the servers produced by Netscape.
History
The first generation of NSPR originally aimed just to satisfy the requirements of porting Java to various host environments. NSPR20, an effort started in 1996, built on that original idea, though very little remains of the original code. (The "20" in "NSPR20" does not mean "version 2.0" but rather "second generation".) Many of the concepts show reform, expansion, and maturation. In 2009, NSPR still functioned appropriately as the platform-dependent layer under Java, but it also served in supporting clients written entirely in C or in C++.
How it works
NSPR has the goal of providing uniform service over a wide range of operating-system environments. It strives not to export the lowest common denominator, but to exploit the best features of each operating system on which it runs, while still providing a uniform service across a wide range of host offerings.
Threads
Threads feature prominently in NSPR. The software industry's offering of threads lacks consistency. NSPR, while far from perfect, does provide a single API to which clients may program and expect reasonably consistent behavior. The operating systems provide everything from no concept of threading at all up to and including sophisticated, scalable and efficient implementations. NSPR makes as much use of what the systems offer as it can. NSPR aims to impose as little overhead as possible in accessing those appropriate system features.
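A minimal sketch of NSPR's thread API follows, using PR_CreateThread and PR_JoinThread; the worker function and its string argument are made up for illustration, and NSPR initializes itself implicitly on the first call.

#include <stdio.h>
#include "nspr.h"

/* Worker routine run on the new NSPR thread. */
static void worker(void *arg)
{
    printf("hello from %s\n", (const char *)arg);
}

int main(void)
{
    /* Create a joinable, globally scheduled thread with the default stack. */
    PRThread *t = PR_CreateThread(PR_USER_THREAD, worker,
                                  (void *)"an NSPR thread",
                                  PR_PRIORITY_NORMAL, PR_GLOBAL_THREAD,
                                  PR_JOINABLE_THREAD, 0);
    if (t != NULL)
        PR_JoinThread(t);          /* wait for the worker to finish */
    return PR_Cleanup();           /* shut the runtime down */
}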
Thread synchronization
Thread synchronization loosely depends on monitors as described by C. A. R. Hoare in "Monitors: An operating system structuring concept", Communications of the ACM, 17(10), October 1974 and then formalized by Xerox' Mesa programming language ("Mesa Language Manual", J.G. Mitchell et al., Xerox PARC, CSL-79-3 (Apr 1979)). This mechanism provides the basic mutual exclusion (mutex) and thread notification facilities (condition variables) implemented by NSPR. Additionally, NSPR provides synchronization methods more suited for use by Java. The Java-like facilities include monitor reentrancy, implicit and tightly bound notification capabilities with the ability to associate the synchronization objects dynamically.
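A minimal sketch of the basic mutex and condition-variable facilities (PR_NewLock, PR_NewCondVar, PR_WaitCondVar, PR_NotifyCondVar; header names are assumed to be on the include path):

```c
#include <prlock.h>
#include <prcvar.h>
#include <prinrval.h>   /* PR_INTERVAL_NO_TIMEOUT */

/* Shared state guarded by an NSPR lock and signalled via a condition variable. */
static PRLock    *lock;
static PRCondVar *cv;
static int        ready = 0;

void setup(void) {
    lock = PR_NewLock();
    cv   = PR_NewCondVar(lock);   /* the condition variable is bound to the lock */
}

void producer_signals_ready(void) {
    PR_Lock(lock);
    ready = 1;
    PR_NotifyCondVar(cv);         /* wake one waiting thread */
    PR_Unlock(lock);
}

void consumer_waits_for_ready(void) {
    PR_Lock(lock);
    while (!ready)                /* re-check the predicate after each wakeup */
        PR_WaitCondVar(cv, PR_INTERVAL_NO_TIMEOUT);
    PR_Unlock(lock);
}
```

The Java-like monitor API (PR_NewMonitor, PR_EnterMonitor, PR_Wait, PR_Notify) layers the reentrancy and implicitly bound notification described above on top of these primitives.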
I/O
NSPR's I/O slightly augments the Berkeley sockets model and allows arbitrary layering. The designers originally intended to export synchronous I/O methods only, relying on threads to provide the concurrency needed for complex applications. That method of operation remains preferred, though one can configure the network I/O channels as non-blocking in the traditional sense.
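A minimal sketch of the synchronous file I/O calls behind the PRFileDesc abstraction (header names assumed; error handling kept to a minimum):

```c
#include <stdio.h>
#include <prio.h>    /* PRFileDesc, PR_Open, PR_Write, PR_Close */

int main(void) {
    /* Open (or create) a file through NSPR's layered PRFileDesc abstraction. */
    PRFileDesc *fd = PR_Open("example.txt",
                             PR_WRONLY | PR_CREATE_FILE | PR_TRUNCATE, 0644);
    if (fd == NULL) {
        fprintf(stderr, "PR_Open failed\n");
        return 1;
    }
    const char msg[] = "written via NSPR\n";
    PR_Write(fd, msg, sizeof msg - 1);   /* synchronous write */
    PR_Close(fd);
    return 0;
}
```

Sockets are opened through the same PRFileDesc type (for example with PR_OpenTCPSocket), which is what allows arbitrary layering on top of a descriptor.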
Network addresses
Part of NSPR deals with manipulation of network addresses. NSPR defines an IP-centric network address object. While it does not define the object as opaque, the API provides methods that allow and encourage clients to treat the addresses as polymorphic items. In this area NSPR aims to provide a migration path between IPv4 and IPv6. To that end one can perform translations of ASCII strings (DNS names) into NSPR's network address structures, regardless of whether the addressing technology uses IPv4 or IPv6.
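A small sketch of the polymorphic address handling; the string conversion routines shown (PR_StringToNetAddr and PR_NetAddrToString) accept both IPv4 and IPv6 literals:

```c
#include <stdio.h>
#include <prnetdb.h>   /* PRNetAddr, PR_StringToNetAddr, PR_NetAddrToString */

int main(void) {
    PRNetAddr addr;
    char buf[64];

    /* The same calls work for "127.0.0.1" or "::1"; the PRNetAddr object
     * hides which address family is actually in use. */
    if (PR_StringToNetAddr("::1", &addr) == PR_SUCCESS &&
        PR_NetAddrToString(&addr, buf, sizeof buf) == PR_SUCCESS) {
        printf("round-tripped address: %s\n", buf);
    }
    return 0;
}
```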
Time
NSPR makes timing facilities available in two forms: interval timing and calendar functions.
Interval timers are based on a free-running, 32-bit timer whose resolution is platform-dependent. Such timers are normally used to specify timeouts on I/O, waiting on condition variables, and other rudimentary thread scheduling. Since these timers have a finite namespace and are free running, they can wrap at any time. NSPR does not provide an epoch, but expects clients to deal with that issue. The granularity of the timers is guaranteed to be between 10 microseconds and 1 millisecond, which allows a minimal rollover period of approximately 12 hours (2^32 ticks at 10 microseconds is roughly 11.9 hours). But in order to deal with the wrap-around issue, only half of that namespace may be utilized, so the longest interval that can reliably be expressed is slightly less than six hours.
Calendar times are 64-bit signed numbers with units of microseconds. The epoch for calendar times is midnight, January 1, 1970, Greenwich Mean Time. Negative times extend to times before 1970, and positive numbers forward. Use of 64 bits allows a representation of times approximately in the range of −30000 to the year 30000. There exists a structural (i.e., exploded) representation, routines to acquire the current time from the host system, and routines to convert between the 64-bit and the structural representation. Additionally there are routines to convert most well-known ASCII time formats to and from the 64-bit NSPR representation.
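A short sketch showing both timing forms, calendar time via PR_Now/PR_ExplodeTime and interval time via PR_IntervalNow/PR_MillisecondsToInterval (header names assumed):

```c
#include <stdio.h>
#include <prtime.h>    /* PR_Now, PR_ExplodeTime, PRExplodedTime */
#include <prinrval.h>  /* PR_IntervalNow, PR_MillisecondsToInterval */

int main(void) {
    /* Calendar time: 64-bit count of microseconds since 1970-01-01 00:00 GMT. */
    PRTime now = PR_Now();
    PRExplodedTime exploded;
    PR_ExplodeTime(now, PR_GMTParameters, &exploded);   /* to the "exploded" view */
    printf("year=%d month=%d day=%d\n",
           exploded.tm_year, exploded.tm_month + 1, exploded.tm_mday);

    /* Interval time: free-running 32-bit ticks, typically used for timeouts. */
    PRIntervalTime timeout = PR_MillisecondsToInterval(500);
    PRIntervalTime start   = PR_IntervalNow();
    (void)timeout; (void)start;   /* would be passed to e.g. PR_WaitCondVar */
    return 0;
}
```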
Memory management
NSPR provides an API to perform the basic malloc, calloc, realloc and free functions. Depending on the platform, the functions may be implemented almost entirely in the NSPR runtime or as simple shims that call immediately into the host operating system's offerings.
Linking
Support for linking (shared library loading and unloading) forms part of NSPR's feature set. In most cases this is simply a smoothing over of the facilities offered by the various platform providers.
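A minimal sketch of the linking facility (PR_LoadLibrary, PR_FindSymbol, PR_UnloadLibrary); the library file name shown is a Linux-specific example and an assumption of this sketch:

```c
#include <stdio.h>
#include <prlink.h>   /* PR_LoadLibrary, PR_FindSymbol, PR_UnloadLibrary */

int main(void) {
    /* Load a shared library by its platform-specific file name and
     * look up a symbol in it. */
    PRLibrary *lib = PR_LoadLibrary("libm.so.6");
    if (lib == NULL) {
        fprintf(stderr, "PR_LoadLibrary failed\n");
        return 1;
    }
    double (*cosine)(double) = (double (*)(double))PR_FindSymbol(lib, "cos");
    if (cosine != NULL)
        printf("cos(0) = %f\n", cosine(0.0));
    PR_UnloadLibrary(lib);
    return 0;
}
```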
See also
Apache Portable Runtime
Adaptive Communication Environment
Cross-platform support middleware
References
External links
Official website
NSPR source code
Application programming interfaces
Mozilla
Netscape | Operating System (OS) | 809 |
AIDS (Trojan horse)
AIDS, also known as Aids Info Disk or PC Cyborg Trojan, is a Trojan horse that replaces the AUTOEXEC.BAT file, which would then be used by AIDS to count the number of times the computer has booted. Once this boot count reaches 90, AIDS hides directories and encrypts the names of all files on drive C: (rendering the system unusable), at which time the user is asked to 'renew the license' and contact PC Cyborg Corporation for payment (which would involve sending US$189 to a post office box in Panama). There exists more than one version of AIDS, and at least one version does not wait to mung drive C:, but will hide directories and encrypt file names upon the first boot after AIDS is installed. The AIDS software also presented to the user an end user license agreement, some of which read:
If you install [this] on a microcomputer...
then under terms of this license you agree to pay PC Cyborg Corporation in full for the cost of leasing these programs...
In the case of your breach of this license agreement, PC Cyborg reserves the right to take legal action necessary to recover any outstanding debts payable to PC Cyborg Corporation and to use program mechanisms to ensure termination of your use...
These program mechanisms will adversely affect other program applications...
You are hereby advised of the most serious consequences of your failure to abide by the terms of this license agreement; your conscience may haunt you for the rest of your life...
and your [PC] will stop functioning normally...
You are strictly prohibited from sharing [this product] with others...
AIDS is considered to be an early example of a class of malware known as "ransomware".
History
AIDS was introduced into systems through a floppy disk called the "AIDS Information Introductory Diskette", which had been mailed to a mailing list. Harvard-taught evolutionary biologist Dr. Joseph Popp was identified as the author of the AIDS trojan horse; he was himself a subscriber to this list.
Popp was eventually discovered by the British anti-virus industry and named on a New Scotland Yard arrest warrant. He was detained in Brixton Prison. Though charged with eleven counts of blackmail and clearly tied to the AIDS trojan, Popp defended himself by saying money going to the PC Cyborg Corporation was to go to AIDS research. A Harvard-trained anthropologist, Popp was actually a collaborator of the Flying Doctors, a branch of the African Medical Research Foundation (AMREF), and a consultant for the WHO in Kenya, where he had organized a conference in the new Global AIDS Program that very year. Popp had been behaving erratically since the day of his arrest during a routine baggage inspection at Amsterdam Schiphol Airport. He was declared mentally unfit to stand trial and was returned to the United States.
Jim Bates analyzed the AIDS Trojan in detail and published his findings in the Virus Bulletin. He wrote that the AIDS Trojan did not alter the contents of any of the user's files, just their file names. He explained that once the extension and filename encryption tables are known, restoration is possible. AIDSOUT was a reliable removal program for the Trojan and the CLEARAID program recovered encrypted plaintext after the Trojan triggered. CLEARAID automatically reversed the encryption without having to contact the extortionist.
The AIDS Trojan was analyzed even further a few years later. Young and Yung pointed out the fatal weakness in malware such as the AIDS Trojan, namely, the reliance on symmetric cryptography. They showed how to use public key cryptography to implement a secure information extortion attack. They published this discovery (and expanded upon it) in a 1996 IEEE Security and Privacy paper. A cryptovirus, cryptotrojan, or cryptoworm hybrid encrypts the victim's files using the public key of the author and the victim must pay (with money, information, etc.) to obtain the needed session key. This is one of many attacks, both overt and covert, in the field known as cryptovirology.
References
External links
An early analysis of the trojan
THE COMPUTER INCIDENT ADVISORY CAPABILITY, by CIAC, on AIDS infection and distribution
The Original Anti-Piracy Hack, by George Smith, on the interesting AIDS EULA
Computer Viruses (A), by Probert Encyclopedia
AIDS Information Trojan, by CA
Aids Trojan, by CA
Ransomware
Trojan horses | Operating System (OS) | 810 |
Access method
An access method is a function of a mainframe operating system that enables access to data on disk, tape or other external devices. Access methods have been present in several mainframe operating systems since the late 1950s, under a variety of names; the name access method was introduced in 1963 in the IBM OS/360 operating system. Access methods provide an application programming interface (API) for programmers to transfer data to or from a device, and could be compared to device drivers in non-mainframe operating systems, but typically provide a greater level of functionality.
Purpose of access methods
System/360 and successor systems perform input/output using a special program for an I/O channel, a processor dedicated to control peripheral storage device access and data transfer to and from main memory. Channel programs are composed of channel command words (CCWs). Programming those is a complex task requiring detailed knowledge of the hardware characteristics. Channel programs are initiated by a START IO instruction issued by the operating system. This is usually front ended by the Execute Channel Program (EXCP) macro for application programmer convenience. EXCP issues an SVC (supervisor call instruction) that directs the operating system to issue the START IO on the application's behalf.
Access methods provide:
Ease of programming - programmers no longer deal with specific device procedures, including error detection and recovery tactics, in each and every program. A program designed to process a sequence of 80-character records would work no matter where the data are stored.
Ease of hardware replacement - programmers no longer need to alter a program when data is migrated to a newer model of storage device, provided it supports the same access methods.
Ease of shared data set access - an access method is a trusted program that allows multiple programs to access the same file, while ensuring basic data integrity and system security.
Read-ahead - Queued access methods may start as many I/O operations as there are buffers available, anticipating application program requirements.
Unlike systems derived from Unix, where all files and devices are considered to be an unformatted stream of bytes, mainframes offer a variety of data options and formats, such as varying types and sizes of records, and different ways of accessing data, such as via record keys. Access methods provide programs a way of dealing with this complexity.
Programs can read or write a record or block of data and wait until the input/output operation is complete (queued access methods) or allow the operation to be started and the program to continue to run, waiting for the completion at a later time (basic access methods).
Programs can specify the size and number of buffers for a file. The same buffer or pool can be used for multiple files, allowing blocks of data to be read from one file and written to another without requiring data movement in memory.
Programs can specify the type of error recovery to be used in case of input/output errors.
Storage access methods
Storage-oriented access methods in approximate chronological order:
BDAM - Basic direct access method
BSAM - Basic sequential access method
QSAM - Queued sequential access method
BPAM - Basic partitioned access method
ISAM - Indexed sequential access method
VSAM - Virtual storage access method, introduced with OS/VS
OAM - Object access method, introduced in MVS/SP (1989)
Distributed Data Management Architecture - access methods for distributed file access.
Basic versus queued
Both types of access deal with records of a data set. Basic access methods read or write one physical record – block – at a time. Queued methods support internal blocking of data and also often read-ahead scheme. Queued access methods generally provide better performance, while basic methods provide more flexibility.
Sequential versus direct
Sequential access assumes that records can be processed only sequentially, as opposed to direct (or random) access. Some devices, such as magnetic tape, naturally enforce sequential access, but it can be used as well on direct access storage devices (DASD), such as disk drives. In the latter case, a data set written with sequential access can be later processed in a direct manner.
Networking access methods
Network-oriented access methods in approximate chronological order:
BTAM - Basic telecommunications access method
QTAM - Queued teleprocessing access method
TCAM - Telecommunications access method
VTAM - Virtual telecommunications access method, introduced with OS/VS
TCP/IP for MVS - Transmission Control Protocol/Internet Protocol
IMS
The IBM Information Management System (IMS) uses the term "access method" to refer to its methods for manipulating "segments in a database record". These are:
Generalized Sequential Access Method (GSAM),
Hierarchical Direct Access Method (HDAM),
Hierarchical Indexed Direct Access Method (HIDAM),
Hierarchical Indexed Sequential Access Method (HISAM),
Hierarchical Sequential Access Method (HSAM),
Overflow sequential access method (OSAM),
Partitioned Hierarchical Direct Access Method (PHDAM),
Partitioned Hierarchical Indexed Direct Access Method (PHIDAM),
Partitioned Secondary Index (PSIMDEX),
Simple Hierarchical Sequential Access Method (SHSAM), and
Simple Hierarchical Indexed Sequential Access Method (SHISAM).
This is a different use of the term from the other access methods mentioned in this article.
Modern implementations
In the z/OS operating system, two elements provide access methods:
Data Facility Product
Communications Server
References
Computer file systems
Computer-related introductions in 1963
IBM mainframe operating systems | Operating System (OS) | 811 |
Free software
Free software (or libre software) is computer software distributed under terms that allow users to run the software for any purpose as well as to study, change, and distribute it and any adapted versions. Free software is a matter of liberty, not price; all users are legally free to do what they want with their copies of a free software (including profiting from them) regardless of how much is paid to obtain the program. Computer programs are deemed "free" if they give end-users (not just the developer) ultimate control over the software and, subsequently, over their devices.
The right to study and modify a computer program entails that source code—the preferred format for making changes—be made available to users of that program. While this is often called "access to source code" or "public availability", the Free Software Foundation (FSF) recommends against thinking in those terms, because it might give the impression that users have an obligation (as opposed to a right) to give non-users a copy of the program.
Although the term "free software" had already been used loosely in the past, Richard Stallman is credited with tying it to the sense under discussion and starting the free-software movement in 1983, when he launched the GNU Project: a collaborative effort to create a freedom-respecting operating system, and to revive the spirit of cooperation once prevalent among hackers during the early days of computing.
Context
Free software thus differs from:
proprietary software, such as Microsoft Office, Google Docs, Sheets, and Slides or iWork from Apple. Users cannot study, change, and share their source code.
freeware, which is a category of proprietary software that does not require payment for basic use.
For software under the purview of copyright to be free, it must carry a software license whereby the author grants users the aforementioned rights. Software that is not covered by copyright law, such as software in the public domain, is free as long as the source code is in the public domain too, or otherwise available without restrictions.
Proprietary software uses restrictive software licences or EULAs and usually does not provide users with the source code. Users are thus legally or technically prevented from changing the software, and this results in reliance on the publisher to provide updates, help, and support. (See also vendor lock-in and abandonware). Users often may not reverse engineer, modify, or redistribute proprietary software. Beyond copyright law, contracts and lack of source code, there can exist additional obstacles keeping users from exercising freedom over a piece of software, such as software patents and digital rights management (more specifically, tivoization).
Free software can be a for-profit, commercial activity or not. Some free software is developed by volunteer computer programmers while other is developed by corporations; or even by both.
Naming and differences with Open Source
Although both definitions refer to almost equivalent corpora of programs, the Free Software Foundation recommends using the term "free software" rather than "open-source software" (a younger vision coined in 1998), because the goals and messaging are quite dissimilar. According to the Free Software Foundation, "Open source" and its associated campaign mostly focus on the technicalities of the public development model and marketing free software to businesses, while taking the ethical issue of user rights very lightly or even antagonistically. Stallman has also stated that considering the practical advantages of free software is like considering the practical advantages of not being handcuffed, in that it is not necessary for an individual to consider practical reasons in order to realize that being handcuffed is undesirable in itself.
The FSF also notes that "Open Source" has exactly one specific meaning in common English, namely that "you can look at the source code." It states that while the term "Free Software" can lead to two different interpretations, at least one of them is consistent with the intended meaning unlike the term "Open Source". The loan adjective "libre" is often used to avoid the ambiguity of the word "free" in English language, and the ambiguity with the older usage of "free software" as public-domain software. (See Gratis versus libre.)
Definition and the Four Essential Freedoms of Free Software
The first formal definition of free software was published by FSF in February 1986. That definition, written by Richard Stallman, is still maintained today and states that software is free software if people who receive a copy of the software have the following four freedoms. The numbering begins with zero, not only as a spoof on the common usage of zero-based numbering in programming languages, but also because "Freedom 0" was not initially included in the list, but later added first in the list as it was considered very important.
Freedom 0: The freedom to run the program for any purpose.
Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
Freedom 2: The freedom to redistribute and make copies so you can help your neighbour.
Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.
Freedoms 1 and 3 require source code to be available because studying and modifying software without its source code can range from highly impractical to nearly impossible.
Thus, free software means that computer users have the freedom to cooperate with whom they choose, and to control the software they use. To summarize this into a remark distinguishing libre (freedom) software from gratis (zero price) software, the Free Software Foundation says: "Free software is a matter of liberty, not price. To understand the concept, you should think of 'free' as in 'free speech', not as in 'free beer'." (See Gratis versus libre.)
In the late 1990s, other groups published their own definitions that describe an almost identical set of software. The most notable are Debian Free Software Guidelines published in 1997, and the Open Source Definition, published in 1998.
The BSD-based operating systems, such as FreeBSD, OpenBSD, and NetBSD, do not have their own formal definitions of free software. Users of these systems generally find the same set of software to be acceptable, but sometimes see copyleft as restrictive. They generally advocate permissive free-software licenses, which allow others to use the software as they wish, without being legally required to provide the source code. Their view is that this permissive approach is more free. The Kerberos, X11, and Apache software licenses are substantially similar in intent and implementation.
Examples
There are thousands of free applications and many operating systems available on the Internet. Users can easily download and install those applications via a package manager that comes included with most Linux distributions.
The Free Software Directory maintains a large database of free-software packages. Some of the best-known examples include the Linux kernel, the BSD and Linux operating systems, the GNU Compiler Collection and C library; the MySQL relational database; the Apache web server; and the Sendmail mail transport agent. Other influential examples include the Emacs text editor; the GIMP raster drawing and image editor; the X Window System graphical-display system; the LibreOffice office suite; and the TeX and LaTeX typesetting systems.
History
From the 1950s up until the early 1970s, it was normal for computer users to have the software freedoms associated with free software, which was typically public-domain software. Software was commonly shared by individuals who used computers and by hardware manufacturers who welcomed the fact that people were making software that made their hardware useful. Organizations of users and suppliers, for example, SHARE, were formed to facilitate exchange of software. As software was often written in an interpreted language such as BASIC, the source code was distributed to use these programs. Software was also shared and distributed as printed source code (Type-in program) in computer magazines (like Creative Computing, SoftSide, Compute!, Byte, etc.) and books, like the bestseller BASIC Computer Games. By the early 1970s, the picture changed: software costs were dramatically increasing, a growing software industry was competing with the hardware manufacturer's bundled software products (free in that the cost was included in the hardware cost), leased machines required software support while providing no revenue for software, and some customers able to better meet their own needs did not want the costs of "free" software bundled with hardware product costs. In United States vs. IBM, filed January 17, 1969, the government charged that bundled software was anti-competitive. While some software might always be free, there would henceforth be a growing amount of software produced primarily for sale. In the 1970s and early 1980s, the software industry began using technical measures (such as only distributing binary copies of computer programs) to prevent computer users from being able to study or adapt the software applications as they saw fit. In 1980, copyright law was extended to computer programs.
In 1983, Richard Stallman, one of the original authors of the popular Emacs program and a longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU Project, the purpose of which was to produce a completely non-proprietary Unix-compatible operating system, saying that he had become frustrated with the shift in climate surrounding the computer world and its users. In his initial declaration of the project and its purpose, he specifically cited as a motivation his opposition to being asked to agree to non-disclosure agreements and restrictive licenses which prohibited the free sharing of potentially profitable in-development software, a prohibition directly contrary to the traditional hacker ethic. Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. He developed a free software definition and the concept of "copyleft", designed to ensure software freedom for all.
Some non-software industries are beginning to use techniques similar to those used in free software development for their research and development process; scientists, for example, are looking towards more open development processes, and hardware such as microchips are beginning to be developed with specifications released under copyleft licenses (see the OpenCores project, for instance). Creative Commons and the free-culture movement have also been largely influenced by the free software movement.
1980s: Foundation of the GNU Project
In 1983, Richard Stallman, longtime member of the hacker community at the MIT Artificial Intelligence Laboratory, announced the GNU Project, saying that he had become frustrated with the effects of the change in culture of the computer industry and its users. Software development for the GNU operating system began in January 1984, and the Free Software Foundation (FSF) was founded in October 1985. An article outlining the project and its goals was published in March 1985 titled the GNU Manifesto. The manifesto included significant explanation of the GNU philosophy, Free Software Definition and "copyleft" ideas.
1990s: Release of the Linux kernel
The Linux kernel, started by Linus Torvalds, was released as freely modifiable source code in 1991. The first licence was a proprietary software licence. However, with version 0.12 in February 1992, he relicensed the project under the GNU General Public License. Much like Unix, Torvalds' kernel attracted the attention of volunteer programmers.
FreeBSD and NetBSD (both derived from 386BSD) were released as free software when the USL v. BSDi lawsuit was settled out of court in 1993. OpenBSD forked from NetBSD in 1995. Also in 1995, The Apache HTTP Server, commonly referred to as Apache, was released under the Apache License 1.0.
Licensing
All free-software licenses must grant users all the freedoms discussed above. However, unless the applications' licenses are compatible, combining programs by mixing source code or directly linking binaries is problematic, because of license technicalities. Programs indirectly connected together may avoid this problem.
The majority of free software falls under a small set of licenses. The most popular of these licenses are:
The MIT License
The GNU General Public License v2 (GPLv2)
The Apache License
The GNU General Public License v3 (GPLv3)
The BSD License
The GNU Lesser General Public License (LGPL)
The Mozilla Public License (MPL)
The Eclipse Public License
The Free Software Foundation and the Open Source Initiative both publish lists of licenses that they find to comply with their own definitions of free software and open-source software respectively:
List of FSF approved software licenses
List of OSI approved software licenses
The FSF list is not prescriptive: free-software licenses can exist that the FSF has not heard about, or considered important enough to write about. It is therefore possible for a license to be free and not in the FSF list. The OSI list only lists licenses that have been submitted, considered and approved. All open-source licenses must meet the Open Source Definition in order to be officially recognized as open source software. Free software, on the other hand, is a more informal classification that does not rely on official recognition. Nevertheless, software licensed under licenses that do not meet the Free Software Definition cannot rightly be considered free software.
Apart from these two organizations, the Debian project is seen by some to provide useful advice on whether particular licenses comply with their Debian Free Software Guidelines. Debian does not publish a list of licenses, so its judgments have to be tracked by checking what software they have allowed into their software archives. That is summarized at the Debian web site.
It is rare that a license announced as being in compliance with the FSF guidelines does not also meet the Open Source Definition, although the reverse is not necessarily true (for example, the NASA Open Source Agreement is an OSI-approved license, but non-free according to the FSF).
There are different categories of free software.
Public-domain software: the copyright has expired, the work was not copyrighted (released without copyright notice before 1988), or the author has released the software onto the public domain with a waiver statement (in countries where this is possible). Since public-domain software lacks copyright protection, it may be freely incorporated into any work, whether proprietary or free. The FSF recommends the CC0 public domain dedication for this purpose.
Permissive licenses, also called BSD-style because they are applied to much of the software distributed with the BSD operating systems: many of these licenses are also known as copyfree as they have no restrictions on distribution. The author retains copyright solely to disclaim warranty and require proper attribution of modified works, and permits redistribution and modification, even closed-source ones. In this sense, a permissive license provides an incentive to create non-free software, by reducing the cost of developing restricted software. Since this is incompatible with the spirit of software freedom, many people consider permissive licenses to be less free than copyleft licenses.
Copyleft licenses, with the GNU General Public License being the most prominent: the author retains copyright and permits redistribution under the restriction that all such redistribution is licensed under the same license. Additions and modifications by others must also be licensed under the same "copyleft" license whenever they are distributed with part of the original licensed product. This is also known as a viral, protective, or reciprocal license. Due to the restriction on distribution not everyone considers this type of license to be free.
Security and reliability
There is debate over the security of free software in comparison to proprietary software, with a major issue being security through obscurity. A popular quantitative test in computer security is to use relative counting of known unpatched security flaws. Generally, users of this method advise avoiding products that lack fixes for known security flaws, at least until a fix is available.
Free software advocates strongly believe that this methodology is biased by counting more vulnerabilities for the free software systems, since their source code is accessible and their community is more forthcoming about what problems exist (this is called "Security Through Disclosure"), and that proprietary software systems can have undisclosed societal drawbacks, such as disenfranchising less fortunate would-be users of free programs. As users can analyse and trace the source code, many more people with no commercial constraints can inspect the code and find bugs and loopholes than a corporation would find practicable. According to Richard Stallman, user access to the source code makes deploying free software with undesirable hidden spyware functionality far more difficult than for proprietary software.
Some quantitative studies have been done on the subject.
Binary blobs and other proprietary software
In 2006, OpenBSD started the first campaign against the use of binary blobs in kernels. Blobs are usually freely distributable device drivers for hardware from vendors that do not reveal driver source code to users or developers. This restricts the users' freedom effectively to modify the software and distribute modified versions. Also, since the blobs are undocumented and may have bugs, they pose a security risk to any operating system whose kernel includes them. The proclaimed aim of the campaign against blobs is to collect hardware documentation that allows developers to write free software drivers for that hardware, ultimately enabling all free operating systems to become or remain blob-free.
The issue of binary blobs in the Linux kernel and other device drivers motivated some developers in Ireland to launch gNewSense, a Linux based distribution with all the binary blobs removed. The project received support from the Free Software Foundation and stimulated the creation, headed by the Free Software Foundation Latin America, of the Linux-libre kernel. As of October 2012, Trisquel is the most popular FSF endorsed Linux distribution ranked by Distrowatch (over 12 months). While Debian is not endorsed by the FSF and does not use Linux-libre, it is also a popular distribution available without kernel blobs by default since 2011.
Business model
Selling software under any free-software licence is permissible, as is commercial use. This is true for licenses with or without copyleft.
Since free software may be freely redistributed, it is generally available at little or no fee. Free software business models are usually based on adding value such as customization, accompanying hardware, support, training, integration, or certification. Exceptions exist however, where the user is charged to obtain a copy of the free application itself.
Fees are usually charged for distribution on compact discs and bootable USB drives, or for services of installing or maintaining the operation of free software. Development of large, commercially used free software is often funded by a combination of user donations, crowdfunding, corporate contributions, and tax money. The SELinux project at the United States National Security Agency is an example of a federally funded free-software project.
Proprietary software, on the other hand, tends to use a different business model, where a customer of the proprietary application pays a fee for a license to legally access and use it. This license may grant the customer the ability to configure some or no parts of the software themselves. Often some level of support is included in the purchase of proprietary software, but additional support services (especially for enterprise applications) are usually available for an additional fee. Some proprietary software vendors will also customize software for a fee.
The Free Software Foundation encourages selling free software. As the Foundation has written, "distributing free software is an opportunity to raise funds for development. Don't waste it!". For example, the FSF's own recommended license (the GNU GPL) states that "[you] may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee."
Microsoft CEO Steve Ballmer stated in 2001 that "open source is not available to commercial companies. The way the license is written, if you use any open-source software, you have to make the rest of your software open source." This misunderstanding is based on a requirement of copyleft licenses (like the GPL) that if one distributes modified versions of software, they must release the source and use the same license. This requirement does not extend to other software from the same developer. The claim of incompatibility between commercial companies and free software is also a misunderstanding. There are several large companies, e.g. Red Hat and IBM, which do substantial commercial business in the development of free software.
Economic aspects and adoption
Free software played a significant part in the development of the Internet, the World Wide Web and the infrastructure of dot-com companies. Free software allows users to cooperate in enhancing and refining the programs they use; free software is a pure public good rather than a private good. Companies that contribute to free software increase commercial innovation.
The economic viability of free software has been recognized by large corporations such as IBM, Red Hat, and Sun Microsystems. Many companies whose core business is not in the IT sector choose free software for their Internet information and sales sites, due to the lower initial capital investment and ability to freely customize the application packages. Most companies in the software business include free software in their commercial products if the licenses allow that.
Free software is generally available at no cost and can result in permanently lower TCO costs compared to proprietary software. With free software, businesses can fit software to their specific needs by changing the software themselves or by hiring programmers to modify it for them. Free software often has no warranty, and more importantly, generally does not assign legal liability to anyone. However, warranties are permitted between any two parties upon the condition of the software and its usage. Such an agreement is made separately from the free software license.
A report by Standish Group estimates that adoption of free software has caused a drop in revenue to the proprietary software industry by about $60 billion per year. Eric S. Raymond argued that the term free software is too ambiguous and intimidating for the business community. Raymond promoted the term open-source software as a friendlier alternative for the business and corporate world.
See also
Definition of Free Cultural Works
Digital rights
Free content
Libre knowledge
List of formerly proprietary software
List of free software project directories
List of free software for Web 2.0 Services
Open format
Open standard
Open-source hardware
Outline of free software
:Category:Free software lists and comparisons
Appropriate Technology
Sustainable Development
Notes
References
Further reading
Puckette, Miller. "Who Owns our Software?: A first-person case study." eContact (September 2009). Montréal: CEC
Hancock, Terry. "The Jargon of Freedom: 60 Words and Phrases with Context". Free Software Magazine. 2010-20-24
External links
Software licenses
Applied ethics | Operating System (OS) | 812 |
Linux console
The Linux console is a system console internal to the Linux kernel. A system console is the device which receives all kernel messages and warnings and which allows logins in single user mode. The Linux console provides a way for the kernel and other processes to send text output to the user, and to receive text input from the user. The user typically enters text with a computer keyboard and reads the output text on a computer monitor. The Linux kernel supports virtual consoles – consoles that are logically separate, but which access the same physical keyboard and display. The Linux console (and Linux virtual consoles) are implemented by the VT subsystem of the Linux kernel, and do not rely on any user space software. This is in contrast to a terminal emulator, which is a user space process that emulates a terminal, and is typically used in a graphical display environment.
The Linux console was one of the first features of the kernel and was originally written by Linus Torvalds in 1991 (see history of Linux). There are two main implementations: framebuffer and text mode. The framebuffer implementation is the default in modern Linux distributions, and together with kernel mode setting, provides kernel-level support for display hardware and features such as showing graphics while the system is booting. The legacy text mode implementation was used in PC-compatible systems with CGA, EGA, MDA and VGA graphics cards. Non-x86 architectures used framebuffer mode because their graphics cards did not implement text mode. The Linux console uses fixed-size bitmap, monospace fonts, usually defaulting to 8x16 pixels per character.
The Linux console is an optional kernel feature, and most embedded Linux systems do not enable it. These systems typically provide an alternative user interface (e.g. web based), or boot immediately into a graphical user interface and use this as the primary means of interacting with the user. Other implementations of the Linux console include the Braille console to support refreshable Braille displays and the serial port console.
Purpose
The Linux console provides a way for the kernel and other processes to output text-based messages to the user, and to receive text-based input from the user. In Linux, several devices can be used as system console: a virtual terminal, serial port, USB serial port, VGA in text-mode, framebuffer. Some modern Linux-based systems have deprecated kernel based text-mode input and output, and instead show a graphical logo or progress bar while the system is booting, followed by the immediate start of a graphical user interface (e.g. the X.Org Server on desktop distributions, or SurfaceFlinger on Android).
During kernel boot, the console is commonly used to display the boot log of the kernel. The boot log includes information about detected hardware, and updates on the status of the boot procedure. At this point in time, the kernel is the only software running, and hence logging via user-space (e.g. syslog) is not possible, so the console provides a convenient place to output this information. Once the kernel has finished booting, it runs the init process (also sending output to the console), which handles booting of the rest of the system including starting any background daemons.
After the init boot process is complete, the console will be used to multiplex multiple virtual terminals (accessible by pressing Ctrl-Alt-F1, Ctrl-Alt-F2 etc., Ctrl-Alt-LeftArrow, Ctrl-Alt-RightArrow, or using chvt). On each virtual terminal, a getty process is run, which in turn runs /bin/login to authenticate a user. After authentication, a command shell will be run. Virtual terminals, like the console, are supported at the Linux kernel level.
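Switching virtual terminals programmatically is a thin wrapper over kernel ioctls; the following sketch does roughly what chvt does (assuming sufficient privileges and /dev/tty0 as the console device):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vt.h>   /* VT_ACTIVATE, VT_WAITACTIVE */

int main(void) {
    /* Ask the kernel to switch to virtual terminal 2 and wait until
     * the switch has completed; this is essentially "chvt 2". */
    int fd = open("/dev/tty0", O_RDWR);
    if (fd < 0) { perror("open /dev/tty0"); return 1; }
    if (ioctl(fd, VT_ACTIVATE, 2) < 0)   perror("VT_ACTIVATE");
    if (ioctl(fd, VT_WAITACTIVE, 2) < 0) perror("VT_WAITACTIVE");
    close(fd);
    return 0;
}
```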
The Linux console implements a terminal type of "linux" and the escape sequences it uses are in the console_codes man page.
Virtual consoles
Virtual consoles allow the storage of multiple text buffers, enabling different console programs to run simultaneously but interact with the user in different contexts. From the user's point of view, this creates the illusion of several independent consoles.
Each virtual console can have its own character set and keyboard layout.
Linux 2.6 introduced the ability to load a different font for each virtual console (kernel versions predating 2.6 change the font only on demand).
Text mode console
The text mode implementation is used on PC-based systems with a legacy CGA/EGA/MDA/VGA video card that implements text-based video modes. In text mode, the kernel sends a 2D array of characters to the video card, and the video card converts the characters to pixels for display.
Font, character set and keyboard layout
The text buffer is a part of VGA memory which describes the content of a text screen in terms of code points and character attributes. Code points in the text buffer and font are generally not the same as the encoding used in text terminal semantics to put characters on the screen. The set of glyphs on the screen is determined by the current font. The text screen is handled by the kernel's console and virtual terminal drivers. Fonts and terminal encodings can be altered with a user-space utility such as consolechars (from the Linux Console Tools) or setfont.
The Linux kernel's keyboard driver has almost complete support for keyboard input (keyboard layouts), but it remains a bit inconsistent because it interacts badly with different character sets. Layouts are loaded with the loadkeys utility.
These utilities and the corresponding data files are packed in the Linux Console Tools (http://lct.sourceforge.net/), shipped with many Linux distributions.
Efforts on the internationalization of Linux at the kernel level started as early as 1994 with work by Markus Kuhn and Andries Brouwer.
Text modes
The Linux console is capable of supporting any VGA-style text mode, but the kernel itself has very limited means to set these modes up. SVGATextMode helps to enable more complex text modes than the standard EGA and VGA modes. It is fully compatible with Console Tools, but has some conflicts with dosemu, SVGAlib and display servers.
Currently, there is no support for different modes on different virtual consoles.
Comparison to Windows and DOS
Microsoft Windows (of any version) does not have fully functional console support. The comparable feature, available to application software only, is the Win32 console.
Unicode has been supported since the Windows NT-based systems, which allow switching code pages and using Unicode, but only in windowed mode. NT systems also use their own text buffer format, which is incompatible with VGA and produces overhead in hardware text modes. Non-NT versions of Windows have no Unicode support.
In addition, a non-ASCII keyboard layout has to be reloaded because of a flawed implementation.
Kernel mode setting in recent kernels makes this more practical for some video hardware.
Linux framebuffer console
The Linux framebuffer (fbdev) is a graphic hardware-independent abstraction layer, which was originally implemented to allow the Linux kernel to emulate a text console on systems such as the Apple Macintosh that do not have a text-mode display. It now offers kernel-space text mode emulation on any platform. Its advantages over the (currently unmaintained) SVGATextMode are reliability and better hardware compatibility. It also makes it possible to overcome all the technical restrictions of VGA text modes.
A Linux framebuffer console differs from a VGA console only in the way characters are drawn; the processing of keyboard events and the support for virtual consoles are exactly the same.
Linux serial port console
The Linux serial console is a console implementation via the serial port, enabled by the option CONFIG_SERIAL_CONSOLE in the kernel configuration. It may be used in some embedded systems, and on servers where direct interaction with an operator is not expected. The serial console allows the same mode of access to the system, but usually at a slower speed due to the small bandwidth of RS-232. A serial console is often used during development of software for embedded systems, and is sometimes left accessible via a debug port.
Control characters
The console responds to a number of control characters:
For ^[ press the Escape key.
The console also supports extended escape sequences, ANSI CSI Mode sequences, and DEC Private Mode sequences. These extended sequences can control colors, visual effects like blinking, underline, intensity and inverse video, bell tone frequency and duration, VESA screen blanking interval. Aside from the textual blanking, there is no known way to place the VGA adapter into standby.
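For illustration, a program can select these attributes simply by writing the documented escape sequences to the console; the specific SGR parameters below (bold/red, underline, reverse video) are standard ECMA-48 codes supported by the Linux console:

```c
#include <stdio.h>

int main(void) {
    /* CSI is ESC (octal 033) followed by '['; "1;31m" selects bold red,
     * "4m" underline, "7m" reverse video, and "0m" resets attributes. */
    printf("\033[1;31mbold red\033[0m plain ");
    printf("\033[4munderlined\033[0m ");
    printf("\033[7mreverse video\033[0m\n");
    return 0;
}
```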
Future plans
The Kmscon project aims to create a modern user-space replacement for the Linux console. Development priorities include support for multi-monitor setups, Unicode font rendering with Pango, XKB keyboard handling, and GPU OpenGL acceleration. Complaints about the current kernel implementation include "that it's a user-interface in kernel-space, the code is poorly maintained, handles keyboards badly, produces bad font rendering, misses out on mode-setting and multi-head support, contains no multi-seat awareness, and only has limited hot-plugging handling, limited to VT102 compliance."
List of /dev/ entries related to the console
See also
Windows Console
References
Computer terminals
Linux kernel
Text user interface | Operating System (OS) | 813 |
Linux gaming
Linux gaming refers to playing video games on a Linux operating system.
History
Linux gaming started largely as an extension of the already present Unix gaming scene, with both systems sharing many similar titles. These games were either mostly original or clones of arcade games and text adventures. A notable example of this was the so-called "BSD Games", a collection of interactive fiction and other text-mode titles. The free software and open source methodologies which spawned the development of the operating system in general also spawned the creation of various early free games. Popular early titles included NetHack, Netrek, XBill, XEvil, xbattle, Xconq and XPilot. As the operating system itself grew and expanded, the amount of free and open-source games also increased in scale and complexity.
1990–1998
The beginning of Linux as a gaming platform for commercial video games is widely credited to have begun in 1994 when Dave D. Taylor ported the game Doom to Linux, as well as many other systems, during his spare time. From there he would also help found the development studio Crack dot Com, which released the video game Abuse, with the game's Linux port even being published by Linux vendor Red Hat. id Software, the original developers of Doom, also continued to release their products for Linux. Their game Quake was ported to Linux in 1996, once again by Dave D. Taylor working in his free time. Later id products continued to be ported by David Kirsch and Timothee Besset, a practice that continued until the studio's acquisition by ZeniMax Media in 2009. In 1991 DUX Software contracted Don Hopkins to port SimCity to Unix, which he later ported to Linux and eventually released as open source for the OLPC XO Laptop. Other early commercial Linux games included Hopkins FBI, an adventure game released in 1998 by MP Entertainment, and Inner Worlds in 1996, which was released for and developed on Linux. In 1998, two programmers from Origin ported Ultima Online to Linux. A website called The Linux Game Tome began to catalog games created for or ported to Linux in 1995.
1998–2002
On November 9, 1998, a new software firm called Loki Software was founded by Scott Draeker, a former lawyer who became interested in porting games to Linux after being introduced to the system through his work as a software licensing attorney. Loki, although a commercial failure, is credited with the birth of the modern Linux game industry. Loki developed several free software tools, such as the Loki installer (also known as Loki Setup), and supported the development of the Simple DirectMedia Layer, as well as starting the OpenAL audio library project. These are still often credited as being the cornerstones of Linux game development. They were also responsible for bringing nineteen high-profile games to the platform before its closure in 2002. Loki's initial success also attracted other firms to invest in the Linux gaming market, such as Tribsoft, Hyperion Entertainment, Macmillan Digital Publishing USA, Titan Computer, Xatrix Entertainment, Philos Laboratories, and Vicarious Visions. During this time Michael Simms founded Tux Games, one of the first online Linux game retailers.
2002–2010
After Loki's closure, the Linux game market experienced some changes. Although some new firms, such as Linux Game Publishing and RuneSoft, would largely continue the role of a standard porting house, the focus began to change with Linux game proponents encouraging game developers to port their game products themselves or through individual contractors. Influential to this was Ryan C. Gordon, a former Loki employee who would over the next decade port several game titles to multiple platforms, including Linux. Around this time many companies, starting with id Software, also began to release legacy source code leading to a proliferation of source ports of older games to Linux and other systems. This also helped expand the already existing free and open-source gaming scene, especially with regards to the creation of free first person shooters.
The Linux gaming market also started to experience some growth towards the end of the decade with the rise of independent video game development, with many "indie" developers favouring support for multiple platforms. The Humble Indie Bundle initiatives helped to formally demonstrate this trend, with Linux users representing a sizable population of their purchase base, as well as consistently being the most financially generous in terms of actual money spent. The release of a Linux version of Desura, a digital distribution platform with a primary focus on small independent developers, was also heralded by several commentators as an important step to greater acknowledgement of Linux as a gaming platform. In 2009, the small indie game company Entourev LLC published Voltley to Linux which is the first commercial exclusive game for this operating system. In the same year, LGP released Shadowgrounds which was the first commercial game for Linux using the Nvidia PhysX middleware.
2010–present
In July 2012, game developer and content distributor Valve announced a port of their Source engine for Linux as well as stating their intention to release their Steam digital distribution service for Linux. The potential availability of a Linux Steam client has already attracted other developers to consider porting their titles to Linux, including previously Mac OS only porting houses such as Aspyr Media and Feral Interactive.
In November 2012, Unity Technologies ported their Unity engine and game creation system to Linux starting with version 4. All of the games created with the Unity engine can now be ported to Linux easily.
In September 2013 Valve announced that they were releasing a gaming oriented Linux based operating system called SteamOS with Valve saying they had "come to the conclusion that the environment best suited to delivering value to customers is an operating system built around Steam itself."
In March 2014 GOG.com announced they would begin to support Linux titles on their DRM free store starting the same year, after previously stating they would not be able due to too many distributions. GOG.com began their initial roll out on July 24, 2014, by offering 50 Linux supporting titles, including several new to the platform.
In March and April 2014 two major developers Epic Games and Crytek announced Linux support for their next generation engines Unreal Engine 4 and CryEngine respectively.
On August 22, 2018, Valve released their fork of Wine called Proton, aimed at gaming. It features some improvements over the vanilla Wine such as Vulkan-based DirectX 11 implementation, Steam integration, better full screen and game controller support and improved performance for multi-threaded games. It has since grown to include support for DirectX 9 and DirectX 12 over Vulkan.
On February 25, 2022, Valve released Steam Deck, a handheld game console running SteamOS 3.0.
Market share
The Steam Hardware Survey reports that as of July 2021, 1% of users were using some form of Linux as their platform's primary operating system. The Unity game engine used to make its statistics available, and in March 2016 it reported that Linux users accounted for 0.4% of players. In 2010, in the first Humble Bundle sales, Linux accounted for 18% of purchases.
Supported hardware
Linux as a gaming platform can also refer to operating systems based on the Linux kernel and specifically designed for the sole purpose of gaming. Examples are SteamOS, which is an operating system for Steam Machines, Steam Deck and general computers, video game consoles built from components found in the classical home computer, (embedded) operating systems like Tizen and Pandora, and handheld game consoles like GP2X, and Neo Geo X. The Nvidia Shield runs Android as an operating system, which is based on a modified Linux kernel.
The open source design of the Linux software platform allows the operating system to be compatible with various computer instruction sets and many peripherals, such as game controllers and head-mounted displays. As an example, HTC Vive, which is a virtual reality head-mounted display, supports the Linux gaming platform.
Performance
In 2013, tests by Phoronix showed real-world performance of games on Linux with proprietary Nvidia and AMD drivers were mostly comparable to results on Windows 8.1. Phoronix found similar results in 2015, though Ars Technica described a 20% performance drop with Linux drivers.
Software architecture
An operating system based on the Linux kernel and customized specifically for gaming could adopt the vanilla Linux kernel with only minor changes, or—like the Android operating system—be based on an extensively modified Linux kernel. It could adopt the GNU C Library, Bionic, or something like them. The entire middleware, or parts of it, could very well be closed-source and proprietary software; the same is true for the video games. There are free and open-source video games available for the Linux operating system, as well as proprietary ones.
Linux kernel
The subsystems already mainlined and available in the Linux kernel are most probably performant enough not to impede the gaming experience in any way; however, additional software is available, such as the Brain Fuck Scheduler (a process scheduler) or the Budget Fair Queueing (BFQ) scheduler (an I/O scheduler).
Similar to the way the Linux kernel can be, for example, adapted to run better on supercomputers, there are adaptations targeted at improving the performance of games. A project concerning itself with this issue is called Liquorix.
Available software for video game designers
Debuggers
Several game development tools have been available for Linux, including GNU Debugger, LLDB, Valgrind, glslang and others. VOGL, a debugger for OpenGL, was released on 12 March 2014. An open-source, cross-platform clone of Enterbrain's RPG Maker (2000, 2003, XP, VX), called OpenRPG Maker, is currently in development.
Available interfaces and SDKs
There are multiple interfaces and Software Development Kits available for Linux, and almost all of them are cross-platform. Most are free and open-source software subject to the terms of the zlib License, making it possible to statically link against them from fully closed-source proprietary software. One drawback of this abundance of interfaces is the difficulty programmers face in choosing the audio API best suited to their purpose. The main developer of the PulseAudio project, Lennart Poettering, has commented on this issue.
Physics engines and audio libraries that are available as modules for game engines have long been available for Linux.
The book Programming Linux Games covers a couple of the available APIs suited for video game development for Linux, while The Linux Programming Interface covers the Linux kernel interfaces in much greater detail.
Available middleware
Besides the majority of software that acts as an interface to various subsystems of the operating system, there is also software that can simply be described as middleware. A multitude of companies exist worldwide whose main or only product is software that is meant to be licensed and integrated into a game engine. Their primary target is the video game industry, but the film industry also utilizes such software for special effects. A few well-known examples are
classical physics: Havok, Newton Game Dynamics and PhysX
audio: Audiokinetic Wwise, FMOD
other: SpeedTree
A significant share of the available middleware already runs natively on Linux, but only a very few packages run exclusively on Linux.
Available IDEs and source code editors
Numerous source code editors and IDEs are available for Linux, among which are Visual Studio Code, Sublime Text, Code::Blocks, Qt Creator, Emacs, and Vim.
Multi-monitor
A multi-monitor setup is supported on Linux at least by AMD Eyefinity and AMD Catalyst, Xinerama, and RandR, on both X11 and Wayland. Serious Sam 3: BFE is one example of a game that runs natively on Linux, supports very high resolutions, and is validated by AMD to support their Eyefinity. Civilization V is another example; it even runs on a "Kaveri" desktop APU in 3x1 portrait mode.
Voice over IP
The specifications of the Mumble protocol are freely available and there are BSD-licensed implementations for both servers and clients. The positional audio API of Mumble is supported by e.g. Cube 2: Sauerbraten.
Wine
Wine is a compatibility layer that provides binary compatibility and makes it possible to run software written and compiled for Microsoft Windows on Linux. The Wine project hosts a user-submitted application database (known as Wine AppDB) that lists programs and games along with ratings and reviews which detail how well they run with Wine. Wine AppDB also has a commenting system, which often includes instructions on how to modify a system to run a certain game which cannot run on a normal or default configuration. Many games are rated as running flawlessly, and there are also many other games that can be run with varying degrees of success. The use of Wine for gaming has proved controversial in the Linux community, as some feel it is preventing, or at least hindering, the further growth of native gaming on the platform.
Emulators
There are numerous emulators for Linux. There are also APIs, virtual machines, and machine emulators that provide binary compatibility:
Basilisk II for the 68040 Macintosh;
DOSBox and DOSEMU for MS-DOS/PC DOS and compatibles;
DeSmuME and melonDS for the Nintendo DS;
Dolphin for the Nintendo GameCube, Wii, and the Triforce;
FCEUX, Nestopia and TuxNES for the Nintendo Entertainment System;
Frotz for Z-Machine text adventures;
Fuse for the Sinclair ZX Spectrum;
Hatari for the Atari ST, STe, TT and Falcon;
gnuboy for the Nintendo Game Boy and Game Boy Color;
MAME for arcade games;
Mednafen and Xe emulating multiple hardware platforms including some of the above;
Mupen64Plus and the no longer actively developed original Mupen64 for the Nintendo 64;
PCSX-Reloaded, pSX and the Linux port of ePSXe for the PlayStation;
PCSX2 for the PlayStation 2;
PPSSPP for the PlayStation Portable;
ScummVM for LucasArts and various other adventure games;
SheepShaver for the PowerPC Macintosh;
Snes9x, higan and ZSNES for the Super NES;
Stella for the Atari 2600;
UAE for the Amiga;
VICE for the Commodore 64;
VisualBoyAdvance for the Game Boy Advance;
vMac for the 680x0 Macintosh.
Linux homebrew on consoles
Linux has been ported to several game consoles, including the Xbox, PlayStation 2, PlayStation 3, PlayStation 4, GameCube, and Wii, which allows game developers without an expensive game development kit to access console hardware. Several gaming peripherals also work with Linux.
Linux adoption
Adoption by game engines
The game engine is the software solely responsible for the game mechanics, or rules defining game play. There are different game engines for first-person shooters, strategy video games, etc. Besides the game mechanics, software is also needed to handle graphics, audio, physics, input handling, and networking.
Game engines that are used by many video games and run on top of Linux include:
C4 Engine (Terathon Software)
CryEngine (Crytek)
Diesel 2.0 (Grin)
HPL Engine 1–3 (Frictional Games)
id Tech (id Software)
Serious Engine (Croteam)
Source (Valve)
Unigine (Unigine Corp)
Unity 5 (Unity Technologies)
Unreal Engine 1-4 (Epic Games)
Godot engine
Adoption by video games
There are many free and open-source video games as well as commercially distributed proprietary video games that run natively on Linux. Some independent companies have also begun porting prominent video games from Microsoft Windows to Linux.
Free and open-source games
Original games
A few original open source video games have attained notability:
0 A.D. is a real-time strategy game of ancient warfare, similar to Age of Empires.
AssaultCube is a first-person shooter.
AstroMenace is a 3D scroll-shooter.
BZFlag is a 3D first-person tank shooter (with jumping).
Battle for Wesnoth is a turn-based strategy game.
Blob Wars: Metal Blob Solid is a 2D platform game.
Chromium B.S.U. is a fast-paced, arcade-style, top-scrolling space shooter.
CodeRED: Alien Arena is a sci-fi first-person shooter derived from the Quake II engine.
Crimson Fields is a turn-based tactical wargame.
Cube 2: Sauerbraten is a 3D first-person shooter with an integrated map editing mode.
Danger from the Deep is a submarine simulator set in World War II.
Glest is a real-time strategy game, with optional multiplayer.
NetHack and Angband are text-based computer role-playing games.
Netrek is a Star Trek themed multiplayer 2D space battle game.
Nexuiz is a first-person shooter; it has since been superseded by Xonotic.
Project: Starfighter is a multi-directional, objective-based shoot-'em-up.
TORCS (The Open Racing Car Simulator) – considered one of the best open-source racing simulators, with realistic graphics and vehicle handling.
Tremulous is a 3D first-person shooter/real-time strategy game.
Tux Racer is a 3D racing game featuring Tux.
Urban Terror is a standalone Quake III Arena first-person shooter (a proprietary mod).
Vega Strike is a space flight simulation.
Warsow is a Quake-like, fast-paced first-person shooter.
Clones and remakes
There are a larger number of open source clones and remakes of classic games:
FreeCiv is a clone of Civilization II.
FreeOrion is inspired by Master of Orion.
Frets on Fire is a clone of Guitar Hero.
Frozen Bubble is a clone of Puzzle Bobble.
Grid Wars is a clone of Geometry Wars.
Head Over Heels, a ZX-Spectrum action platformer, was remade for Linux, Windows, Mac OS X, and BeOS.
Oolite is a free and open-source remake of Elite.
OpenClonk is a free and open-source remake of Clonk.
OpenTTD is a remake of Transport Tycoon Deluxe.
OpenMW game engine reimplementation of Morrowind.
Performous is a remix of the ideas behind Guitar Hero, SingStar and Dance Dance Revolution.
Pingus is a clone of Lemmings.
Scorched 3D is a 3D adaptation of Scorched Earth.
Spring was originally a clone of Total Annihilation, but has since become a platform for real-time strategy games.
StepMania is a clone of Dance Dance Revolution.
SuperTuxKart and TuxKart are clones of Mario Kart.
SuperTux and Secret Maryo Chronicles are both clones of Super Mario Bros.
The Dark Mod is a stealth game inspired by the Thief series (particularly the first two games) from Looking Glass Studios.
The Zod Engine is an actively developed open source remake of the game Z.
UFO: Alien Invasion is heavily influenced by the X-COM series, mostly by UFO: Enemy Unknown.
UltraStar is an open source clone of SingStar.
Ur-Quan Masters is based on the original source code for Star Control II.
Warzone 2100 is a real-time strategy and real-time tactics hybrid computer game. Originally published by Eidos Interactive and later released as open source.
Widelands is a clone of The Settlers II.
Bill Kendrick has developed many free software games, most inspired by games for the Atari 8-bit and other classic systems.
Proprietary games
Available on Steam
Valve officially released Steam for Linux on February 14, 2013. The number of Linux-compatible games on Steam now exceeds 6,500. With the launch of SteamOS, a distribution of Linux made by Valve intended to be used for HTPC gaming, that number is quickly growing. Listed below are some notable games available on Steam for Linux:
Age of Wonders III
Alien: Isolation
American Truck Simulator
And Yet It Moves
Another World
Aquaria
Bastion
The Binding of Isaac
BioShock Infinite
Borderlands 2
Borderlands: The Pre-Sequel!
Braid
Brütal Legend
Cave Story+
Civilization V
Civilization VI
Civilization: Beyond Earth
Counter-Strike
Counter-Strike: Global Offensive
Counter-Strike: Source
Day of the Tentacle Remastered
Dead Island
Deus Ex: Mankind Divided
Dirt Rally
Don't Starve
Dota 2
Empire: Total War
Fez
Freedom Planet
GRID Autosport
Grim Fandango Remastered
Half-Life
Half-Life 2
Hitman
Hitman Go
Kerbal Space Program
Lara Croft Go
Left 4 Dead 2
Life Is Strange
Life Is Strange 2
Limbo
Mad Max
Madout Big City Online
Metro 2033
Metro: Last Light
Middle-earth: Shadow of Mordor
Mini Metro
Pillars of Eternity
Portal
Portal 2
Rocket League
Saints Row 2
Saints Row IV
Saints Row: The Third
Shovel Knight
Skullgirls
Spec Ops: The Line
Star Wars Knights of the Old Republic II: The Sith Lords
Super Meat Boy
System Shock 2
The Talos Principle
Tank Force
Team Fortress 2
Tomb Raider
Total War: Warhammer
TowerFall Ascension
Undertale
VVVVVV
The Witcher 2: Assassins of Kings
XCOM: Enemy Unknown
XCOM 2
Independent game developers
Independent developer 2D Boy released World of Goo for Linux. Role-playing video game titles like Eschalon: Book I, Eschalon: Book II and Penny Arcade Adventures: On the Rain-Slick Precipice of Darkness were developed cross-platform from the start of development, including a Linux version. Sillysoft released Linux versions of their game Lux and its various versions.
Hemisphere Games has released a Linux version of Osmos. Koonsolo has released a Linux version of Mystic Mine. Amanita Design has released Linux versions of Machinarium and Samorost 2. Irrgheist released a Linux version of their futuristic racing game H-Craft Championship. Gamerizon has released a Linux version of QuantZ. InterAction Studios has several titles mostly in the Chicken Invaders series.
Kristanix Games has released Linux versions of Crossword Twist, Fantastic Farm, Guess The Phrase!, Jewel Twist, Kakuro Epic, Mahjong Epic, Maxi Dice, Solitaire Epic, Sudoku Epic, Theseus and the Minotaur. Anawiki Games has released a Linux versions of Path of Magic, Runes of Avalon, Runes of Avalon 2, Soccer Cup Solitaire, The Perfect Tree and Dress-Up Pups. Gaslamp Games released a Linux version of Dungeons of Dredmor. Broken Rules has released a Linux version of And Yet It Moves.
Frictional Games released Linux versions of both Penumbra: Black Plague and Penumbra: Overture, as well as the expansion pack Penumbra: Requiem. They also released Amnesia: The Dark Descent for Linux simultaneously with the Windows and Mac OS X versions. S2 Games released Linux clients for their titles Savage: The Battle for Newerth, Savage 2: A Tortured Soul and Heroes of Newerth. Wolfire Games released a Linux version of their game Lugaru and they will release its sequel Overgrowth for Linux. David Rosen's Black Shades was also ported to Linux. Arctic Paint has released a Linux version of Number Drill. Charlie's Games has released a Linux version of Bullet Candy Perfect, Irukandji, Space Phallus and Scoregasm.
Illwinter Game Design released Conquest of Elysium II, Dominions: Priests, Prophets and Pretenders, Dominions II: The Ascension Wars, and Dominions 3: The Awakening for Linux. Introversion Software released Darwinia, Uplink, and DEFCON. Cartesian Theatre is a Vancouver, British Columbia, Canada, based software house specializing in free, commercial, games for Linux. They have one title currently under active development, Avaneya. Kot-in-Action Creative Artel released their Steel Storm games for Linux. Hazardous Software have released their game Achron for Linux.
Unigine Corp developed Oil Rush using its Unigine engine technology that works on Linux. Unigine Corp was also developing a "shooter-type game" that would have been released for Linux, currently the development on this game is frozen until OilRush is released. The MMORPG game Syndicates of Arkon is also supposed to be coming to Linux. The game Dilogus: The Winds of War is also being developed with Unigine and is planned to have a Linux client.
A number of visual novel developers support Linux. Winter Wolves has released titles such as Spirited Heart, Heileen, The Flower Shop, Bionic Heart, Card Sweethearts, Vera Blanc, Planet Stronghold, and Loren The Amazon Princess for Linux. Hanako Games has released Science Girls, Summer Session, Date Warp, Cute Knight Kingdom, and are considering porting Fatal Hearts to Linux. sakevisual has brought Jisei, Kansei, Yousei, RE: Alistair and Ripples to Linux. Four Leaf Studios has also released Katawa Shoujo for Linux and Christine Love released Digital: A Love Story, both of which, along with Summer Session mentioned previously, are powered by the free software Ren'Py tool.
The Java-based sandbox game Minecraft by Indie developer Mojang is available on Linux, as is any other video game compiled for the Java virtual machine.
Dwarf Fortress, a sandbox management simulator / roguelike, has been made available for Linux by Tarn Adams.
The voxel-based space sandbox game, ScrumbleShip by Indie developer Dirkson is currently under development for Linux, Mac OS X, and Windows.
The realistic replay baseball simulation Out of the Park Baseball by OOTP Developments is currently available for Linux, Mac OS X, and Windows, for single player and multiplayer online leagues.
Grappling Hook, a first-person-shooter-like puzzle game.
The German indie-studio Pixel Maniacs has released both of their games, ChromaGun and Can't Drive This for Linux.
In the Walking Simulator space, Dan Ruscoe's Dark Hill Museum of Death is available for Linux.
Game porters
Independent companies have also taken on the task of porting prominent Windows games to Linux. Loki Software was the first such company, and between 1998 and 2002 ported Civilization: Call to Power, Descent³, Eric's Ultimate Solitaire, Heavy Gear II,
Heavy Metal: F.A.K.K.², Heretic II, Heroes of Might and Magic III, Kohan: Immortal Sovereigns, Myth II: Soulblighter, Postal, Railroad Tycoon II, Quake III Arena, Rune, Sid Meier's Alpha Centauri, Sim City 3000, Soldier of Fortune, Tribes 2, and MindRover to Linux.
Tribsoft created a Linux version of Jagged Alliance 2 by Sir-Tech Canada before shutting down in 2002. Linux Game Publishing was founded in 2001 in response to the impending demise of Loki, and has brought Creatures: Internet Edition, Candy Cruncher, Majesty: Gold Edition, NingPo MahJong, Hyperspace Delivery Boy!, Software Tycoon, Postal²: Share The Pain, Soul Ride, X2: The Threat, Gorky 17, Cold War, Knights and Merchants: The Shattered Kingdom, Ballistics, X3: Reunion, Jets'n'Guns, Sacred: Gold, Shadowgrounds, and Shadowgrounds Survivor to Linux. Some of these games were ported for them by Gordon.
Frank C. Earl, a freelance consultant associated with LGP, is porting the game Caster to Linux and has released the first episode; he also developed the Linux version of Cortex Command included in the second Humble Indie Bundle. He is also working towards other porting projects, such as the entire Myth series. He largely takes recommendations, and he comments as part of the Phoronix community. icculus.org has ported beta releases for Medal of Honor: Allied Assault and Devastation, versions of America's Army, and the titles Prey, Aquaria, Braid, Hammerfight and Cogs.
The German publisher RuneSoft was founded in 2000. They ported the games Northland,
Robin Hood: The Legend of Sherwood, Airline Tycoon Deluxe, Ankh, Ankh: Heart of Osiris, Barkanoid 2, and Jack Keane to Linux, as well as porting Knights and Merchants: The Shattered Kingdom and Software Tycoon, for Linux Game Publishing. Hyperion Entertainment ported games to several systems, they have ported Shogo: Mobile Armor Division and SiN to Linux, as well as porting Gorky 17 for Linux Game Publishing. Wyrmkeep Entertainment has brought the games The Labyrinth of Time and Inherit the Earth: Quest for the Orb to Linux. Alternative Games brought Trine and Shadowgrounds, and Shadowgrounds Survivor for Linux Game Publishing.
Aspyr Media released their first Linux port in June 2014; they state they are porting to Linux because of Valve bringing out SteamOS. Aspyr Media later ported Borderlands 2 to Linux in September 2014.
Having ported games to Mac OS X since 1996, video game publisher Feral Interactive released XCOM: Enemy Unknown, its first game for Linux, in June 2014. Feral Interactive stated they port games to Linux thanks to SteamOS.
Other developers
Some id Software employees ported the Doom series, the Quake series, Return to Castle Wolfenstein, Wolfenstein: Enemy Territory and Enemy Territory: Quake Wars. Some games published by GarageGames which have Linux versions include Bridge Builder, Marble Blast Gold, Gish, Tribal Trouble, and Dark Horizons: Lore Invasion.
MP Entertainment released Hopkins FBI and Crack dot com released Abuse for Linux, becoming one of the first developers to release a native port. Inner Worlds, another early commercial Linux title, was released for and developed on Linux. Philos Laboratories released a Linux version of Theocracy on the retail disk. Absolutist has supported Linux for a number of years. GLAMUS GmbH released a Linux version of their game Mobility. Vicarious Visions ported the space-flight game Terminus to Linux.
Lava Lord Games released their game Astro Battle for Linux. Xatrix Entertainment released a Linux version of Kingpin: Life of Crime. BioWare released Neverwinter Nights for Linux. Croteam released the Serious Sam series, with the first game ported by Gordon and with the second self-ported. Gordon also ported Epic Games' shooter games Unreal Tournament 2003 and Unreal Tournament 2004.
Revolution System Games released their game Decadence: Home Sweet Home through Steam only for Linux for a period of time after the Mac and Windows releases.
On 12 October 2013 Lars Gustavsson, creative director at DICE, said to polygon.com
Commercial games for non-x86 instruction sets
Some companies ported games to Linux running on instruction sets other than x86, such as Alpha, PowerPC, SPARC, MIPS or ARM. Loki Entertainment Software ported Civilization: Call to Power, Eric's Ultimate Solitaire, Heroes of Might and Magic III, Myth II: Soulblighter, Railroad Tycoon II Gold Edition and Sid Meier's Alpha Centauri with the Alien Crossfire expansion pack to Linux PowerPC. They also ported Civilization: Call to Power, Eric's Ultimate Solitaire and Sid Meier's Alpha Centauri with the Alien Crossfire expansion pack to Linux Alpha, and Civilization: Call to Power and Eric's Ultimate Solitaire to Linux SPARC. Linux Game Publishing published Candy Cruncher, Majesty Gold, NingPo MahJong and Soul Ride for Linux PowerPC. They also ported Candy Cruncher and Soul Ride to Linux SPARC, and Soul Ride to Linux Alpha. Illwinter Game Design ported Dominions: Priests, Prophets & Pretenders, Dominions II: The Ascension Wars and Dominions 3 to Linux PowerPC, and Conquest of Elysium 3 and Dominions 4: Thrones of Ascension to the Raspberry Pi. Hyperion Entertainment ported SiN to Linux PowerPC, published by Titan Computer, and Gorky 17 to Linux PowerPC, which was later published by LGP. Runesoft hired Gunnar von Boehn, who ported Robin Hood – The Legend of Sherwood to Linux PowerPC. Later, Runesoft ported Airline Tycoon Deluxe to the Raspberry Pi running Debian GNU/Linux.
Source ports
Several developers have released the source code to many of their legacy titles, allowing them to be run as native applications on many alternative platforms, including Linux. Examples of games which were ported to Linux this way include Duke Nukem 3D, Shadow Warrior, Rise of the Triad, Ken's Labyrinth, Seven Kingdoms, Warzone 2100, Homeworld, Call to Power II, Wolfenstein 3D, Heretic, Hexen, Hexen II, Aliens versus Predator, Descent, Descent II and Freespace 2. Several game titles that were previously released for Linux were also able to be expanded or updated because of the availability of game code, including Doom, Abuse, Quake, Quake II, Quake III Arena and Jagged Alliance 2. Some derivatives based on released source code have also been released for Linux, such as Aleph One and Micropolis for Marathon 2: Durandal and SimCity respectively.
Certain game titles were even able to be ported due to availability of shared engine code even though the game's code itself remains proprietary or otherwise unavailable, such as the video game Strife or the multiplayer component of Star Trek: Voyager – Elite Force. Some games have even been ported entirely or partially by reverse engineering and game engine recreation such as WarCraft II through Wargus or Commander Keen. Another trick is to attempt hacking the game to work as a mod on another native title, such as with the original Unreal. Additionally, some games can be run through the use of Linux specific runtime environments, such as the case of certain games made with Adventure Game Studio such as the Chzo Mythos or certain titles made with the RPG Maker tool. Games derived from released code, with both free and proprietary media, that are released for Linux include Urban Terror, OpenArena, FreeDoom, World of Padman, Nexuiz/Xonotic, War§ow and Excalibur: Morgana's Revenge.
Massively multiplayer online role-playing games
This is a selected list of MMORPGs that are native on Linux:
A Tale in the Desert III (2003, eGenesis) – A trading and crafting game, set in ancient Egypt, pay-to-play.
Crossfire (1992) – A medieval fantasy 2D game.
Diaspora (1999, Altitude Productions) – 2D Space trading MMORPG. (Project Diaspora version has a Linux client.)
Dofus (2005, Ankama Games) – A 2D fantasy MMORPG.
Eternal Lands (2003, Radu Privantu) – A 3D fantasy free-to-play MMORPG.
PlaneShift – A free 3D fantasy game.
Regnum Online – A 3D fantasy game, free-to-play with premium content.
RuneScape – Java fantasy 3rd person MMORPG.
Salem – An isometric, 3D fantasy game with a focus on crafting and permadeath.
Shroud of the Avatar – An isometric, 3D fantasy game and the spiritual successor to Ultima Online.
Spiral Knights – Java fantasy 3rd person game.
The Saga of Ryzom – has a Linux client and source code available.
Tibia – A 2D medieval fantasy MMORPG, free-to-play with premium content. One of the oldest MMORPGs, created in January 1997, with an official Linux client.
Ultima Online has an unofficial Linux client.
Vendetta Online – A 3D spacecraft MMOFPS with growing RPG elements, pay to play. Maintains both Linux/32 and Linux/64 clients.
WorldForge – A game engine. There are Linux clients available.
Wyvern – A 2D fantasy MMORPG that runs on Java.
Yohoho! Puzzle Pirates – A puzzle game which runs on Java.
Many Virtual Worlds – (such as Second Life) also have Linux clients.
Types of Linux gaming
Libre gaming
Libre gaming is a form of Linux gaming that emphasizes libre software.
See also
Directories and lists
Free Software Directory
List of emulators
List of open source games
List of video game console emulators
Linux gaming software
Direct3D (alternative implementation)
Lutris
Proton (software)
Vulkan (API)
Wine (software)
Other articles
Linux for PlayStation 2
Sega Lindbergh
References
Gaming
Video game platforms
Edinburgh Multiple Access System
The Edinburgh Multi-Access System (EMAS) was a mainframe computer operating system developed at the University of Edinburgh during the 1970s.
EMAS was a powerful and efficient general purpose multi-user system which coped with many of the computing needs of the University of Edinburgh and the University of Kent (the only other site outside Edinburgh to adopt the operating system).
History
Originally running on the ICL System 4/75 mainframe (based on the design of the IBM 360) it was later reimplemented on the ICL 2900 series of mainframes (as EMAS 2900 or EMAS-2) where it ran in service until the mid-1980s. Near the end of its life, the refactored version was back-ported (as EMAS-3) to the Amdahl 470 mainframe clone, and thence to the IBM System/370-XA architecture (the latter with help from the University of Kent, although they never actually ran EMAS-3). The National Advanced System (NAS) VL80 IBM mainframe clone followed later. The final EMAS system (the Edinburgh VL80) was decommissioned in July 1992.
The University of Kent system went live in December 1979, and ran on the least powerful machine in the ICL 2900 range - an ICL 2960, with 2MB of memory, executing about 290k instructions per second. Despite this, it reliably supported around 30 users. This number increased in 1983 with the addition of an additional 2MB of memory and a second Order Code Processor (OCP) (what is normally known as a CPU) running with symmetric multiprocessing. This system was decommissioned in August 1986.
Features
EMAS was written entirely in the Edinburgh IMP programming language, with only a small number of critical functions using embedded assembler within IMP sources. It had several features that were advanced for the time, including dynamic linking, multi-level storage, an efficient scheduler, a separate user-space kernel ('director'), a user-level shell ('basic command interpreter'), a comprehensive archiving system and a memory-mapped file architecture.
Such features led EMAS supporters to claim that their system was superior to Unix for the first 20 years of the latter's existence.
Legacy
The Edinburgh Computer History Project is attempting to salvage some of the lessons learned from the EMAS project and has the complete source code of EMAS online for public browsing.
See also
Atlas Autocode
References
History of computing in the United Kingdom
Time-sharing operating systems
1970s software
University of Edinburgh School of Informatics
Avionics software
Avionics software is embedded software with legally mandated safety and reliability concerns used in avionics. The main difference between avionic software and conventional embedded software is that the development process is required by law and is optimized for safety.
It is claimed that the process described below is only slightly slower and more costly (perhaps 15 percent) than the normal ad hoc processes used for commercial software. Since most software fails because of mistakes, eliminating the mistakes at the earliest possible step is also a relatively inexpensive and reliable way to produce software. In some projects however, mistakes in the specifications may not be detected until deployment. At that point, they can be very expensive to fix.
The basic idea of any software development model is that each step of the design process has outputs called "deliverables." If the deliverables are tested for correctness and fixed, then normal human mistakes can not easily grow into dangerous or expensive problems. Most manufacturers follow the waterfall model to coordinate the design product, but almost all explicitly permit earlier work to be revised. The result is more often closer to a spiral model.
For an overview of embedded software see embedded system and software development models. The rest of this article assumes familiarity with that information, and discusses differences between commercial embedded systems and commercial development models.
General overview
Since most avionics manufacturers see software as a way to add value without adding weight, the importance of embedded software in avionic systems is increasing.
Most modern commercial aircraft with auto-pilots use flight computers and so called flight management systems (FMS) that can fly the aircraft without the pilot's active intervention during certain phases of flight. Also under development or in production are unmanned vehicles: missiles and drones which can take off, cruise and land without airborne pilot intervention.
In many of these systems, failure is unacceptable. The reliability of the software running in airborne vehicles (civil or military) is shown by the fact that most airborne accidents occur due to manual errors. Unfortunately, reliable software is not necessarily easy to use or intuitive; poor user interface design has been a contributing cause of many aerospace accidents and deaths.
Regulatory issues
Due to safety requirements, most nations regulate avionics, or at least adopt standards in use by a group of allies or a customs union. The three regulatory organizations that most affect international aviation development are the U.S., the E.U., and Russia.
In the U.S., avionic and other aircraft components have safety and reliability standards mandated by the Federal Aviation Regulations, Part 25 for Transport Airplanes, Part 23 for Small Airplanes, and Parts 27 and 29 for Rotorcraft. These standards are enforced by "designated engineering representatives" of the FAA who are usually paid by a manufacturer and certified by the FAA.
In the European Union the IEC describes "recommended" requirements for safety-critical systems, which are usually adopted without change by governments. A safe, reliable piece of avionics has a "CE Mark." The regulatory arrangement is remarkably similar to fire safety in the U.S. and Canada. The government certifies testing laboratories, and the laboratories certify both manufactured items and organizations. Essentially, the oversight of the engineering is outsourced from the government and manufacturer to the testing laboratory.
To assure safety and reliability, national regulatory authorities (e.g. the FAA, CAA, or DOD) require software development standards. Some representative standards include MIL-STD-2167 for military systems, or RTCA DO-178B and its successor DO-178C for civil aircraft.
The regulatory requirements for this software can be expensive compared to other software, but they are usually the minimum that is required to produce the necessary safety.
Development process
The main difference between avionics software and other embedded systems is that the actual standards are often far more detailed and rigorous than commercial standards, usually described by documents with hundreds of pages. It is usually run on a real-time operating system.
Since the process is legally required, most processes have documents or software to trace requirements from numbered paragraphs in the specifications and designs to exact pieces of code, with exact tests for each, and a box on the final certification checklist. This is specifically to prove conformance to the legally mandated standard.
Deviations from a specific project to the processes described here can occur due to usage of alternative methods or low safety level requirements.
Almost all software development standards describe how to perform and improve specifications, designs, coding, and testing (See software development model). However avionics software development standards add some steps to the development for safety and certification:
Human interfaces
Projects with substantial human interfaces are usually prototyped or simulated. The videotape is usually retained, but the prototype retired immediately after testing, because otherwise senior management and customers can believe the system is complete. A major goal is to find human-interface issues that can affect safety and usability.
Hazard analysis
Safety-critical avionics usually have a hazard analysis. In the early stages of the project, the designers already have at least a vague idea of the main parts of the system. An engineer then takes each block of a block diagram and considers the things that could go wrong with that block, and how they affect the system as a whole. Subsequently, the severity and probability of the hazards are estimated. The problems then become requirements that feed into the design's specifications.
Projects involving military cryptographic security usually include a security analysis, using methods very like the hazard analysis.
Maintenance manual
As soon as the engineering specification is complete, writing the maintenance manual can start. A maintenance manual is essential to repairs, and of course, if the system can't be fixed, it will not be safe.
There are several levels to most standards. A low-safety product such as an in-flight entertainment unit (a flying TV) may escape with a schematic and procedures for installation and adjustment. A navigation system, autopilot or engine may have thousands of pages of procedures, inspections and rigging instructions. Documents are now (2003) routinely delivered on CD-ROM, in standard formats that include text and pictures.
One of the odder documentation requirements is that most commercial contracts require an assurance that system documentation will be available indefinitely. The normal commercial method of providing this assurance is to form and fund a small foundation or trust. The trust then maintains a mailbox and deposits copies (usually in ultrafiche) in a secure location, such as rented space in a university's library (managed as a special collection), or (more rarely now) buried in a cave or a desert location.
Design and specification documents
These are usually much like those in other software development models. A crucial difference is that requirements are usually traced as described above. In large projects, requirements-traceability is such a large expensive task that it requires large, expensive computer programs to manage it.
Code production and review
The code is written, then usually reviewed by a programmer (or group of programmers, usually working independently) who did not write it originally (another legal requirement). Special organizations also usually conduct code reviews with a checklist of possible mistakes. When a new type of mistake is found, it is added to the checklist and fixed throughout the code.
The code is also often examined by special programs that analyze correctness (static code analysis), such as the SPARK Examiner for SPARK (a subset of the Ada programming language) or lint for the C family of programming languages (primarily C).
The compilers or special checking programs like "lint" check whether types of data are compatible with the operations performed on them; such tools are also regularly used to enforce strict usage of valid programming language subsets and programming styles.
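To give a sense of the defects such tooling is meant to catch, below is a minimal, purely illustrative C fragment (not taken from any avionics code base): it contains an implicit signed-to-unsigned narrowing conversion and an ignored return value, both of which a compiler at high warning levels or a lint-style checker would typically flag even though the code compiles and runs.

    /* Illustrative only: defects that static checkers and compiler
       warnings are expected to flag during a coding-standard review. */
    #include <stdio.h>

    static int read_sensor(void)        /* returns a signed raw reading */
    {
        return -40;
    }

    int main(void)
    {
        unsigned char value = read_sensor(); /* implicit narrowing/sign conversion: flagged */
        printf("reading: %d\n", value);      /* prints 216, not -40: the latent bug */
        remove("does_not_exist.tmp");        /* return value silently ignored: flagged */
        return 0;
    }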
Another set of programs measure software metrics, to look for parts of the code that are likely to have mistakes.
All the problems are fixed, or at least understood and double-checked.
Some code, such as digital filters, graphical user interfaces and inertial navigation systems, are so well understood that software tools have been developed to write the software. In these cases, specifications are developed and reliable software is produced automatically.
Unit testing
"Unit test" code is written to exercise every instruction of the code at least once to get 100% code coverage. A "coverage" tool is often used to verify that every instruction is executed, and then the test coverage is documented as well, for legal reasons.
This test is among the most powerful. It forces detailed review of the program logic, and detects most coding, compiler and some design errors. Some organizations write the unit tests before writing the code, using the software design as a module specification. The unit test code is executed, and all the problems are fixed.
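As a minimal illustration of this practice, the sketch below shows a small, hypothetical decision function and a unit-test driver whose three test cases execute every statement and branch at least once; in a real project a coverage tool would then be run and its report archived to document that full coverage was in fact achieved.

    /* Hypothetical example: a small decision function and a unit-test
       driver that executes every statement and branch at least once. */
    #include <assert.h>

    /* Clamp an altitude command to the permitted range. */
    static int clamp_altitude(int requested, int min_alt, int max_alt)
    {
        if (requested < min_alt)
            return min_alt;       /* branch 1 */
        if (requested > max_alt)
            return max_alt;       /* branch 2 */
        return requested;         /* branch 3 */
    }

    int main(void)
    {
        assert(clamp_altitude(   50, 100, 40000) ==   100);  /* below minimum */
        assert(clamp_altitude(50000, 100, 40000) == 40000);  /* above maximum */
        assert(clamp_altitude(30000, 100, 40000) == 30000);  /* in range      */
        return 0;  /* every statement and branch above has now executed */
    }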
Integration testing
As pieces of code become available, they are added to a skeleton of code, and tested in place to make sure each interface works. Usually the built-in-tests of the electronics should be finished first, to begin burn-in and radio emissions tests of the electronics.
Next, the most valuable features of the software are integrated. It is very convenient for the integrators to have a way to run small selected pieces of code, perhaps from a simple menu system.
Some program managers try to arrange this integration process so that after some minimal level of function is achieved, the system becomes deliverable at any following date, with increasing numbers of features as time passes.
Black box and acceptance testing
Meanwhile, the test engineers usually begin assembling a test rig, and releasing preliminary tests for use by the software engineers. At some point, the tests cover all of the functions of the engineering specification. At this point, testing of the entire avionic unit begins. The object of the acceptance testing is to prove that the unit is safe and reliable in operation.
The first test of the software, and one of the most difficult to meet in a tight schedule, is a realistic test of the unit's radio emissions. This usually must be started early in the project to assure that there is time to make any necessary changes to the design of the electronics.
The software is also subjected to a structural coverage analysis, where tests are run and code coverage is collected and analysed.
Certification
Each step produces a deliverable, either a document, code, or a test report. When the software passes all of its tests (or enough to be sold safely), these are bound into a certification report, that can literally have thousands of pages. The designated engineering representative, who has been striving for completion, then decides if the result is acceptable. If it is, he signs it, and the avionic software is certified.
See also
Annex: Acronyms and abbreviations in avionics
DO-178B
Software development model
Hazard analysis
The Power of 10: Rules for Developing Safety-Critical Code
References
External links
Generic Avionics Software Specification from the Software Engineering Institute (SEI)
Avionics
Transport software
Software
Hierarchical File System
Hierarchical File System (HFS) is a proprietary file system developed by Apple Inc. for use in computer systems running Mac OS. Originally designed for use on floppy and hard disks, it can also be found on read-only media such as CD-ROMs. HFS is also referred to as Mac OS Standard (or HFS Standard), while its successor, HFS Plus, is also called Mac OS Extended (or HFS Extended).
With the introduction of Mac OS X 10.6, Apple dropped support for formatting or writing HFS disks and images, which remain supported as read-only volumes. Starting with macOS 10.15, HFS disks can no longer be read.
History
Apple introduced HFS in September 1985, specifically to support Apple's first hard disk drive for the Macintosh, replacing the Macintosh File System (MFS), the original file system which had been introduced over a year and a half earlier with the first Macintosh computer. HFS drew heavily upon Apple's first hierarchical operating system (SOS) for the failed Apple III, which also served as the basis for hierarchical file systems on the Apple IIe and Apple Lisa. HFS was developed by Patrick Dirks and Bill Bruffey. It shared a number of design features with MFS that were not available in other file systems of the time (such as DOS's FAT). Files could have multiple forks (normally a data and a resource fork), which allowed the main data of the file to be stored separately from resources such as icons that might need to be localized. Files were referenced with unique file IDs rather than file names, and file names could be 255 characters long (although the Finder only supported a maximum of 31 characters).
However, MFS had been optimized to be used on very small and slow media, namely floppy disks, so HFS was introduced to overcome some of the performance problems that arrived with the introduction of larger media, notably hard drives. The main concern was the time needed to display the contents of a folder. Under MFS all of the file and directory listing information was stored in a single file, which the system had to search to build a list of the files stored in a particular folder. This worked well with a system with a few hundred kilobytes of storage and perhaps a hundred files, but as the systems grew into megabytes and thousands of files, the performance degraded rapidly.
The solution was to replace MFS's directory structure with one more suitable to larger file systems. HFS replaced the flat table structure with the Catalog File which uses a B-tree structure that could be searched very quickly regardless of size. HFS also redesigned various structures to be able to hold larger numbers, 16-bit integers being replaced by 32-bit almost universally. Oddly, one of the few places this "upsizing" did not take place was the file directory itself, which limits HFS to a total of 65,535 files on each logical disk.
While HFS is a proprietary file system format, it is well-documented; there are usually solutions available to access HFS-formatted disks from most modern operating systems.
Apple introduced HFS out of necessity with its first 20 MB hard disk offering for the Macintosh in September 1985, where it was loaded into RAM from an MFS floppy disk on boot using a patch file ("Hard Disk 20"). However, HFS was not widely introduced until it was included in the 128K ROM that debuted with the Macintosh Plus in January 1986, along with the larger 800 KB floppy disk drive for the Macintosh that also used HFS. The introduction of HFS was the first advancement by Apple to leave a Macintosh computer model behind: the original 128K Macintosh, which lacked sufficient memory to load the HFS code, was promptly discontinued.
In 1998, Apple introduced HFS Plus to address inefficient allocation of disk space in HFS and to add other improvements. HFS is still supported by current versions of Mac OS, but starting with Mac OS X, an HFS volume cannot be used for booting, and beginning with Mac OS X 10.6 (Snow Leopard), HFS volumes are read-only and cannot be created or updated. In macOS Sierra (10.12), Apple's release notes state that "The HFS Standard filesystem is no longer supported." However, read-only HFS Standard support is still present in Sierra and works as it did in previous versions.
Design
A storage volume is inherently divided into logical blocks of 512 bytes. The Hierarchical File System groups these logical blocks into allocation blocks, which can contain one or more logical blocks, depending on the total size of the volume. HFS uses a 16-bit value to address allocation blocks, limiting the number of allocation blocks to 65,535 (2^16 − 1).
Five structures make up an HFS volume:
Logical blocks 0 and 1 of the volume are the Boot Blocks, which contain system startup information. For example, the names of the System and Shell (usually the Finder) files which are loaded at startup.
Logical block 2 contains the Master Directory Block (aka MDB). This defines a wide variety of data about the volume itself, for example date & time stamps for when the volume was created, the location of the other volume structures such as the Volume Bitmap or the size of logical structures such as allocation blocks. There is also a duplicate of the MDB called the Alternate Master Directory Block (aka Alternate MDB) located at the opposite end of the volume in the second to last logical block. This is intended mainly for use by disk utilities and is only updated when either the Catalog File or Extents Overflow File grow in size.
Logical block 3 is the starting block of the Volume Bitmap, which keeps track of which allocation blocks are in use and which are free. Each allocation block on the volume is represented by a bit in the map: if the bit is set then the block is in use; if it is clear then the block is free to be used. Since the Volume Bitmap must have a bit to represent each allocation block, its size is determined by the size of the volume itself.
The Extent Overflow File is a B-tree that contains extra extents that record which allocation blocks are allocated to which files, once the initial three extents in the Catalog File are used up. Later versions also added the ability for the Extent Overflow File to store extents that record bad blocks, to prevent the file system from trying to allocate a bad block to a file.
The Catalog File is another B-tree that contains records for all the files and directories stored in the volume. It stores four types of records. Each file consists of a File Thread Record and a File Record while each directory consists of a Directory Thread Record and a Directory Record. Files and directories in the Catalog File are located by their unique Catalog Node ID (or CNID).
A File Thread Record stores just the name of the file and the CNID of its parent directory.
A File Record stores a variety of metadata about the file including its CNID, the size of the file, three timestamps (when the file was created, last modified, last backed up), the first file extents of the data and resource forks and pointers to the file's first data and resource extent records in the Extent Overflow File. The File Record also stores two 16 byte fields that are used by the Finder to store attributes about the file including things like its creator code, type code, the window the file should appear in and its location within the window.
A Directory Thread Record stores just the name of the directory and the CNID of its parent directory.
A Directory Record which stores data like the number of files stored within the directory, the CNID of the directory, three timestamps (when the directory was created, last modified, last backed up). Like the File Record, the Directory Record also stores two 16 byte fields for use by the Finder. These store things like the width & height and x & y co-ordinates for the window used to display the contents of the directory, the display mode (icon view, list view, etc.) of the window and the position of the window's scroll bar.
Limitations
The Catalog File, which stores all the file and directory records in a single data structure, results in performance problems when the system allows multitasking, as only one program can write to this structure at a time, meaning that many programs may be waiting in queue due to one program "hogging" the system. It is also a serious reliability concern, as damage to this file can destroy the entire file system. This contrasts with other file systems that store file and directory records in separate structures (such as DOS's FAT file system or the Unix File System), where having structure distributed across the disk means that damaging a single directory is generally non-fatal and the data may possibly be re-constructed with data held in the non-damaged portions.
Additionally, the limit of 65,535 allocation blocks resulted in files having a "minimum" size equivalent to 1/65,535th the size of the disk. Thus, any given volume, no matter its size, could only store a maximum of 65,535 files. Moreover, any file would be allocated more space than it actually needed, up to the allocation block size. When disks were small, this was of little consequence, because the individual allocation block size was trivial, but as disks started to approach the 1 GB mark, the smallest amount of space that any file could occupy (a single allocation block) became excessively large, wasting significant amounts of disk space. For example, on a 1 GB disk, the allocation block size under HFS is 16 KB, so even a 1-byte file would take up 16 KB of disk space. This situation was less of a problem for users having large files (such as pictures, databases or audio) because these larger files wasted less space as a percentage of their file size. Users with many small files, on the other hand, could lose a copious amount of space due to the large allocation block size. This made partitioning disks into smaller logical volumes very appealing for Mac users, because small documents stored on a smaller volume would take up much less space than if they resided on a large partition. The same problem existed in the FAT16 file system.
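The arithmetic behind these figures can be illustrated with a short calculation. The sketch below assumes only what is stated above — 512-byte logical blocks, at most 65,535 allocation blocks per volume, and an allocation block size rounded up to a whole number of logical blocks; the actual Mac OS formatter may round differently in edge cases.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t logical_block    = 512;    /* bytes per logical block       */
        const uint64_t max_alloc_blocks = 65535;  /* 16-bit allocation block count */
        const uint64_t volume_sizes[] = {         /* example volume sizes in bytes */
            20ULL * 1000 * 1000,                  /* ~20 MB, Apple's 1985 hard disk */
            1ULL * 1024 * 1024 * 1024             /* 1 GiB                          */
        };

        for (size_t i = 0; i < sizeof volume_sizes / sizeof volume_sizes[0]; i++) {
            uint64_t v = volume_sizes[i];
            /* Smallest allocation block (in logical blocks) that still lets
               65,535 allocation blocks cover the whole volume. */
            uint64_t blocks_needed =
                (v / logical_block + max_alloc_blocks - 1) / max_alloc_blocks;
            printf("volume %10llu bytes -> allocation block %llu bytes\n",
                   (unsigned long long)v,
                   (unsigned long long)(blocks_needed * logical_block));
        }
        return 0;   /* prints 512 bytes for ~20 MB, 16384 bytes (16 KB) for 1 GiB */
    }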
HFS saves the case of a file that is created or renamed but is case-insensitive in operation.
According to bombich.com, HFS is no longer supported on Catalina and future macOS releases.
See also
Comparison of file systems
APFS
HFS Plus
References
External links
HFS specification from developer.apple.com
The HFS Primer (PDF) from MWJ - dead link as of 27. May 2017
Filesystems HOWTO: HFS - slightly out of date
HFS File Structure Explained - early description of HFS
DiskWarrior - Software to eliminate all damage to the HFS disk directory
MacDrive - Software to read and write HFS/HFS Plus-formatted disks on Microsoft Windows
hfsutils - open-source software to manipulate HFS on Unix, DOS, Windows, OS/2
Disk file systems
Apple Inc. file systems
Macintosh operating systems
Computer file systems
HRESULT
In the field of computer programming, the HRESULT is a data type used in Windows operating systems, and the earlier IBM/Microsoft OS/2 operating system, to represent error conditions, and warning conditions.
The original purpose of HRESULTs was to formally lay out ranges of error codes for both public and Microsoft internal use in order to prevent collisions between error codes in different subsystems of the OS/2 operating system.
HRESULTs are numerical error codes. Various bits within an HRESULT encode information about the nature of the error code, and where it came from.
HRESULT error codes are most commonly encountered in COM programming, where they form the basis for a standardized COM error handling convention.
HRESULT format
An HRESULT value has 32 bits divided into three fields: a severity code, a facility code, and an error code. The severity code indicates whether the return value represents information, warning, or error. The facility code identifies the area of the system responsible for the error. The error code is a unique number that is assigned to represent the exception. Each exception is mapped to a distinct HRESULT.
Facility is either the facility name or some other distinguishing identifier; Severity is a single letter, S or E, that indicates whether the function call succeeded (S) or produced an error (E); and Reason is an identifier that describes the meaning of the code. For example, the status code STG_E_FILENOTFOUND indicates a storage-related error has occurred; specifically, a requested file does not exist. One should keep in mind that an HRESULT value may well be displayed as an unsigned hexadecimal value.
HRESULTs are organized as follows:
Format details
S - Severity - indicates success/fail
0 - Success
1 - Failure
R - Reserved portion of the facility code, corresponds to NT's second severity bit.
1 - Severe Failure
C - Customer. This bit specifies if the value is customer-defined or Microsoft-defined.
0 - Microsoft-defined
1 - Customer-defined
N - Reserved portion of the facility code. Used to indicate a mapped NT status value.
X - Reserved portion of the facility code. Reserved for internal use. Used to indicate HRESULT values that are not status values, but are instead message ids for display strings.
Facility - indicates the system service that is responsible for the error. Example facility codes are shown below (the list is not exhaustive).
1 - RPC
2 - Dispatch (COM dispatch)
3 - Storage (OLE storage)
4 - ITF (COM/OLE Interface management)
7 - Win32 (raw Win32 error codes)
8 - Windows
9 - SSPI
10 - Control
11 - CERT (Client or server certificate)
...
Code - is the facility's status code
The ITF facility code has subsequently been recycled as the range in which COM components can define their own component-specific error code.
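The field layout above can be expressed directly with shifts and masks. The following portable C sketch decodes an HRESULT into its severity, facility and code fields using the 1-bit severity, 11-bit facility and 16-bit code widths described above; on Windows, the macros HRESULT_SEVERITY, HRESULT_FACILITY and HRESULT_CODE from winerror.h serve the same purpose.

    #include <stdio.h>
    #include <stdint.h>

    /* Field extraction following the layout described above
       (severity, reserved/control bits, 11-bit facility, 16-bit code). */
    static unsigned severity(uint32_t hr) { return (hr >> 31) & 0x1;   }
    static unsigned facility(uint32_t hr) { return (hr >> 16) & 0x7FF; }
    static unsigned code(uint32_t hr)     { return hr & 0xFFFF;        }

    int main(void)
    {
        uint32_t hr = 0x80070005u;  /* failure, facility 7 (Win32), code 5 */
        printf("severity=%u facility=%u code=%u\n",
               severity(hr), facility(hr), code(hr));   /* prints 1 7 5 */
        return 0;
    }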
How HRESULTs work
An HRESULT is an opaque result handle defined to be zero or positive for a successful return from a function, and negative for a failure. Generally, successful functions return the S_OK HRESULT value (which is equal to zero). But in rare circumstances, functions may return success codes with additional information e.g. S_FALSE=0x01.
When HRESULTs are displayed, they are often rendered as an unsigned hexadecimal value, usually indicated by a 0x prefix. In this case, a number indicating failure can be identified by starting with hexadecimal figure 8 or higher.
HRESULTs were originally defined in the IBM/Microsoft OS/2 operating system as a general purpose error return code, and subsequently adopted in Windows NT. Microsoft Visual Basic substantially enhanced the HRESULT error reporting mechanisms, by associating an IErrorInfo object with an HRESULT error code, by storing a pointer to an IErrorInfo COM object in thread-local storage. The IErrorInfo mechanism allows programs to associate a broad variety of information with a particular HRESULT error: the class of the object that raised the error, the interface of the object that raised the error, error text; and a link to a help topic in a help file. In addition, receivers of an HRESULT error can obtain localized text for the error message on demand.
Subsequently, HRESULT, and the associated IErrorInfo mechanism were used as the default error reporting mechanism in COM.
Support of the IErrorInfo mechanism in Windows is highly inconsistent. Older windows APIs tend to not support it at all, returning HRESULTs without any IErrorInfo data. More modern Windows COM subsystems will often provide extensive error information in the message description of the IErrorInfo object. The more advanced features of the IErrorInfo error mechanisms—help links, and on-demand localization—are rarely used.
In the .NET Framework, HRESULT/IErrorInfo error codes are translated into CLR exceptions when transitioning from native to managed code; and CLR exceptions are translated to HRESULT/IErrorInfo error codes when transitioning from managed to native COM code.
Using HRESULTs
The winerror.h file defines some generic HRESULT values. Hard-coded HRESULT values are sometimes encoded in associated header files (.h files) for a given subsystem. These values are also defined in the corresponding header (.h) files with the Microsoft Windows Platforms SDK or DDK.
To check if a call that returns an HRESULT succeeded, make sure the S field is 0 (i.e. the number is non-negative) or use the FAILED() macro. To obtain the Code part of an HRESULT, use the HRESULT_CODE() macro. You can also use a tool called ERR.EXE to take the value and translate it to the corresponding error string. Another tool called ERRLOOK.EXE can also be used to display error strings associated with a given HRESULT value. ERRLOOK.EXE can be run from within a Visual Studio command prompt.
The Windows native SetErrorInfo and GetErrorInfo APIs are used to associate HRESULT return codes with a corresponding IErrorInfo object.
The FormatMessage API function can be used to convert some non-IErrorInfo HRESULTs into a user-readable string.
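Putting these pieces together, a typical Windows-only call site might look like the sketch below (error handling abbreviated). FAILED(), HRESULT_CODE(), CoInitializeEx() and FormatMessage() are the real Win32/COM facilities discussed above; the surrounding program structure is illustrative only.

    /* Windows-only sketch (compile with e.g.:  cl example.c ole32.lib).
       Shows the FAILED() check and the FormatMessage() translation
       of an HRESULT into a readable system message. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);
        if (FAILED(hr)) {
            char msg[512] = "";
            /* Ask the system message table for a readable description. */
            FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                           NULL, (DWORD)hr, 0, msg, sizeof msg, NULL);
            fprintf(stderr, "COM init failed: 0x%08lX (code %u): %s\n",
                    (unsigned long)hr, (unsigned)HRESULT_CODE(hr), msg);
            return 1;
        }
        /* ... use COM ... */
        CoUninitialize();
        return 0;
    }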
Examples
0x80070005
0x8 - Failure
0x7 - Win32
0x5 - "E_FAULT"
0x80090032
0x8 - Failure
0x9 - SSPI
0x32 - "The request is not supported"
References
External links
Microsoft Open Protocol Specification - HRESULT Values
Microsoft Developer Network Reference
Windows Data Types
Using Macros for Error Handling
List of DOS, Windows and OS/2 error codes, includes a lot of common HRESULT values
Data types
OpenOffice
OpenOffice or open office may refer to:
Computing
Software
OpenOffice.org (OOo), a discontinued open-source office software suite, originally based on StarOffice
Apache OpenOffice (AOO), a derivative of OOo by the Apache Software Foundation, with contribution from IBM Lotus Symphony
Programming
OpenOffice Basic (formerly known as StarOffice Basic or StarBasic or OOoBasic), a dialect of the programming language BASIC
File formats
OpenDocument format (ODF), also known as Open Document Format for Office Applications, a widely supported standard XML-based file format originating from OOo
OpenOffice.org XML, a file format used by early versions of OpenOffice.org
Office Open XML (OOXML), a competing file format from Microsoft
Other uses
Open plan, a floor plan
Open Document Architecture (ODA), document interchange format (CCITT T.411-T.424, equivalent to ISO 8613)
OpenDoc, an abandoned multi-platform standard for compound documents, intended as an alternative to Microsoft's Object Linking and Embedding (OLE)
Open Windows Foundation
Open Windows Foundation is a US-registered 501(c)(3) non-profit organization focusing on youth education and programming in San Miguel Dueñas, Guatemala. The center was founded in 2001 by Ericka Kaplan, Jean Uelmen, and Teresa Quiñonez and now serves over 1,000 members of the Dueñas community.
Mission
To provide education and community development opportunities for children and families in Guatemala.
History
The Foundation first opened in 2001 with an enrollment of 20 children assembling in a small room. The original 300 books for the center were donated from a library project that had recently closed in Rio Dulce, Guatemala. The Foundation’s primary function was to act as a dynamic library with the aim of getting kids to read for pleasure.
Over the years, Open Windows has continued to accumulate students, books, funds, and other resources through the personal networking of the founders and the community. In 2003, Rotary International donated 10 computers to Open Windows. This donation allowed Open Windows to expand its services by offering computer classes and by allowing students to use computers for homework. Open Windows also launched its initial website in 2003 which allowed for greater publicity.
In September 2005, a formal library room was built; it is now filled with over 12,000 books, including picture, fiction, non-fiction, and reference books. Two years later, in 2007, an impending donation from Rotary International of an additional 10 computers prompted the building of a second-story computer center at Open Windows. The center was completed in April 2007. By this time, the mission had been expanded to include enhancing technology skills and providing educational programming and tutoring for students.
Programs
Open Windows currently provides multiple programs: the after school program, the activities program, the computer center, a scholarship program, a pre-school introduction to learning, installation of eco-stoves and house construction.
After School Program
The after school program allows the students time to finish their homework with the supervision and assistance of the seven teachers at Open Windows Foundation. Because of the close relationship between Open Windows and the local schools, homework for the students is often designed with the specific resources of Open Windows in mind.
Activities Program
The activities program is designed for the students to participate in an interactive reading and learning activity as a group. Every afternoon the teachers at Open Windows Foundation use a book read aloud to ground an activity designed to emphasize reading, writing, creative, listening, and thinking skills.
Computer Center
The computer center was completed in 2007 and is composed of 20 computers donated from Rotary International. Classes are provided for children and adults and for various computer programs. In addition to the classes, 10 of the computers are now equipped with internet access for research and other educational purposes. The computer center is always in high demand as it provides the only public-access computers in the area.
Scholarship Program
Thanks to donations from Tom Sullivan, a scholarship program was set up in 2003 to enable motivated students from low-income families to go to high school. Most students in San Miguel Dueñas cannot afford to attend high school because they lack the money for tuition, books, and uniforms, costs which the Guatemalan government does not cover. The scholarship program had three students in its first year, but Open Windows has since sponsored more than 400 scholarships and currently supports 30 scholarship students. The chosen students are given funds to cover the costs of uniforms, books, and transportation; in return, they are asked to keep Open Windows informed of their progress, and scholarship students or their family members take turns cleaning the library early each morning. High school graduates, in turn, often help their brothers and sisters attend school. Open Windows also helps graduates prepare for job interviews, including writing a curriculum vitae, with the aim of placing them in the local job market. In 2011, a community service program was started: during vacations, scholarship students come to the learning center to help small groups of primary children with math and reading skills, which in turn helps the scholarship students learn to teach, work with others, and speak in front of a group. A donation of $500 sends a scholarship recipient to high school for one year.
Pre-school Introduction to Learning
A more recent program involves five and six year-old children who are not enrolled in school but who started coming to the center wanting to learn. Open Windows' teachers began giving them basic instruction in reading and simple arithmetic and in time the number of youngsters attending grew. When the children participate in this program, the parents agree to send the children to school the following year.
Eco-stoves
Many Guatemalan families cook their meals on open fires inside their houses. This fills the houses with smoke and requires the families to spend a lot of time or money obtaining firewood. Eco-stoves reduce the wood required for cooking by about two-thirds and send the smoke out of the house through a chimney. Open Windows, in alliance with the Canadian organization Developing World Connections, installs new eco-stoves in homes all around San Miguel Dueñas, saving families money and health problems.
House Construction
Again in collaboration with Developing World Connections, Open Windows builds houses for families in the area. While many families live in homes made of sheet metal with a dirt floor, these new houses are made of concrete blocks and have cement floors, which allow the families to keep them much cleaner. Volunteers work with masons hired by Open Windows to construct one-, two- or three-room houses, which represent a major improvement in the lives of the families.
References
Foundations based in Guatemala
Polish Operational and Systems Research Society
The Polish Operational and Systems Research Society, POSRS (in Polish: Polskie Towarzystwo Badań Operacyjnych i Systemowych, PTBOiS) is the Polish scientific, scholarly and professional non-profit society for the advancements of operational and systems research (OR/SR). The Society is the core active body of the Association of Polish Operational Research Societies (ASPORS), the formal member of the International Federation of Operational Research Societies and its subsidiary, the Association of European Operational Research Societies.
History
The POSRS (PTBOiS) was established in 1986 with the objective of promoting the development of operational and systems research in Poland, in both methodological and applied research aspects. The Society aims at mobilising the Polish community of specialists in the respective fields, within the academia as well as in business and administration.
The POSRS/PTBOiS has since its founding gathered top Polish researchers and scholars in its fields of activity, including members of various academies of sciences, fellows of IEEE, EurAI, IFSA and other bodies, presidents of international societies such as IFORS, IFSA and RSS, and recipients of top OR/SR prizes such as the EURO Gold Medal and the IEEE Pioneer Awards. From its founding and for a long period, the President of the Society was Professor Andrzej Straszak; currently, the President is Professor Janusz Kacprzyk.
The Society cooperates very closely with virtually all top Polish research and scholarly institutions, public administration, industry and business, notably with the Systems Research Institute, Polish Academy of Sciences (Instytut Badań Systemowych PAN) and the Institute of Computing Science, Poznań University of Technology. The POSRS constitutes the core society of the Association of Polish Operational Research Societies.
Governance
The Society is chaired by the President, and the day-to-day operations are carried out by the Board, with two Vice-Presidents, the Secretary General and the Treasurer, the Board now consisting of altogether 11 persons. The main body of the Society is the General Assembly, electing the Board, the President, and the Controlling Commission.
The seat of the Society is located at the Systems Research Institute, Polish Academy of Sciences in Warsaw.
Membership
As of the end of 2016, the Society had some 100 members, mainly from the research and scholarly institutions, with a noticeable part from the public administration, business and industry.
Publications
For roughly 20 years, POSRS (PTBOiS) regularly published the proceedings of its continuing conference series, altogether several dozen volumes between 1986 and 2008. In addition, occasional volumes were published, resulting in particular from the activity of the Society's working group MODEST (MODelling of Economies and Societies in Transition).
The members of the POSRS have also published numerous books, edited volumes, and hundreds of papers in highly respected journals and conference proceedings. For example:
J. Błażewicz, K. Ecker, E. Pesch, G. Schmidt, M. Sterna, J. Węglarz, “Handbook on Scheduling”. Springer Verlag, Berlin, New York, 2nd edition (1000 pp.), 2018, forthcoming.
M. Cichenski, F. Jaehn, G. Pawlak, E. Pesch, G. Singh, J. Błażewicz, “An integrated model for the transshipment yard scheduling problem”, Journal of Scheduling 20, 2017, pp. 57–65.
J. Błażewicz, N. Szostak, S. Wasik, “Understanding Life: A Bioinformatics Perspective”, European Review 25, 2017, pp. 231–245.
R. Słowiński, Y. Yao (eds.), “Rough Sets”. Part C of the Handbook of Computational Intelligence, edited by J. Kacprzyk and W. Pedrycz, Springer, Berlin, 2015, pp. 329–451.
J. Kacprzyk, M. Krawczak, G. Szkatuła, „On bilateral matching between fuzzy sets” in: Information Sciences, Volume 402, Elsevier, 2017, pp. 244–266.
F.F. Fagioli, L. Rocchi, L. Paolotti, R. Słowiński, A. Boggia, “From the farm to the agri-food system: A multiple criteria framework to evaluate extended multi-functional value” in: Ecological Indicators 79, Elsevier, 2017, pp. 91–102.
M. Zimniewicz, K. Kurowski, J. Węglarz, “Scheduling aspects in keyword extraction problem”, in: International Transactions in Operational Research, IFORS, 2017, to appear.
Nowadays, the Society cooperates with several journals exemplified by the Journal of Operations Research and Decisions, Control and Cybernetics, Journal of Automation, Mobile Robotics and Intelligent Systems.
Conferences
The flagship conference series of the PORSR is the biannual BOS (the Polish acronym for Operational and Systems Research) conferences which gather some 100 - 150 participants, both from Poland and abroad. In addition, the POSRS (PTBOiS) co-organizes and co-sponsors numerous other events, both national and international, together with its closely cooperating institutions.
Moreover, the POSRS was a co-organizer of EURO 2016 - 28th European Conference on Operational Research (3-6.07.2016), held in Poznań, at the Poznań University of Technology, one of the leading European centers in decision analysis, optimization, project management, and scheduling.
Program and Organizing Committees were chaired by Professor Daniele Vigo from the University of Bologna and Professor Joanna Józefowska from the Poznań University of Technology, respectively.
It is also worth noting the role of members of the Organizing Committee and pillars of the Operations Research and Management Science group at the Poznań University of Technology: Professors Jacek Błażewicz, Roman Słowiński and Jan Węglarz, in the support and organization process of the EURO 2016 conference that stimulated communication and cooperation among the most important European operational researchers.
Awards
The Society awards medals for outstanding achievements in the field of operational and systems research. Special awards for book publications and for young researchers are also planned.
Among the prizes won by the members of POSRS (PTBOiS) it is worth emphasizing EURO Gold Medal won by Professors: Jacek Błażewicz, Roman Słowiński, and Jan Węglarz, leaders of the Operations Research and Management Science group established within the Faculty of Computing at Poznań University of Technology.
References
External links
Operations research societies
Research institutes in Poland
Container Linux
Container Linux (formerly CoreOS Linux) is a discontinued open-source lightweight operating system based on the Linux kernel and designed for providing infrastructure to clustered deployments, while focusing on automation, ease of application deployment, security, reliability and scalability. As an operating system, Container Linux provided only the minimal functionality required for deploying applications inside software containers, together with built-in mechanisms for service discovery and configuration sharing.
Container Linux shares foundations with Gentoo Linux, Chrome OS, and Chromium OS through a common software development kit (SDK). Container Linux adds new functionality and customization to this shared foundation to support server hardware and use cases. CoreOS was developed primarily by Alex Polvi, Brandon Philips and Michael Marineau, with its major features available as a stable release.
The CoreOS team announced the end of life for Container Linux on May 26, 2020, offering Fedora CoreOS and RHEL CoreOS as its replacements, both maintained by Red Hat.
Overview
Container Linux provides no package manager as a way for distributing payload applications, requiring instead all applications to run inside their containers. Serving as a single control host, a Container Linux instance uses the underlying operating-system-level virtualization features of the Linux kernel to create and configure multiple containers that perform as isolated Linux systems. That way, resource partitioning between containers is performed through multiple isolated userspace instances, instead of using a hypervisor and providing full-fledged virtual machines. This approach relies on the Linux kernel's cgroups and namespaces functionalities, which together provide abilities to limit, account and isolate resource usage (CPU, memory, disk I/O, etc.) for the collections of userspace processes.
Initially, Container Linux exclusively used Docker as a component providing an additional layer of abstraction and interface to the operating-system-level virtualization features of the Linux kernel, as well as providing a standardized format for containers that allows applications to run in different environments. In December 2014, CoreOS released and started to support rkt (initially released as Rocket) as an alternative to Docker, providing through it another standardized format of the application-container images, the related definition of the container runtime environment, and a protocol for discovering and retrieving container images. CoreOS provides rkt as an implementation of the so-called app container (appc) specification that describes required properties of the application container image (ACI); CoreOS initiated appc and ACI as an independent committee-steered set of specifications, aiming at having them become part of the vendor- and operating-system-independent Open Container Initiative (OCI; initially named the Open Container Project or OCP)
containerization standard, which was announced in June 2015.
Container Linux uses ebuild scripts from Gentoo Linux for automated compilation of its system components, and uses systemd as its primary init system with tight integration between systemd and various Container Linux's internal mechanisms.
Updates distribution
Container Linux achieves additional security and reliability of its operating system updates by employing FastPatch as a dual-partition scheme for the read-only part of its installation, meaning that updates are performed as a whole and installed onto a passive secondary boot partition that becomes active upon a reboot or kexec. This approach avoids possible issues arising from updating only certain parts of the operating system, ensures easy rollbacks to a known-to-be-stable version of the operating system, and allows each boot partition to be signed for additional security. The root partition and its root file system are automatically resized to fill all available disk space upon reboots; while the root partition provides read-write storage space, the operating system itself is mounted read-only under /usr.
To ensure that only a certain part of the cluster reboots at once when operating system updates are applied, thereby preserving the resources required for running deployed applications, CoreOS provides locksmith as a reboot manager for Container Linux. Using locksmith, one can select between different update strategies that are determined by how the reboots are performed as the last step in applying updates; for example, one can configure how many cluster members are allowed to reboot simultaneously. Internally, locksmith operates as the locksmithd daemon that runs on cluster members, while the locksmithctl command-line utility manages configuration parameters. Locksmith is written in the Go language and distributed under the terms of the Apache License 2.0.
The updates distribution system employed by Container Linux is based on Google's open-source Omaha project, which provides a mechanism for rolling out updates and the underlying request–response protocol based on XML. Additionally, CoreOS provides CoreUpdate as a web-based dashboard for the management of cluster-wide updates. Operations available through CoreUpdate include assigning cluster members to different groups that share customized update policies, reviewing cluster-wide breakdowns of Container Linux versions, stopping and restarting updates, and reviewing recorded update logs. CoreUpdate also provides an HTTP-based API that allows its integration into third-party utilities or deployment systems.
Cluster infrastructure
Container Linux provides etcd, a daemon that runs across all computers in a cluster and provides a dynamic configuration registry, allowing various configuration data to be easily and reliably shared between the cluster members. Since the key–value data stored within etcd is automatically distributed and replicated with automated master election and consensus establishment using the Raft algorithm, all changes in stored data are reflected across the entire cluster, while the achieved redundancy prevents failures of single cluster members from causing data loss. Beside configuration management, etcd also provides service discovery by allowing deployed applications to announce themselves and the services they offer. Communication with etcd is performed through an exposed REST-based API, which internally uses JSON on top of HTTP; the API may be used directly (through curl or wget, for example), or indirectly through etcdctl, which is a specialized command-line utility also supplied by CoreOS. Etcd is also used in Kubernetes software.
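As a rough illustration of that API, the following Python sketch stores a key and reads it back through etcd's version 2 HTTP interface. The endpoint address, the port (2379) and the key name are assumptions chosen for the example rather than values prescribed by Container Linux; older etcd deployments may listen on port 4001 instead.

```python
# Minimal sketch of talking to etcd's v2 REST API (JSON over HTTP).
# Assumes an etcd member reachable at 127.0.0.1:2379; adjust for a real cluster.
import json
import urllib.parse
import urllib.request

ETCD_KEYS = "http://127.0.0.1:2379/v2/keys"

def put_key(key: str, value: str) -> dict:
    """Store a value under the given key; etcd replicates it to the other members."""
    data = urllib.parse.urlencode({"value": value}).encode()
    request = urllib.request.Request(f"{ETCD_KEYS}/{key}", data=data, method="PUT")
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def get_key(key: str) -> str:
    """Read the value back; any cluster member can answer the query."""
    with urllib.request.urlopen(f"{ETCD_KEYS}/{key}") as response:
        return json.load(response)["node"]["value"]

if __name__ == "__main__":
    put_key("message", "hello from etcd")
    print(get_key("message"))   # -> "hello from etcd"
```

The etcdctl utility mentioned above wraps essentially the same HTTP operations in a command-line interface.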
Container Linux also provides fleet, a cluster manager which controls Container Linux's separate systemd instances at the cluster level. As of 2017, fleet is no longer actively developed and is deprecated in favor of Kubernetes. By using fleetd, Container Linux creates a distributed init system that ties together separate systemd instances and a cluster-wide etcd deployment; internally, the fleetd daemon communicates with local systemd instances over D-Bus, and with the etcd deployment through its exposed API. Using fleetd allows the deployment of single or multiple containers cluster-wide, with more advanced options including redundancy, failover, deployment to specific cluster members, dependencies between containers, and grouped deployment of containers. A command-line utility called fleetctl is used to configure and monitor this distributed init system; internally, it communicates with the fleetd daemon using a JSON-based API on top of HTTP, which may also be used directly. When used locally on a cluster member, fleetctl communicates with the local fleetd instance over a Unix domain socket; when used from an external host, SSH tunneling is used with authentication provided through public SSH keys.
All of the above-mentioned daemons and command-line utilities (etcd, etcdctl, fleetd and fleetctl) are written in the Go language and distributed under the terms of the Apache License 2.0.
Deployment
When running on dedicated hardware, Container Linux can be either permanently installed to local storage, such as a hard disk drive (HDD) or solid-state drive (SSD), or booted remotely over a network using Preboot Execution Environment (PXE) in general, or iPXE as one of its implementations. CoreOS also supports deployments on various hardware virtualization platforms, including Amazon EC2, DigitalOcean, Google Compute Engine, Microsoft Azure, OpenStack, QEMU/KVM, Vagrant and VMware. Container Linux may also be installed on Citrix XenServer, noting that a "template" for CoreOS exists.
Container Linux can also be deployed through its commercial distribution called Tectonic, which additionally integrates Google's Kubernetes as a cluster management utility. At the time of its announcement, Tectonic was planned to be offered as beta software to select customers. Furthermore, CoreOS provides Flannel as a component implementing an overlay network required primarily for the integration with Kubernetes.
Container Linux supports only the x86-64 architecture.
Derivatives
Following its acquisition of CoreOS, Inc. in January 2018, Red Hat announced that it would be merging CoreOS Container Linux with Red Hat's Project Atomic, to create a new operating system, Red Hat CoreOS, while aligning the upstream Fedora Project open source community around Fedora CoreOS, combining technologies from both predecessors.
On March 6, 2018, Kinvolk GmbH announced Flatcar Container Linux, a derivative of CoreOS Container Linux. This tracks the upstream CoreOS alpha/beta/stable channel releases, with an experimental Edge release channel added in May 2019.
Reception
LWN.net reviewed CoreOS in 2014:
See also
Application virtualization – software technology that encapsulates application software from the operating system on which it is executed
Comparison of application virtualization software – various portable and scripting language virtual machines
Comparison of platform virtualization software – various emulators and hypervisors, which emulate whole physical computers
LXC (Linux Containers) – an environment for running multiple isolated Linux systems (containers) on a single Linux control host
Operating-system-level virtualization – implementations based on the operating system kernel's support for multiple isolated userspace instances
Software as a service (SaaS) – a software licensing and delivery model that hosts the software centrally and licenses it on a subscription basis
Virtualization – a general concept of providing virtual versions of computer hardware platforms, operating systems, storage devices, etc.
References
External links
Official and GitHub source code repositories
First glimpse at CoreOS, September 3, 2013, by Sébastien Han
CoreOS: Linux for the cloud and the datacenter, ZDNet, July 2, 2014, by Steven J. Vaughan-Nichols
What's CoreOS? An existential threat to Linux vendors, InfoWorld, October 9, 2014, by Matt Asay
Understanding CoreOS distributed architecture, March 4, 2015, a talk to Alex Polvi by Aaron Delp and Brian Gracely
CoreOS fleet architecture, August 26, 2014, by Brian Waldon et al.
Running CoreOS on Google Compute Engine, May 23, 2014
CoreOS moves from Btrfs to Ext4 + OverlayFS, Phoronix, January 18, 2015, by Michael Larabel
Containers and persistent data, LWN.net, May 28, 2015, by Josh Berkus
Containerization software
Linux containerization
Operating systems based on the Linux kernel
Red Hat software
Software using the Apache license
Virtualization-related software for Linux
X86-64 operating systems
Legacy-free PC
A legacy-free PC is a type of personal computer that lacks a floppy and/or optical disc drive, legacy ports, and an Industry Standard Architecture (ISA) bus (or sometimes, any internal expansion bus at all). According to Microsoft, "The basic goal for these requirements is that the operating system, devices, and end users cannot detect the presence of the following: ISA slots or devices; legacy floppy disk controller (FDC); and PS/2, serial, parallel, and game ports." The legacy ports are usually replaced with Universal Serial Bus (USB) ports. A USB adapter may be used if an older device must be connected to a PC lacking these ports. According to the 2001 edition of Microsoft's PC System Design Guide, a legacy-free PC must be able to boot from a USB device.
Removing older, usually more bulky ports and devices allows a legacy-free PC to be much more compact than earlier systems and many fall into the nettop or All in One form factor. Netbooks and Ultrabooks could also be considered a portable form of a legacy-free PC. Legacy-free PCs can be more difficult to upgrade than a traditional beige box PC, and are more typically expected to be replaced completely when they become obsolete. Many legacy-free PCs include modern devices that may be used to replace ones omitted, such as a memory card reader replacing the floppy drive.
As the first decade of the 21st century progressed, the legacy-free PC went mainstream, with legacy ports removed from commonly available computer systems in all form factors. However, the PS/2 keyboard connector retains some use, as it offers capabilities (e.g. implementation of n-key rollover) not available over USB.
With those parts becoming increasingly rare on newer computers as of the late 2010s and early 2020s, the term "legacy-free PC" itself has also become increasingly rare.
History
Late 1980s
In 1987, IBM released the PS/2 line with a new internal architecture; a new BIOS, the PS/2 port and the VGA port were introduced, but the line was heavily criticized for its relatively closed, proprietary architecture and low compatibility with cloned PC hardware.
The NEC UltraLite laptop, released in 1988 and known as the first "notebook" computer, omitted an integrated floppy drive and had limited internal storage; it can also be described as a legacy-free machine.
1990s
Apple's iMac G3, introduced in 1998, was the first widely known example of the class, drawing much criticism for its lack of legacy peripherals such as a floppy drive and an Apple Desktop Bus (ADB) connector; however, its success popularized USB ports.
Compaq released the iPaq desktop in 1999.
From November 1999 to July 2000, Dell sold the WebPC, an early and less successful Wintel legacy-free PC.
2000s
More legacy-free PCs were introduced around 2000 after the prevalence of USB and broadband internet made many of the older ports and devices obsolete. They largely took the form of low-end, consumer systems with the motivation of making computers less expensive, easier to use, and more stable and manageable. The Dell Studio Hybrid, Asus Eee Box and MSI Wind PC are examples of later, more-successful Intel-based legacy-free PCs.
Apple introduced the Apple Modem on October 12, 2005, and removed the internal 56K modem from new computers. The MacBook Air, introduced on January 29, 2008, also omitted a built-in SuperDrive and the wired Ethernet connectivity that was available on all other Mac computers sold at the time. The SuperDrive would later be removed from all Macs by the end of 2016, while wired Ethernet would later be removed from all MacBook models. These removals were subsequently mirrored by other PC manufacturers shipping lightweight laptops.
PGA packaging of CPUs, and the complementary sockets on motherboards, were gradually replaced by LGA starting in the 2000s.
2010s
The northbridge, southbridge, and front-side bus (FSB) were replaced by more integrated architectures starting in the early 2010s.
The relaunched MacBook in 2015 dropped features such as the MagSafe charging port and the Secure Digital (SD) memory card reader. It kept only two types of ports: a 3.5 mm audio jack and a USB 3.1 Type-C port. This configuration later found its way into the MacBook Pro in 2016, the only difference being that two or four Thunderbolt 3 ports were included instead of just one. In addition, all MacBook Pro models except for the entry-level model replaced the function keys with a Touch Bar. These changes led to criticism because many users relied on the features that Apple had removed, yet the approach has been copied to varying degrees by some other laptop vendors.
The BIOS is now legacy, replaced by UEFI. PCI has fallen out of favor, as it has been superseded by PCIe.
See also
Nettop
Netbook
PC 2001
WebPC
iPAQ (desktop computer)
Network computer
Thin client
Legacy system
References
Cloud clients
Information appliances
Personal computers
Classes of computers
Legacy hardware
Ubuntu philosophy
Ubuntu is a Nguni Bantu term meaning "humanity". It is sometimes translated as "I am because we are" (also "I am because you are"), or "humanity towards others" (in Zulu, umuntu ngumuntu ngabantu). In Xhosa, the latter term is used, but is often meant in a more philosophical sense to mean "the belief in a universal bond of sharing that connects all humanity".
Different names in other Bantu languages
Although the most popular name referring to the philosophy today is Ubuntu (Zulu language, South Africa), it has several other names in other Bantu languages.
The name also differs by country, such as in Angola (gimuntu), Botswana (muthu), Burundi (ubuntu), Cameroon (bato), Congo (bantu), Congo Democratic Republic (bomoto/bantu), Kenya (utu/munto/mondo), Malawi (umunthu), Mozambique (vumuntu), Namibia (omundu), Rwanda (ubuntu), South Africa (ubuntu/botho), Tanzania (utu/obuntu/bumuntu), Uganda (obuntu), Zambia (umunthu/ubuntu) and Zimbabwe (Ubuntu, unhu or hunhu). It is also found in other Bantu countries not mentioned here.
Definitions
There are various definitions of the word "Ubuntu". The most recent definition was provided by the African Journal of Social Work (AJSW). The journal defined ubuntu as:
"A collection of values and practices that people of Africa or of African origin view as making people authentic human beings. While the nuances of these values and practices vary across different ethnic groups, they all point to one thing – an authentic individual human being is part of a larger and more significant relational, communal, societal, environmental and spiritual world."
There are many different (and not always compatible) definitions of what ubuntu is.
Ubuntu asserts that society, not a transcendent being, gives human beings their humanity. An example is a Zulu-speaking person who, when commanding someone to speak in Zulu, would say "khuluma isintu", which means "speak the language of people". When someone behaves according to custom, a Sotho-speaking person would say "ke motho", which means "he/she is a human". This aspect is exemplified by a tale told (often in private quarters) in Nguni, "kushone abantu ababili ne Shangaan", in Sepedi, "go tlhokofetje batho ba babedi le leShangane", and in English, "two people died and one Shangaan". In each of these examples, humanity comes from conforming to or being part of the tribe.
According to Michael Onyebuchi Eze, the core of ubuntu can best be summarised as follows:
A person is a person through other people strikes an affirmation of one’s humanity through recognition of an "other" in his or her uniqueness and difference. It is a demand for a creative intersubjective formation in which the "other" becomes a mirror (but only a mirror) for my subjectivity. This idealism suggests to us that humanity is not embedded in my person solely as an individual; my humanity is co-substantively bestowed upon the other and me. Humanity is a quality we owe to each other. We create each other and need to sustain this otherness creation. And if we belong to each other, we participate in our creations: we are because you are, and since you are, definitely I am. The "I am" is not a rigid subject, but a dynamic self-constitution dependent on this otherness creation of relation and distance.
An "extroverted communities" aspect is the most visible part of this ideology. There is sincere warmth with which people treat both strangers and members of the community. This overt display of warmth is not merely aesthetic but enables the formation of spontaneous communities. The resultant collaborative work within these spontaneous communities transcends the aesthetic and gives functional significance to the value of warmth. How else are you to ask for sugar from your neighbour? Warmth is not the sine qua non of community formation but guards against instrumentalist relationships. Unfortunately, sincere warmth may leave one vulnerable to those with ulterior motives.
"Ubuntu" as political philosophy encourages community equality, propagating the distribution of wealth. This socialisation is a vestige of agrarian peoples as a hedge against the crop failures of individuals. Socialisation presupposes a community population with which individuals empathise and concomitantly, have a vested interest in its collective prosperity. Urbanisation and the aggregation of people into an abstract and bureaucratic state undermines this empathy. African intellectual historians like Michael Onyebuchi Eze have argued however that this idea of "collective responsibility" must not be understood as absolute in which the community's good is prior to the individual's good. On this view, ubuntu it is argued, is a communitarian philosophy that is widely differentiated from the Western notion of communitarian socialism. In fact, ubuntu induces an ideal of shared human subjectivity that promotes a community's good through an unconditional recognition and appreciation of individual uniqueness and difference. Audrey Tang has suggested that Ubuntu "implies that everyone has different skills and strengths; people are not isolated, and through mutual support they can help each other to complete themselves."
"Redemption" relates to how people deal with errant, deviant, and dissident members of the community. The belief is that man is born formless like a lump of clay. It is up to the community, as a whole, to use the fire of experience and the wheel of social control to mould him into a pot that may contribute to society. Any imperfections should be borne by the community and the community should always seek to redeem man. An example of this is the statement by the African National Congress (in South Africa) that it does not throw out its own but rather redeems.
Other scholars such as Mboti (2015) argue that the normative definition of Ubuntu, notwithstanding its intuitive appeal, is still open to doubt. The definition of Ubuntu, contends Mboti, has remained consistently and purposely fuzzy, inadequate and inconsistent. Mboti rejects the interpretation that Africans are “naturally” interdependent and harmony-seeking, and that humanity is given to a person by and through other persons. He sees a philosophical trap in attempts to elevate harmony to a moral duty – a sort of categorical imperative – that Africans must simply uphold. Mboti cautions against relying on intuitions in attempts to say what Ubuntu is or is not. He concludes that the phrase umuntu ngumuntu ngabantu references a messier, undisciplined relationship between persons, stating that "First, there is value in regarding a broken relationship as being authentically human as much as a harmonious relationship. Second, a broken relationship can be as ethically desirable as a harmonious one. For instance, freedom follows from a break from oppression. Finally, harmonious relations can be as oppressive and false as disharmonious ones. For instance, the cowboy and his horse are in a harmonious relationship."
Ubuntu maxims or short statements
Ubuntu is often presented in short statements called maxims by Samkange (1980). Some of these are:
Motho ke motho ka batho (Sotho/Tswana). A person is a person through other people.
Umuntu ngumuntu ngabantu (Zulu). A person is a person through other people.
Umntu ngumntu ngabantu (Xhosa). A person is a person through other people.
Munhu munhu nevanhu (Shona). A person is a person through other people.
Ndiri nekuti tiri (Shona). I am because we are.
History of the concept in African written sources
Although ubuntu has been in existence in orature (oral literature), it appeared in South African written sources from as early as the mid-19th century. Reported translations covered the semantic field of "human nature, humanness, humanity; virtue, goodness, kindness". Grammatically, the word combines the root -ntʊ̀ "person, human being" with the class 14 ubu- prefix forming abstract nouns, so that the term is exactly parallel in formation to the abstract noun humanity.
The concept was popularised in terms of a "philosophy" or "world view" (as opposed to a quality attributed to an individual) beginning in the 1950s, notably in the writings of Jordan Kush Ngubane published in the African Drum magazine. From the 1970s, the ubuntu began to be described as a specific kind of "African humanism". Based on the context of Africanisation propagated by the political thinkers in the 1960s period of decolonisation, ubuntu was used as a term for a specifically African (or Southern African) kind of humanism found in the context of the transition to majority rule in Zimbabwe and South Africa.
The first publication dedicated to ubuntu as a philosophical concept appeared in 1980, Hunhuism or Ubuntuism: A Zimbabwe Indigenous Political Philosophy (hunhu being the Shona equivalent of Nguni ubuntu) by Stanlake J. W. T. Samkange. Hunhuism or Ubuntuism is presented as political ideology for the new Zimbabwe, as Southern Rhodesia attained independence from the United Kingdom.
The concept was used in South Africa in the 1990s as a guiding ideal for the transition from apartheid to majority rule. The term appears in the Epilogue of the Interim Constitution of South Africa (1993), "there is a need for understanding but not for vengeance, a need for reparation but not for retaliation, a need for ubuntu but not for victimisation".
In South Africa, it has come to be used as a contested term for a kind of humanist philosophy, ethic, or ideology, also known as Ubuntuism propagated in the Africanisation (transition to majority rule) process of these countries during the 1980s and 1990s. New research has begun to question the exclusive "humanism" framing, and thus to suggest that ubuntu can have a "militaristic" angle - an ubuntu for warriors.
Since the transition to democracy in South Africa with the Nelson Mandela presidency in 1994, the term has become more widely known outside of Southern Africa, notably popularised to English-language readers through the ubuntu theology of Desmond Tutu. Tutu was the chairman of the South African Truth and Reconciliation Commission (TRC), and many have argued that ubuntu was a formative influence on the TRC.
By country
Zimbabwe
In the Shona language, the majority spoken language in Zimbabwe, ubuntu is unhu or hunhu. In Ndebele, it is known as ubuntu. The concept of ubuntu is viewed the same in Zimbabwe as in other African cultures. The Shona phrase munhu munhu nekuda kwevanhu means a person is human through others while ndiri nekuti tiri means I am because we are.
Stanlake J. W. T. Samkange (1980) highlights the three maxims of Hunhuism or Ubuntuism that shape this philosophy: The first maxim asserts that 'To be human is to affirm one's humanity by recognizing the humanity of others and, on that basis, establish respectful human relations with them.' And 'the second maxim means that if and when one is faced with a decisive choice between wealth and the preservation of the life of another human being, then one should opt for the preservation of life'. The third 'maxim' as a 'principle deeply embedded in traditional African political philosophy' says 'that the king owed his status, including all the powers associated with it, to the will of the people under him'.
South Africa
Ubuntu: "I am what I am because of who we all are." (From a definition offered by Liberian peace activist Leymah Gbowee.)
Archbishop Desmond Tutu offered a definition in a 1999 book:
Tutu further explained Ubuntu in 2008:
Nelson Mandela explained Ubuntu as follows:
Tim Jackson refers to Ubuntu as a philosophy that supports the changes he says are necessary to create a future that is economically and environmentally sustainable. Judge Colin Lamont expanded on the definition during his ruling on the hate speech trial of Julius Malema:
At Nelson Mandela's memorial, United States President Barack Obama spoke about Ubuntu, saying,
Malawi
In Malawi, the same philosophy is called "uMunthu" in the local Chewa language.
According to the Catholic Diocese of Zomba bishop Rt. Rev. Fr. Thomas Msusa, “The African worldview is about living as one family, belonging to God”. Msusa noted that in Africa “We say ‘I am because we are’, or in Chichewa kali kokha nkanyama, tili awiri ntiwanthu (when you are on your own you are as good as an animal of the wild; when there are two of you, you form a community).”
The philosophy of uMunthu has been passed on through proverbs such as Mwana wa mnzako ngwako yemwe, ukachenjera manja udya naye (your neighbor's child is your own, his/her success is your success too). Some notable Malawian uMunthu philosophers and intellectuals who have written about this worldview are Augustine Musopole, Gerard Chigona, Chiwoza Bandawe, Richard Tambulasi, Harvey Kwiyani and Happy Kayuni. This includes Malawian philosopher and theologist Harvey Sindima’s treatment of uMunthu as an important African philosophy is highlighted in his 1995 book ‘Africa’s Agenda: The legacy of liberalism and colonialism in the crisis of African values’. In film, the English translation of the proverb lent its hand to forming the title of Madonna's documentary, I Am Because We Are about Malawian orphans.
"Ubuntu diplomacy"
In June 2009, in her swearing-in remarks as US Department of State Special Representative for Global Partnerships, Global Partnership Initiative, Office of the Secretary of State (served 18 June 2009 – 10 October 2010), Elizabeth Frawley Bagley discussed ubuntu in the context of American foreign policy, stating: "In understanding the responsibilities that come with our interconnectedness, we realize that we must rely on each other to lift our World from where it is now to where we want it to be in our lifetime, while casting aside our worn out preconceptions, and our outdated modes of statecraft." She then introduced the notion of "Ubuntu Diplomacy" with the following words:
Ubuntu education philosophy
In education, ubuntu has been used to guide and promote African education, and to decolonise it from western educational philosophies. Ubuntu education uses the family, community, society, environment and spirituality as sources of knowledge but also as teaching and learning media. The essence of education is family, community, societal and environmental well being. Ubuntu education is about learners becoming critical about their social conditions. Interaction, participation, recognition, respect and inclusion are important aspects of ubuntu education. Methods of teaching and learning include groups and community approaches. The objectives, content, methodology and outcomes of education are shaped by ubuntu.
Ubuntu social work, welfare and development
This refers to Afrocentric ways of providing a social safety net to vulnerable members of society. Common elements include collectivity. The approach helps to 'validate worldview and traditions suppressed by Western Eurocentric cultural hegemony'. It is against materialism and individualism. It looks at an individual person holistically. The social interventions done by social workers, welfare workers and development workers should strengthen, not weaken, families, communities, society, the environment and people's spirituality. These are the five pillars of ubuntu intervention: family, community, society, environment and spirituality. Ubuntu is the current theme of the Global Agenda for Social Work and Social Development and represents the highest level of global messaging within the social work profession for the years 2020–2030. Utilising the biopsychosocial and ecological systems approaches, ubuntu is a philosophy that is applicable in clinical social work in mental health.
Ubuntu research philosophy
Ubuntu can guide research objectives, ethics and methodology. Using the ubuntu research approach provides researchers with an African-oriented tool that decolonises the research agenda and methodology. The objectives of ubuntu research are to empower families, communities and society at large. In doing ubuntu research, the position of the researcher is important because it helps create research relationships. The agenda of the research belongs to the community, and true participation is highly valued. Ujamaa, meaning pulling together or collaboration, is also valued.
Ubuntu moral philosophy or ubuntu morality
According to this philosophy, 'actions are right roughly insofar as they are a matter of living harmoniously with others or honouring communal relationships', and 'one's ultimate goal should be to become a full person, a real self or a genuine human being'. Ukama, i.e. relationships, are important. Among the Shona people, for example, when a person dies, his or her property is shared amongst relatives, and there are culturally approved ways of doing this; the practice is called kugova. Samkange's (1980) maxim on morality says: “If and when one is faced with a decisive choice between wealth and the preservation of the life of another human being, then one should opt for the preservation of life”.
Ubuntu political and leadership philosophy
Samkange (1980) said no foreign political philosophy can be useful in a country more than the indigenous philosophies. "Is there a philosophy or ideology indigenous to (a) country that can serve its people just as well, if not better than, foreign ideologies?", asked Samkange in the book Hunhuism or Ubuntuism. His maxim for leadership is “The king owes his status, including all the powers associated with it, to the will of the people under him”.
Ubuntu social justice, criminal justice and jurisprudence
Ubuntu justice has elements different from western societies: it values repairing relationships. Ubuntu justice emphasises these elements:
Deterrence which can be done socially, physically, economically or spiritually
Returning and Replacement - meaning bring back what has been stolen, replacing it or compensating. In Shona language this is called kudzora and kuripa
Apology, Forgiveness and Reconciliation (restoration of ukama or relations) after meeting the above
Warnings and Punishments (retribution) from leaders and elders if the above have not been achieved or ignored
Warnings and Punishments from spiritual beings if the above have not been met. In Shona culture, these are called jambwa and ngozi.
Families, and at times the community, are involved in the process of justice.
In popular culture
Ubuntu was a major theme in John Boorman's 2004 film In My Country. Former US president Bill Clinton used the term at the 2006 Labour Party conference in the UK to explain why society is important. The Boston Celtics, the 2008 NBA champions, have chanted "ubuntu" when breaking a huddle since the start of the 2007–2008 season.
At the 2002 UN World Summit on Sustainable Development (WSSD), there was an Ubuntu Village exposition centre. Ubuntu was the theme of the 76th General Convention of the American Episcopal Church. The logo includes the text "I in You and You in Me".
In October 2004, Mark Shuttleworth, a South African entrepreneur and owner of the UK-based company Canonical Ltd., released a computer operating system based on Debian GNU/Linux and later established the Ubuntu Foundation to support it. He named the Linux distribution Ubuntu.
In film, the English translation of the proverb lent its hand to forming the title of pop singer Madonna's documentary, I Am Because We Are about Malawian orphans.
A character in the 2008 animated comedy The Goode Family is named Ubuntu.
Ubuntu was the title and theme of an EP released by British band Clockwork Radio in 2012.
Ubuntu was the title of an EP released by American rapper Sage Francis in 2012.
Ubuntu was chosen as the name of a clan of meerkats in the 2021 season of Meerkat Manor: Rise of the Dynasty.
See also
Traditional African religions
African philosophy
Bantu peoples
Nguni languages
Africanization
Decolonisation
Ethic of reciprocity
Harambee (Kenyan/Swahili concept)
Humanity (virtue)
Negotiations to end apartheid in South Africa
Pan-Africanism
Ubuntu theology
Universalism
Social construction
Footnotes
References
Further reading
Chasi, Colin (2021). Ubuntu for Warriors Trenton, NJ: Africa World Press.
Mboti, N. (2015). "May the Real Ubuntu Please Stand Up?" Journal of Media Ethics 30(2), pp. 125–147.
Battle, Michael (2007). Reconciliation: The ubuntu theology of Desmond Tutu. Pilgrim Press.
Eze, Michael Onyebuchi (2017). "I am Because You Are: Cosmopolitanism in the Age of Xenophobia", Philosophical Papers, 46:1, 85-109
Eze, Michael Onyebuchi (2010). Intellectual history in contemporary South Africa. Palgrave Macmillan. .
Eze, Michael Onyebuchi (2008). "What is African Comunitarianism? Against consensus as a regulative Ideal", South African Journal of Philosophy, Vol. 27:4, pp. 386–399
Forster, Dion (2006). Self validating consciousness in strong artificial intelligence: An African theological contribution. Pretoria: Doctoral Dissertation, University of South Africa / UNISA, an extensive and detailed discussion of ubuntu in chapters 5–6.
Forster, Dion (2006). Identity in relationship: The ethics of ubuntu as an answer to the impasse of individual consciousness (Paper presented at the South African science and religion Forum – Published in the book The impact of knowledge systems on human development in Africa. du Toit, CW (ed.), Pretoria, Research institute for Religion and Theology (University of South Africa) 2007:245–289).Pretoria: UNISA. Dion Forster
Gade, C. B. N. (2017). A Discourse on African Philosophy: A New Perspective on Ubuntu and Transitional Justice in South Africa. New York: Lexington Books.
Gade, C. B. N. (2011). "The historical development of the written discourses on ubuntu", South African Journal of Philosophy, 30(3), 303–329 .
Kamwangamalu, Nkonko M. (2014). Ubuntu in South Africa: A sociolinguistic perspective to a pan-African concept. In Molefi Kete Asante, Yoshitaka Miike, & Jing Yin (eds), The global intercultural communication reader (2nd ed., pp. 226–236). New York, NY: Routledge.
Louw, Dirk J. 1998. "Ubuntu: An African Assessment of the Religious Other". Twentieth World Congress of Philosophy.
Metz, Thaddeus 2007, "Toward an African Moral Theory" (Symposium) S. Afr. J. Philos. 2007, 26(4)
Ramose, Mogobe B. (2003). "The philosophy of ubuntu and ubuntu as a philosophy". In P. H. Coetzee & A. P. J. Roux (eds), The African philosophy reader (2nd ed., pp. 230–238). New York/London: Routledge.
Samkange, S., & T. M. Samkange (1980). Hunhuism or ubuntuism: A Zimbabwe Indigenous Political Philosophy. Salisbury [Harare]: Graham Publishing, . 106pp. Paperback
Education. Rotterdam: Sense Publishers, pp. 27–38. https://www.sensepublishers.com/catalogs/bookseries/other-books/decolonizing-global-citizenship-education/
Chigangaidze, Robert Kudakwashe. (2021). An exposition of humanistic-existential social work in light of ubuntu philosophy: Towards theorizing ubuntu in social work practice. Journal of Religion & Spirituality in Social Work: Social Thought, 40 (2), 146-165.
External links
Ubuntu Party
Ubuntu Planet
Magolego, Melo. 2013. "Ubuntu in Western Society", M&G Thought Leader Blog
Sonal Panse, Ubuntu – African Philosophy (buzzle.com)
Sean Coughlan, "All you need is ubuntu", BBC News Magazine, Thursday, 28 September 2006.
A. Onomen Asikele, Ubuntu Republics of Africa (2011)
Humanism
Social ethics
Ethical schools and movements
Decolonization
African philosophy
Pan-Africanism
African and Black nationalism
Politics of South Africa
Politics of Zimbabwe
Articles containing video clips
Bantu
Political terminology in South Africa
Windows Live Devices
Windows Live Devices was an online device-management service, part of Windows Live, that allowed users to centrally access and manage the synchronization of files stored on their computers and mobile devices, as well as other peripherals such as digital photo frames. Windows Live Devices also allowed users to remotely access their computers from the internet using a web browser.
The service integrated tightly with Windows Live Mesh to allow files and folders on two or more computers to be kept in sync with each other, as well as with files and folders stored in the cloud on SkyDrive (now OneDrive). The combination of the three services (Windows Live Devices, Windows Live Mesh, and SkyDrive) was very similar to Microsoft's earlier Live Mesh technology preview platform, and was based on the same underlying technology.
Windows Live Devices was released on June 24, 2010, as part of Windows Live Wave 4 suite of services.
History
Microsoft released its Live Mesh software-as-a-service platform on April 23, 2008, enabling PCs and other devices to connect with each other through the internet using FeedSync technologies. Live Mesh allowed applications, files and folders to be synchronized across multiple devices. Live Mesh was initially released as a technology preview; however, it was soon updated to beta on October 30, 2008 and at the same time incorporated as part of the Azure Services Platform - a "cloud" platform hosted at Microsoft data centers. Live Mesh consisted of the following four elements:
Mesh Operating Environment - the software component of Live Mesh that manages the synchronization relationships between devices and data
Live Desktop - the online cloud storage service that allows synchronized folders to be accessible via a website
Live Mesh Remote Desktop - a software that allow users to remotely access, connect and manage to any of the devices in a synchronization relationship
Live Framework - a REST-based application programming interface for accessing the Live Mesh services over HTTP
In January 2009, the Live Mesh team was merged into the unified Windows Live team at Microsoft so that its incubation technologies would be integrated into Windows Live services. As a result, Live Framework, the developer framework for Live Mesh, was discontinued on September 8, 2009 and was incorporated into Live Services - the central developer resources for all Windows Live services. As part of the merge, the Mesh Operating Environment, or simply the Live Mesh software, was replaced by Windows Live Mesh to support PC-to-PC as well as PC-to-cloud file synchronisation, and the online cloud storage service for Live Mesh - Live Desktop - was replaced by SkyDrive synchronised storage. Windows Live Devices took over the purposes of managing and providing access to all devices in a synchronization relationship, as well as replacing the Live Mesh Remote Desktop to provide remote access to any device in a synchronization relationship.
The Live Mesh technology preview platform supported the management and synchronisation of data between Windows and Mac OS X computers, mobile devices, Windows Home Server, Xbox, Zune, car automation systems, as well as other computer devices and peripherals such as printers, digital cameras, and digital photo frames. These capabilities of Live Mesh were expected to be integrated into Windows Live Devices and Windows Live Mesh in future releases.
See also
Windows Essentials
References
External links
Official website (Archive)
Inside Windows Live
Devices
Data synchronization | Operating System (OS) | 825 |
Osmo (game system)
Osmo is a line of hands-on educational digital/physical games produced by the company Tangible Play, based in Palo Alto, California. Osmo's products are built around its proprietary “Reflective Artificial Intelligence,” a system that uses a stand and a clip-on mirror to allow an iPad or iPhone's front-facing camera to recognize and track objects in the physical play space in front of the device.
Time magazine named Osmo one of the 25 Best Inventions of 2014 and in 2017, Fast Company named Osmo one of the top ten “most innovative companies” in education. Osmo games are available for sale online and in retail outlets such as Target and the Apple Store. Osmo was acquired by Byju's in January 2019 for $120 million.
Development
Osmo was developed by Tangible Play, a company founded in 2013 by Pramod Sharma and Jérôme Scholler, “two Stanford alums and ex-Googlers with kids.” They were inspired by observing Sharma's daughter, then five years old, interact with an iPad. "She had her face glued to screen, which seems unhealthy and not natural," according to Sharma. The partners created a game system that uses a mirror over the camera to turn the screen into “an interactive partner in physical games.”
Products
Words
Words is a game where players examine on-screen picture clues and then spell out words with tangible letter tiles. According to Common Sense Education, “The range of difficulty means every student can be challenged, and the variety of word packs — and the option to add your own — makes it really versatile for fun and learning.”
Tangram
In a modern version of the classic educational game, children arrange tangible tangram pieces to match shapes they see on the screen. Tangrams are good for developing spatial awareness skills.
Newton
Newton is a physics-based game where players direct small bouncing balls into targeted areas by drawing platforms and ramps, or even by placing physical objects in the playing space. According to The Toy Insider, “It’s kind of like a high-tech version of pinball — super fun!”
Numbers
Numbers is an ocean-themed math game, where players try to pop bubbles and free fish by getting an effective combination of number tiles on the table. GeekDad said, “Seeing it in action feels almost magical–you throw a bunch of tiles out there, and the app uses the camera to read them instantly, displaying them on the screen and adding them up (or multiplying, as the case may be).”
Masterpiece
Masterpiece uses computer vision to analyze any image and translate it into a traceable image. According to VentureBeat, “It’s an app that enables kids and adults to become digital artists and regain confidence in their ability to draw.”
Coding Awbie
Players learn about coding by placing magnetically linking coding blocks in sequences to control a character (Awbie) on an adventure. New Atlas called Coding Awbie “a good way of introducing younger children to the concepts of logic and problem solving.”
Monster
Mo, the monster, takes kids' real-life drawings and incorporates them into his animated world. “The whole thing is then automatically saved as a video clip, which you can share with grandma,” reported Wired magazine. In 2017, Osmo introduced a Spanish-language version of Monster, voiced by actor Jaime Camil.
Pizza Co.
Pizza Co. combines cooking and entrepreneur play with interactive tokens representing ingredients and money. Pizza Co. won a Gold Award from Parents’ Choice, who said the game “immerses children in basic mathematics skills blended with hours of imaginative and cooperative play.”
Coding Jam
Coding Jam teaches coding concepts through the creative act of making music. “An open-ended music studio with dozens of characters and instruments, Coding Jam is intuitive enough for a 5 year old but offers enough complexity for a 10 year old to master and mix intricate compositions,” according to Venturebeat.
Hot Wheels™ MindRacers
Osmo partnered with Mattel to create MindRacers, a game combining real Hot Wheels™ cars with virtual on-screen racetracks. MindRacers is the first Hot Wheels™ product that says it is for both “boys and girls” on the box.
This is the only Osmo game that is compatible solely with the iPad (due to the way the play field is shaped).
References
External links
Educational games
Companies based in Palo Alto, California
Companies established in 2013 | Operating System (OS) | 826 |
Userland
Userland may refer to:
Radio UserLand, a computer program to aid maintaining blogs or podcasts
UserLand Software, a U.S. software company specializing in web applications
UserLAnd Technologies, a mobile app that allows Linux programs to run on mobile devices
User space, operating system software that does not belong in the kernel | Operating System (OS) | 827 |
Windows DNA
Windows DNA, short for Windows Distributed interNet Applications Architecture, is a marketing name for a collection of Microsoft technologies that enable the Windows platform and the Internet to work together. Some of the principal technologies comprising DNA are ActiveX, Dynamic HTML (DHTML) and COM. Windows DNA has been largely superseded by the Microsoft .NET Framework, and Microsoft no longer uses the term. To support web-based applications, Microsoft tried to add Internet features into the operating system using COM. However, developing a web-based application using COM-based Windows DNA is quite complex, because Windows DNA requires the use of numerous technologies and languages that are completely unrelated from a syntactic point of view.
External links
Unraveling Windows DNA at MSDN
Windows DNA at Smart Computing Encyclopedia
Microsoft's DNA Web page in 1999
Windows communication and services | Operating System (OS) | 828 |
IBM System p
The IBM System p is a high-end line of RISC (Power)/UNIX-based servers. It was the successor of the RS/6000 line, and predecessor of the IBM Power Systems server series.
History
The previous RS/6000 line was originally a line of workstations and servers. The first System p server line was named the eServer pSeries in 2000 as part of the e-Server branding initiative.
In 2004, with the advent of the POWER5 processor, the server family was rebranded the eServer p5.
In 2005, following IBM's move to streamline its server and storage brands worldwide, and incorporating the "System" brand with the Systems Agenda, the family was again renamed to System p5. The System p5 now encompassed the IBM OpenPower product line.
In 2007, after the introduction of the POWER6 processor models, the last rename under the System p brand dropped the p (numbered) designation.
In April 2008, IBM announced a rebranding of the System p and its unification with the mid-range System i platform. The resulting product line was called IBM Power Systems.
Hardware and software
Processors
Whereas the previous RS/6000 line used a mix of early POWER and PowerPC processors, when pSeries came along, this had evolved into RS64-III and POWER3 across the board—POWER3 for its excellent floating-point performance and RS64 for its scalability, throughput, and integer performance.
IBM developed the POWER4 processor to replace both POWER3 and the RS64 line in 2001. After that, the differences between throughput and number crunching-optimized systems no longer existed. Since then, System p machines evolved to use the POWER5 but also the PowerPC 970 for the low-end and blade systems.
The last System p systems used the POWER6 processor, such as the POWER6-based System p 570 and the JS22 blade. In addition, during the SuperComputing 2007 (SC07) conference in Reno, IBM introduced a new POWER6-based System p 575 with 32 POWER6 cores at 4.7 GHz and up to 256 GB of RAM with water cooling.
Features
All IBM System p5 and IBM eServer p5 machines support DLPAR (Dynamic Logical Partitioning) with Virtual I/O and Micro-partitioning.
System p generally uses the AIX operating system and, more recently, 64-bit versions of the Linux operating system.
Models
BladeCenter
IBM BladeCenter JS12 (POWER6)
IBM BladeCenter JS22 (POWER6)
IBM BladeCenter JS23 (POWER6)
IBM BladeCenter JS43 (POWER6)
Main line
eServer pSeries
IBM eServer pSeries 610 (7028-6C1 & 6E1)
IBM eServer pSeries 615 (7029-6C3, 7029-6E3) (1~2-core POWER4+ CPU)
IBM eServer pSeries 620 (7025-F80, 6F0 & 6F1) (1~3 2-core RS64-IV CPUs)
IBM eServer pSeries 630 (7028-6C4, 7028-6E4) (1 1-core POWER4 CPU or 1~2 2-core POWER4 CPUs)
IBM eServer pSeries 640 (7026-B80) 1-4 POWER3-II CPUs
IBM eServer pSeries 650 (7038-6M2) 2-8 POWER4 CPUs
IBM eServer pSeries 655 (7039-651) 4-8 POWER4 CPUs
IBM eServer pSeries 660 (7026-H80, 6H0, 6H1, M80 & 6M1)
IBM eServer pSeries 670 (7040-671) 4-16 POWER4 CPUs
IBM eServer pSeries 680 (7017 range)
IBM eServer pSeries 690 (7040-681) 8-32 POWER4 CPUs
The IBM p690 was, at the time of its release in late 2001, the flagship of IBM's high-end Unix servers during the POWER4 era of processors. It was built to run IBM AIX Unix, although it is possible to run a version of Linux minus some POWER4-specific features.
It could support up to 32 POWER4+ processors (1.5, 1.7 or 1.9 GHz) and 1 TB of RAM; a fully configured system weighed well over 1000 kg. It was used in a supercomputer at Forschungszentrum Jülich in 2004, and was discontinued in late 2005.
eServer p5
Released in 2004.
IBM eServer p5 510 Express (9111-510) (1~2-core 1.5GHz POWER5 CPU)
IBM eServer p5 510 (9111-510) (1~2-core 1.65GHz POWER5 CPU)
IBM eServer p5 520 Express (9111-520) (1~2-core 1.5GHz POWER5 CPU)
IBM eServer p5 520 (9111-520) (2-core 1.65GHz POWER5 CPU)
IBM eServer p5 550 Express (9113-550) (1~2 1~2-core 1.5GHz POWER5 CPUs)
IBM eServer p5 550 (9113-550) (1~2 2-core 1.65GHz POWER5 CPUs)
IBM eServer p5 570 Express (9117-570) (1~8 2-core 1.5GHz POWER5 CPUs)
IBM eServer p5 570 (9117-570) (1~8 2-core 1.65GHz or 1.9GHz POWER5 CPUs)
IBM eServer p5 590 (9119-590) (1~4 8-core 1.65GHz POWER5 MCMs)
IBM eServer p5 595 (9119-595) (2, 4, 6 or 8 8-core 1.65GHz or 1.9GHz POWER5 MCMs)
System p5
IBM System p5 185 (7037-A50) (1~2-core PowerPC 970 CPU)
IBM System p5 505 (9115-505) (1~2-core POWER5 or POWER5+ CPU)
IBM System p5 505Q (9115-505) (4-core POWER5+ CPU)
IBM System p5 510 (9110-51A) (1~2 1~2-core POWER5 or POWER5+ CPUs)
IBM System p5 510Q (9110-51A) (1~2 4-core POWER5+ CPUs)
IBM System p5 520 (9131-52A) (1~2-core POWER5+ CPU)
IBM System p5 520Q (9131-52A) (4-core POWER5+ CPU)
IBM System p5 550 (9133-55A) (1~2 2-core POWER5+ CPUs)
IBM System p5 550Q (9133-55A) (1~2 4-core POWER5+ CPUs)
IBM System p5 560Q (9116-561) (1, 2 or 4 4-core POWER5+ CPUs)
IBM System p5 570 (9117-570) (1~8 2-core POWER5+ CPUs)
IBM System p5 575 (9118-575) (8 1~2-core POWER5+ CPUs)
IBM System p5 590 (9119-590) (1~2 16-core POWER5 or POWER5+ processor books)
IBM System p5 595 (9119-595) (1~4 16-core POWER5 or POWER5+ processor books)
System p
IBM System p 520 Express (1, 2 or 4-core POWER6 CPU)
IBM System p 550 Express (1~4 2-core POWER6 CPUs)
IBM System p 560 Express (POWER6)
IBM System p 570 (POWER6)
IBM System p 575 (POWER6)
IBM System p 595 (9119-FHA) (1~8 8-core POWER6 processor books)
System p was rebranded to Power Systems in 2008.
OpenPower
OpenPower was the name of a range of servers in the System p line from IBM. They featured IBM's POWER5 CPUs and ran only 64-bit versions of Linux. IBM's own UNIX variant, AIX, was not supported, since the OpenPower servers were not licensed for that operating system.
There were two models available, with a variety of configurations.
Before 2005, OpenPower belonged to the eServer product line but was eventually rolled into IBM's Power Systems product portfolio.
IBM eServer OpenPower 710 (9123-710) (1~2-core POWER5 CPU)
IBM eServer OpenPower 720 (9124-720) 1-4 POWER5 CPUs
IntelliStation POWER
IBM IntelliStation POWER 265
IBM IntelliStation POWER 275
IBM IntelliStation POWER 185 (PowerPC 970)
IBM IntelliStation POWER 285
BladeCenter
IBM BladeCenter JS20 (PowerPC 970)
IBM BladeCenter JS21 (PowerPC 970)
See also
Web-based System Manager, an AIX management software
IBM Hardware Management Console, a management appliance
Dynamic Logical Partitioning
Linux on Power
IBM IntelliStation POWER
PureSystems
List of IBM products
References
External links
IBM Power Systems product page
IBM's System Agenda
Virtualizing an Infrastructure with System p and Linux
System p | Operating System (OS) | 829 |
Intelligent Resource Director
On IBM mainframes running the z/OS operating system, Intelligent Resource Director (IRD) is software that automates the management of CPU resources and certain I/O resources.
IRD is implemented as a collaboration between Workload Manager (WLM), a component of z/OS, and the PR/SM Logical Partitioning (LPAR) hypervisor, a function of the mainframe hardware.
Major IRD functions are:
Logical CP Management - where IRD dynamically varies logical processors on- and off-line. (This does not apply to zIIPs or zAAPs.)
Weight Management - where IRD dynamically redistributes LPAR weights between members of an LPAR Cluster. (An LPAR Cluster is the set of members of a Parallel Sysplex on a single mainframe footprint.) The total of the weights for the LPAR Cluster remains constant as weights are shifted between the members. (Linux on IBM Z LPARs can also participate in Weight Management.)
CHPID Management - where logical channel paths are moved between members of an LPAR Cluster.
IRD's objective is to optimise the use of computing resources while enabling WLM to meet its workload goals. So, for example, IRD will not vary offline logical processors to the point where doing so would cause workloads to miss their goals.
See also
Workload Manager
Literature
Frank Kyne et al., z/OS Intelligent Resource Director, IBM Redbook, SG24-5952
External links
Official z/OS WLM Homepage
IBM mainframe operating systems
IBM mainframe technology | Operating System (OS) | 830 |
Compatibility mode
A compatibility mode is a software mechanism in which software either emulates an older version of itself or mimics another operating system in order to allow older or incompatible software or files to remain compatible with the computer's newer hardware or software. Examples of software using such a mode include operating systems and Internet Explorer.
Operating systems
A compatibility mode in an operating system is a software mechanism in which a computer's operating system emulates an older processor, operating system, and/or hardware platform in order to allow older software to remain compatible with the computer's newer hardware or software.
This differs from a full-fledged emulator in that an emulator typically creates a virtual hardware architecture on the host system, rather than simply translating the older system's function calls into calls that the host system can understand.
Examples include Classic Mode in Mac OS X and compatibility mode in Microsoft Windows, which both allow applications designed for older versions of the operating system to run. Other examples include Wine to run Windows programs on Linux / OS X and Mono to run .NET programs on various Unix-like systems.
Internet Explorer
"Compatibility View" is a compatibility mode feature of the web browser Internet Explorer in version 8 and later. When active, Compatibility View forces IE to display the webpage in Quirks mode as if the page were being viewed in IE7. When compatibility view is not activated, IE is said to be running in native mode. In IE11, a user can turn on compatibility mode for a web site by clicking the Gears icon and clicking Compatibility View Settings.
IE8+
Internet Explorer 8 was promoted by Microsoft as having stricter adherence to W3C described web standards than Internet Explorer 7. As a result, as in every IE version before it, some percentage of web pages coded to the behavior of the older versions would break in IE8. This would have been a repetition of the situation with IE7 which, while having fixed bugs from IE6, broke pages that used the IE6-specific hacks to work around its non-compliance. This was especially a problem for offline HTML documents, which may not be updatable (e.g. stored on a read-only medium, such as a CD-ROM or DVD-ROM).
To avoid this situation, IE8 implemented a form of version targeting whereby a page could be authored to a specific version of a browser using the X-UA-Compatible declaration either as a meta element or in the HTTP headers.
In order to maintain backwards compatibility, sites can opt into IE7-like handling of content by inserting a specially created meta element into the web page that triggers compatibility mode in the browser, using:
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
A newer version of the browser than the page was coded for would emulate the behavior of the older version, so that the assumptions the page made about the browser's behavior hold true.
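The same opt-in can also be delivered server-side instead of in the page markup. As a minimal illustration (how the header is configured depends on the web server being used), the equivalent HTTP response header is:
X-UA-Compatible: IE=EmulateIE7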
Microsoft proposed that a page with a doctype that triggers standards mode (or almost standards mode) in IE7 would, by default, trigger IE7-like behavior, called "standards mode" (now called "strict mode") in IE8 and future versions of IE. The new features of IE8 are enabled to trigger what Microsoft called the "IE8 standards mode" (now called "standards mode"). Doctypes that trigger quirks mode in IE7 will continue to do so in IE8.
Peter Bright of Ars Technica claimed that the idea of using a meta tag to pick a specific rendering mode fundamentally misses the point of standards-based development but positioned the issue as one of idealism versus pragmatism in web development, noting that not all of the Web is maintained, and that "demanding that web developers update sites to ensure they continue to work properly in any future browser version is probably too much to ask."
The result for IE8 Beta 1 was that it could render pages in three modes: "Quirks," "Strict," and "Standard." When there is an old DOCTYPE or no DOCTYPE at all, IE renders the page like IE5 would (quirks mode). When a special meta element or its corresponding HTTP header is included in a web page, IE8 renders that page like IE7 would (strict mode). Otherwise, IE8 renders pages with its own engine (standard mode). Users can switch between the three modes with a few clicks. The release of Internet Explorer 8 Beta 1 revealed that many web sites did not work in this new standards mode.
Microsoft maintains a list of websites that have been reported to have problems in IE8's standards mode, known as the compatibility view list. When a user enables this list IE8 will render the websites in the list using its compatibility view mode. The list is occasionally updated to add newly reported problematic websites, as well as to remove websites whose owners have requested removal. The Internet Explorer team also tests the websites on the list for compatibility issues and removes those where none are found.
See also
Windows XP Mode
Legacy mode
Backward compatibility
Quirks mode
Program information file (PIF)
References
Internet Explorer
Interoperability
Windows software
MacOS software
Linux emulation software | Operating System (OS) | 831 |
Universal Windows Platform apps
Universal Windows Platform (UWP) apps (formerly Windows Store apps and Metro-style apps) are applications that can be used across all compatible Microsoft Windows devices, including personal computers (PCs), tablets, smartphones, Xbox One, Microsoft HoloLens, and Internet of Things. UWP software is primarily purchased and downloaded via the Microsoft Store.
Nomenclature
Starting with Windows 10, Windows initially used "Windows app" to refer to a UWP app. Any app installed from Microsoft Store (formerly Windows Store) was initially labeled a "Trusted Windows Store app" and later a "Trusted Microsoft Store app." Other computer programs running on a desktop computer are "desktop apps." Starting with Windows 10 version 1903, Windows indiscriminately refers to all of them as "Apps."
The terms "Universal Windows Platform" (or "UWP") and "UWP app" only appear on Microsoft documentation for its developers. Microsoft started to retrospectively use "Windows Runtime app" to refer to the precursors of UWP app, for which there was no unambiguous name before.
In Windows 8.x
Windows software first became available under the name "Metro-style apps" when the Windows Store opened in 2012 and were marketed with Windows 8.
Look and feel
In Windows 8.x, Metro-style apps do not run in a window. Instead, they either occupy the entire screen or are snapped to one side, in which case they occupy the entire height of the screen but only part of its width. They have no title bar, system menu, window borders or control buttons. Command interfaces like scroll bars are usually hidden at first. Menus are located in the "settings charm." Metro-style apps use the UI controls of Windows 8.x and typically follow Windows 8.x UI guidelines, such as horizontal scrolling and the inclusion of edge-UIs, like the app bar.
In response to criticism from customers, Windows 8.1 added a title bar that is hidden unless the user moves the mouse cursor to the top of the screen. The "hamburger" menu button on the title bar gives access to the charms.
Distribution and licensing
For most users, the only point of entry of Metro-style apps is Windows Store. Enterprises operating a Windows domain infrastructure may enter into a contract with Microsoft that allows them to sideload their line-of-business Metro-style apps, circumventing Windows Store. Also, major web browser vendors such as Google and Mozilla Foundation are selectively exempted from this rule; they are allowed to circumvent Microsoft guidelines and Windows Store and run a Metro-style version of themselves if the user chooses to make their product the default web browser.
Metro-style apps are the only third-party apps that run on Windows RT. Traditional third-party apps do not run on this operating system.
Multiple copies
Before Windows 8, computer programs were identified by their static icons, and the Windows taskbar was responsible for representing every app that had a window while it ran. Metro-style apps, however, are identified by their "tiles," which can show their icon as well as other dynamic content. In addition, in Windows 8 and Windows 8.1 RTM, they are not shown on the Windows taskbar when they run, but on a dedicated app switcher on the left side of the screen. Windows 8.1 Update added taskbar icons for Metro-style apps.
There is no set limit on how many copies of desktop apps can run simultaneously. For example, one user may run as many copies of programs such as Notepad, Paint or Firefox as the system resources support. (Some desktop apps, such as Windows Media Player, are designed to allow only a single instance, but this is not enforced by the operating system.) However, in Windows 8, only one copy of a Metro-style app may run at any given time; invoking the app brings the running instance to the front. True multi-instancing of these apps was not available until Windows 10 version 1803 (released in May 2018).
In Windows 10
Windows 10 brings significant changes to how UWP apps look and work.
Look and feel
How UWP apps look depends on the app itself. UWP apps built specifically for Windows 10 typically have a distinct look and feel, as they use new UI controls that look different from those of previous versions of Windows. The exception to this are apps that use custom UI, which is especially the case with video games. Apps designed for Windows 8.x look significantly different from those designed for Windows 10.
UWP apps can also look almost identical to traditional desktop apps, using the same legacy UI controls from Windows versions dating back to Windows 95. These are legacy desktop apps that are converted to the UWP apps and distributed using the APPX file format.
Multitasking
In Windows 10, most UWP apps, even those designed for Windows 8.x, are run in floating windows, and users use the Windows taskbar and Task View to switch between both UWP apps and desktop apps. Windows 10 also introduced "Continuum" or "Tablet Mode". This mode is by default disabled on desktop computers and enabled on tablet computers, but desktop users can switch it on or off manually. When the Tablet Mode is off, apps may have resizable windows and visible title bars. When the Tablet Mode is enabled, resizable apps use the windowing system similar to that of Metro-style apps on Windows 8.x in that they are forced to either occupy the whole screen or be snapped to one side.
UWP apps in Windows 10 can open in multiple windows. Microsoft Edge, Calculator, and Photos are examples of apps that allow this. Windows 10 v1803 (released in May 2018) added true multi-instancing capabilities, so that multiple independent copies of a UWP app can run.
Licensing and distribution
UWP apps can be downloaded from Windows Store or sideloaded from another device. The sideloading requirements were reduced significantly from Windows 8.x to 10, but the app must still be signed by a trusted digital certificate that chains to a root certificate.
Lifecycle
Metro-style apps are suspended when they are closed; suspended apps are terminated automatically as needed by a Windows app manager. Dynamic tiles, background components and contracts (interfaces for interacting with other apps) may require an app to be activated before a user starts it.
For six years, invoking an arbitrary Metro-style app or UWP app from the command line was not supported; this feature was first introduced in the Insider build 16226 of Windows 10, which was released on 21 June 2017.
Development
Windows Runtime
Traditionally, Windows software is developed using the Windows API, to which it has access with no arbitrary restrictions. Developers were free to choose their own programming languages and development tools. Metro-style apps can only be developed using the Windows Runtime (WinRT). (Note that not every app using WinRT is a Metro-style app.) A limited subset of WinRT is also available to conventional desktop apps. Calling a forbidden API disqualifies the app from appearing on Windows Store.
Metro-style apps can only be developed using Microsoft's own development tools. According to Allen Bauer, Chief Scientist of Embarcadero Technologies, there are APIs that every computer program must call but Microsoft has forbidden them, except when the call comes from Microsoft's own Visual C++ runtime.
Universal apps
Apps developed to work intrinsically on smartphones, personal computers, video game consoles and HoloLens are called universal apps. This is accomplished by using the universal app API, first introduced in Windows 8.1 and Windows Phone 8.1. Visual Studio 2013 with Update 2 could be used to develop these apps. Windows 10 introduced Universal Windows Platform (UWP) 10 for developing universal apps. Apps that take advantage of this platform are developed with Visual Studio 2015 or later. Older Metro-style apps for Windows 8.1, Windows Phone 8.1 or for both (universal 8.1) need modifications to migrate to this platform.
UWP is not distinct from Windows Runtime; rather, it is an extension of it. Universal apps no longer indicate having been written for a specific OS in their manifest; instead, they target one or more device families, e.g. desktop, mobile, Xbox or Internet of Things (IoT), and react to the capabilities that are available on the device. A universal app may run on both a small mobile phone and a tablet and provide a suitable experience on each. A universal app running on a mobile phone may start behaving the way it would on a tablet when the phone is connected to a monitor or a suitable docking station.
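As an illustrative sketch of how device-family targeting appears in a UWP package manifest (the version numbers below are placeholders, not recommendations):
<Dependencies>
  <TargetDeviceFamily Name="Windows.Universal" MinVersion="10.0.10240.0" MaxVersionTested="10.0.18362.0" />
</Dependencies>
Replacing or supplementing "Windows.Universal" with a more specific family such as "Windows.Desktop" or "Windows.Xbox" narrows the set of devices on which the package can be deployed.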
APPX
APPX is the file format used to distribute and install apps on Windows 8.x and 10, Windows Phone 8.1, Windows 10 Mobile, Xbox One, Hololens, and Windows 10 IoT Core. Unlike legacy desktop apps, APPX is the only installation system allowed for UWP apps. It replaces the XAP file format on Windows Phone 8.1, in an attempt to unify the distribution of apps for Windows Phone and Windows 8. APPX files are only compatible with Windows Phone 8.1 and later versions, and with Windows 8 and later versions.
The Windows Phone 8.x Marketplace allows users to download APPX files to an SD card and install them manually. In contrast, sideloading is prohibited on Windows 8.x unless the user has a developer license or the device is joined to a business domain.
Security
Traditional Windows software has the power to use and change its ecosystem however it wants. Windows user account rights, User Account Control and antivirus software attempt to keep this ability in check and notify the user when an app tries to use it, possibly for malicious purposes. Metro-style apps, however, are sandboxed and cannot permanently change the Windows ecosystem. They need permission to access hardware devices such as the webcam and microphone, and their file system access is restricted to user folders, such as My Documents. Microsoft further moderates these programs and may remove them from the Windows Store if they are discovered to have security or privacy issues.
See also
Windows App Studio
WinJS
References
External links
Index of Windows 10 apps
.NET
Computer-related introductions in 2012
Executable file formats
Windows APIs
Windows architecture
Windows technology | Operating System (OS) | 832 |
Authoring system
An authoring system is a program that has pre-programmed elements for the development of interactive multimedia software titles. Authoring systems can be defined as software that allows its user to create multimedia applications for manipulating multimedia objects.
In the development of educational software, an authoring system is a program that allows a non-programmer, usually an instructional designer or technologist, to easily create software with programming features. The programming features are built in but hidden behind buttons and other tools, so the author does not need to know how to program. Generally, authoring systems provide extensive graphics, interactivity, and other tools that educational software needs. The three main components of an authoring system are content organization, control of content delivery, and type(s) of assessment. Content organization allows the user to structure and sequence the instructional content and media. Control of content delivery refers to the user's ability to set the pace at which the content is delivered and to determine how learners engage with it. Assessment refers to the ability to test learning outcomes within the system, usually in the form of tests, discussions, assignments, and other activities which can be evaluated.
An authoring system usually includes an authoring language, a programming language built (or extended) with functionality for representing the tutoring system. The functionality offered by the authoring language may be programming functionality for use by programmers or domain representation functionality for use by subject experts. There is overlap between authoring languages with domain representation functionality and domain-specific languages.
Authoring language
An authoring language is a programming language used to create tutorials, computer-based training courseware, websites, CD-ROMs and other interactive computer programs. Authoring systems (packages) generally provide high-level visual tools that enable a complete system to be designed without writing any programming code, although the authoring language is there for more in-depth usage.
Examples of authoring languages
DocBook
DITA
PILOT
TUTOR
Examples of Web authoring languages
Bigwig
See also
Chamilo
Hollywood (programming language) with its Hollywood Designer graphical interface.
Learning management system
SCORM
Experience API
Web design program
XML editor
Game engine
References
External links
Authoring system at IFWiki
Locatis, C., Ullmer, E., Carr, V. et al. "Authoring systems: An introduction and assessment." J. Comput. High. Educ. 3, 23–35 (1991). https://doi.org/10.1007/BF02942596
Kearsley, Greg. "Authoring Systems in Computer Based Education." Communications of the ACM, Volume 25, Issue 7, July 1982, pp 429–437. https://doi.org/10.1145/358557.358569
Learning
E-learning
Educational software | Operating System (OS) | 833 |
Exynos
Exynos, formerly Hummingbird (), is a series of ARM-based system-on-chips developed by Samsung Electronics' System LSI division and manufactured by Samsung Foundry. It is a continuation of Samsung's earlier S3C, S5L and S5P line of SoCs.
Exynos is mostly based on ARM Cortex cores, with the exception of some high-end SoCs that featured Samsung's proprietary "M" series core design; from 2021 onwards, however, even the flagship high-end SoCs feature ARM Cortex cores.
History
In 2010, Samsung launched the Hummingbird S5PC110 (now Exynos 3 Single) in its Samsung Galaxy S smartphone, which featured a licensed ARM Cortex-A8 CPU. This ARM Cortex-A8 was code-named Hummingbird. It was developed in partnership with Intrinsity using their FastCore and Fast14 technology.
In early 2011, Samsung first launched the Exynos 4210 SoC in its Samsung Galaxy S II mobile smartphone. The driver code for the Exynos 4210 was made available in the Linux kernel and support was added in version 3.2 in November 2011.
On 29 September 2011, Samsung introduced the Exynos 4212 as a successor to the 4210; it features a higher clock frequency and "50 percent higher 3D graphics performance over the previous processor generation". Built with a 32 nm high-κ metal gate (HKMG) low-power process, it promises a "30 percent lower power-level over the previous process generation".
On 30 November 2011, Samsung released information about its upcoming SoC with a dual-core ARM Cortex-A15 CPU, which was initially named "Exynos 5250" and later renamed the Exynos 5 Dual. This SoC has a memory interface providing 12.8 GB/s of memory bandwidth and support for USB 3.0 and SATA 3; it can decode full 1080p video at 60 fps while simultaneously driving a WQXGA (2560 × 1600) mobile display as well as 1080p over HDMI. The Samsung Exynos 5 Dual was used in a 2015 prototype supercomputer, while the end product was to use a server chip from another vendor.
On 26 April 2012, Samsung released the Exynos 4 Quad, which powers the Samsung Galaxy S III and Samsung Galaxy Note II. The Exynos 4 Quad SoC uses 20% less power than the SoC in the Samsung Galaxy S II. Samsung also renamed several SoCs: the Exynos 3110 became the Exynos 3 Single, the Exynos 4210 and 4212 became the Exynos 4 Dual 45 nm and Exynos 4 Dual 32 nm respectively, and the Exynos 5250 became the Exynos 5 Dual.
In 2010, Samsung founded a design center in Austin called Samsung's Austin R&D Center (SARC) and hired many ex-AMD, ex-Intel, ex-ARM and other industry veterans. SARC develops high-performance, low-power, complex CPU and system IP (coherent interconnect and memory controller) architectures and designs. In 2012, Samsung began development of GPU IP called "S-GPU". After a three-year design cycle, SARC's first custom CPU core, called the M1, was released in the Exynos 8890 in 2016. In 2017, the San Jose Advanced Computing Lab (ACL) was opened to continue custom GPU IP development. Samsung's custom CPU cores were named Mongoose for four generations (M1 through M4), and Exynos SoCs with such cores were never on par in power efficiency or performance with their Qualcomm Snapdragon equivalents.
On 3 June 2019, AMD and Samsung announced a multi-year strategic partnership in mobile graphics IP based on AMD Radeon GPU IP. NotebookCheck reported that Samsung was targeting 2021 for its first SoC with AMD Radeon GPU IP; however, AnandTech reported 2022. In August 2019, during AMD's Q2 2019 earnings call, AMD stated that Samsung planned to launch SoCs with AMD graphics IP in roughly two years.
On 1 October 2019, rumors emerged that Samsung had laid off its custom CPU core teams at SARC. On 1 November 2019, Samsung filed a WARN letter with the Texas Workforce Commission, giving notice of upcoming layoffs of its SARC CPU team and the termination of its custom CPU core development. SARC and ACL would still continue development of custom SoC, AI, and GPU technology.
In June 2021, there was news that Samsung could hire engineers from AMD and Apple to form a new custom architecture team.
List of ARMv7 Exynos SoCs
List of entry-level and mid-range ARMv8 Exynos SoCs
Exynos 7500 series, 7870 and 7880 (2015-17)
Exynos 7872, 7884 series, 7885 and 7904 (2018/19)
Exynos 9600 series (2019)
Exynos 800 series (2020)
List of high-end ARMv8 Exynos SoCs
Exynos 5433 and 7420 (2014/15)
Exynos 8800 series (2016/17)
Exynos 9800 series (2018/19)
Exynos 900 series (2020)
Exynos 1000/2000 series (2021)
Exynos 1080
5nm (5LPE) Samsung process
CPU features
1 + 3 + 4 cores (2.8 GHz Cortex-A78 + 2.6 GHz Cortex-A78 + 2.0 GHz Cortex-A55)
GPU features
Mali G78 MP10
Vulkan 1.2
DSP features
H.265/HEVC, H.264, VP9
HDR10+
ISP features
-
Modem and wireless features
Bluetooth 5.2 (from 5.0 on Exynos 990)
Exynos Modem Integrated
LTE Category 24/18
6CA, 256-QAM
5G NR Sub-6 (DL = 5100 Mbit/s and UL = 1920 Mbit/s)
5G NR mmWave (DL = 7350 Mbit/s and UL = 3670 Mbit/s)
Exynos 2100
5nm (5LPE) Samsung process
6MB System Cache
CPU features
1 + 3 + 4 cores (2.91 GHz Cortex-X1 + 2.81 GHz Cortex-A78 + 2.2 GHz Cortex-A55)
19% better perf on single thread
33% better perf on multi threads
GPU features
Mali G78 MP14 at 854 MHz
40% better perf
Vulkan 1.2
DSP features
8K30 & 4K120 encode & 8K60 decode
Added support for AV1 in 8K60 (decode support claimed but not implemented, so this claim is unverified)
H.265/HEVC, H.264, VP9
HDR10+
ISP features
Single: 200MP or Dual: 32MP+32MP
Up to quad simultaneous camera
Modem and wireless features
Bluetooth 5.2 (from 5.0 on Exynos 990)
Exynos Modem Integrated
LTE Category 24/18
6CA, 256-QAM
5G NR Sub-6 (DL = 5100 Mbit/s and UL = 1920 Mbit/s)
5G NR mmWave (DL = 7350 Mbit/s and UL = 3670 Mbit/s)
List of high-end ARMv9 Exynos SoCs
Exynos 2000 series (2022)
List of ARMv8 Exynos Wearable SoCs
List of Exynos modems
Exynos Modem 303
Supported modes LTE FDD, LTE TDD, WCDMA and GSM/EDGE
LTE Cat. 6
Downlink: 2CA 300Mbit/s 64-QAM
Uplink: 100Mbit/s 16-QAM
28 nm HKMG Process
Paired with: Exynos 5 Octa 5430 and Exynos 7 Octa 5433
Devices using: Samsung Galaxy Note 4, Samsung Galaxy Note Edge and Samsung Galaxy Alpha
Exynos Modem 333
Supported modes LTE FDD, LTE TDD, WCDMA, TD-SCDMA and GSM/EDGE
LTE Cat. 10
Downlink: 3CA 450Mbit/s 64-QAM
Uplink: 2CA 100Mbit/s 16-QAM
28 nm HKMG Process
Paired with: Exynos 7 Octa 7420
Devices using: Samsung Galaxy S6, Samsung Galaxy Note 5 and Samsung Galaxy A8 (2016)
Exynos Modem 5100
Supported Modes: 5G NR Sub-6 GHz, 5G NR mmWave, LTE-FDD, LTE-TDD, HSPA, TD-SCDMA, WCDMA, CDMA, GSM/EDGE
Downlink Features:
8CA (Carrier Aggregation) in 5G NR
8CA 1.6Gbit/s in LTE Cat. 19
4x4 MIMO
FD-MIMO
Up to 256-QAM in sub-6 GHz, 2Gbit/s
Up to 64-QAM in mmWave, 6Gbit/s
Uplink Features:
2CA (Carrier Aggregation) in 5G NR
2CA in LTE
Up to 256-QAM in sub-6 GHz
Up to 64-QAM in mmWave
Process: 10 nm FinFET Process
Paired with: Exynos 9820 and Exynos 9825
Devices using: Samsung Galaxy S10 and Samsung Galaxy Note 10
Exynos Modem 5123
Supported Modes: 5G NR Sub-6 GHz, 5G NR mmWave, LTE-FDD, LTE-TDD, HSPA, TD-SCDMA, WCDMA, CDMA, GSM/EDGE
Downlink Features:
8CA 3.0Gbit/s in LTE Cat. 24
Up to 256-QAM in sub-6 GHz, 5.1Gbit/s
Up to 64-QAM in mmWave, 7.35Gbit/s
Uplink Features:
2CA 422 Mbit/s in LTE Cat. 22
Up to 256-QAM in sub-6 GHz
Up to 64-QAM in mmWave
Process: 7 nm FinFET Process
Paired with: Exynos 990, Exynos 2100, Exynos 2200, Google Tensor
Devices using: Samsung Galaxy S20, Samsung Galaxy Note 20, Samsung Galaxy S21, Samsung Galaxy S22, and Google Pixel 6
List of Exynos IoT SoCs
Exynos i T200
CPU: Cortex-M4 @ 320 MHz, Cortex-M0+ @ 320 MHz
WiFi: 802.11b/g/n Single band (2.4 GHz)
On-chip Memory: SRAM 1.4MB
Interface: SDIO/ I2C/ SPI/ UART/ PWM/ I2S
Front-end Module: Integrated T/R switch, Power Amplifier, Low Noise Amplifier
Security: WEP 64/128, WPA, WPA2, AES, TKIP, WAPI, PUF (Physically Unclonable Function)
Exynos i S111
CPU: Cortex-M7 200 MHz
Modem: LTE Release 14 NB-IoT
Downlink: 127 kbit/s
Uplink: 158 kbit/s
On-chip Memory: SRAM 512KB
Interface: USI, UART, I2C, GPIO, eSIM I/F, SDIO(Host), QSPI(Single/Dual/Quad IO mode), SMC
Security: eFuse, AES, SHA-2, PKA, Secure Storage, Security Sub-System, PUF
GNSS: GPS, Galileo, GLONASS, BeiDou
List of Exynos Auto SoCs
Exynos Auto series
The Exynos Auto V9 comes with additional features such as:-
Automotive Safety Integrity Level (ASIL)-B standards
Safety island core
4× Tensilica HiFi 4 DSP
Supports 6 displays and 12 camera connections
See also
Comparison of ARMv8-A cores
Comparison of ARMv7-A cores
Similar platforms
A-Series by Allwinner
Apple silicon (A/S/T/W/H/U/M series) by Apple Inc.
Kirin by HiSilicon (Huawei)
i.MX by NXP
Jaguar and Puma by AMD
MT by MediaTek
NovaThor by ST-Ericsson
OMAP by Texas Instruments
RK by Rockchip Electronics
Snapdragon by Qualcomm
Tegra by Nvidia
References
External links
Samsung Exynos - Custom CPU
Samsung Exynos - 5G
ARM architecture
Embedded microprocessors
Samsung Electronics products
System on a chip | Operating System (OS) | 834 |
Ntdetect.com
ntdetect.com is a component of Microsoft Windows NT-based operating systems that operate on the x86 architecture. It is used during the Windows NT startup process, and is responsible for detecting basic hardware that will be required to start the operating system.
Overview
The bootstrap loader takes control of the boot process and loads NTLDR.
Ntdetect.com is invoked by NTLDR, and returns the information it gathers to NTLDR when finished, so that it can then be passed on to ntoskrnl.exe, the Windows NT kernel.
Ntdetect.com is used on computers that use BIOS firmware. Computers with Extensible Firmware Interface, such as IA-64, use a method of device-detection that is not tied to the operating system.
Hardware detection operates somewhat differently depending on whether or not Advanced Configuration and Power Interface (ACPI) is supported by the hardware. Ntdetect.com passes the hardware details gathered from the BIOS on to the operating system. If ACPI is supported, the list of found devices is handed to the kernel, and Windows takes responsibility for assigning resources to each device. On older hardware, where ACPI is not supported, the BIOS, not the operating system, takes responsibility for assigning resources, so this information is passed to the kernel as well.
In addition, ntdetect.com will make a determination as to which hardware profile to use. Windows supports multiple distinct hardware profiles, which allows a single copy of Windows to work well in situations where the hardware changes between specific layouts on a regular basis. This is common with portable computers that connect to a docking station.
In Windows Vista and later Windows operating systems, the HAL only supports ACPI, and ntdetect.com has been replaced by winload.exe, so that Windows will be able to control hardware resource allocation on every machine in the same way. Hardware profiles are also no longer supported in Windows Vista.
The information gathered by ntdetect.com is stored in the HKLM\HARDWARE\DESCRIPTION key in the Windows Registry at a later stage in the boot process.
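As an illustration only (this query is run after Windows has started, and the exact subkeys and value names vary by system), the detected hardware description can be inspected from a command prompt:
reg query HKLM\HARDWARE\DESCRIPTION\System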
Classes of hardware detected
Hardware identification
Hardware date & time
Bus and adapter types
SCSI adapters
Video adapters
Keyboard
Serial and parallel communication ports
Hard drives
Floppy disks
Mouse
Floating-point coprocessor
Industry Standard Architecture-based devices
Troubleshooting
To aid in troubleshooting, Microsoft has made available "debug" versions of ntdetect.com which will display detailed information about the hardware that was detected. Called ntdetect.chk, it is included in the Windows Support Tools.
Notes
References
Windows XP Resource Kit - Troubleshooting the Startup Process
Windows 2000 Resource Kit - Starting Windows 2000 - Detecting Hardware
Windows NT Workstation Resource Kit - Troubleshooting Startup and Disk Problems
External links
Download of ntdetect.chk for Windows 2000
Windows XP SP2 Support Tools includes ntdetect.chk for Windows XP.
Windows components
Windows files | Operating System (OS) | 835 |
Portable Batch System
Portable Batch System (or simply PBS) is the name of computer software that performs job scheduling. Its primary task is to allocate computational tasks, i.e., batch jobs, among the available computing resources. It is often used in conjunction with UNIX cluster environments.
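As a brief, hedged illustration of how a batch job is typically described and submitted (the directives below use TORQUE/OpenPBS-style resource syntax; the job name, resource request and queue are assumed values, and exact options differ between PBS variants and sites):
#!/bin/bash
# Job name, resource request and queue (assumed, site-specific values):
#PBS -N example_job
#PBS -l nodes=1:ppn=4
#PBS -l walltime=01:00:00
#PBS -q batch
# PBS sets PBS_O_WORKDIR to the directory from which qsub was invoked.
cd "$PBS_O_WORKDIR"
# The actual computational task (placeholder command):
./my_simulation > simulation.log
The script is submitted with "qsub job.sh", queued and running jobs are listed with "qstat", and a job is cancelled with "qdel" followed by its job identifier.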
PBS is supported as a job scheduler mechanism by several meta schedulers including Moab by Adaptive Computing Enterprises and GRAM (Grid Resource Allocation Manager), a component of the Globus Toolkit.
History and versions
PBS was originally developed for NASA under a contract project that began on June 17, 1991. The main contractor who developed the original code was MRJ Technology Solutions. MRJ was acquired by Veridian in the late 1990s. Altair Engineering acquired the rights to all the PBS technology and intellectual property from Veridian in 2003. Altair Engineering currently owns and maintains the intellectual property associated with PBS, and also employs the original development team from NASA.
The following versions of PBS are currently available:
OpenPBS — original open source version released by MRJ in 1998 (actively developed)
TORQUE — a fork of OpenPBS that is maintained by Adaptive Computing Enterprises, Inc. (formerly Cluster Resources, Inc.)
PBS Professional (PBS Pro) — the version of PBS offered by Altair Engineering that is dual licensed under an open source and a commercial license.
License
The license for PBS-derived programs allows redistribution accompanied by information on how to obtain the source code and modifications, and requires an acknowledgement in any advertising that mentions use of the software (compare the BSD license's advertising clause). Prior to 2002, PBS and derivative programs (OpenPBS) prohibited commercial redistribution of the software, required registration at the OpenPBS website, and required attribution when PBS contributed to a published research project. These requirements, which did not meet the Open Source Initiative's definition of open source, were set to expire on December 31, 2001.
References
External links
PBS Professional home page
PBS Professional GitHub Project
PBS Express for iPhone/iPad
Job scheduling
1998 software | Operating System (OS) | 836 |
IMAX 432
iMAX 432 (Intel Multifunction Applications Executive for the Intel 432 Micromainframe) was an operating system developed by Intel for digital electronic computers based on the 1980s Intel iAPX 432 32-bit microprocessor. The term micromainframe was an Intel marketing designation describing the iAPX 432 processor's capabilities as being comparable to a mainframe. The iAPX 432 processor and the iMAX 432 operating system were incompatible with the x86 architecture commonly found in personal computers. iMAX 432 was implemented in a subset of the original (1980) version of the Ada, extended with runtime type checking and dynamic package creation.
As of Version 2 in 1982, iMAX was aimed at programmers rather than application users, and it did not provide a command line or other human interface. iMAX provided a runtime environment for the Ada programming language and other high-level languages, as well as an incomplete Ada compiler that was to be extended to cover the full Ada language in a later version.
There were at least two versions of iMAX as of 1982, Version 1 and Version 2. Version 1 was undergoing internal Intel testing as of 1981 and was scheduled to be released in 1982. Version 2 was modular and the programmer could choose what parts of the iMAX operating system to load; there were two standard configurations of iMAX version 2 named "Full" and "Minimal", with the minimal configuration being similar to Version 1 of iMAX. As of 1982, a Version 3 of iMAX was planned for release, which was to add support for virtual memory.
See also
History of operating systems
References
Bibliography
Capability systems
Discontinued operating systems
Intel software | Operating System (OS) | 837 |
Ubiquitous computing
Ubiquitous computing (or "ubicomp") is a concept in software engineering, hardware engineering and computer science where computing is made to appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets, smart phones and terminals in everyday objects such as a refrigerator or a pair of glasses. The underlying technologies that support ubiquitous computing include the Internet, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, computer networks, mobile protocols, location and positioning, and new materials.
This paradigm is also described as pervasive computing, ambient intelligence, or "everyware". Each term emphasizes slightly different aspects. When primarily concerning the objects involved, it is also known as physical computing, the Internet of Things, haptic computing, and "things that think".
Rather than propose a single definition for ubiquitous computing and for these related terms, a taxonomy of properties for ubiquitous computing has been proposed, from which different kinds or flavors of ubiquitous systems and applications can be described.
Ubiquitous computing touches on distributed computing, mobile computing, location computing, mobile networking, sensor networks, human–computer interaction, context-aware smart home technologies, and artificial intelligence.
Core concepts
Ubiquitous computing is the concept of using small, inexpensive, internet-connected computers to help with everyday functions in an automated fashion.
For example, a domestic ubiquitous computing environment might interconnect lighting and environmental controls with personal biometric monitors woven into clothing so that illumination and heating conditions in a room might be modulated, continuously and imperceptibly. Another common scenario posits refrigerators "aware" of their suitably tagged contents, able to both plan a variety of menus from the food actually on hand, and warn users of stale or spoiled food.
Ubiquitous computing presents challenges across computer science: in systems design and engineering, in systems modelling, and in user interface design. Contemporary human-computer interaction models, whether command-line, menu-driven, or GUI-based, are inappropriate and inadequate to the ubiquitous case. This suggests that the "natural" interaction paradigm appropriate to a fully robust ubiquitous computing has yet to emerge – although there is also recognition in the field that in many ways we are already living in a ubicomp world (see also the main article on natural user interfaces). Contemporary devices that lend some support to this latter idea include mobile phones, digital audio players, radio-frequency identification tags, GPS, and interactive whiteboards.
Mark Weiser proposed three basic forms for ubiquitous computing devices:
Tabs: a wearable device that is approximately a centimeter in size
Pads: a hand-held device that is approximately a decimeter in size
Boards: an interactive larger display device that is approximately a meter in size
Ubiquitous computing devices proposed by Mark Weiser are all based around flat devices of different sizes with a visual display. Expanding beyond those concepts there is a large array of other ubiquitous computing devices that could exist. Some of the additional forms that have been conceptualized are:
Dust: miniaturized devices can be without visual output displays, e.g. micro electro-mechanical systems (MEMS), ranging from nanometres through micrometers to millimetres. See also Smart dust.
Skin: fabrics based upon light emitting and conductive polymers, organic computer devices, can be formed into more flexible non-planar display surfaces and products such as clothes and curtains, see OLED display. MEMS device can also be painted onto various surfaces so that a variety of physical world structures can act as networked surfaces of MEMS.
Clay: ensembles of MEMS can be formed into arbitrary three dimensional shapes as artefacts resembling many different kinds of physical object (see also tangible interface).
In Manuel Castells' book The Rise of the Network Society, Castells puts forth the concept that there is going to be a continuous evolution of computing devices. He states that we will progress from stand-alone microcomputers and decentralized mainframes towards pervasive computing. Castells' model of a pervasive computing system uses the example of the Internet as the start of such a system. The logical progression from that paradigm is a system where that networking logic becomes applicable in every realm of daily activity, in every location and every context. Castells envisages a system where billions of miniature, ubiquitous inter-communication devices will be spread worldwide, "like pigment in the wall paint".
Ubiquitous computing may be seen to consist of many layers, each with their own roles, which together form a single system:
Layer 1: Task management layer
Monitors user task, context and index
Map user's task to need for the services in the environment
To manage complex dependencies
Layer 2: Environment management layer
To monitor a resource and its capabilities
To map service need, user level states of specific capabilities
Layer 3: Environment layer
To monitor a relevant resource
To manage reliability of the resources
History
Mark Weiser coined the phrase "ubiquitous computing" around 1988, during his tenure as Chief Technologist of the Xerox Palo Alto Research Center (PARC). Both alone and with PARC Director and Chief Scientist John Seely Brown, Weiser wrote some of the earliest papers on the subject, largely defining it and sketching out its major concerns.
Recognizing the effects of extending processing power
Recognizing that the extension of processing power into everyday scenarios would necessitate understandings of social, cultural and psychological phenomena beyond its proper ambit, Weiser was influenced by many fields outside computer science, including "philosophy, phenomenology, anthropology, psychology, post-Modernism, sociology of science and feminist criticism". He was explicit about "the humanistic origins of the 'invisible ideal in post-modernist thought'", referencing as well the ironically dystopian Philip K. Dick novel Ubik.
Andy Hopper from Cambridge University UK proposed and demonstrated the concept of "Teleporting" – where applications follow the user wherever he/she moves.
Roy Want, while a researcher and student working under Andy Hopper at Cambridge University, worked on the "Active Badge System", an advanced location computing system in which personal mobility is merged with computing.
Bill Schilit (now at Google) also did some early work on this topic, and participated in the early Mobile Computing workshop held in Santa Cruz in 1996.
Ken Sakamura of the University of Tokyo, Japan leads the Ubiquitous Networking Laboratory (UNL), Tokyo as well as the T-Engine Forum. The joint goal of Sakamura's Ubiquitous Networking specification and the T-Engine forum, is to enable any everyday device to broadcast and receive information.
MIT has also contributed significant research in this field, notably Things That Think consortium (directed by Hiroshi Ishii, Joseph A. Paradiso and Rosalind Picard) at the Media Lab and the CSAIL effort known as Project Oxygen. Other major contributors include University of Washington's Ubicomp Lab (directed by Shwetak Patel), Dartmouth College's DartNets Lab, Georgia Tech's College of Computing, Cornell University's People Aware Computing Lab, NYU's Interactive Telecommunications Program, UC Irvine's Department of Informatics, Microsoft Research, Intel Research and Equator, Ajou University UCRi & CUS.
Examples
One of the earliest ubiquitous systems was artist Natalie Jeremijenko's "Live Wire", also known as "Dangling String", installed at Xerox PARC during Mark Weiser's time there. This was a piece of string attached to a stepper motor and controlled by a LAN connection; network activity caused the string to twitch, yielding a peripherally noticeable indication of traffic. Weiser called this an example of calm technology.
A present manifestation of this trend is the widespread diffusion of mobile phones. Many mobile phones support high speed data transmission, video services, and other services with powerful computational ability. Although these mobile devices are not necessarily manifestations of ubiquitous computing, there are examples, such as Japan's Yaoyorozu ("Eight Million Gods") Project in which mobile devices, coupled with radio frequency identification tags demonstrate that ubiquitous computing is already present in some form.
Ambient Devices has produced an "orb", a "dashboard", and a "weather beacon": these decorative devices receive data from a wireless network and report current events, such as stock prices and the weather, like the Nabaztag produced by Violet.
The Australian futurist Mark Pesce has produced a highly configurable, LAMP-enabled 52-LED lamp that uses Wi-Fi, named MooresCloud after Gordon Moore.
The Unified Computer Intelligence Corporation launched a device called Ubi – The Ubiquitous Computer designed to allow voice interaction with the home and provide constant access to information.
Ubiquitous computing research has focused on building an environment in which computers allow humans to focus attention on select aspects of the environment and operate in supervisory and policy-making roles. Ubiquitous computing emphasizes the creation of a human computer interface that can interpret and support a user's intentions. For example, MIT's Project Oxygen seeks to create a system in which computation is as pervasive as air:
In the future, computation will be human centered. It will be freely available everywhere, like batteries and power sockets, or oxygen in the air we breathe...We will not need to carry our own devices around with us. Instead, configurable generic devices, either handheld or embedded in the environment, will bring computation to us, whenever we need it and wherever we might be. As we interact with these "anonymous" devices, they will adopt our information personalities. They will respect our desires for privacy and security. We won't have to type, click, or learn new computer jargon. Instead, we'll communicate naturally, using speech and gestures that describe our intent...
This is a fundamental transition that does not seek to escape the physical world and "enter some metallic, gigabyte-infested cyberspace" but rather brings computers and communications to us, making them "synonymous with the useful tasks they perform".
Network robots link ubiquitous networks with robots, contributing to the creation of new lifestyles and solutions to address a variety of social problems including the aging of population and nursing care.
Issues
Privacy is easily the most often-cited criticism of ubiquitous computing (ubicomp), and may be the greatest barrier to its long-term success.
Public policy problems are often "preceded by long shadows, long trains of activity", emerging slowly, over decades or even the course of a century. There is a need for a long-term view to guide policy decision making, as this will assist in identifying long-term problems or opportunities related to the ubiquitous computing environment. This information can reduce uncertainty and guide the decisions of both policy makers and those directly involved in system development (Wedemeyer et al. 2001). One important consideration is the degree to which different opinions form around a single problem. Some issues may have strong consensus about their importance, even if there are great differences in opinion regarding the cause or solution. For example, few people will differ in their assessment of a highly tangible problem with physical impact such as terrorists using new weapons of mass destruction to destroy human life. The problem statements outlined above that address the future evolution of the human species or challenges to identity have clear cultural or religious implications and are likely to have greater variance in opinion about them.
Research centres
This is a list of notable institutions that claim to have a focus on ubiquitous computing, sorted by country:
Canada
Topological Media Lab, Concordia University, Canada
Finland
Community Imaging Group, University of Oulu, Finland
Germany
Telecooperation Office (TECO), Karlsruhe Institute of Technology, Germany
India
Ubiquitous Computing Research Resource Centre (UCRC), Centre for Development of Advanced Computing
Pakistan
Centre for Research in Ubiquitous Computing (CRUC), Karachi, Pakistan.
Sweden
Mobile Life Centre, Stockholm University
United Kingdom
Mixed Reality Lab, University of Nottingham
See also
Ambient media
Computer accessibility
Human-centered computing
Mobile interaction
Smart city (ubiquitous city)
Ubiquitous commerce
Ubiquitous learning
Ubiquitous robot
Wearable computer
References
Further reading
Adam Greenfield's book Everyware: The Dawning Age of Ubiquitous Computing.
John Tinnell's book Actionable Media: Digital Communication Beyond the Desktop. Oxford University Press, 2018.
Salim, Flora; Abowd, Gregory (eds.). UbiComp-ISWC '20: Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers. Association for Computing Machinery, New York, United States.
External links
International Conference on Pervasive Computing (Pervasive)
Pervasive and Mobile Computing journal, PMC (Elsevier)
Proceedings of the Semantic Ambient Media Workshop Series (iAMEA)
University of Siegen, ubicomp home publications
Artificial intelligence laboratories
Human–computer interaction
Ubiquitous computing research centers
Next Step
Next Step or Nextstep may refer to:
NeXTSTEP, a UNIX-based computer operating system developed by NeXT in the 1980s and 1990s
OpenStep, an open platform version of NeXTSTEP originated by Sun Microsystems and NeXT
Rhapsody (operating system), the Apple Macintosh NeXTSTEP/classic Mac OS hybrid predecessor to macOS
Darwin (operating system), the open source version of macOS
GNUstep, an open source implementation of OpenStep originated by the GNU Project
Next Space Technologies for Exploration Partnerships (NextSTEP), a NASA program
Next Step Tour, a 1999 tour by the British pop group Steps
Nextstep (magazine), an American magazine for high school students
Nextstep, the initial name of Sense of Purpose, a hardcore punk band from Melbourne
See also
The Next Step (disambiguation)
NeXstep, a brand of Coca-Cola Co.
NexStep, a polyurethane product from Interface, Inc.
Next (disambiguation)
Step (disambiguation)
InterMezzo (file system)
InterMezzo was a distributed file system written for the Linux kernel, distributed under the GNU General Public License. It was included in the standard Linux kernel from version 2.4.15 but was dropped from version 2.6. InterMezzo is designed to work on top of an existing journaling file system such as ext3, JFS, ReiserFS or XFS. It was developed around 1999.
An InterMezzo system consists of a server, which holds the master copy of the file system, and one or more clients with a cache of the file system. It works either in a replication mode, in which a client maintains a duplicate of the entire file system, or in an on-demand mode in which the client only requests files that it needs. It does this by capturing all writes to the server's file system journal and streaming them to the client systems to be replayed.
InterMezzo is described as a "high availability file system" since a client can continue to operate even if the connection to the server is lost. During a period of disconnection, updates are logged and will be propagated when the connection is restored. Conflicts are detected and handled according to a "conflict resolution policy" (although the best policy is likely to be to avoid conflicts).
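The capture-and-replay model can be illustrated with a short, hypothetical Python sketch (the record format and class names below are invented for illustration and are not InterMezzo's actual code): the server appends each write to a journal, a client replays any records it has not yet seen, and a record carrying an older version than the client's copy is handed to a conflict-resolution policy.

# Conceptual illustration of journal-based replication; the record format
# and names are invented for this sketch and do not reflect InterMezzo's code.
from dataclasses import dataclass, field

@dataclass
class Record:
    seq: int        # position in the server's journal
    path: str       # file the write applies to
    version: int    # version of the file the writer saw
    data: bytes     # new contents

@dataclass
class Replica:
    applied_seq: int = 0
    files: dict = field(default_factory=dict)    # path -> (version, data)

    def replay(self, journal):
        # Apply only the journal records produced after the last one we saw.
        for rec in journal:
            if rec.seq <= self.applied_seq:
                continue
            current_version, _ = self.files.get(rec.path, (0, b""))
            if rec.version < current_version:
                self.resolve_conflict(rec)       # defer to a conflict policy
            else:
                self.files[rec.path] = (rec.version + 1, rec.data)
            self.applied_seq = rec.seq

    def resolve_conflict(self, rec):
        # Placeholder: a real system would log the conflict or pick a winner.
        print("conflict on %s at seq %d" % (rec.path, rec.seq))

journal = [Record(1, "/notes.txt", 0, b"v1"), Record(2, "/notes.txt", 1, b"v2")]
client = Replica()
client.replay(journal)
print(client.files["/notes.txt"])                # (2, b'v2')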
Typical applications of replication mode are:
A cluster of servers operating on a shared file system.
Computers that are not always connected to the network, such as laptops.
Typical applications of on-demand mode were distributed file serving, such as File Transfer Protocol (FTP) or WWW, or desktop workstations.
InterMezzo was started as part of the Coda file system project at Carnegie Mellon University and took many design decisions from Coda (but did not share code). Coda in turn was an offshoot of the Andrew File System (AFS) project.
It was designed for enhanced scalability, performance, modularity, and easy integration with existing file systems.
A paper was presented at an Open Source Convention in August 1999 by Peter J. Braam, Michael Callahan, and Phil Schwan.
A company called Stelias Computing created a web site in late 1999, and announced a "beta" test version in January 2000.
Although it was supported in the standard Linux kernel in version 2.4, InterMezzo was removed in the 2.6 series. Its developers moved on to a new project named Lustre at a company called Cluster File Systems, around 2001. Development continued through about 2003, and the web site was maintained through 2008.
See also
rsync
Coda
Lustre
References
External links
Debian Bug report logs on InterMezzo, mentioning Lustre.
Network file systems
Virtualization-related software for Linux
Distributed file systems supported by the Linux kernel
Whoami
In computing, whoami is a command found on most Unix-like operating systems, Intel iRMX 86, every Microsoft Windows operating system since Windows Server 2003, and on ReactOS. It is a concatenation of the words "Who am I?" and prints the effective username of the current user when invoked.
Overview
The command has the same effect as the Unix command id -un. On Unix-like operating systems, the output of the command is slightly different from that of who am i, because whoami outputs the username that the user is working under, whereas who am i outputs the username that was used to log in. For example, if the user logged in as John and used su to switch to root, whoami displays root and who am i displays John. This is because the su command does not invoke a login shell by default.
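The distinction can be reproduced in a few lines of Python on a Unix-like system; the sketch below only illustrates the concept (effective UID versus login session) and is not the implementation of either utility.

# Illustration of "effective user" vs. "login user" on a Unix-like system;
# these roughly correspond to what whoami and who am i report.
import os
import pwd

def effective_user():
    # Name for the effective UID, roughly what whoami prints.
    return pwd.getpwuid(os.geteuid()).pw_name

def login_user():
    # Name recorded for the login session, roughly what who am i prints.
    try:
        return os.getlogin()
    except OSError:
        return pwd.getpwuid(os.getuid()).pw_name   # fallback without a terminal

if __name__ == "__main__":
    # After su (without a login shell) the two names can differ:
    print("effective:", effective_user())
    print("login:", login_user())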
The earliest versions were created in 2.9 BSD as a convenience form for who am i, the Berkeley Unix who command's way of printing just the logged-in user's identity. This version was developed by Bill Joy.
The GNU version was written by Richard Mlynarik and is part of the GNU Core Utilities (coreutils).
The command is available as a separate package for Microsoft Windows as part of the GnuWin32 project and the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
On Intel iRMX 86 this command lists the current user's identification and access rights.
The command is also available as part of the Windows 2000 Resource Kit and Windows XP SP2 Support Tools.
The ReactOS version was developed by Ismael Ferreras Morezuelas and is licensed under the GPLv2.
The command was also available as a NetWare command residing in the PUBLIC directory of the file server. It outputs the servers to which the workstation is currently attached and the username used for each connection.
Example
Unix, Unix-like
# whoami
root
Intel iRMX 86
--WHOAMI
USER ID: 5
ACCESS ID'S: 5, WORLD
Windows, ReactOS
C:\Users\admin>whoami
workgroup\admin
See also
User identifiers for Unix
List of Unix commands
References
Further reading
External links
whoami | Microsoft Docs
Unix user management and support-related utilities
ReactOS commands
Windows administration
TI-DNOS
Distributed Network Operating System (DNOS)
The Distributed Network Operating System (DNOS) is a general purpose, multitasking operating system designed to operate with the Texas Instruments 990/10, 990/10A and 990/12 minicomputers.
DNOS includes a sophisticated file management package which provides support for key indexed files, sequential files, and relative record files. DNOS is a multiterminal system that is capable of making each of several users appear to have exclusive control of the system. DNOS supports output spooling and program accessible accounting data. Job level and task level operations enable more efficient use of system resources.
In addition to multiterminal applications, DNOS provides support for advanced program development. Users communicate with DNOS by entering commands at a terminal or by providing a file of commands. The System Command Interpreter (SCI) processes those commands and directs the operating system to initiate the action specified by a command. A text editor allows the user to enter source programs or data into the system. A Macro Assembler is provided for assembly language programs. Several high level languages, including Fortran, COBOL, BASIC, RPG II, and Pascal, are supported. A link editor and extended debugging facilities are also provided. A variety of utility programs and productivity tools support access to and management of information contained in a data base, design of specific forms on the screen of a video display terminal (VDT), and word processing.
The system supports a wide range of user environments. DNOS can support as few as one or two terminals, thus allowing the user of a smaller system to perform tasks efficiently and yet inexpensively. Larger configurations with a wide variety of peripherals are also supported. The maximum configuration size varies with the user's environment. Almost every minicomputer system requirement or application need can be met with DNOS.
DNOS provides the base for a variety of communications products. Standard protocols for IBM 2780/3780 and for IBM 3270 communications are supported. Local area network software is supported for network input/output (I/O) and logon. In addition, sophisticated networking software is available with the Distributed Network Communications System (DNCS) and Distributed Network I/O (DNIO) packages. DNCS includes networking capabilities for X.25 and IBM's Systems Network Architecture (SNA) protocols. DNIO provides users transparent access to other TI 990s running DNOS and can be used to connect to local or wide-area networks.
DNOS is an international operating system designed to meet the commercial requirements of the United States, most European countries, and Japan. DNOS supports a complete range of international data terminals that permit users to enter, view, and process data in their own languages. The system includes error text files that can be edited so that error messages can be easily translated into languages other than English.
DNOS Features
DNOS supports features that incorporate the computing power of the larger computers, and it is upwardly compatible with other Texas Instruments operating systems.
DNOS features include:
Multiple Terminals - The number of online terminals is limited only by the available computing power and memory for system structures.
File Security - The system can optionally include a file security system that allows system managers and other users to determine which user groups can access data files and execute specific programs.
Output Spooling - Output spooling is the process of queueing files for printing. Files are scheduled for printing based on job priority and availability of the printing device(s). You can specify special printing forms and formats.
Accounting Function - The system can optionally include an accounting function that allows you to maintain accounting information on the use of system resources.
Job Structure - The system incorporates a job structure that assists program management and enables efficient use of resources. A job is a sequence of cooperating tasks.
I/O Resource Management - Resource-specific and resource-independent I/O operations allow flexibility in the selection of devices and file types.
Program Segmentation - Program segmentation is the process of separating a program into segments. A program can consist of up to three segments at any one time. Additional segments can be accessed as necessary during program execution. Segments can be shared by programs.
Interprocess Communication - The system provides the capability, through interprocess communication (IPC), for two programs (tasks) to exchange information.
Power Failure Recovery - Should a power failure occur, DNOS maintains the state of the system at the time of the failure, if the optional software and backup power supply have been added to your system. When full power resumes, the operation will continue from the point at which the power failure occurred.
Synchronization Methods - Event and semaphore synchronization methods are included to assist interaction between programs, either within a job or across job boundaries. Event synchronization allows the program to wait for one or more events to be completed before processing is continued. Semaphore synchronization uses variables to exchange signal information between programs. (A conceptual sketch of both methods follows this list.)
Concatenated Files - The system supports file concatenation, in which two or more physical files are recognized as a logically contiguous set of data. These files can exist on one or more volumes.
Temporary Files - A temporary file is one that exists only during the life of the created job, or during the extent of a program within that job. A job temporary file can be accessed by any program in a job, and is deleted when the job terminates. Other temporary files are created for use by a single program and are deleted when the program terminates.
Diagnostic Support - The system supports online diagnostics that operate concurrently with program execution and system log analysis tasks.
Batch Jobs - A batch job is a job that executes in the background, independent of a terminal. A user at a terminal can be executing multiple batch jobs, and at the same time, be performing foreground and/or background operations in an interactive job.
Dynamic Configuration Changes - Table size, system characteristics, and device configuration changes can be enabled and take effect after the next Initial Program Load (IPL) sequence, rather than during system generation.
Compatibility - DNOS design enables compatibility with the DX10 operating system. Many of the familiar operating concepts of the DX10 operating system are integrated within the design of DNOS. DNOS includes disk and other media formats, a Supervisor Call (SVC) interface, and SCI user commands that are all upwardly compatible with DX10.
System Generation - The system generation utility allows a user to interactively specify all necessary features, available devices, and optional functions when creating an operating system. This data is used to construct a file that defines the configuration of the operating system.
Message Facilities - The system provides a comprehensive set of codes and messages describing errors and special conditions that occur when using the operating system. The system handles messages in a uniform manner to ensure that they are easy to use. Two system directories maintain message files that contain text describing errors, information, and completion messages generated by the system. The directories are expandable to include message files written by users.
System Log - The system log stores information about errors and messages generated by hardware, input/output operations, tasks, and user programs on two files and an optional specified device.
Systems Problem Analysis - If problems occur during system operation, they can be diagnosed by using a system utility that can analyze the system whether the system is operating or has experienced a failure. In a failure situation, an image of memory can be copied to a file. When the system is operating again, an analysis utility can be used with a variety of commands entered by the user.
System Configuration History - Information about all supplied software products installed on a system is maintained on a system disk file. Users can also send entries to the file for application products they develop.
DNOS International Features - Error message files can be text edited to translate into a language other than English. It is also necessary to change the collating sequence of key indexed files (KIF) according to the translation. DNOS provides methods to change the required collating sequence.
DNOS Performance Package - An optional add-on package is available for DNOS on the larger 990 series computers. This package enhances DNOS performance by using several system routines implemented in microcode in the writable control storage.
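The event and semaphore synchronization methods listed above are generic concepts; the sketch below illustrates them with Python's standard threading primitives. It is only a conceptual analogue and does not use DNOS's actual supervisor call (SVC) interface.

# Conceptual analogue of DNOS-style event and semaphore synchronization,
# expressed with Python's threading primitives (not the DNOS SVC interface).
import threading
import time

data_ready = threading.Event()        # event: wait until another task signals
slots = threading.Semaphore(2)        # semaphore: at most two tasks in a section

def producer():
    time.sleep(0.1)                   # pretend to prepare some data
    data_ready.set()                  # signal waiting tasks that data exists

def worker(name):
    data_ready.wait()                 # event synchronization: block until set
    with slots:                       # semaphore synchronization: limit concurrency
        print("%s processing" % name)
        time.sleep(0.1)

threads = [threading.Thread(target=producer)]
threads += [threading.Thread(target=worker, args=("task-%d" % i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()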
References
External links
Dave Pitts' TI 990 page — Includes a simulator and DNOS Operating System images.
Proprietary operating systems
Texas Instruments
OLPC XO
The OLPC XO (formerly known as $100 Laptop, Children's Machine, 2B1) is a low cost laptop computer intended to be distributed to children in developing countries around the world, to provide them with access to knowledge, and opportunities to "explore, experiment and express themselves" (constructionist learning). The XO was developed by Nicholas Negroponte, a co-founder of MIT's Media Lab, and designed by Yves Behar's Fuseproject company. The laptop is manufactured by Quanta Computer and developed by One Laptop per Child (OLPC), a non-profit 501(c)(3) organization.
The subnotebooks were designed for sale to government-education systems which then would give each primary school child their own laptop. Pricing was set to start at $188 in 2006, with a stated goal to reach the $100 mark in 2008 and the $50 mark by 2010. When offered for sale in the Give One Get One campaigns of Q4 2007 and Q4 2008, the laptop was sold at $199.
The rugged, low-power computers use flash memory instead of a hard disk drive (HDD), and come with a pre-installed operating system derived from Fedora Linux, with the Sugar graphical user interface (GUI). Mobile ad hoc networking via 802.11s Wi-Fi mesh networking, to allow many machines to share Internet access as long as at least one of them could connect to an access point, was initially announced, but quickly abandoned after proving unreliable.
The latest version of the OLPC XO is the XO-4 Touch, introduced in 2012.
History
The first early prototype was unveiled by the project's founder Nicholas Negroponte and then-United Nations Secretary-General Kofi Annan on November 16, 2005, at the World Summit on the Information Society (WSIS) in Tunis, Tunisia. The device shown was a rough prototype using a standard development board. Negroponte estimated that the screen alone required three more months of development. The first working prototype was demonstrated at the project's Country Task Force Meeting on May 23, 2006.
In 2006, there was a major controversy because Microsoft had suddenly developed an interest in the XO project and wanted the formerly open source effort to run Windows. Negroponte agreed to provide engineer assistance to Microsoft to facilitate their efforts. During this time, the project mission statement changed to remove mentions of "open source". A number of developers, such as Ivan Krstić and Walter Bender, resigned because of these changes in strategy.
Approximately 400 developer boards (Alpha-1) were distributed in mid-2006; 875 working prototypes (Beta 1) were delivered in late 2006; 2400 Beta-2 machines were distributed at the end of February 2007; full-scale production started November 6, 2007. Quanta Computer, the project's contract manufacturer, said in February 2007 that it had confirmed orders for one million units. Quanta indicated that it could ship five million to ten million units that year because seven nations had committed to buy the XO-1 for their schoolchildren: Argentina, Brazil, Libya, Nigeria, Rwanda, Thailand, and Uruguay. Quanta plans to offer machines very similar to the XO-1 on the open market.
The One Laptop Per Child project originally stated that a consumer version of the XO laptop was not planned. In 2007, the project established a website, laptopgiving.org, for outright donations and for a "Give 1 Get 1" offer valid (but only to the United States, its territories, and Canadian addresses) from November 12, 2007 until December 31, 2007. For each computer purchased at a cost of $399, an XO is also sent to a child in a developing nation. OLPC again restarted the G1G1 program through Amazon.com in November 2008, but has since stopped as of December 31 (2008 or 2009).
On May 20, 2008, OLPC announced the next generation of XO, OLPC XO-2 which was thereafter cancelled in favor of the tablet-like designed XO-3. In late 2008, the New York City Department of Education began a project to purchase large numbers of XO computers for use by schoolchildren.
The design received the Community category award of the 2007 Index: Award.
In 2008 the XO was awarded London's Design Museum "Design of the Year", plus two gold, one silver, and one bronze award at the Industrial Design Society of America's International Design Excellence Awards (IDEAs).
Goals
The XO-1 is designed to be low-cost, small, durable, and efficient. It is shipped with a slimmed-down version of Fedora Linux and a GUI named Sugar that is intended to help young children collaborate. The XO-1 includes a video camera, a microphone, long-range Wi-Fi, and a hybrid stylus and touchpad. Along with a standard plug-in power supply, human and solar power sources are available, allowing operation far from a commercial power grid. Mary Lou Jepsen has listed the design goals of the device as follows:
Minimal power use, with a design target of 2–3 Watts (W) total
Minimal production cost, with a target of US$100 per laptop for production runs of millions of units
A "cool" look, implying innovative styling in its physical appearance
E-book function
Open source and free software provided with the laptop
Various use models had been explored by OLPC with the help of Design Continuum and Fuseproject, including: laptop, e-book, theatre, simulation, tote, and tablet architectures. The current design, by Fuseproject, uses a transformer hinge to morph between laptop, e-book, and router modes.
In keeping with its goals of robustness and low power use, the design of the laptop intentionally omits all motor-driven moving parts; it has no hard disk drive, optical media drive (compact disc (CD) or digital video disc (DVD)), floppy disk drive, or fan (the device is passively cooled). No Serial ATA interface is needed due to the lack of a hard drive. Storage is via an internal SD card slot. There is also no PC card slot, although Universal Serial Bus (USB) ports are included.
A built-in hand-crank generator was part of the notebook in the original design; however, it is now an optional clamp-on peripheral.
Hardware
The latest version of the OLPC XO is the XO-4 Touch.
Display
1200×900 7.5 inch (19 cm) diagonal transflective LCD (200 dpi) that uses 0.1 to 1.0 W depending on mode. The two modes are:
Reflective (backlight off) monochrome mode for low-power use in sunlight. This mode provides very sharp images for high-quality text
Backlit color mode, with an alternance of red, green and blue pixels
XO 1.75 developmental version for XO-3 has an optional touch screen
The first-generation OLPC laptops have a novel low-cost liquid crystal display (LCD). Later generations of the OLPC laptop are expected to use low-cost, low-power and high-resolution color displays with an appearance similar to electronic paper.
The electronic visual display is the costliest component in most laptops. In April 2005, Negroponte hired Mary Lou Jepsen, who was interviewing to join the Media Arts and Sciences faculty at the MIT Media Lab in September 2008, as OLPC Chief Technology Officer. Jepsen developed a new display for the first-generation OLPC laptop, inspired by the design of small LCDs used in portable DVD players, which she estimated would cost about $35. In the OLPC XO-1, the screen is estimated to be the second most costly component, after the central processing unit (CPU) and chipset.
Jepsen has described the removal of the filters that color the RGB subpixels as the critical design innovation in the new LCD. Instead of using subtractive color filters, the display uses a plastic diffraction grating and lenses on the rear of the LCD to illuminate each pixel. This grating pattern is stamped using the same technology used to make DVDs. The grating splits the light from the white backlight into a spectrum. The red, green, and blue components are diffracted into the correct positions to illuminate the corresponding pixel with R, G or B. This innovation results in a much brighter display for a given amount of backlight illumination: while the color filters in a regular display typically absorb 85% of the light that hits them, this display absorbs little of that light. Most LCD screens at the time used cold cathode fluorescent lamp backlights which were fragile, difficult or impossible to repair, required a high voltage power supply, were relatively power-hungry, and accounted for 50% of the screens' cost (sometimes 60%). The light-emitting diode (LED) backlight in the XO-1 is easily replaceable, rugged, and low-cost.
The remainder of the LCD uses extant display technology and can be made using extant manufacturing equipment. Even the masks can be made using combinations of extant materials and processes.
When lit primarily from the rear with the white LED backlight, the display shows a color image composed of both RGB and grayscale information. When lit primarily from the front by ambient light, for example from the sun, the display shows a monochromatic (black and white) image composed of just the grayscale information.
"Mode" change occurs by varying the relative amounts backlight and ambient light. With more backlight, a higher chrominance is available and a color image display is seen. As ambient light levels, such as sunlight, exceed the backlight, a grayscale display is seen; this can be useful when reading e-books for an extended time in bright light such as sunlight. The backlight brightness can also be adjusted to vary the level of color seen in the display and to conserve battery power.
In color mode (when lit primarily from the rear), the display does not use the common RGB pixel geometry for liquid crystal computer displays, in which each pixel contains three tall thin rectangles of the primary colors. Instead, the XO-1 display provides one color for each pixel. The colors align along diagonals that run from upper-right to lower left (see diagram on the right). To reduce the color artifacts caused by this pixel geometry, the color component of the image is blurred by the display controller as the image is sent to the screen. Despite the color blurring, the display still has high resolution for its physical size; normal displays put about 588(H)×441(V) to 882(H)×662(V) pixels in this amount of physical area and support subpixel rendering for slightly higher perceived resolution. A Philips Research study measured the XO-1 display's perceived color resolution as effectively 984(H)×738(V). A conventional liquid crystal display with the same number of green pixels (green carries most brightness or luminance information for human eyes) as the OLPC XO-1 would be 693×520. Unlike a standard RGB LCD, resolution of the XO-1 display varies with angle. Resolution is greatest from upper-right to lower left, and lowest from upper-left to lower-right. Images which approach or exceed this resolution will lose detail and gain color artifacts. The display gains resolution when in bright light; this comes at the expense of color (as the backlight is overpowered) and color resolution can never reach the full 200 dpi sharpness of grayscale mode because of the blur which is applied to images in color mode.
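The 693×520 figure quoted above can be checked with a rough back-of-the-envelope calculation, assuming the XO-1's pixels are divided evenly among red, green and blue and that the comparison panel keeps a 4:3 aspect ratio:

% Rough check of the 693x520 comparison (even R/G/B split and a 4:3 panel assumed)
\begin{align*}
\text{XO-1 pixels} &= 1200 \times 900 = 1\,080\,000\\
\text{green pixels} &\approx 1\,080\,000 / 3 = 360\,000\\
w \times h = 360\,000,\ \frac{w}{h} = \frac{4}{3}
  &\;\Rightarrow\; h = \sqrt{360\,000 \cdot \tfrac{3}{4}} \approx 520,\quad
  w = \tfrac{4}{3}\,h \approx 693
\end{align*}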
Power
DC input, ±11–18 V, maximum 15 W power draw
5-cell rechargeable NiMH battery pack, 3000 mAh minimum 3050 mAh typical 80% usable, charge at 0...45 °C (deprecated in 2009)
2-cell rechargeable LiFePO4 battery pack, 2800 mAh minimum 2900 mAh typical 100% usable, charge at 0...60 °C
4-cell rechargeable LiFePO4 battery pack, 3100 mAh minimum 3150 mAh typical 100% usable, charge at −10...50 °C
External manual power options included a clamp-on crank generator similar to the original built-in one (see photo in the Gallery, below), but they generated 1/4 the power initially hoped, and less than a thousand were produced. A pull-string generator was also designed by Potenco but never mass-produced.
External power options include 110–240 Volt AC and input from an external solar panel. Solar is the predominant alternate power source for schools using XOs.
The laptop design specification goals are about 2 W of power consumed during normal use, far less than the 10 W to 45 W of conventional laptops. With build 656, power use is between 5 and 8 watts as measured on a G1G1 laptop. Future software builds are expected to meet the 2-watt target.
In e-book mode (XO 1.5), all hardware sub-systems except the monochrome dual-touch display are powered down. When the user moves to a different page, the other systems wake up, render the new page on the display, and then go back to sleep. Power use in this e-book mode is estimated to be 0.3 to 0.8 W. The XO 2.0 is planned to consume even less power than earlier versions, less than 1.0 W in full color mode.
Power options include batteries, solar power panels, and human-powered generators, which make the XO self-powered equipment. 10 batteries at once can be charged from the school building power in the XO multi-battery charger. The low power use, combined with these power options are useful in many countries that lack a power infrastructure.
Networking
Wireless networking using an "Extended Range" 802.11b/g and 802.11s (mesh) Marvell 8388 wireless chip, chosen due to its ability to autonomously forward packets in the mesh even if the CPU is powered off. When connected in a mesh, it is run at a low bitrate (2 Mbit/s) to minimize power use. Despite the wireless chip's minimalism, it supports Wi-Fi Protected Access (WPA). An ARM processor is included.
Dual adjustable antennas for diversity reception.
IEEE 802.11b support will be provided using a Wi-Fi "Extended Range" chip set. Jepsen has said the wireless chip set will be run at a low bit rate, 2 Mbit/s maximum rather than the usual higher speed 5.5 Mbit/s or 11 Mbit/s to minimize power use. The conventional IEEE 802.11b system only handles traffic within a local cloud of wireless devices in a manner similar to an Ethernet network. Each node transmits and receives its own data, but it does not route packets between two nodes that cannot communicate directly. The OLPC laptop will use IEEE 802.11s to form the wireless mesh network.
Whenever the laptop is powered on it can participate in a mobile ad hoc network (MANET) with each node operating in a peer-to-peer fashion with other laptops it can hear, forwarding packets across the cloud. If a computer in the cloud has access to the Internet—either directly or indirectly—then all computers in the cloud are able to share that access. The data rate across this network will not be high; however, similar networks, such as the store and forward Motoman project have supported email services to 1000 schoolchildren in Cambodia, according to Negroponte. The data rate should be sufficient for asynchronous network applications (such as email) to communicate outside the cloud; interactive uses, such as web browsing, or high-bandwidth applications, such as video streaming should be possible inside the cloud. The IP assignment for the meshed network is intended to be automatically configured, so no server administrator or an administration of IP addresses is needed.
Building a MANET is still untested under the OLPC's current configuration and hardware environment. Although one goal of the laptop is that all of its software be open source, the source code for this routing protocol is currently closed source. While there are open-source alternatives such as OLSR or B.A.T.M.A.N., none of these options is yet available running at the data-link layer (Layer 2) on the Wi-Fi subsystem's co-processor; this is critical to OLPC's power efficiency scheme. Whether Marvell Technology Group, the producer of the wireless chip set and owner of the current meshing protocol software, will make the firmware open source is still an unanswered question. As of 2011, it has not done so.
Shell
Yves Behar is the chief designer of the present XO shell. The shell of the laptop is resistant to dirt and moisture, and is constructed with 2 mm thick plastic (50% thicker than typical laptops). It contains a pivoting, reversible display, movable rubber Wi-Fi antennas, and a sealed rubber-membrane keyboard.
Input and ports
Water-resistant membrane keyboard, customized to the locale in which it will be distributed. The multiplication and division symbols are included. The keyboard is designed for the small hands of children.
Five-key cursor-control pad; four directional keys plus Enter
Four "Game Buttons" (functionally PgUp, PgDn, Home, and End) modeled after the PlayStation Controller layout (, , , and ).
Touchpad for mouse control and handwriting input
Built-in color camera, to the right of the display, VGA resolution (640×480)
Built-in stereo speakers
Built-in microphone
Audio based on the AC'97 codec, with jacks for external stereo speakers and microphones, Line-out, and Mic-in
Three external USB 2.0 ports.
More than twenty different keyboards have been laid out, to suit local needs to match the standard keyboard for the country in which a laptop is intended. Around half of these have been manufactured for prototype machines. There are parts of the world which do not have a standard keyboard representing their language. As Negroponte states this is "because there's no real commercial interest in making a keyboard". One example of where the OLPC has bridged this gap is in creating an Amharic keyboard for Ethiopia. For several languages, the keyboard is the first ever created for that language.
Negroponte has demanded that the keyboard not contain a caps lock key, which frees up keyboard space for new keys such as a future "view source" key.
Beneath the keyboard was a large area that resembled a very wide touchpad. The capacitive portion of the mousepad was an Alps GlidePoint touchpad, which was in the central third of the sensor and could be used with a finger. The full width was a resistive sensor which, though never supported by software, was intended to be used with a stylus. This unusual feature was eliminated in the CL1A hardware revision because it suffered from erratic pointer motion. Alps Electronics provided both the capacitive and resistive components of the mousepad.
Release history
The first XO prototype, displayed in 2005, had a built-in hand-crank generator for charging the battery. The XO-1 beta, released in early 2007, used a separate hand-crank generator.
The XO-1 was released in late 2007.
Power option: solar panel.
CPU: 433 MHz IA-32 x86 AMD Geode LX-700 at 0.8 watts, with integrated graphics controller
256 MB of Dual (DDR266) 133 MHz DRAM (in 2006 the specification called for 128 MB of RAM)
1024 kB (1 MB) flash ROM with open-source Open Firmware
1024 MB of SLC NAND flash memory (in 2006 the specifications called for 512 MB of flash memory)
Average battery life three hours
The XO 1.5 was released in early 2010.
Via/x86 CPU 4.5 W
Fewer physical parts
Lower power use
Power option: solar panel.
CPU: 400 to 1000 MHz IA-32 x86 VIA C7 at 0.8 watts, with integrated graphics controller
512 to 1024 MB of Dual (DDR266) 133 MHz DRAM
1024 kB (1 MB) flash ROM with open-source Open Firmware
4 GB of SLC NAND flash memory (upgradable, microSD)
Average battery life 3–5 hours (varies with active suspend)
The XO 1.75 began development in 2010, with full production starting in February 2012.
2 watt ARM CPU
Fewer physical parts, 40% lower power use.
Power option: solar panel.
CPU: 400 to 1000 MHz ARM Marvell Armada 610 at 0.8 watts, with integrated graphics controller
1024 to 2048 MB of DDR3 (TBD)
1024 kB (1 MB, TBD) flash ROM with open-source Open Firmware
4-8 GB of SLC NAND flash memory (upgradable, microSD)
Accelerometer
Average battery life 5–10 hours
The XO 2, previously scheduled for release in 2010, was canceled in favor of XO 3. With a price target $75, it had an elegant, lighter, folding dual touch-screen design. The hardware would have been open-source and sold by various manufacturers. A choice of operating system (Windows XP or Linux) was intended outside the United States. Its $150 price target in the United States includes two computers, one donated.
The OLPC XO-3 was scheduled for release in late 2012. It was canceled in favor of the XO-4. It featured one solid color multi-touch screen design, and a solar panel in the cover or carrying case.
The XO 4 is a refresh of the XO 1 to 1.75 with a later ARM CPU and an optional touch screen. This model will not be available for consumer sales. There is a mini HDMI port to allow connecting to a display.
The XO Tablet was designed by third-party Vivitar, rather than OLPC, and based on the Android platform whereas all previous XO models were based on Sugar running on top of Fedora. It is commercially available and has been used in OLPC projects.
Software
Countries are expected to remove and add software to best adapt the laptop to the local laws and educational needs. As supplied by OLPC, all of the software on the laptop will be free and open source. All core software is intended to be localized to the languages of the target countries. The underlying software includes:
A pared-down version of Fedora Linux as the operating system, with students receiving root access (although not normally operating in that mode).
Open Firmware, written in a variant of Forth
A simple custom web browser based upon the Gecko engine used by Mozilla Firefox.
A word processor based on AbiWord.
Email through the web-based Gmail service.
Online chat and VoIP programs.
Python 2.5 is the primary programming language used to develop Sugar "Activities" (a minimal Activity sketch follows this list). Several other interpreted programming languages are included, such as JavaScript, Csound, the eToys version of Squeak, and Turtle Art
A music sequencer with digital instruments: Jean Piché's TamTam
Audio and video player software: Totem or Helix.
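As a rough illustration of what a Sugar "Activity" looked like in the Python 2.5 era, the sketch below follows the style of the classic GTK2-based Sugar API; the bundle metadata (activity.info, setup.py) that a real Activity also needs is omitted.

# Minimal Sugar Activity sketch in the style of the classic GTK2-based API;
# a real Activity also needs bundle metadata (activity.info, setup.py).
import gtk
from sugar.activity import activity

class HelloActivity(activity.Activity):
    def __init__(self, handle):
        activity.Activity.__init__(self, handle)   # standard Activity setup
        label = gtk.Label('Hello, XO!')
        self.set_canvas(label)                     # main work area of the Activity
        label.show()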
The laptop uses the Sugar graphical user interface, written in Python, on top of the X Window System and the Matchbox window manager. This interface is not based on the typical desktop metaphor but presents an iconic view of programs and documents and a map-like view of nearby connected users. The current active program is displayed in full-screen mode. Much of the core Sugar interface uses icons, bypassing localization issues. Sugar is also defined as having no folders present in the UI.
Steve Jobs had offered Mac OS X free of charge for use in the laptop, but according to Seymour Papert, a professor emeritus at MIT who is one of the initiative's founders, the designers wanted an operating system that can be tinkered with: "We declined because it's not open source." Therefore, Linux was chosen. However, after a deal with Microsoft, the laptop will now be offered with Windows XP along with an open source alternative.
Jim Gettys, responsible for the laptops' system software, has called for a re-education of programmers, saying that many applications use too much memory or even leak memory. "There seems to be a common fallacy among programmers that using memory is good: on current hardware it is often much faster to recompute values than to have to reference memory to get a precomputed value. A full cache miss can be hundreds of cycles, and hundreds of times the power use of an instruction that hits in the first level cache."
On August 4, 2006, the Wikimedia Foundation announced that static copies of selected Wikipedia articles would be included on the laptops. Jimmy Wales, chair of the Wikimedia Foundation, said that "OLPC's mission goes hand in hand with our goal of distributing encyclopedic knowledge, free of charge, to every person in the world. Not everybody in the world has access to a broadband connection." Negroponte had earlier suggested he would like to see Wikipedia on the laptop. Wales feels that Wikipedia is one of the "killer apps" for this device.
Don Hopkins announced that he is creating a free and open source port of the game SimCity to the OLPC with the blessing of Will Wright and Electronic Arts, and demonstrated SimCity running on the OLPC at the Game Developer's Conference in March 2007. The free and open source SimCity plans were confirmed at the same conference by SJ Klein, director of content for the OLPC, who also asked game developers to create "frameworks and scripting environments—tools with which children themselves could create their own content."
The laptop's security architecture, known as Bitfrost, was publicly introduced in February 2007. No passwords will be required for ordinary use of the machine. Programs are assigned certain bundles of rights at install time which govern their access to resources; users can later add more rights. Optionally, the laptops can be configured to request leases from a central server and to stop working when the leases expire; this is designed as a theft-prevention mechanism.
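The anti-theft lease mechanism amounts to a signed expiry check. The sketch below is a simplified, hypothetical illustration of that idea in Python; Bitfrost's real protocol, signing scheme and file format differ, and the key and serial number here are placeholders.

# Simplified, hypothetical illustration of a lease-style anti-theft check;
# Bitfrost's real protocol, signing scheme and file format differ.
import hmac, hashlib, json, time

SCHOOL_SERVER_KEY = b"shared-secret"        # placeholder for a real key/certificate

def verify_lease(lease_blob, signature):
    # Accept the lease only if it is authentic and not yet expired.
    expected = hmac.new(SCHOOL_SERVER_KEY, lease_blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False                        # not issued by the server
    lease = json.loads(lease_blob)
    return time.time() < lease["expires"]   # stop working once the lease lapses

# Example: a lease valid for 30 days from issue time (placeholder serial number).
blob = json.dumps({"laptop": "XO-000000", "expires": time.time() + 30 * 86400}).encode()
sig = hmac.new(SCHOOL_SERVER_KEY, blob, hashlib.sha256).hexdigest()
print(verify_lease(blob, sig))              # True until the expiry date passes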
The pre-8.20 software versions were criticized for bad wireless connectivity and other minor issues.
Deployment
The XO-1 is nicknamed ceibalita in Uruguay after the Ceibal project.
Reception and reviews
The hand-crank system for powering the laptop was abandoned by designers shortly after it was announced, and the "mesh" internet-sharing approach performed poorly and was then dropped. Bill Gates of Microsoft criticized the screen quality.
Some critics of the program would have preferred less money being spent on technology and more money being spent on clean water and "real schools". Some supporters worried about the lack of plans for teaching students. The program was based on constructionism, which is the idea that, if they had the tools, the kids would largely figure out how to do things on their own. Others wanted children to learn the Microsoft Windows operating system, rather than OLPC's lightweight Linux derivative, on the belief that the children would use Microsoft Windows in their careers. Intel's Classmate PC used Microsoft Windows and sold for US$200 to $400.
The project was known as "the $100 laptop", but it originally cost US$130 for a bare-bones laptop, and then the price rose to $180 in the next revision. The solid-state alternative to a hard drive was sturdy, which meant that the laptop could be dropped with a lower risk of breaking – although more laptops were broken than expected – but it was costly, so the machines had limited storage capacity.
See also
Classmate PC
Comparison of netbooks
Computer technology for developing areas
eMate 300
Digital gap
Lemote
Linutop
OLPC XO-3
PlayPower
Sakshat
Sinomanic
VIA pc-1 Initiative
Zonbu
Notes
References
$100 Laptop Nears Launch, SPIE; The International Society for Optical Engineering. The Optics, Photonics, Fibers, and Lasers Resource, July 2006
$100 laptop production begins, BBC News, July 22, 2007
$100-laptop created for world's poorest countries, New Scientist, November 17, 2005
Doing it for the kids, man: Children's laptop inspires open source projects October 27, 2006 Article about how the project's hardware constraints will lead to better apps and kludge-removal for everyone
First video of a working "One Laptop Per Child" laptop – demonstration of the first working prototype, by Silicon Valley Sleuth blog
"Hand-cranked computers: Is this a wind-up?", The Independent, November 24, 2005
"Laptop with a mission widens its audience", The New York Times, October 4, 2007
"Make your own $100 laptop...?", Make magazine, December 2, 2005
Sugar, presentation of the user interface – Videostream
The $100 Laptop: an Up-Close Look – Web video of the first laptop prototype, by Andy Carvin
External links
2005 software
Free software
Information and communication technologies for development
Laptops
Linux-based devices
Mobile computers
One Laptop per Child
Subnotebooks
Quanta Computer
Computer-related introductions in 2005
GnuWin32
The GnuWin32 project provides native ports in the form of executable computer programs, patches, and source code for various GNU and open source tools and software, much of it modified to run on the 32-bit Windows platform. The ports included in the GnuWin32 packages are:
GNU utilities such as bc, bison, chess, Coreutils, diffutils, ed, Flex, gawk, gettext, grep, Groff, gzip, iconv, less, m4, patch, readline, rx, sharutils, sed, tar, texinfo, units, Wget, which.
Archive management and compression tools, such as: arc, arj, bzip2, gzip, lha, zip, zlib.
Non-GNU utilities such as: cygutils, file, ntfsprogs, OpenSSL, PCRE.
Graphics tools.
PDCurses.
Tools for processing text.
Mathematical software and statistics software.
Most programs have dependencies (typically DLLs), so that the executable files cannot simply be run in Windows unless files they depend upon are available. An alternative set of ported programs is UnxUtils; these are usually older versions, but depend only on the Microsoft C-runtime msvcrt.dll.
There is a package maintenance utility, GetGnuWin32, to download and install or update current versions of all GnuWin32 packages.
See also
Cygwin
DJGPP
Windows Subsystem for Linux
MinGW, MSYS
UnxUtils
UWIN
References
External links
Package List
Free software programmed in C
Free software programmed in C++
Free system software
Windows-only free software
Open service interface definitions
Open service interface definitions (OSIDs) are programmatic interface specifications describing services. These interfaces are specified by the Open Knowledge Initiative (O.K.I.) to implement a service-oriented architecture (SOA) to achieve interoperability among applications across a varied base of underlying and changing technologies.
To preserve the investment in software engineering, program logic is separated from underlying technologies through the use of software interfaces each of which defines a contract between a service consumer and a service provider. This separation is the basis of any valid SOA. While some methods define the service interface boundary at a protocol or server level, OSIDs place the boundary at the application level to effectively insulate the consumer from protocols, server identities, and utility libraries that are in the domain to a service provider resulting in software which is easier to develop, longer-lasting, and usable across a wider array of computing environments.
OSIDs assist in software design and development by breaking up the problem space across service interface boundaries. Because network communication issues are addressed within a service provider and below the interface, there isn't an assumption that every service provider implement a remote communications protocol (though many do). OSIDs are also used for communication and coordination among the various components of complex software which provide a means of organizing design and development activities for simplified project management.
OSID providers (implementations) are often reused across a varied set of applications. Once software is made to understand the interface contract for a service, other compliant implementations may be used in its place. This achieves reusability at a high level (a service level) and also serves to easily scale software written for smaller more dedicated purposes.
An OSID provider implementation may be composed of an arbitrary number of other OSID providers. This layering technique is an obvious means of abstraction. When all the OSID providers implement the same service, this is called an adapter pattern. Adapter patterns are powerful techniques to federate, multiplex, or bridge different services contracting from the same interface without the modification to the application.
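The adapter/federation idea can be loosely illustrated in Python; the RepositoryProvider interface and its method below are invented for this sketch and are not the actual OSID signatures. A consumer written against the single interface can be handed a federating provider that delegates to several underlying providers, without any change to the consumer.

# Loose illustration of federating several providers behind one service
# interface; the RepositoryProvider interface here is invented, not an OSID.
from abc import ABC, abstractmethod

class RepositoryProvider(ABC):
    @abstractmethod
    def find_assets(self, keyword):
        ...

class LocalRepository(RepositoryProvider):
    def __init__(self, assets):
        self._assets = assets
    def find_assets(self, keyword):
        return [a for a in self._assets if keyword in a]

class FederatingRepository(RepositoryProvider):
    # Adapter that presents many providers as one, unknown to the consumer.
    def __init__(self, providers):
        self._providers = providers
    def find_assets(self, keyword):
        results = []
        for p in self._providers:          # delegate to each underlying provider
            results.extend(p.find_assets(keyword))
        return results

# The consumer only sees the RepositoryProvider contract:
repo = FederatingRepository([LocalRepository(["intro.pdf", "syllabus.doc"]),
                             LocalRepository(["intro video"])])
print(repo.find_assets("intro"))           # ['intro.pdf', 'intro video']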
List
Agent Open Service Interface Definition
Assessment Open Service Interface Definition
Authentication Open Service Interface Definition
Authorization Open Service Interface Definition
CourseManagement Open Service Interface Definition
Dictionary Open Service Interface Definition
Filing Open Service Interface Definition
Grading Open Service Interface Definition
Hierarchy Open Service Interface Definition
Logging Open Service Interface Definition
Messaging Open Service Interface Definition
Repository Open Service Interface Definition
Scheduling Open Service Interface Definition
Workflow Open Service Interface Definition
See also
Open Knowledge Initiative
References
Baving, T., Cook, D., Green, T. Integrating the Educational Enterprise. 2003.
Kraan, W. O.K.I. and IMS, wires and sockets revisited.
Kahn, J. Screen Shots: Repository OSID Interoperability. 2005.
Kumar, V., Merriman, J., Thorne, S. Open Knowledge Initiative Final Report. 2004.
Kahn, J. Repository Developer's Guide. 2006.
Kahn, J. Managing Complexity and Surviving Technology Change. 2005.
External links
OSID Web Site
OSID Wiki
PHP OSIDs
Software architecture
Service-oriented (business computing)
Kernel
Kernel may refer to:
Computing
Kernel (operating system), the central component of most operating systems
Kernel (image processing), a matrix used for image convolution
Compute kernel, in GPGPU programming
Kernel method, in machine learning
Kernelization, a technique for designing efficient algorithms
Kernel, a routine that is executed in a vectorized loop, for example in general-purpose computing on graphics processing units
KERNAL, the Commodore operating system
Mathematics
Objects
Kernel (algebra), a general concept that includes:
Kernel (linear algebra) or null space, a set of vectors mapped to the zero vector
Kernel (category theory), a generalization of the kernel of a homomorphism
Kernel (set theory), an equivalence relation: partition by image under a function
Difference kernel, a binary equalizer: the kernel of the difference of two functions
Functions
Kernel (geometry), the set of points within a polygon from which the whole polygon boundary is visible
Kernel (statistics), a weighting function used in kernel density estimation to estimate the probability density function of a random variable
Integral kernel or kernel function, a function of two variables that defines an integral transform
Heat kernel, the fundamental solution to the heat equation on a specified domain
Convolution kernel
Stochastic kernel, the transition function of a stochastic process
Transition kernel, a generalization of a stochastic kernel
Pricing kernel, the stochastic discount factor used in mathematical finance
Positive-definite kernel, a generalization of a positive-definite matrix
Kernel trick, in statistics
Reproducing kernel Hilbert space
Science
Seed, inside the nut of most plants or the fruitstone of drupes, especially:
Apricot kernel
Corn kernel
Palm kernel
Wheat kernel
Atomic nucleus, the center of an atom
Companies
Kernel (neurotechnology company), a developer of neural interfaces
The Kernel Brewery, a craft brewery in London
The Kernel, an Internet culture website, now part of The Daily Dot
Other uses
Kernel (EP), by the band Seam
Kernel Fleck, a character in The Demonata series of books
Brigitte Kernel (born 1959), French journalist and writer
See also
Colonel, a senior military officer
Lotus Symphony (MS-DOS)
Lotus Symphony was an integrated software package for creating and editing text, spreadsheets, charts and other documents on the MS-DOS operating systems. It was released by Lotus Development as a follow-on to its popular spreadsheet program, Lotus 1-2-3, and was produced from 1984–1992. Lotus Jazz on the Apple Macintosh was a sibling product.
IBM revived the name Lotus Symphony in 2007 for a new office suite based on OpenOffice.org, but the two programs are otherwise unrelated.
History
Lotus 1-2-3 had originally been billed as an integrated product with spreadsheet, database and graphing functions (hence the name "1-2-3"). Other products described as "integrated", such as Ashton-Tate's Framework and AppleWorks, from Apple Computer, normally included word processor functionality. Symphony was Lotus' response.
Overview
Symphony for MS-DOS is a program that loads entirely into memory on startup, and can run as an MS-DOS task on versions of Microsoft Windows (3.x/95/98/ME). Using the Command Prompt, and a .pif file, Symphony can also be used on Windows XP and its successors.
Using ALT+F10 the user can alternate among the five "environments" of the program, each a rendering of the same underlying data. The environments are:
SHEET, a spreadsheet program very similar to 1-2-3
DOC, a word processor
GRAPH, a graphical charting program
FORM, a table-based database management system
COMM, a communications program
Several "add-in applications" can be "attached" and activated, extending Symphony's capabilities, including a powerful macro manager, a document outliner, a spell-checker, statistics, various communications configurations, and a tutorial, which demonstrates Symphony usage by running macros. The program allows the screen to be split into panes and distinct Windows, showing different views of the underlying data simultaneously, each of which can display any of the five environments. The user is then able to see that changes made in one environment are reflected in others simultaneously, perhaps the package's most interesting feature.
All the data that Symphony handles is kept in spreadsheet-like cells. The other environments—word processing, database, communications, graphics—in essence only change the display format and focus of that data (including available menus, special keys, and functionality), which can be saved and retrieved as .WR1 files.
Symphony was designed to work completely in the standard 640k of conventional memory, supplemented by any expanded memory. Similar and competitive packages included SmartWare, Microsoft Works, Context MBA, Framework, Enable and Ability Office.
Symphony's spreadsheet engine was similar to, but not the same as, the one used in Lotus 1-2-3, once the most popular of its kind. Additional enhancements included:
The ability to create unique, application-like spreadsheets using customizable macro-driven menus and display windows, the result being menu-driven applications that, to the user, bore little resemblance to their spreadsheet origins.
A rearranged worksheet menu, placing COPY as the first menu item, then the other most frequently used items after that.
Additional @ formula functions building on 1-2-3's spreadsheet-only formulas.
Multiple menu systems, retaining 1-2-3's uniquely identified first-character menu items.
The addition of the TAB key to anchor ranges, instead of just using the period key.
The ability to copy "to a location" and end up at that location, instead of at the copy "from" location.
Symphony put the power of the spreadsheet at the user's fingertips and used all of the available keys on IBM's 84-key PC keyboard. In this way, the user could use both hands to select menu functions, navigate menus and spreadsheets, and perform all other Symphony functions by touch. The introduction of the US IBM PC 104-key keyboard and later ergonomic keyboards diluted this advantage.
Compared to other word processors of the day such as Micropro WordStar 3.3, WordPerfect 4.2, and Microsoft Word 2.0, Symphony's word processing environment was simple, but effective and uncomplicated.
Compared to other database programs of the day—Ashton-Tate's dBase III, MDBS Knowledgeman, Borland Paradox 2.0 and Borland Reflex 1.0—Symphony's FORM environment was not as robust, lacking the analytical abilities of Reflex and the pseudo-relational power of dBase III. However, it was integrated directly into the spreadsheet and included the ability to "generate" a FORM from spreadsheet fields. The generator would automatically create the database input form and all the underlying spreadsheet architecture, with range names and query fields, turning a simple spreadsheet into an instant database in seconds.
Symphony 3.0 extended earlier enhancements with additional add-ons, most notably:
WYSIWYG (what-you-see-is-what-you-get) GUI (graphical user interface) and the addition of mouse support
BASE, the ability to integrate with any dBase IV file, no matter its size.
ExtraK add-on, extending memory capabilities for spreadsheets larger than 4 MB.
Like its predecessor Lotus 1-2-3, Symphony contained a reasonably powerful programming language referred to as its "Symphony Command Language" (SCL), which could be saved either within a spreadsheet or separately in "libraries" in the form of macros: lists of menu operations, data, and other macro keywords. (One is "menucall," which allows users to call their own menus, embedded into spreadsheets, which behave just like Symphony's own.) Symphony's "learn" mode for macro recording automated this process, helping the end-user to quickly write macros to duplicate repetitive tasks or to go beyond that, without the need to understand computer programming. One of the most significant features of Symphony was the integration of the various modules using this command language. In its day, it was one of the few programs that could log onto a stock market source, select data using dynamic or pre-assigned criteria, place that data into a spreadsheet, perform calculations, then chart the data and print out the results. All of this could take place unattended on a preset schedule.
See also
Lotus Multi-Byte Character Set (LMBCS)
References
External links
Dinosaur Sightings: Lotus Symphony 3.0 (for DOS) by Greg Shultz, TechRepublic
1984 software
DOS software
Symphony
Office suites
Spreadsheet software
File deletion
File deletion is the removal of a file from a computer's file system.
All operating systems include commands for deleting files (rm on Unix, era in CP/M and DR-DOS, del/erase in MS-DOS/PC DOS, DR-DOS, Microsoft Windows etc.). File managers also provide a convenient way of deleting files. Files may be deleted one-by-one, or a whole directory tree may be deleted.
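File deletion can also be done programmatically. As a minimal illustration (the file and directory names below are hypothetical), Python's standard library exposes both single-file and whole-tree deletion:

```python
import os
import shutil

os.remove("report.txt")       # delete a single file
shutil.rmtree("old_project")  # recursively delete a whole directory tree
```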
Purpose
Examples of reasons for deleting files are:
Freeing the disk space
Removing duplicate or unnecessary data to avoid confusion
Making sensitive information unavailable to others
Removing an operating system or blanking a hard drive
Accidental removal
A common problem with deleting files is the accidental removal of information that later proves to be important. A common method to prevent this is to back up files regularly. Erroneously deleted files may then be found in archives.
Another technique often used is not to delete files instantly, but to move them to a temporary directory whose contents can then be deleted at will. This is how the "recycle bin" or "trash can" works. Microsoft Windows and Apple's macOS, as well as some Linux distributions, all employ this strategy.
In MS-DOS, one can use the undelete command. In MS-DOS the "deleted" files are not really deleted, but only marked as deleted, so they can be undeleted for some time, until the disk blocks they used are eventually taken up by other files. This is how data recovery programs work: they scan for files that have been marked as deleted. Because the freed space is reused block by block rather than file by file, a deleted file may already be partially overwritten and can sometimes be recovered only incompletely. Defragmenting a drive may also prevent undeletion, as the blocks used by a deleted file, being marked as "empty", might be overwritten.
Another precautionary measure is to mark important files as read-only. Many operating systems will warn the user trying to delete such files. Where file system permissions exist, users who lack the necessary permissions are only able to delete their own files, preventing the erasure of other people's work or critical system files.
Under Unix-like operating systems, in order to delete a file, one must usually have write permission to the parent directory of that file.
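The following is a minimal sketch of that rule, assuming a Unix-like system and a non-root user: even though the file itself is writable, removing write permission from its parent directory is enough to block deletion.

```python
import os
import stat
import tempfile

parent = tempfile.mkdtemp()
path = os.path.join(parent, "example.txt")
with open(path, "w") as f:
    f.write("data")

os.chmod(parent, stat.S_IRUSR | stat.S_IXUSR)  # drop write permission on the directory
try:
    os.remove(path)                            # fails: no write access to the parent
except PermissionError as exc:
    print("cannot delete:", exc)
finally:
    os.chmod(parent, stat.S_IRWXU)             # restore permissions so cleanup works
    os.remove(path)
    os.rmdir(parent)
```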
Sensitive data
The common problem with sensitive data is that deleted files are not really erased and so may be recovered by interested parties. Most file systems only remove the link to the data (see undelete, above). But even overwriting parts of the disk with something else or formatting it may not guarantee that the sensitive data is completely unrecoverable. Special software is available that overwrites data, and modern (post-2001) ATA drives include a secure erase command in firmware. However, high-security applications and high-security enterprises can sometimes require that a disk drive be physically destroyed to ensure data is not recoverable, as microscopic changes in head alignment and other effects can mean even such measures are not guaranteed. When the data is encrypted, only the encryption key has to be made unavailable. Crypto-shredding is the practice of 'deleting' data by deleting or overwriting (only) the encryption keys.
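A small sketch of the crypto-shredding idea follows; it assumes the third-party cryptography package (any symmetric cipher would illustrate the same point). Once the only copy of the key is discarded, the ciphertext that remains on disk cannot be decrypted.

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"sensitive record")

# Crypto-shredding: discard the key instead of wiping the ciphertext.
key = None

try:
    # Without the original key, decryption fails (shown here with a fresh key).
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except InvalidToken:
    print("data is effectively erased: ciphertext cannot be decrypted")
```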
See also
Crypto-shredding
Data erasure
Linux From Scratch
Linux From Scratch (LFS) is a type of a Linux installation and the name of a book written by Gerard Beekmans, and as of May 2021, mainly maintained by Bruce Dubbs. The book gives readers instructions on how to build a Linux system from source. The book is available freely from the Linux From Scratch site.
Projects under LFS
Linux From Scratch is a way to install a working Linux system by building all components of it manually. This is, naturally, a longer process than installing a pre-compiled Linux distribution. According to the Linux From Scratch site, the advantages to this method are a compact, flexible and secure system and a greater understanding of the internal workings of Linux-based operating systems.
To keep LFS small and focused, the book Beyond Linux From Scratch (BLFS) was created, which presents instructions on how to further develop the basic Linux system that was created in LFS. It introduces and guides the reader through additions to the system including the X Window System, desktop environments (KDE, GNOME, Xfce, LXDE), productivity software, web browsers, programming languages and tools, multimedia software, and network management and system administration tools. Since Release 5.0, the BLFS book version matches the LFS book version.
The book Cross Linux From Scratch (CLFS) focuses on cross compiling, including compiling for headless or embedded systems that can run Linux, but lack the resources needed to compile Linux. CLFS supports a broad range of processors and addresses advanced techniques not included in the LFS book such as cross-build toolchains, multilibrary support (32 & 64-bit libraries side-by-side), and alternative instruction set architectures such as Itanium, SPARC, MIPS, and Alpha.
The Linux from Scratch project, like BitBake, also supports cross-compiling Linux for ARM embedded systems such as the Raspberry Pi and BeagleBone.
The book Hardened Linux From Scratch (HLFS) focuses on security enhancements such as hardened kernel patches, mandatory access control policies, stack-smashing protection, and address space layout randomization. Besides its main purpose of creating a security-focused operating system, HLFS had the secondary goal of being a security teaching tool. It has not been updated since 2011.
Automated Linux From Scratch (ALFS) is a project designed to automate the process of creating an LFS system. It is aimed at users who have gone through the LFS and BLFS books several times and wish to reduce the amount of work involved. A secondary goal is to act as a test of the LFS and BLFS books by directly extracting and running instructions from the XML sources of the LFS and BLFS books.
Requirements and procedure
A clean partition and a working Linux system with a compiler and some essential software libraries are required to build LFS. Instead of installing from an existing Linux system, one can also use a Live CD to build an LFS system.
The project formerly maintained the Linux From Scratch Live CD. LFS Live CD contains all the source packages (in the full version of the Live CD only), the LFS book, automated building tools and (except for the minimal Live CD version) an Xfce GUI environment to work in. The official LFS Live CD is no longer maintained, and cannot be used to build LFS version 7 or later. There are, however, two unofficial builds that can be used to build a 32-bit or 64-bit kernel and userspace respectively for LFS 7.x.
First, a toolchain must be compiled consisting of the tools used to compile LFS, like GCC, glibc, binutils, and other necessary utilities. Then, the root directory must be changed (using chroot) to the toolchain's partition to start building the final system. One of the first packages to compile is glibc; after that, the toolchain's linker must be adjusted to link against the newly built glibc, so that all other packages that will make up the finished system can be linked against it as well. During the chroot phase, bash's hashing feature is turned off and the temporary toolchain's bin directory moved to the end of PATH. This way the newly compiled programs come first in PATH and the new system builds on its own new components.
List of packages in LFS
This is a list of the packages included in CLFS version 1.1.0. Unless otherwise noted, this list is applicable to all supported architectures.
Autoconf 2.61
Automake 1.10.1
Bash 3.2
Bash Documentation 3.2
Bin86 (x86_64 non-multilib only)
Binutils 2.18
Bison 2.3
Bzip2 1.0.4
CLFS-Bootscripts 1.0pre10
Coreutils 6.9
DejaGNU 1.4.4
Diffutils 2.8.7
E2fsprogs 1.40.4
Elftoaout 2.3 (Sparc and Sparc64 only)
Expect 5.43.0
File 4.23
Findutils 4.2.32
Flex 2.5.35
Gawk 3.1.6
GCC 4.2.4
Gettext 0.17
Glibc 2.7
Grep 2.5.3
Groff 1.19.2
GRUB 0.97
Gzip 1.3.12
Hfsutils 3.2.6 (PowerPC and PowerPC64 only)
Iana-Etc. 2.20
Inetutils 1.5
IPRoute2 2.6.23
Kbd 1.13
Less 418
LILO 22.8 (x86_64 non-multilib only)
Libtool 1.5.26
Linux 2.6.24.7
GNU m4 1.4.10
Make 3.81
Man 1.6e
Man-pages 3.01
Mktemp 1.5
Module-Init-Tools 3.4
Ncurses 5.6
Parted 1.8.8 (PowerPC and PowerPC64 only)
Patch 2.5.9
Perl 5.8.8
PowerPC Utils 1.1.3 (PowerPC and PowerPC64 only)
Procps 3.2.7
Psmisc 22.6
Readline 5.2
Sed 4.1.5
Shadow 4.1.2
Silo 1.4.13 (Sparc and Sparc64 only)
Sysklogd 1.5
Sysvinit 2.86
tar 1.20
Tcl 8.4.16
Texinfo 4.11
Tree 1.5.1.1
Udev 124
Util-linux-ng 2.14
Vim 7.1
Vim 7.1 language files (optional)
Yaboot 1.3.13 (PowerPC and PowerPC64 only)
Zlib 1.2.3
Standard build unit
A "standard build unit" ("SBU") is a term used during initial bootstrapping of the system, and represents the amount of time required to build the first package in LFS on a given computer. Its creation was prompted by the long time required to build an LFS system, and the desire of many users to know how long a source tarball will take to build ahead of time.
As of Linux From Scratch version 10.1, the first package built by the user is GNU binutils. When building it, users are encouraged to measure the build process using shell constructs and dub that time the system's "standard build unit". Once this number is known, an estimate of the time required to build later packages is expressed relative to the known SBU.
Several packages built during compilation take much longer to build than binutils, including the GNU C Library (rated at 4.2 SBUs) and the GNU Compiler Collection (rated at 11 SBUs). The unit must be interpreted as an approximation; various factors influence the actual time required to build a package.
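As a rough illustration of how SBU ratings are applied (the measured build time below is a made-up figure; the ratings are the ones quoted above):

```python
# Suppose building binutils on this machine took 240 seconds: 1 SBU = 240 s.
sbu_seconds = 240

# SBU ratings published in the LFS book for two later packages.
ratings = {"glibc": 4.2, "gcc": 11}

for package, sbu in ratings.items():
    minutes = sbu * sbu_seconds / 60
    print(f"{package}: roughly {minutes:.0f} minutes")
```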
Reception
LWN.net reviewed LFS in 2004.
Tux Machines wrote a review about Linux From Scratch 6.1 in 2005.
Tux Machines also has a second and a third part of the review.
See also
Other source-based Linux distributions:
Gentoo Linux
Sorcerer
Source Mage
GoboLinux
CRUX
References
External links
Interview with Gerard Beekmans
Another interview with Gerard Beekmans
Books about Linux
Free software
Works about free software
Linux distributions without systemd
IBM PS/2
The Personal System/2 or PS/2 is IBM's second generation of personal computers. Released in 1987, it officially replaced the IBM PC, XT, AT, and PC Convertible in IBM's lineup. Many of the PS/2's innovations, such as the 16550 UART (serial port), 1440 KB 3.5-inch floppy disk format, 72-pin SIMMs, the PS/2 port, and the VGA video standard, went on to become standards in the broader PC market.
The PS/2 line was created by IBM partly in an attempt to recapture control of the PC market by introducing the advanced yet proprietary Micro Channel architecture (MCA) on higher-end models. These models were in the strange position of being incompatible with the IBM-compatible hardware standards previously established by IBM and adopted in the PC industry. However, IBM's initial PS/2 computers were popular with target market corporate buyers, and by September 1988 IBM reported that it had sold 3 million PS/2 machines. This was only 18 months after the new range had been introduced.
Most major PC manufacturers balked at IBM's licensing terms for MCA-compatible hardware, particularly the per-machine royalties. In 1992, Macworld stated that "IBM lost control of its own market and became a minor player with its own technology."
The OS/2 operating system was announced at the same time as the PS/2 line and was intended to be the primary operating system for models with Intel 80286 or later processors. However, at the time of the first shipments, only IBM PC DOS 3.3 was available. OS/2 1.0 (text-mode only) and Microsoft's Windows 2.0 became available several months later. IBM also released AIX PS/2, a UNIX operating system for PS/2 models with Intel 386 or later processors.
Predecessors
1981 IBM PC
1983 IBM PC XT
1984 IBM Portable Personal Computer
1984 IBM PCjr
1984 IBM PC AT
1986 IBM PC Convertible
1986 IBM PC XT 286
Technology
IBM's PS/2 was designed to remain software compatible with their PC/AT/XT line of computers upon which the large PC clone market was built, but the hardware was quite different. PS/2 had two BIOSes: one named ABIOS (Advanced BIOS) which provided a new protected mode interface and was used by OS/2, and CBIOS (Compatible BIOS) which was included to be software compatible with the PC/AT/XT. CBIOS was so compatible that it even included Cassette BASIC. While IBM did not publish the BIOS source code, it did promise to publish BIOS entry points.
Micro Channel Architecture
Certain models of the IBM PS/2 line also introduced Micro Channel Architecture (MCA). MCA was conceptually similar to the channel architecture of the IBM System/360 mainframes. MCA was technically superior to ISA and allowed for higher speed communications within the system. The majority of MCA's features would be seen in later buses, with the exception of streaming-data procedure, channel-check reporting, error logging and internal bus-level video pass-through for devices like the IBM 8514. Transfer speeds were on par with the much later PCI standard. MCA allowed one-to-one, card to card, and multi-card to processor simultaneous transaction management, which is a feature of the PCI-X bus format.
Bus mastering capability, bus arbitration, and a primitive form of plug-and-play management of hardware were all benefits of MCA. Gilbert Held in his 2000 book Server Management observes: "MCA used an early (and user-hostile) version of what we know now as 'Plug-N′-Play', requiring a special setup disk for each machine and each card." MCA never gained wide acceptance outside of the PS/2.
When setting up a card with its disk, all choices for interrupts and other settings were made automatically: the PC read the old configuration from the floppy disk, made the necessary changes, then recorded the new configuration back to the floppy disk. This meant that the user had to keep that same floppy disk matched to that particular PC. For a small organization with a few PCs, this was annoying, but less expensive and time consuming than bringing in a PC technician to do the installation. But for large organizations with hundreds or even thousands of PCs, permanently matching each PC with its own floppy disk was a logistical nightmare. Without the original (and correctly updated) floppy disk, no changes could be made to the PC's cards.
In addition to the technical setup, legally, royalties were required for each MCA-compatible machine sold. There was nothing unique in IBM insisting on payment of royalties on the use of its patents applied to Micro Channel-based machines. Up until that time, some companies had failed to pay IBM for the use of its patents on the earlier generation of Personal Computer.
Keyboard/mouse
Layout
The PS/2 IBM Model M keyboard used the same 101-key layout of the previous IBM PC/AT Extended keyboard, itself derived from the original IBM PC keyboard. European variants had 102 keys with the addition of an extra key to the right of the left Shift key.
Interface
PS/2 systems introduced a new specification for the keyboard and mouse interfaces, which are still in use today (though increasingly supplanted by USB devices) and are thus called "PS/2" interfaces. The PS/2 keyboard interface, inspired by Apple's ADB interface, was electronically identical to the long-established AT interface, but the cable connector was changed from the 5-pin DIN connector to the smaller 6-pin mini-DIN interface. The same connector and a similar synchronous serial interface was used for the PS/2 mouse port.
The initial desktop Model 50 and Model 70 also featured a new cableless internal design, based on use of interposer circuit boards to link the internal drives to the planar (motherboard). Additionally these machines could be largely disassembled and reassembled for service without tools.
Additionally, the PS/2 introduced a new software data area known as the Extended BIOS Data Area (EBDA). Its primary use was to add a new buffer area for the dedicated mouse port. This also required making a change to the "traditional" BIOS Data Area (BDA) which was then required to point to the base address of the EBDA.
Another new PS/2 innovation was the introduction of bidirectional parallel ports which in addition to their traditional use for connecting a printer could now function as a high speed data transfer interface. This allowed the use of new hardware such as parallel port scanners, CD-ROM drives, and also enhanced the capabilities of printers by allowing them to communicate with the host PC and send back signals instead of simply being a passive output device.
Graphics
Most of the initial range of PS/2 models were equipped with a new frame buffer known as the Video Graphics Array, or VGA for short. This effectively replaced the previous EGA standard. VGA increased graphics memory to 256 KB and provided for resolutions of 640×480 with 16 colors, and 320×200 with 256 colors. VGA also provided a palette of 262,144 colors (as opposed to the EGA palette of 64 colors). The IBM 8514 and later XGA computer display standards were also introduced on the PS/2 line.
Key monitors and their maximum resolutions:
8504: 12″, 640×480, 60 Hz non-interlaced, 1991, monochrome
8507: 19″, 1024×768, 43.5 Hz interlaced, 1988, monochrome
8511: 14″, 640×480, 60 Hz non-interlaced, 1987
8512: 14″, 640×480, 60 Hz non-interlaced, 1987
8513: 12″, 640×480, 60 Hz non-interlaced, 1987
8514: 16″, 1024×768, 43.5 Hz interlaced, 1987
8515: 14″, 1024×768, 43.5 Hz interlaced, 1991
8516: 14″, 1024×768, 43.5 Hz interlaced, 1991
8518: 14″, 640×480, 75 Hz non-interlaced, 1992
9515: 14″, 1024×768, 43.5 Hz interlaced, 1992
9517: 16″, 1280×1024, 53 Hz interlaced, 1991
9518: 14″, 640×480, non-interlaced, 1992
38F4737: 10", 640×480, non-interlaced, 1989, amber monochrome plasma screen; this display was exclusive to models P70 and P75
In truth, all "XGA" 1024×768 monitors are multimode, as XGA works as an add-on card to a built-in VGA and transparently passes-thru the VGA signal when not operating in a high resolution mode. All of the listed 85xx displays can therefore sync 640×480 at 60 Hz (or 720×400 at 70 Hz) in addition to any higher mode they may also be capable of. This however is not true of the 95xx models (and some unlisted 85xx's), which are specialist workstation displays designed for use with the XGA-2 or Image Adapter/A cards, and whose fixed frequencies all exceed that of basic VGA – the lowest of their commonly available modes instead being 640×480 at 75 Hz, if not something much higher still. It is also worth noting that these were still merely dual- or "multiple-frequency" monitors, not variable-frequency (also known as multisync); in particular, despite running happily at 640×480/720×400 and 1024×768, an (e.g.) 8514 cannot sync the otherwise common intermediate 800×600 "SVGA" resolution, even at the relatively low 50 to 56 Hz refresh rates initially used.
Although the design of these adapters did not become an industry standard as VGA did, their 1024×768 pixel resolution was subsequently widely adopted as a standard by other manufacturers, and "XGA" became a synonym for this screen resolution. The lone exceptions to VGA equipage were the bottom-rung 8086-based Models 25 and 30, which had a cut-down version of VGA referred to as MCGA; the 286 models came with VGA. MCGA supported the CGA graphics modes plus 320×200 in 256 colors and 640×480 in 2 colors, but not the EGA modes or 640×480 in color.
VGA video connector
All of the new PS/2 graphics systems (whether MCGA, VGA, 8514, or later XGA) used a 15-pin D-sub connector for video out. This used analog RGB signals, rather than four or six digital color signals as on previous CGA and EGA monitors. The digital signals limited the color gamut to a fixed 16 or 64 color palette with no room for expansion. In contrast, any color depth (bits per primary) can be encoded into the analog RGB signals so the color gamut can be increased arbitrarily by using wider (more bits per sample) DACs and a more sensitive monitor. The connector was also compatible with analog grayscale displays. Unlike earlier systems (like MDA and Hercules) this was transparent to software, so all programs supporting the new standards could run unmodified whichever type of display was attached. (On the other hand, whether the display was color or monochrome was undetectable to software, so selection between application displays optimized for color or monochrome, in applications that supported both, required user intervention.) These grayscale displays were relatively inexpensive during the first few years the PS/2 was available, and they were very commonly purchased with lower-end models.
The VGA connector became the de facto standard for connecting monitors and projectors on both PC and non-PC hardware over the course of the early 1990s, replacing a variety of earlier connectors.
Storage
Apple had first popularized the 3.5" floppy on the Macintosh line and IBM brought them to the PC in 1986 with the PC Convertible. In addition, they could be had as an optional feature on the XT and AT. The PS/2 line used entirely 3.5" drives which assisted in their quick adoption by the industry, although the lack of 5.25" drive bays in the computers created problems later on in the 1990s as they could not accommodate internal CD-ROM drives. In addition, the lack of built-in 5.25" floppy drives meant that PS/2 users could not immediately run the large body of existing IBM compatible software. However IBM made available optional external 5.25" drives, with internal adapters for the early PS/2 models, to enable data transfer.
In the initial lineup, IBM used 720 KB double density (DD) capacity drives on the 8086-based models and 1440 KB high density (HD) on the 80286-based and higher models. By the end of the PS/2 line they had moved to a somewhat standardized capacity of 2880 KB.
The PS/2 floppy drives lacked a capacity detector. 1440 KB floppies had a hole so that drives could distinguish them from 720 KB floppies, preventing users from formatting the smaller-capacity disks to the higher capacity (doing so would work, but with a higher tendency of data loss). Clone manufacturers implemented the hole detection, but IBM did not. As a result, a 720 KB floppy could be formatted to 1440 KB in a PS/2, but the resulting floppy would only be readable by a PS/2 machine.
PS/2s primarily used Mitsubishi floppy drives and did not use a separate Molex power connector; the data cable also contained the power supply lines. As the hardware aged the drives often malfunctioned due to bad quality capacitors.
The PS/2 used several different types of internal hard drives. Early models used MFM or ESDI drives. Some desktop models used combo power/data cables similar to the floppy drives. Later models used DBA ESDI or Parallel SCSI. Typically, desktop PS/2 models only permitted use of one hard drive inside the computer case. Additional storage could be attached externally using the optional SCSI interface.
Memory
Later PS/2 models introduced the 72-pin SIMM which became the de facto standard for RAM modules by the mid-1990s in mid-to-late 486 and nearly all Pentium desktop systems. 72-pin SIMMs were 32/36 bits wide and replaced the old 30-pin SIMM (8/9-bit) standard. The older SIMMs were much less convenient because they had to be installed in sets of two or four to match the width of the CPU's 16-bit (Intel 80286 and 80386SX) or 32-bit (80386 and 80486) data bus, and would have been extremely inconvenient to use in Pentium systems (which featured a 64-bit memory bus). 72-pin SIMMs were also made with greater capacities (starting at 1 MB and ultimately reaching 128 MB, vs 256 KB to 16 MB and more commonly no more than 4 MB for 30-pin) and in a more finely graduated range (powers of 2, instead of powers of 4).
Many PS/2 models also used proprietary IBM SIMMs and could not be fitted with commonly available types. However industry standard SIMMs could be modified to work in PS/2 machines if the SIMM-presence and SIMM-type detection bridges, or associated contacts, were correctly rewired.
Models
At launch, the PS/2 family comprised the Model 30, 50, 60 and 80; the Model 25 was launched a few months later.
The PS/2 Models 25 and 30 (IBM 8525 and 8530, respectively) were the lowest-end models in the lineup and meant to replace the IBM PC and XT. Model 25s came with either an 8 MHz 8086 CPU, 512 KB of RAM, and 720 KB floppy disks, or an 80286 CPU. The 8086 models had ISA expansion slots and a built-in MCGA monitor, which could be either color or monochrome, while the 80286 models came with a VGA monitor and ISA expansion slots. A cut-down Model M with no numeric keypad was standard, with the normal keyboard being an extra-cost option. There was a very rare later model called the PS/2 Model 25-SX which sported either a 16 MHz or 20 MHz 386 CPU, up to 12 MB of memory, an IDE hard drive, a VGA monitor and 16-bit ISA slots, making it the highest-end Model 25 available, denoted by model number 8525-L41.
The Model 30 had either an 8086 or 286 CPU and sported the full 101-key keyboard and standalone monitor along with three 8-bit ISA expansion slots. 8086 models had 720 KB floppies while 286 models had 1440 KB ones. Both the Model 25 and 30 could have an optional 20 MB ST-506 hard disk (which in the Model 25 took the place of the second floppy drive if so equipped and used a proprietary 3.5" form factor). 286-based Model 30s are otherwise a full AT-class machine and support up to 4 MB of RAM.
Later ISA PS/2 models comprised the Model 30-286 (a Model 30 with an Intel 286 CPU), Model 35 (IBM 8535) and Model 40 (IBM 8540) with Intel 386SX or IBM 386SLC processors.
The higher-numbered models (above 50) were equipped with the Micro Channel bus and mostly ESDI or SCSI hard drives (models 60-041 and 80-041 had MFM hard drives). PS/2 Models 50 (IBM 8550) and 60 (IBM 8560) used the Intel 286 processor, the PS/2 Models 70 (IBM 8570) and 80 used the 386DX, while the mid-range PS/2 Model 55SX (IBM 8555-081) used the 16/32-bit 386SX processor. The Model 50 was revised to the Model 50Z, still with a 10 MHz 80286 processor, but with memory run at zero wait states, and a switch to ESDI hard drives. Later Model 70 and 80 variants (B-xx) also used 25 MHz Intel 486 processors, in a complex called the Power Platform.
The PS/2 Models 90 (IBM 8590/9590) and 95 (IBM 8595/9595/9595A) used Processor Complex daughterboards holding the CPU, memory controller, MCA interface, and other system components. The available Processor Complex options ranged from the 20 MHz Intel 486 to the 90 MHz Pentium and were fully interchangeable. The IBM PC Server 500, which has a motherboard identical to the 9595A, also uses Processor Complexes.
Other later Micro Channel PS/2 models included the Model 65SX with a 16 MHz 386SX; various Model 53 (IBM 9553), 56 (IBM 8556) and 57 (IBM 8557) variants with 386SX, 386SLC or 486SLC2 processors; the Models 76 and 77 (IBM 9576/9577) with 486SX or 486DX2 processors respectively; and the 486-based Model 85 (IBM 9585).
The IBM PS/2E (IBM 9533) was the first Energy Star compliant personal computer. It had a 50 MHz IBM 486SLC processor, an ISA bus, four PC card slots, and an IDE hard drive interface. The environmentally friendly PC borrowed many components from the ThinkPad line and was composed of recycled plastics, designed to be easily recycled at the end of its life, and used very little power.
The IBM PS/2 Server 195 and 295 (IBM 8600) were 486-based dual-bus MCA network servers supporting asymmetric multiprocessing, designed by Parallan Computer Inc.
The IBM PC Server 720 (IBM 8642) was the largest MCA-based server made by IBM, although it was not, strictly speaking, a PS/2 model. It could be fitted with up to six Intel Pentium processors interconnected by the Corollary C-bus and up to eighteen SCSI hard disks. This model was equipped with seven combination MCA/PCI slots.
PS/2 portables, laptops and notebooks
IBM also produced several portable and laptop PS/2s, including the Model L40 (ISA-bus 386SX), N33 (IBM's first notebook-format computer, from 1991; Model 8533, 386SX), N51 (386SX/SLC), P70 (386DX) and P75 (486DX2).
The IBM ThinkPad 700C, aside from being labeled "700C PS/2" on the case, featured MCA and a 486SLC CPU.
6152 Academic System
The 6152 Academic System was a workstation computer developed by IBM's Academic Information Systems (ACIS) division for the university market introduced in February 1988. The 6152 was based on the PS/2 Model 60, adding a RISC Adapter Card on the Micro Channel bus. This card was a co-processor that enabled the 6152 to run ROMP software compiled for IBM's Academic Operating System (AOS), a version of BSD UNIX for the ROMP that was only available to select colleges and universities.
The RISC Adapter Card contained the ROMP-C microprocessor (an enhanced version of the ROMP that first appeared in the IBM RT PC workstations), a memory management unit (the ROMP supported virtual memory), a floating-point coprocessor, and up to 8 MB of memory for use by the ROMP. The 6152 was the first computer to use the ROMP-C, which would later be introduced in new RT PC models.
Marketing
During the 1980s, IBM's advertising of the original PC and its other product lines had frequently used the likeness of Charlie Chaplin. For the PS/2, however, IBM augmented this character with a new advertising jingle.
Another campaign featured actors from the television show M*A*S*H playing the staff of a contemporary (i.e. late-1980s) business in roles reminiscent of their characters' roles from the series. Harry Morgan, Larry Linville, William Christopher, Wayne Rogers, Gary Burghoff, Jamie Farr, and Loretta Swit were in from the beginning, whereas Alan Alda joined the campaign later.
The profound lack of success of these advertising campaigns led, in part, to IBM's termination of its relationships with its global advertising agencies; these accounts were reported by Wired magazine to have been worth over $500 million a year, and the largest such account review in the history of business.
Overall, the PS/2 line was largely unsuccessful with the consumer market, even though the PC-based Models 30 and 25 were an attempt to address that. With what was widely seen as a technically competent but cynical attempt to gain undisputed control of the market, IBM provoked an industry backlash, and rival manufacturers went on to standardize VESA, EISA and PCI instead. In large part, IBM failed to establish a link in the consumer's mind between the PS/2 Micro Channel architecture and the immature OS/2 1.x operating system; the more capable OS/2 version 2.0 was not released until 1992.
The firm suffered massive financial losses for the remainder of the 1980s, forfeiting its previously unquestioned position as the industry leader, and eventually lost its status as the largest manufacturer of personal computers, first to Compaq and then to Dell. From a high of 10,000 employees in Boca Raton before the PS/2 came out, only seven years later, IBM had $600 million in unsold inventory and was laying off staff by the thousands.
After the failure of the PS/2 line to establish a new standard, IBM was forced to revert to building ISA PCs—following the industry it had once led—with the low-end PS/1 line and later with the more compatible Aptiva and PS/ValuePoint lines.
Still, the PS/2 platform experienced some success in the corporate sector where the reliability, ease of maintenance and strong corporate support from IBM offset the rather daunting cost of the machines. Also, many people still lived with the motto "Nobody ever got fired for buying an IBM". In the mid-range desktop market, the models 55SX and later 56SX were the leading sellers for almost their entire lifetimes. Later PS/2 models saw a production life span that took them into the late 1990s, within a few years of IBM selling off the division.
See also
Successors
IBM PS/ValuePoint
Ambra Computer Corporation
IBM Aptiva
Concurrent
IBM PS/1
Notes
References
Further reading
Burton, Greg. IBM PC and PS/2 pocket reference. NDD (the old dealer channel), 1991.
Byers, T.J. IBM PS/2: A Reference Guide. Intertext Publications, 1989. .
Dalton, Richard and Mueller, Scott. IBM PS/2 Handbook . Que Publications, 1989. .
Held, Gilbert. IBM PS/2: User's Reference Manual. John Wiley & Sons Inc., 1989. .
Hoskins, Jim. IBM PS/2. John Wiley & Sons Inc., fifth revised edition, 1992. .
Leghart, Paul M. The IBM PS/2 in-depth report. Pachogue, NY: Computer Technology Research Corporation, 1988.
Newcom, Kerry. A Closer Look at IBM PS/2 Microchannel Architecture. New York: McGraw-Hill, 1988.
Norton, Peter. Inside the IBM PC and PS/2. Brady Publishing, fourth edition 1991. .
Outside the IBM PC and PS/2: Access to New Technology. Brady Publishing, 1992. .
Shanley, Tom. IBM PS/2 from the Inside Out. Addison-Wesley, 1991. .
External links
IBM Type 8530
IBM PS/2 Personal Systems Reference Guide 1992 - 1995
Computercraft - The PS/2 Resource Center
Model 9595 Resource, covers all PS/2 models and adapters
PS/2 keyboard pinout
Computer Chronicles episode on the PS/2
IBM PS/2 L40 SX (8543)
PS/2
OS/2
16-bit computers
32-bit computers
Computer-related introductions in 1987
Stephen Warshall
Stephen Warshall (November 15, 1935 – December 11, 2006) was an American computer scientist. During his career, Warshall carried out research and development in operating systems, compiler design, language design, and operations research. Warshall died on December 11, 2006 of cancer at his home in Gloucester, Massachusetts. He is survived by his wife, Sarah Dunlap, and two children, Andrew D. Warshall and Sophia V. Z. Warshall.
Early life
Warshall was born in New York City and went to public school in Brooklyn. He graduated from A.B. Davis High School in Mount Vernon, New York and attended Harvard University, receiving a bachelor's degree in mathematics in 1956. He never received an advanced degree since at that time no programs were available in his areas of interest. However, he took graduate courses at several different universities and contributed to the development of computer science and software engineering. In the 1971–1972 academic year, he lectured on software engineering at French universities.
Employment
After graduating from Harvard, Warshall worked at ORO (Operation Research Office), a program set up by Johns Hopkins to do research and development for the United States Army. In 1958, he left ORO to take a position at a company called Technical Operations, where he helped build a research and development laboratory for military software projects. In 1961, he left Technical Operations to found Massachusetts Computer Associates. Later, this company became part of Applied Data Research (ADR). After the merger, Warshall sat on the board of directors of ADR and managed a variety of projects and organizations. He retired from ADR in 1982 and taught a weekly class in Biblical Hebrew at Temple Ahavat Achim in Gloucester, Massachusetts.
Warshall's algorithm
There is an interesting anecdote about his proof that the transitive closure algorithm, now known as Warshall's algorithm, is correct. He and a colleague at Technical Operations bet a bottle of rum on who first could determine whether this algorithm always works. Warshall came up with his proof overnight, winning the bet and the rum, which he shared with the loser of the bet. Because Warshall did not like sitting at a desk, he did much of his creative work in unconventional places such as on a sailboat in the Indian Ocean or in a Greek lemon orchard.
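For reference, a standard textbook rendering of the transitive-closure algorithm in Python (this is the usual modern formulation, not Warshall's original notation): given a boolean adjacency matrix, it computes which vertices can reach which others.

```python
def warshall(adjacency):
    """Return the transitive closure of a directed graph.

    adjacency is a square boolean matrix: adjacency[i][j] is True if there
    is an edge from vertex i to vertex j.
    """
    n = len(adjacency)
    reach = [row[:] for row in adjacency]  # copy so the input stays untouched
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach


# Edges 0->1 and 1->2 imply that the closure also contains 0->2.
closure = warshall([
    [False, True,  False],
    [False, False, True],
    [False, False, False],
])
print(closure[0][2])  # True
```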
References
Journal of the ACM bibliography – Selected citations of Warshall paper
Stephen Warshall, Boston Globe, Obituaries, December 13, 2006
Temple Ahavat Achim Celebrates 100 Years on Cape Ann, Gloucester Jewish Journal, May 7–20, 2004
Further reading
Stephen Warshall. A theorem on Boolean matrices. Journal of the ACM, 9(1):11–12, January 1962.
Thomas E. Cheatham, Jr., Stephen Warshall: Translation of retrieval requests couched in a "semiformal" English-like language. Commun. ACM 5(1): 34–39 (1962)
See also
Warshall's algorithm
1935 births
2006 deaths
20th-century American mathematicians
21st-century American mathematicians
American academics
American computer programmers
American computer scientists
American technology writers
American operations researchers
Harvard College alumni
OOM
OOM or oom may refer to:
Science and technology
Out of memory, a pathological state of a computer operation
Object-oriented modeling, a modeling paradigm mainly used in computer programming
Order of magnitude, a measurement term
Person
Oom (Afrikaans and Dutch for "uncle") is a respectful form of address for an older man. People nicknamed "Oom" include:
Paul Kruger (1825–1904), former president of the Transvaal Republic nicknamed Oom Paul
Govan Mbeki (1910–2001), South African politician and father of Thabo Mbeki nicknamed Oom Gov
Raymond Mhlaba (1920–2005), South African politician and first Premier of the Eastern Cape nicknamed Oom Ray
Other uses
Order of Myths, an Alabama Mardi Gras organization
Officer of the Order of Merit of the Police Forces (post-nominal letters)
Zoom Airlines (ICAO code)
Oom, a villain character; see List of Batman: The Brave and the Bold characters
Odyssey of the Mind (OoM), a creative problem-solving competition
Out of mana (OOM), in video games, when a character does not have enough mana to use its abilities.
Server (computing)
In computing, a server is a piece of computer hardware or software (computer program) that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.
Client–server systems are most frequently implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgment. Designating a computer as "server-class hardware" implies that it is specialized for running servers. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.
History
The use of the word server in computing comes from queueing theory, where it dates to the mid-20th century, being notably used (along with "service") in the paper that introduced Kendall's notation. Earlier papers used more concrete terms such as "[telephone] operators".
In computing, "server" dates at least to RFC 5 (1969), one of the earliest documents describing ARPANET (the predecessor of Internet), and is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4, contrasting "serving-host" with "using-host".
The Jargon File defines "server" in the common sense of a process performing service for requests, usually remote; this sense appears at least as early as the 1981 (1.1.0) version.
Operation
Strictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. In addition to server, the words serve and service (as verb and as noun respectively) are frequently used, though servicer and servant are not. The word service (noun) may refer either to the abstract form of functionality, e.g. a Web service, or to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve [up] web pages to users" or "service their requests".
The server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with peer-to-peer model in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes is a client. Thus any general-purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.
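As a concrete illustration of the request–response model, here is a minimal TCP server sketch using Python's standard socketserver module; the address, port and protocol details are arbitrary assumptions, not a reference implementation.

```python
import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        request = self.request.recv(1024)        # read the client's request
        self.request.sendall(b"OK: " + request)  # send the response back

if __name__ == "__main__":
    # Any process that connects, sends bytes, and reads the reply is acting
    # as a client of this server.
    with socketserver.TCPServer(("127.0.0.1", 9000), EchoHandler) as server:
        server.serve_forever()
```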
While request–response is the most common client-server design, there are others, such as the publish–subscribe pattern. In the publish-subscribe pattern, clients register with a pub-sub server, subscribing to specified types of messages; this initial registration may be done by request-response. Thereafter, the pub-sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request-response.
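By contrast, the sketch below models the publish–subscribe pattern with a toy in-memory broker (all names are illustrative): subscribers register once, and the broker then pushes every matching message to them without any further requests.

```python
from collections import defaultdict

class PubSubBroker:
    """Toy in-memory publish–subscribe broker."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Initial registration; in a networked broker this step could itself
        # be carried out with a request–response exchange.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)  # push to the subscriber; it never has to ask

broker = PubSubBroker()
broker.subscribe("alerts", lambda msg: print("received:", msg))
broker.publish("alerts", "disk nearly full")
```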
Purpose
The role of a server is to share data as well as to share resources and distribute work. A server computer can serve its own computer programs as well; depending on the scenario, this could be part of a quid pro quo transaction, or simply a technical possibility.
Almost the entire structure of the Internet is based upon a client–server model. High-level root nameservers, DNS, and routers direct the traffic on the internet. There are millions of servers connected to the Internet, running continuously throughout the world and virtually every action taken by an ordinary Internet user requires one or more interactions with one or more servers. There are exceptions that do not use dedicated servers; for example, peer-to-peer file sharing and some implementations of telephony (e.g. pre-Microsoft Skype).
Hardware
Hardware requirements for servers vary widely, depending on the server's purpose and its software. Servers are, more often than not, more powerful and expensive than the clients that connect to them.
Since servers are usually accessed over a network, many run unattended without a computer monitor or input device, audio hardware and USB interfaces. Many servers do not have a graphical user interface (GUI). They are configured and managed remotely. Remote management can be conducted via various methods including Microsoft Management Console (MMC), PowerShell, SSH and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO.
Large servers
Large traditional single servers would need to be run for long periods without interruption. Availability would have to be very high, making hardware reliability and durability extremely important. Mission-critical enterprise servers would be very fault tolerant and use specialized hardware with low failure rates in order to maximize uptime. Uninterruptible power supplies might be incorporated to guard against power failure. Servers typically include hardware redundancy such as dual power supplies, RAID disk systems, and ECC memory, along with extensive pre-boot memory testing and verification. Critical components might be hot swappable, allowing technicians to replace them on the running server without shutting it down, and to guard against overheating, servers might have more powerful fans or use water cooling. They will often be able to be configured, powered up and down, or rebooted remotely, using out-of-band management, typically based on IPMI. Server casings are usually flat and wide, and designed to be rack-mounted, either on 19-inch racks or on Open Racks.
These types of servers are often housed in dedicated data centers. These will normally have very stable power and Internet and increased security. Noise is also less of a concern, but power consumption and heat output can be a serious issue. Server rooms are equipped with air conditioning devices.
Clusters
A server farm or server cluster is a collection of computer servers maintained by an organization to supply server functionality far beyond the capability of a single device. Modern data centers are now often built of very large clusters of much simpler servers, and there is a collaborative effort, Open Compute Project around this concept.
Appliances
A class of small specialist servers called network appliances are generally at the low end of the scale, often being smaller than common desktop computers.
Mobile
A mobile server has a portable form factor, e.g. a laptop. In contrast to large data centers or rack servers, the mobile server is designed for on-the-road or ad hoc deployment into emergency, disaster or temporary environments where traditional servers are not feasible due to their power requirements, size, and deployment time. The main beneficiaries of so-called "server on the go" technology include network managers, software or database developers, training centers, military personnel, law enforcement, forensics, emergency relief groups, and service organizations. To facilitate portability, features such as the keyboard, display, battery (uninterruptible power supply, to provide power redundancy in case of failure), and mouse are all integrated into the chassis.
Operating systems
On the Internet the dominant operating systems among servers are UNIX-like open-source distributions, such as those based on Linux and FreeBSD, with Windows Server also having a significant share. Proprietary operating systems such as z/OS and macOS Server are also deployed, but in much smaller numbers.
Specialist server-oriented operating systems have traditionally had features such as:
GUI not available or optional
Ability to reconfigure and update both hardware and software to some extent without restart
Advanced backup facilities to permit regular and frequent online backups of critical data,
Transparent data transfer between different volumes or devices
Flexible and advanced networking capabilities
Automation capabilities such as daemons in UNIX and services in Windows
Tight system security, with advanced user, resource, data, and memory protection.
Advanced detection and alerting on conditions such as overheating, processor and disk failure.
In practice, today many desktop and server operating systems share similar code bases, differing mostly in configuration.
Energy consumption
In 2010, data centers (servers, cooling, and other electrical infrastructure) were responsible for 1.1-1.5% of electrical energy consumption worldwide and 1.7-2.2% in the United States. One estimate is that total energy consumption for information and communications technology saves more than 5 times its carbon footprint in the rest of the economy by increasing efficiency.
Global energy consumption is increasing due to the increasing demand for data and bandwidth. The Natural Resources Defense Council (NRDC) states that data centers used 91 billion kilowatt-hours (kWh) of electrical energy in 2013, which accounts for about 3% of global electricity usage.
Environmental groups have placed focus on the carbon emissions of data centers, as they account for 200 million metric tons of carbon dioxide per year.
See also
Peer-to-peer
Notes
References
Further reading
Server hardware
ALT Linux
ALT Linux is a set of Russian operating systems based on RPM Package Manager (RPM) and built on a Linux kernel and Sisyphus package repository. ALT Linux has been developed collectively by ALT Linux Team developers community and ALT Linux Ltd.
History
ALT Linux Team arose from the merger of IPLabs Linux Team and the Linux community of the Institute of Logic, Cognitive Science and Development of Personality. The latter cooperated with Mandrake Linux and SUSE Linux teams to improve localization (specifically Cyrillic script), producing a Linux-Mandrake Russian Edition (RE).
Mandrake and Mandrake RE became different distributions and thus the decision was made to create a separate project. The name ALT was coined, which is a recursive acronym meaning ALT Linux Team.
The split led to the creation of the Sisyphus package repository, which is the unstable development branch of ALT Linux. In 2007, the Sisyphus repository won a prestigious CNews award in the Information Security category.
Releases
Version history
Linux-Mandrake
Linux-Mandrake 7.0 Russian Edition, released in the beginning of 2000, was the first de facto independent distribution of IPLabs Linux Team. It kept the name Mandrake with permission from Mandrake developers.
Spring 2001, the second IPLabs Linux Team release, followed several months later.
ALT Linux 1.0
Since the summer of 2001, ALT Linux Team has been formed and the ALT Linux name has been established.
The first ALT Linux release was ALT Linux Junior 1.0, released in summer of 2001, followed by the updated ALT Linux Junior 1.1 in autumn of 2001.
Junior distributions were 1CD releases.
ALT Linux 2.*
ALT Linux Master 2.0, released in May 2002, was the 4CD all-purpose Linux distribution targeted for software developers and system administrators.
ALT Linux Junior 2.0 was released in summer of 2002, as a 1CD desktop/workstation-oriented release.
ALT Linux 3.0
ALT Linux Compact 3.0 was released during autumn 2005, and consisted of 1CD/1DVD installable versions along with LiveCD (TravelCD 3.0). There were several subsequent OEM updates counting up to 3.0.5.
ALT Linux 4.0
This series changed the official naming somewhat to ALT Linux 4.0 $flavour.
Server was released in June 2007 (1CD+1DVD per platform; i586 and x86_64);
Office Server quickly followed (1CD; i586 and x86_64);
Desktop Personal in August 2007 (1DVD, LiveCD, Rescue CD; i586; KDE3);
Lite in December 2007 (installation CD, live CD and 2CD with addons; i586; Xfce4);
Terminal in December 2007 (joint release with Media Magic Ltd, 1DVD; i586; KDE3, low client RAM requirements).
There was also a more conservative school 4.0 branch maintained for the Russian schools pilot project, and several distributions specifically tailored for schools released using that as a base.
ALT Linux 4.1
Desktop was released in October 2008 (1CD/1DVD; i586 and x86_64; KDE3);
Children in December 2008 (LiveCD; i586);
Skif in December 2008 (1CD; x86_64; HPC);
School Server in February 2009 (1CD; i586).
ALT Linux 5.x
The 5.0 branch was canceled mainly due to stormy X.org conditions (and subsequently archived); 5.1 community branch was created along with p5 conservative branch later in 2009. Somewhat confusingly, distributions based on the p5/branch were numbered as ALT Linux 5.0:
Ark (client+server suite, 1DVD+1CD per platform; i586 and x86_64);
School Suite – mostly i586, also including docs, video lessons and free software for Windows (3DVD):
Server (1DVD; i586 and x86_64);
Terminal (1DVD; KDE3);
Master (1DVD/flash; KDE4);
Junior (1DVD/flash; GNOME2);
Lite (2CD; Xfce4);
New Lite (1CD/1DVD/flash; LXDE);
KDesktop (1DVD; i586 and x86_64; KDE4);
Simply Linux 5.0 (1CD/flash/LiveCD; i586; Xfce4).
Lite
A small single-CD distribution for older/low-memory computers, with Xfce as the default desktop. Available in normal and Live CD versions. Largely superseded by the LXDE-based New Lite.
Compact
Compact is a series of ALT Linux distributions tailored for beginner users. It is mostly used on workstations, home computers, and notebooks. It includes additional means for easy configuration, many office and multimedia applications, and some games. Compact was also a popular choice for OEM whitelabeling, i.e., creating a specific edition for various hardware vendors to bundle with their hardware.
Server
ALT Linux Server is a hardened single-CD server distribution. It is certified by the Federal Service for Technical and Export Control of Russia in the following categories:
by the level of monitoring for non-declared features – level 4
class of protection from unauthorized access to information – class 5
Terminal
ALT Linux Terminal is a terminal server distribution based on ALT Linux Desktop and ALTSP5: a friendly/merging fork of Linux Terminal Server Project (LTSP) which is usable on older hardware acting as thin and diskless clients (16 MB RAM is enough, while stock LTSP5 usually requires ≥ 64 MB RAM). It was also adapted for Russian School Education National Project free software package.
References
External links
Community website (in English)
Sisyphus package repository, on which ALT Linux is based
Reviews
ALT Linux 5 Ark desktop review
Customizing ALT Linux 5 Ark desktop
Simply Linux 5 review
Customizing Simply Linux 5
How to install Cairo-Dock on Simply Linux 5
RPM-based Linux distributions
X86-64 Linux distributions
Linux distributions
Russian-language Linux distributions
Mainframe computer
A mainframe computer, informally called a mainframe or big iron, is a computer used primarily by large organizations for critical applications like bulk data processing for tasks such as censuses, industry and consumer statistics, enterprise resource planning, and large-scale transaction processing. A mainframe computer is large but not as large as a supercomputer and has more processing power than some other classes of computers, such as minicomputers, servers, workstations, and personal computers. Most large-scale computer-system architectures were established in the 1960s, but they continue to evolve. Mainframe computers are often used as servers.
The term mainframe was derived from the large cabinet, called a main frame, that housed the central processing unit and main memory of early computers. Later, the term mainframe was used to distinguish high-end commercial computers from less powerful machines.
Design
Modern mainframe design is characterized less by raw computational speed and more by:
Redundant internal engineering resulting in high reliability and security
Extensive input-output ("I/O") facilities with the ability to offload to separate engines
Strict backward compatibility with older software
High hardware and computational utilization rates through virtualization to support massive throughput.
Hot-swapping of hardware, such as processors and memory.
Their high stability and reliability enable these machines to run uninterrupted for very long periods of time, with mean time between failures (MTBF) measured in decades.
Mainframes have high availability, one of the primary reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z (previously called z Systems, System z and zSeries), Unisys Dorado and Unisys Libra as among the most secure with vulnerabilities in the low single digits as compared with thousands for Windows, UNIX, and Linux. Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface (the console) and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used almost exclusively for applications (e.g. airline booking) rather than program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although ultimately supplanted by keyboard/display devices.
By the early 1970s, many mainframes acquired interactive user terminals operating as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text terminal CRT displays with integral keyboards, or later from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphic display terminals, and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces.
The infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling, and reduced physical space requirements compared to server farms.
Characteristics
Modern mainframes can run multiple different instances of operating systems at the same time. This technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication.
Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not usually available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs, via the PR/SM facility) and virtual machines (via the z/VM operating system). Many mainframe customers run two machines: one in their primary data center and one in their backup data center—fully active, partially active, or on standby—in case there is a catastrophe affecting the first building. Test, development, training, and production workload for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages. In practice, many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD (in IBM's case), or with shared, geographically dispersed storage provided by EMC or Hitachi.
Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the late 1950s, mainframe designs have included subsidiary hardware (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Gigabyte to terabyte-size record files are not unusual. Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it reasonably quickly. Other server families also offload I/O processing and emphasize throughput computing.
Mainframe return on investment (ROI), like any other computing platform, is dependent on its ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors.
Mainframes also have execution integrity characteristics for fault tolerant computing. For example, z900, z990, System z9, and System z10 servers effectively execute result-oriented instructions twice, compare results, arbitrate between any differences (through instruction retry and failure isolation), then shift workloads "in flight" to functioning processors, including spares, without any impact to operating systems, applications, or users. This hardware-level feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.
Current market
IBM, with z Systems, continues to be a major manufacturer in the mainframe market. In 2000, Hitachi co-developed the zSeries z900 with IBM to share expenses, and the latest Hitachi AP10000 models are made by IBM. Unisys manufactures ClearPath Libra mainframes, based on earlier Burroughs MCP products and ClearPath Dorado mainframes based on Sperry Univac OS 1100 product lines. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's GCOS, Stratus OpenVOS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe, and Fujitsu (formerly Amdahl) GS21 mainframes globally. NEC with ACOS and Hitachi with AP10000-VOS3 still maintain mainframe businesses in the Japanese market.
The amount of vendor investment in mainframe development varies with market share. Fujitsu and Hitachi both continue to use custom S/390-compatible processors, as well as other CPUs (including POWER and Xeon) for lower-end systems. Bull uses a mixture of Itanium and Xeon processors. NEC uses Xeon processors for its low-end ACOS-2 line, but develops the custom NOAH-6 processor for its high-end ACOS-4 series. IBM also develops custom processors in-house, such as the zEC12. Unisys produces code-compatible mainframe systems that range from laptops to cabinet-sized mainframes that use homegrown CPUs as well as Xeon processors. Furthermore, there exists a market for software applications to manage the performance of mainframe implementations. In addition to IBM, significant market competitors include BMC, Maintec Technologies, Compuware, and CA Technologies. Since the 2010s, cloud computing has offered a less expensive, more scalable alternative for some mainframe workloads, particularly the large-scale data processing commonly called Big Data.
History
Several manufacturers and their successors produced mainframe computers from the 1950s until the early 21st century, with gradually decreasing numbers and a gradual transition to simulation on Intel chips rather than proprietary hardware. The US group of manufacturers was first known as "IBM and the Seven Dwarfs": usually Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA, although some lists varied. Later, with the departure of General Electric and RCA, it was referred to as IBM and the BUNCH. IBM's dominance grew out of their 700/7000 series and, later, the development of the 360 series mainframes. The latter architecture has continued to evolve into their current zSeries mainframes which, along with the then Burroughs and Sperry (now Unisys) MCP-based and OS1100 mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. While IBM's zSeries can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. Notable manufacturers outside the US were Siemens and Telefunken in Germany, ICL in the United Kingdom, Olivetti in Italy, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the BESM series and Strela are examples of independently designed Soviet computers.
Shrinking demand and tough competition started a shakeout in the market in the early 1970s—RCA sold out to UNIVAC and GE sold its business to Honeywell; between 1986 and 1990 Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986.
In 1984 estimated sales of desktop computers ($11.6 billion) exceeded mainframe computers ($11.4 billion) for the first time. IBM received the vast majority of mainframe revenue. During the 1980s, minicomputer-based systems grew more sophisticated and were able to displace the lower end of the mainframes. These computers, sometimes called departmental computers, were typified by the Digital Equipment Corporation VAX series.
In 1991, AT&T Corporation briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a rough consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks. InfoWorld's Stewart Alsop infamously predicted that the last mainframe would be unplugged in 1996; in 1993, he cited Cheryl Currid, a computer industry analyst as saying that the last mainframe "will stop working on December 31, 1999", a reference to the anticipated Year 2000 problem (Y2K).
That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world, encouraging trends toward more centralized computing. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Batch processing, such as billing, became even more important (and larger) with the growth of e-business, and mainframes are particularly adept at large-scale batch computing. Another factor currently increasing mainframe use is the development of the Linux operating system, which arrived on IBM mainframe systems in 1999 and is typically run in scores or up to c. 8,000 virtual machines on a single mainframe. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly People's Republic of China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.) In late 2000, IBM introduced 64-bit z/Architecture, acquired numerous software companies such as Cognos and introduced those software products to the mainframe. IBM's quarterly and annual reports in the 2000s usually reported increasing mainframe revenues and capacity shipments. However, IBM's mainframe hardware business has not been immune to the recent overall downturn in the server hardware market or to model cycle effects. For example, in the 4th quarter of 2009, IBM's System z hardware revenues decreased by 27% year over year. But MIPS (millions of instructions per second) shipments increased 4% per year over the past two years. Alsop had himself photographed in 2000, symbolically eating his own words ("death of the mainframe").
In 2012, NASA powered down its last mainframe, an IBM System z9. However, IBM's successor to the z9, the z10, had led a New York Times reporter to state four years earlier that "mainframe technology—hardware, software and services—remains a large and lucrative business for I.B.M., and mainframes are still the back-office engines behind the world's financial markets and much of global commerce". While mainframe technology represented less than 3% of IBM's revenues, it "continue[d] to play an outsized role in Big Blue's results".
In 2015, IBM launched the IBM z13, in June 2017 the IBM z14 and in September 2019 IBM launched the latest version of the product, the IBM z15.
Differences from supercomputers
A supercomputer is a computer at the leading edge of data processing capability, with respect to calculation speed. Supercomputers are used for scientific and engineering problems (high-performance computing) which crunch numbers and data, while mainframes focus on transaction processing. The differences are:
Mainframes are built to be reliable for transaction processing (measured by TPC-metrics; not used or helpful for most supercomputing applications) as it is commonly understood in the business world: the commercial exchange of goods, services, or money. A typical transaction, as defined by the Transaction Processing Performance Council, updates a database system for inventory control (goods), airline reservations (services), or banking (money) by adding a record. A transaction may refer to a set of operations including disk read/writes, operating system calls, or some form of data transfer from one subsystem to another which is not measured by the processing speed of the CPU. Transaction processing is not exclusive to mainframes but is also used by microprocessor-based servers and online networks.
Supercomputer performance is measured in floating-point operations per second (FLOPS) or in traversed edges per second (TEPS), metrics that are not very meaningful for mainframe applications, while mainframes are sometimes measured in millions of instructions per second (MIPS), although the definition depends on the instruction mix measured. Examples of integer operations measured by MIPS include adding numbers together, checking values, or moving data around in memory; for mainframes, moving information to and from storage (I/O) matters most, with in-memory work helping only indirectly. Floating-point operations are mostly addition, subtraction, and multiplication of binary floating-point numbers, with enough digits of precision to model continuous phenomena such as weather prediction and nuclear simulations; the more recently standardized decimal floating point, not used in supercomputers, is appropriate for monetary values such as those used in mainframe applications. In terms of computational speed, supercomputers are more powerful.
Mainframes and supercomputers cannot always be clearly distinguished; up until the early 1990s, many supercomputers were based on a mainframe architecture with supercomputing extensions. An example of such a system is the HITAC S-3800, which was instruction-set compatible with IBM System/370 mainframes, and could run the Hitachi VOS3 operating system (a fork of IBM MVS). The S-3800 therefore can be seen as being both simultaneously a supercomputer and also an IBM-compatible mainframe.
In 2007, an amalgamation of the different technologies and architectures for supercomputers and mainframes has led to a so-called gameframe.
See also
Channel I/O
Cloud computing
Computer types
Failover
Gameframe
List of transistorized computers
Commodity computing
Notes
References
External links
IBM Systems Mainframe Magazine
IBM z Systems mainframes
IBM Mainframe Computer Support Forum since 2003
Univac 9400, a mainframe from the 1960s, still in use in a German computer museum
Lectures in the History of Computing: Mainframes (archived copy from the Internet Archive)
DNIX
DNIX (original spelling: D-Nix) is a discontinued Unix-like real-time operating system from the Swedish company Dataindustrier AB (DIAB). A version called ABCenix was also developed for the ABC 1600 computer from Luxor. (Daisy Systems also had something called Daisy DNIX on some of their CAD workstations. It was unrelated to DIAB's product.)
History
Inception at DIAB in Sweden
Dataindustrier AB (literal translation: computer industries shareholding company) was started in 1970 by Lars Karlsson as a single-board computer manufacture in Sundsvall, Sweden, producing a Zilog Z80-based computer called Data Board 4680. In 1978 DIAB started to work with the Swedish television company Luxor AB to produce the home and office computer series ABC 80 and ABC 800.
In 1983 DIAB independently developed the first UNIX-compatible machine, DIAB DS90 based on the Motorola 68000 CPU. D-NIX here made its appearance, based on a UNIX System V license from AT&T. DIAB was however an industrial automation company, and needed a real-time operating system, so the company replaced the AT&T-supplied UNIX kernel with their own in-house developed, yet compatible real-time variant. This kernel was originally a Z80 kernel called OS8.
Over time, the company also replaced several of the UNIX standard userspace tools with their own implementations, to the point where no code was derived from UNIX, and their machines could be deployed independently of any AT&T UNIX license. Two years later and in cooperation with Luxor, a computer called ABC 1600 was developed for the office market, while in parallel, DIAB continue to produce enhanced versions of the DS90 computer using newer versions of the Motorola CPUs such as Motorola 68010, 68020, 68030 and eventually 68040. In 1990 DIAB was acquired by Groupe Bull who continued to produce and support the DS machines under the brand name DIAB, with names such as DIAB 2320, DIAB 2340 etc., still running DIABs version of DNIX.
Derivative at ISC Systems Corporation
ISC Systems Corporation (ISC) purchased the right to use DNIX in the late 1980s for use in its line of Motorola 68k-based banking computers. (ISC was later bought by Olivetti, and was in turn resold to Wang, which was then bought by Getronics. This corporate entity, most often referred to as 'ISC', has answered to a bewildering array of names over the years.) This code branch was the SVR2 compatible version, and received extensive modification and development at their hands. Notable features of this operating system were its support of demand paging, diskless workstations, multiprocessing, asynchronous I/O, the ability to mount processes (handlers) on directories in the file system, and message passing. Its real-time support consisted largely of internal event-driven queues rather than list search mechanisms (no 'thundering herd'), static process priorities in two classes (run to completion and timesliced), support for contiguous files (to avoid fragmentation of critical resources), and memory locking. The quality of the orthogonal asynchronous event implementation has yet to be equalled in current commercial operating systems, though some approach it. (The concept that has yet to be adopted is that the synchronous marshalling point of all the asynchronous activity itself could be asynchronous, ad infinitum. DNIX handled this with aplomb.) The asynchronous I/O facility obviated the need for Berkeley sockets select or SVR4's STREAMS poll mechanism, though there was a socket emulation library that preserved the socket semantics for backward compatibility. Another feature of DNIX was that none of the standard utilities (such as ps, a frequent offender) rummaged around in the kernel's memory to do their job. System calls were used instead, and this meant the kernel's internal architecture was free to change as required. The handler concept allowed network protocol stacks to be outside the kernel, which greatly eased development and improved overall reliability, though at a performance cost. It also allowed for foreign file systems to be user-level processes, again for improved reliability. The main file system, though it could have been (and once was) an external process, was pulled into the kernel for performance reasons. Were it not for this DNIX could well have been considered a microkernel, though it was not formally developed as such. Handlers could appear as any type of 'native' Unix file, directory structure, or device, and file I/O requests that the handler itself could not process could be passed off to other handlers, including the underlying one upon which the handler was mounted. Handler connections could also exist and be passed around independent of the filesystem, much like a pipe. One effect of this is that TTY-like 'devices' could be emulated without requiring a kernel-based pseudo terminal facility.
An example of where a handler saved the day was in ISC's diskless workstation support, where a bug in the implementation meant that using named pipes on the workstation could induce undesirable resource locking on the fileserver. A handler was created on the workstation to field accesses to the afflicted named pipes until the appropriate kernel fixes could be developed. This handler required approximately 5 kilobytes of code to implement, an indication that a non-trivial handler did not need to be large.
ISC also received the right to manufacture DIAB's DS90-10 and DS90-20 machines as its file servers. The multiprocessor DS90-20's, however, were too expensive for the target market and ISC designed its own servers and ported DNIX to them. ISC designed its own GUI-based diskless workstations for use with these file servers, and ported DNIX again. (Though ISC used Daisy workstations running Daisy DNIX to design the machines that would run DIAB's DNIX, there was negligible confusion internally as the drafting and layout staff rarely talked to the software staff. Moreover, the hardware design staff didn't use either system! The running joke went something like: "At ISC we build computers, we don't use them.") The asynchronous I/O support of DNIX allowed for easy event-driven programming in the workstations, which performed well even though they had relatively limited resources. (The GUI diskless workstation had a 7 MHz 68010 processor and was usable with only 512K of memory, of which the kernel consumed approximately half. Most workstations had 1 MB of memory, though there were later 2 MB and 4 MB versions, along with 10 MHz processors.) A full-blown installation could consist of one server (16 MHz 68020, 8 MB of RAM, and a 200 MB hard disk) and up to 64 workstations. Though slow to boot up, such an array would perform acceptably in a bank teller application. Besides the innate efficiency of DNIX, the associated DIAB C compiler was key to high performance. It generated particularly good code for the 68010, especially after ISC got done with it. (ISC also retargeted it to the Texas Instruments TMS34010 graphics coprocessor used in its last workstation.) The DIAB C compiler was, of course, used to build DNIX itself which was one of the factors contributing to its efficiency, and is still available (in some form) through Wind River Systems.
These systems are still in use as of this writing in 2006, in former Seattle-First National Bank branches now branded Bank of America. There may be, and probably are, other ISC customers still using DNIX in some capacity. Through ISC there was a considerable DNIX presence in Central and South America.
Asynchronous events
DNIX's native system call was the dnix(2) library function, analogous to the standard Unix unix(2) or syscall(2) function. It took multiple arguments, the first of which was a function code. Semantically this single call provided all appropriate Unix functionality, though it was syntactically different from Unix and had, of course, numerous DNIX-only extensions.
DNIX function codes were organized into two classes: Type 1 and Type 2. Type 1 commands were those that were associated with I/O activity, or anything that could potentially cause the issuing process to block. Major examples were F_OPEN, F_CLOSE, F_READ, F_WRITE, F_IOCR, F_IOCW, F_WAIT, and F_NAP. Type 2 were the remainder, such as F_GETPID, F_GETTIME, etc. They could be satisfied by the kernel itself immediately.
To invoke asynchronicity, a special file descriptor called a trap queue had to have been created via the Type 2 opcode F_OTQ. A Type 1 call would have the F_NOWAIT bit OR-ed with its function value, and one of the additional parameters to dnix(2) was the trap queue file descriptor. The return value from an asynchronous call was not the normal value but a kernel-assigned identifier. At such time as the asynchronous request completed, a read(2) (or F_READ) of the trap queue file descriptor would return a small kernel-defined structure containing the identifier and result status. The F_CANCEL operation was available to cancel any asynchronous operation that hadn't yet been completed; one of its arguments was the kernel-assigned identifier. (A process could only cancel requests that were currently owned by itself. The exact semantics of cancellation were up to each request's handler; fundamentally it only meant that any waiting was to be terminated. A partially completed operation could be returned.) In addition to the kernel-assigned identifier, one of the arguments given to any asynchronous operation was a 32-bit user-assigned identifier. This most often referenced a function pointer to the appropriate subroutine that would handle the I/O completion, but this was merely convention. It was the entity that read the trap queue elements that was responsible for interpreting this value.
struct itrq { /* Structure of data read from trap queue. */
short it_stat; /* Status */
short it_rn; /* Request number */
long it_oid; /* Owner ID given on request */
long it_rpar; /* Returned parameter */
};
Of note is that the asynchronous events were gathered via normal file descriptor read operations, and that such reading was itself capable of being made asynchronous. This had implications for semi-autonomous asynchronous event handling packages that could exist within a single process. (DNIX 5.2 did not have lightweight processes or threads.) Also of note is that any potentially blocking operation was capable of being issued asynchronously, so DNIX was well equipped to handle many clients with a single server process. A process was not restricted to having only one trap queue, so I/O requests could be grossly prioritized in this way.
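To make the mechanism concrete, the following C fragment sketches the pattern just described. It is only an illustrative approximation: the <dnix.h> header name, the variadic dnix() prototype, and the exact argument order of the asynchronous call are assumptions rather than the actual DNIX interfaces; only the overall flow (open a trap queue, OR F_NOWAIT into a Type 1 function code, then read completion records) follows the text above.
/* Illustrative sketch only: header name, dnix() prototype, and argument
   order are assumptions; see the caveats in the preceding paragraph. */
#include <dnix.h>    /* hypothetical header for F_OTQ, F_READ, F_NOWAIT, struct itrq */

extern long dnix();  /* assumed variadic native system call */
extern int read();

void async_read_example(int fd, char *buf, long len)
{
    int tq;              /* trap queue file descriptor */
    long id;             /* kernel-assigned request identifier */
    struct itrq done;    /* completion record, as defined above */

    tq = (int) dnix(F_OTQ);                   /* Type 2 call: create a trap queue */

    /* Type 1 call issued asynchronously: F_NOWAIT is OR-ed into the
       function code, and the trap queue descriptor is passed along with
       a caller-chosen 32-bit tag (here 0x1234). */
    id = dnix(F_READ | F_NOWAIT, fd, buf, len, tq, 0x1234L);

    /* ... other work proceeds while the read is in flight ... */

    /* Completion is collected by reading the trap queue; done.it_rn
       presumably matches id, and done.it_oid carries the 0x1234 tag. */
    read(tq, (char *) &done, sizeof done);
}
Because completion notices arrive through an ordinary file descriptor, the read of the trap queue itself could also be issued with F_NOWAIT, which is what made the nested asynchrony described above possible.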
Compatibility
In addition to the native dnix(2) call, a complete set of 'standard' libc interface calls was available.
open(2), close(2), read(2), write(2), etc. Besides being useful for backwards compatibility, these were implemented in a binary-compatible manner with the NCR Tower computer, so that binaries compiled for it would run unchanged under DNIX. The DNIX kernel had two trap dispatchers internally, one for the DNIX method and one for the Unix method. Choice of dispatcher was up to the programmer, and using both interchangeably was acceptable. Semantically they were identical wherever functionality overlapped. (In these machines the 68000 trap #0 instruction was used for the unix(2) calls, and the trap #4 instruction for dnix(2). The two trap handlers were really quite similar, though the [usually hidden] unix(2) call held the function code in the processor's D0 register, whereas dnix(2) held it on the stack with the rest of the parameters.)
DNIX 5.2 had no networking protocol stacks internally (except for the thin X.25-based Ethernet protocol stack added by ISC for use by its diskless workstation support package), all networking was conducted by reading and writing to Handlers. Thus, there was no socket mechanism, but a libsocket(3) existed that used asynchronous I/O to talk to the TCP/IP handler. The typical Berkeley-derived networking program could be compiled and run unchanged (modulo the usual Unix porting problems), though it might not be as efficient as an equivalent program that used native asynchronous I/O.
Handlers
Under DNIX, a process could be used to handle I/O requests and to extend the filesystem. Such a process was called a Handler, and was a major feature of the operating system. A handler was defined as a process that owned at least one request queue, a special file descriptor that was procured in one of two ways: with a F_ORQ or a F_MOUNT call. The former invented an isolated request queue, one end of which was then typically handed down to a child process. (The network remote execution programs, of which there were many, used this method to provide standard I/O paths to their children.) The latter hooked into the filesystem so that file I/O requests could be adopted by handlers. (The network login programs, of which there were even more, used this method to provide standard I/O paths to their children, as the semantics of logging in under Unix requires a way for multiple perhaps-unrelated processes to horn in on the standard I/O path to the operator.) Once mounted on a directory in the filesystem, the handler then received all I/O calls to that point.
A handler would then read small kernel-assigned request data structures from the request queue. (Such reading could be done synchronously or asynchronously as the handler's author desired.) The handler would then do whatever each request required to be satisfied, often using the DNIX F_UREAD and F_UWRITE calls to read and write into the request's data space, and then would terminate the request appropriately using F_TERMIN. A privileged handler could adopt the permissions of its client for individual requests to subordinate handlers (such as the filesystem) via the F_T1REQ call, so it didn't need to reproduce the subordinate's permission scheme. If a handler was unable to complete a request itself, the F_PASSRQ function could be used to pass I/O requests from one handler to another. A handler could perform part of the work requested before passing the rest on to another handler. It was very common for a handler to be state-machine oriented so that requests it was fielding from a client were all done asynchronously. This allowed for a single handler to field requests from multiple clients simultaneously without them blocking each other unnecessarily. Part of the request structure was the process ID and its priority so that a handler could choose what to work on first based upon this information; there was no requirement that work be performed in the order it was requested. To aid in this, it was possible to poll both request and trap queues to see if there was more work to be considered before buckling down to actually do it.
struct ireq { /* Structure of incoming request */
short ir_fc; /* Function code */
short ir_rn; /* Request number */
long ir_opid; /* Owner ID that you gave on open or mount */
long ir_bc; /* Byte count */
long ir_upar; /* User parameter */
long ir_rad; /* Random address */
ushort ir_uid; /* User ID */
ushort ir_gid; /* User group */
time_t ir_time; /* Request time */
ulong ir_nph;
ulong ir_npl; /* Node and process ID */
};
There was no particular restriction on the number of request queues a process could have. This was used to provide networking facilities to chroot jails, for example.
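As a rough illustration of the handler side of this protocol, the skeleton below shows a trivially simple synchronous handler loop. Again, this is only a sketch under assumptions: the <dnix.h> header and the exact parameters of F_MOUNT, F_UWRITE, and F_TERMIN are inferred from the description above rather than taken from real DNIX sources; only the mount / read-request / terminate flow follows the text.
/* Minimal handler skeleton; call signatures are assumptions, only the
   overall request-queue flow follows the description above. */
#include <dnix.h>   /* hypothetical header for F_MOUNT, F_READ, F_UWRITE, F_TERMIN, struct ireq */

extern long dnix();
extern int read();

static char payload[512];   /* whatever this toy handler serves to readers */

int main()
{
    int rq;            /* request queue file descriptor */
    struct ireq req;   /* incoming request, as defined above */

    rq = (int) dnix(F_MOUNT, "/mnt/toyfs");   /* hook into the filesystem at this directory */

    for (;;) {
        read(rq, (char *) &req, sizeof req);  /* pick up the next client request */

        switch (req.ir_fc) {
        case F_READ: {
            /* Copy data into the requesting process, then complete the
               request with the number of bytes transferred. */
            long n = req.ir_bc;
            if (n > (long) sizeof payload)
                n = (long) sizeof payload;
            dnix(F_UWRITE, rq, req.ir_rn, payload, n);
            dnix(F_TERMIN, rq, req.ir_rn, n);
            break;
        }
        default:
            dnix(F_TERMIN, rq, req.ir_rn, -1L);   /* reject anything this toy cannot handle */
            break;
        }
    }
}
A production handler would instead read the request queue (and issue its F_UREAD/F_UWRITE traffic) asynchronously through a trap queue, in the state-machine style described above, so that a slow client never blocks the others.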
Examples
To give some appreciation of the utility of handlers, at ISC handlers existed for:
foreign filesystems
FAT
CD-ROM/ISO9660
disk image files
RAM disk (for use with write-protected boot disks)
networking protocols
DNET (essentially X.25 over Ethernet, with multicast capability)
X.25
TCP/IP
DEC LAT
AppleTalk
remote filesystems
DNET's /net/machine/path/from/its/root...
NFS
remote login
ncu (DNET)
telnet
rlogin
wcu (DNET GUI)
X.25 PAD
DEC LAT
remote execution
rx (DNET)
remsh
rexec
system extension
windowman (GUI)
vterm (xterm-like)
document (passbook) printer
dmap (ruptime analog)
windowmac (GUI gateway to Macintosh)
system patches
named pipe handler
ISC's extensions
ISC purchased both 5.2 (SVR2 compatible) and 5.3 (SVR3 compatible) versions of DNIX. At the time of purchase, DNIX 5.3 was still undergoing development at DIAB so DNIX 5.2 was what was deployed. Over time, ISC's engineers incorporated most of their 5.3 kernel's features into 5.2, primarily shared memory and IPC, so there was some divergence of features between DIAB and ISC's versions of DNIX. DIAB's 5.3 likely went on to contain more SVR3 features than ISC's 5.2 ended up with. Also, DIAB went on to DNIX 5.4, a SVR4 compatible OS.
At ISC, developers considerably extended their version of DNIX 5.2 (only listed are features involving the kernel itself) based upon both their needs and the general trends of the Unix industry:
Diskless workstation support. The workstation's kernel filesystem was removed, and replaced with an X.25-based Ethernet communications stub. The file server's kernel was also extended with a mating component that received the remote requests and handed them to a pool of kernel processes for service, though a standard handler could have been written to do this. (Later in its product lifecycle, ISC deployed standard SVR4-based Unix servers in place of the DNIX servers. These used X.25 STREAMS and a custom-written file server program. In spite of the less efficient structuring, the raw horsepower of the platforms used made for a much faster server. It is unfortunate that this file server program did not support all of the functionality of the native DNIX server. Tricky things, like named pipes, never worked at all. This was another justification for the named pipe handler process.)
gdb watchpoint support using the features of ISC's MMU.
Asynchronous I/O to the filesystem was made real. (Originally it blocked anyway.) Kernel processes (kprocs, or threads) were used to do this.
Support for a truss- or strace-like program. In addition to some repairs to bugs in the standard Unix ptrace single-stepping mechanism, this required adding a temporary process adoption facility so that the tracer could use the standard single-stepping mechanism on existing processes.
SVR4 signal mechanism extensions. Primarily for the new STOP and CONT signals, but encompassing the new signal control calls as well. Due to ISC's lack of source code for the adb and sdb debuggers the u-page could not be modified, so the new signals could only be blocked or receive default handling, they could not be caught.
Support for network sniffing. This required extending the Ethernet driver so that a single event could satisfy more than one I/O request, and conditionally implementing the hardware filtering in software to support promiscuous mode.
Disk mirroring. This was done in the filesystem and not the device driver, so that slightly (or even completely) different devices could still be mirrored together. Mirroring a small hard disk to the floppy was a popular way to test mirroring as ejecting the floppy was an easy way to induce disk errors.
32-bit inode, 30-character filename, symbolic link, and sticky directory extensions to the filesystem. Added /dev/zero, /dev/noise, /dev/stdXXX, and /dev/fd/X devices.
Process group id lists (from SVR4).
#! direct script execution.
Serial port multiplication using ISC's Z-80 based VMEbus communications boards.
Movable swap partition.
Core 'dump' snapshots of running processes. Support for fuser command.
Process renice function. Associated timesharing reprioritizer program to implement floating priorities.
A way to 'mug' a process, instantly depriving it of all memory resources. Very useful for determining what the current working set is, as opposed to what is still available to it but not necessarily being used. This was associated with a GUI utility showing the status of all 1024 pages of a process's memory map. (This being the number of memory pages supported by ISC's MMU.) In use you would 'mug' the target process periodically through its life and then watch to see how much memory was swapped back in. This was useful as ISC's production environment used only a few long-lived processes, controlling their memory utilization and growth was key to maintaining performance.
Features that were never added
When DNIX development at ISC effectively ceased in 1997, a number of planned OS features were left on the table:
Shared objects - There were two dynamically loaded libraries in existence, an encryptor for DNET and the GUI's imaging library, but the facility was never generalized. ISC's machines were characterized by a general lack of virtual address space, so extensive use of memory-mapped entities would not have been possible.
Lightweight processes - The kernel itself already had multiple threads that shared a single MMU context, extending this to user processes should have been straightforward. The API implications would have been the most difficult part of this.
Access Control Lists - Trivial to implement using an ACL handler mounted over the stock filesystem.
Multiple swap partitions - DNIX already used free space on the selected volume for swapping, it would have been easy to give it a list of volumes to try in turn, potentially with associated space limits to keep it from consuming all free space on a volume before moving on to the next one.
Remote kernel debugging via gdb - All the pieces were there to do it either through the customary serial port or over Ethernet using the kernel's embedded X.25 link software, but they were never assembled.
68030 support - ISC's prototypes were never completed. Two processor piggyback plug-in cards were built, but were never used as more than faster 68020's. They were not reliable, nor were they as fast as they could have been due to having to fit into a 68020 socket. The fast context switching ISC MMU would be left disabled (and left out altogether in proposed production units), and the embedded one of the 68030 was to have been used instead, using a derivative of the DS90-20's MMU code. While the ISC MMU was very efficient and supported instant switching among 32 resident processes, it was very limited in addressability. The 68030 MMU would have allowed for much more than 8 MB of virtual space in a process, which was the limit of the ISC MMU. Though this MMU would be slower, the overall faster speed of the 68030 should have more than made up for it, so that a 68030 machine was expected to be in all ways faster, and support much larger processes.
See also
Timeline of operating systems
Cromemco Cromix
References
UNIX System V
Real-time operating systems
Comparison of executable file formats
This is a comparison of binary executable file formats which, once loaded by a suitable executable loader, can be directly executed by the CPU rather than being interpreted by software. In addition to the binary application code, the executables may contain headers and tables with relocation and fixup information as well as various kinds of meta data. Among those formats listed, the ones in most common use are PE (on Microsoft Windows), ELF (on Linux and most other versions of Unix), Mach-O (on macOS and iOS) and MZ (on DOS).
Notes
References
Computing comparisons
Apple III
The Apple III (styled as apple ///) is a business-oriented personal computer produced by Apple Computer and released in 1980. Running the Apple SOS operating system, it was intended as the successor to the Apple II series, but was largely considered a failure in the market. It was designed to provide key features business users wanted in a personal computer: a true typewriter-style upper/lowercase keyboard (the Apple II only supported uppercase) and an 80-column display.
Work on the Apple III started in late 1978 under the guidance of Dr. Wendell Sander. It had the internal code name of "Sara", named after Sander's daughter. The system was announced on May 19, 1980 and released in late November that year. Serious stability issues required a design overhaul and a recall of the first 14,000 machines produced. The Apple III was formally reintroduced on November 9, 1981.
Damage to the computer's reputation had already been done and it failed to do well commercially. Development stopped and the Apple III was discontinued on April 24, 1984, and its last successor, the III Plus, was dropped from the Apple product line in September 1985.
An estimated 65,000–75,000 Apple III computers were sold. The Apple III Plus brought this up to approximately 120,000. Apple co-founder Steve Wozniak stated that the primary reason for the Apple III's failure was that the system was designed by Apple's marketing department, unlike Apple's previous engineering-driven projects. The Apple III's failure led Apple to reevaluate its plan to phase out the Apple II, prompting the eventual continuation of development of the older machine. As a result, later Apple II models incorporated some hardware and software technologies of the Apple III, such as the thermal Apple Scribe printer.
Overview
Design
Steve Wozniak and Steve Jobs expected hobbyists to purchase the Apple II, but because of VisiCalc and Disk II, small businesses purchased 90% of the computers. The Apple III was designed to be a business computer and successor. Though the Apple II helped inspire several important business products, such as VisiCalc, Multiplan, and Apple Writer, the computer's hardware architecture, operating system, and developer environment were limited. Apple management intended to clearly establish market segmentation by designing the Apple III to appeal to the 90% business market, leaving the Apple II to home and education users. Management believed that "once the Apple III was out, the Apple II would stop selling in six months", Wozniak said.
The Apple III is powered by a 1.8-megahertz Synertek 6502A or B 8-bit CPU and, like some of the later machines in the Apple II family, uses bank switching techniques to address memory beyond the 6502's traditional 64 kB limit, up to 256 kB in the III's case. Third-party vendors produced memory upgrade kits that allow the Apple III to reach up to 512 kB of random-access memory (RAM). Other Apple III built-in features include an 80-column, 24-line display with upper and lowercase characters, a numeric keypad, dual-speed (pressure-sensitive) cursor control keys, 6-bit (DAC) audio, and a built-in 140-kilobyte 5.25-inch floppy disk drive. Graphics modes include 560x192 in black and white, and 280x192 with 16 colors or shades of gray. Unlike the Apple II, the Disk III controller is part of the logic board.
The Apple III is the first Apple product to allow the user to choose both a screen font and a keyboard layout: either QWERTY or Dvorak. These choices cannot be changed while programs were running, unlike the Apple IIc, which has a keyboard switch directly above the keyboard, allowing the user to switch on the fly.
Software
The Apple III introduced an advanced operating system called Apple SOS, pronounced "apple sauce". Its ability to address resources by name allows the Apple III to be more scalable than the Apple II's addressing by physical location such as PR#6, CATALOG, D1. Apple SOS allows the full capacity of a storage device to be used as a single volume, such as the Apple ProFile hard disk drive, and it supports a hierarchical file system. Some of the features and code base of Apple SOS were later adopted into the Apple II's ProDOS and GS/OS operating systems, as well as Lisa 7/7 and Macintosh system software.
With a starting price between $4,340 and $7,800, the Apple III was more expensive than many of the CP/M-based business computers that were available at the time. Few software applications other than VisiCalc were available for the computer; according to a presentation at KansasFest 2012, fewer than 50 Apple III-specific software packages were ever published, most shipping when the III Plus was released. Because Apple did not view the Apple III as suitable for hobbyists, it did not provide much of the technical software information that accompanied the Apple II. Originally intended as a direct replacement for the Apple II series, it was designed to be backward compatible with Apple II software. However, since Apple did not want to encourage continued development of the II platform, Apple II compatibility exists only in a special Apple II Mode which is limited in its capabilities to the emulation of a basic Apple II Plus configuration with 48 kB of RAM. Special chips were intentionally added to prevent access from Apple II Mode to the III's advanced features such as its larger amount of memory.
Peripherals
The Apple III has four expansion slots, a number that inCider in 1986 called "miserly". Apple II cards are compatible but risk violating government RFI regulations, and require Apple III-specific device drivers; BYTE stated that "Apple provides virtually no information on how to write them". As with software, Apple provided little hardware technical information with the computer but Apple III-specific products became available, such as one that made the computer compatible with the Apple IIe. Several new Apple-produced peripherals were developed for the Apple III. The original Apple III has a built-in real-time clock, which is recognized by Apple SOS. The clock was later removed from the "revised" model, and was instead made available as an add-on.
Along with the built-in floppy drive, the Apple III can also handle up to three additional external Disk III floppy disk drives. The Disk III is only officially compatible with the Apple III. The Apple III Plus requires an adaptor from Apple to use the Disk III with its DB-25 disk port.
With the introduction of the revised Apple III a year after launch, Apple began offering the ProFile external hard disk system. Priced at $3,499 for 5 MB of storage, it also required a peripheral slot for its controller card.
Backward compatibility
The Apple III has the built-in hardware capability to run Apple II software. In order to do so, an emulation boot disk is required that functionally turns the machine into a standard 48-kilobyte Apple II Plus, until it is powered off. The keyboard, internal floppy drive (and one external Disk III), display (color is provided through the 'B/W video' port) and speaker all act as Apple II peripherals. The paddle and serial ports can also function in Apple II mode, though with some limitations and compatibility issues.
Apple engineers added specialized circuitry with the sole purpose of blocking access to its advanced features when running in Apple II emulation mode. This was done primarily to discourage further development and interest in the Apple II line, and to push the Apple III as its successor. For example, no more than 48 kB of RAM can be accessed, even if the machine has more RAM present. Many Apple II programs require a minimum of 64 kB of RAM, making them impossible to run on the Apple III. Similarly, access to lowercase support, 80-column text, or its more advanced graphics and sound is blocked by this hardware circuitry, making it impossible for even skilled software programmers to bypass Apple's lockout. A third-party company, Titan Technologies, sold an expansion board called the III Plus II that allows Apple II mode to access more memory and a standard game port, and, with a later-released companion card, even emulate the Apple IIe.
Certain Apple II slot cards can be installed in the Apple III and used in native III-mode with custom written SOS device drivers, including Grappler Plus and Liron 3.5 Controller.
Revisions
After overheating issues were attributed to serious design flaws, a redesigned logic board was introduced – which included a lower power supply requirement, wider circuit traces and better-designed chip sockets. The $3,495 revised model also includes 256 kB of RAM as the standard configuration. The 14,000 units of the original Apple III sold were returned and replaced with the entirely new revised model.
Apple III Plus
Apple discontinued the III in October 1983 because it violated FCC regulations, and the FCC required the company to change the redesigned computer's name. It introduced the Apple III Plus in December 1983 at a price of US$2,995. This newer version includes a built-in clock, video interlacing, standardized rear port connectors, 55-watt power supply, 256 kB of RAM as standard, and a redesigned, Apple IIe-like keyboard.
Owners of the Apple III could purchase individual III Plus upgrades, like the clock and interlacing feature, and obtain the newer logic board as a service replacement. A keyboard upgrade kit, dubbed "Apple III Plus upgrade kit" was also made available – which included the keyboard, cover, keyboard encoder ROM, and logo replacements. This upgrade had to be installed by an authorized service technician.
Design flaws
According to Wozniak, the Apple III "had 100 percent hardware failures". Former Apple executive Taylor Pohlman stated that:
Steve Jobs insisted on the idea of having no fan or air vents, in order to make the computer run quietly. Jobs would later push this same ideology onto almost all Apple models he had control of, from the Apple Lisa and Macintosh 128K to the iMac. To allow the computer to dissipate heat, the base of the Apple III was made of heavy cast aluminum, which supposedly acts as a heat sink. One advantage to the aluminum case was a reduction in RFI (Radio Frequency Interference), a problem which had plagued the Apple II series throughout its history. Unlike the Apple II series, the power supply was mounted – without its own shell – in a compartment separate from the logic board. The decision to use an aluminum shell ultimately led to engineering issues which resulted in the Apple III's reliability problems. The lead time for manufacturing the shells was high, and this had to be done before the motherboard was finalized. Later, it was realized that there was not enough room on the motherboard for all of the components unless narrow traces were used.
Many Apple IIIs were thought to have failed due to their inability to properly dissipate heat. inCider stated in 1986 that "Heat has always been a formidable enemy of the Apple ///", and some users reported that their Apple IIIs became so hot that the chips started dislodging from the board, causing the screen to display garbled data or their disk to come out of the slot "melted". BYTE wrote, "the integrated circuits tended to wander out of their sockets". It has been rumored that Apple advised customers to lift the front of the Apple III six inches above the desk and then drop it, to reseat the chips, as a temporary solution. Other analyses blame a faulty automatic chip insertion process, not heat.
Case designer Jerry Manock denied the design flaw charges, insisting that tests proved that the unit adequately dissipated the internal heat. The primary cause, he claimed, was a major logic board design problem. The logic board used "fineline" technology that was not fully mature at the time, with narrow, closely spaced traces. When chips were "stuffed" into the board and wave-soldered, solder bridges would form between traces that were not supposed to be connected. This caused numerous short circuits, which required hours of costly diagnosis and hand rework to fix. Apple designed a new circuit board with more layers and normal-width traces. The new logic board was laid out by one designer on a huge drafting board, rather than using the costly CAD-CAM system used for the previous board, and the new design worked.
Earlier Apple III units came with a built-in real time clock. The hardware, however, would fail after prolonged use. Assuming that National Semiconductor would test all parts before shipping them, Apple did not perform this level of testing. Apple was soldering chips directly to boards and could not easily replace a bad chip if one was found. Eventually, Apple solved this problem by removing the real-time clock from the Apple III's specification rather than shipping the Apple III with the clock pre-installed, and then sold the peripheral as a level 1 technician add-on.
BASIC
Microsoft and Apple each developed their own versions of BASIC for the Apple III. Apple III Microsoft BASIC was designed to run on the CP/M platform available for the Apple III. Apple Business BASIC shipped with the Apple III. Donn Denman ported Applesoft BASIC to SOS and reworked it to take advantage of the extended memory of the Apple III.
Both languages introduced a number of new or improved features over Applesoft BASIC. Both languages replaced Applesoft's single-precision floating-point variables using 5-byte storage with the somewhat-reduced-precision 4-byte variables, while also adding a larger numerical format. Apple III Microsoft BASIC provides double-precision floating-point variables, taking 8 bytes of storage, while Apple Business BASIC offers an extra-long integer type, also taking 8 bytes for storage. Both languages also retain 2-byte integers, and maximum 255-character strings.
Other new features common to both languages include:
Incorporation of disk-file commands within the language.
Operators for MOD and for integer-division.
An optional ELSE clause in IF...THEN statements.
HEX$() function for hexadecimal-format output.
INSTR function for finding a substring within a string.
PRINT USING statement to control format of output. Apple Business BASIC had an option, in addition to directly specifying the format with a string expression, of giving the line number where an IMAGE statement gave the formatting expression, similar to a FORMAT statement in FORTRAN.
Some features work differently in each language:
Microsoft BASIC additional features
INPUT$() function to replace Applesoft's GET command.
LINE INPUT statement to input an entire line of text, regardless of punctuation, into a single string variable.
LPRINT and LPRINT USING statements to automatically direct output to paper.
LSET and RSET statements to left- or right-justify a string expression within a given string variable's character length.
OCT$() function for output, and "&"- or "&O"-formatted expressions, for manipulating octal notation.
SPACE$() function for generating blank spaces outside of a PRINT statement, and STRING$() function to do likewise with any character.
WHILE...WEND statements, for loop structures built on general Boolean conditions without an index variable.
Bitwise Boolean (16-bit) operations (AND, OR, NOT), with additional operators XOR, EQV, IMP.
Line number specification in the RESTORE command.
RESUME options of NEXT (to skip to the statement after that which caused the error) or a specified line number (which replaces the idea of exiting error-handling by GOTO-line, thus avoiding Applesoft II's stack error problem).
Multiple parameters in user-defined (DEF FN) functions.
A return to the old Applesoft One concept of having multiple USR() functions at different addresses, by establishing ten different USR functions, numbered USR0 to USR9, with separate DEF USRx statements to define the address of each. The argument passed to a USRx function can be of any specific type, including string. The returned value can also be of any type, by default the same type as the argument passed.
There is no support for graphics provided within the language, nor for reading analog controls or buttons; nor is there a means of defining the active window of the text screen.
Business BASIC additional features
Apple Business BASIC eliminates all references to absolute memory addresses. Thus, the POKE command and PEEK() function were not included in the language, and new features replaced the CALL statement and USR() function. The functionality of certain features in Applesoft that had been achieved with various PEEK and POKE locations is now provided by:
BUTTON() function to read game-controller buttons
WINDOW statement to define the active window of the text screen by its coordinates
KBD, HPOS, and VPOS system variables
External binary subroutines and functions are loaded into memory by a single INVOKE disk-command that loads separately-assembled code modules. A PERFORM statement is then used to call an INVOKEd procedure by name, with an argument-list. INVOKEd functions would be referenced in expressions by EXFN. (floating-point) or EXFN%. (integer), with the function name appended, plus the argument-list for the function.
Graphics are supported with an INVOKEd module, with features including displaying text within graphics in various fonts, within four different graphics modes available on the Apple III.
Reception
Although Apple devoted the majority of its R&D to the Apple III, neglecting the II so much that for a while dealers had difficulty obtaining the latter, the III's technical problems made marketing the computer difficult. Ed Smith, who after designing the APF Imagination Machine worked as a distributor's representative, described the III as "a complete disaster". He recalled that he "was responsible for going to every dealership, setting up the Apple III in their showroom, and then explaining to them the functions of the Apple III, which in many cases didn’t really work".
Sales
Pohlman reported that Apple was only selling 500 units a month by late 1981, mostly as replacements. The company was able to eventually raise monthly sales to 5,000, but the IBM PC's successful launch had encouraged software companies to develop for it instead, prompting Apple to shift focus to the Lisa and Macintosh. The PC almost ended sales of the Apple III, the most closely comparable Apple computer model. By early 1984, sales were primarily to existing III owners, Apple itself—its 4500 employees were equipped with some 3000-4500 units—and some small businesses. Apple finally discontinued the Apple III series on April 24, 1984, four months after introducing the III Plus, after selling only 65,000-75,000 units and replacing 14,000 defective units.
Jobs said that the company lost "infinite, incalculable amounts" of money on the Apple III. Wozniak estimated that Apple had spent $100 million on the III, instead of improving the II and better competing against IBM. Pohlman claimed that there was a "stigma" at Apple associated with having contributed to the computer. Most employees who worked on the III reportedly left Apple.
Legacy
The file system and some design ideas from Apple SOS, the Apple III's operating system, were part of Apple ProDOS and Apple GS/OS, the major operating systems for the Apple II series following the demise of the Apple III, as well as the Apple Lisa, which was the de facto business-oriented successor to the Apple III. The hierarchical file system influenced the evolution of the Macintosh: while the original Macintosh File System (MFS) was a flat file system designed for a floppy disk without subdirectories, subsequent file systems were hierarchical. By comparison, the IBM PC's first file system (again designed for floppy disks) was also flat and later versions (designed for hard disks) were hierarchical.
In popular culture
At the start of the Walt Disney Pictures film TRON, lead character Kevin Flynn (played by Jeff Bridges) is seen hacking into the ENCOM mainframe using an Apple III.
References
Sources
External links
The Ill-Fated Apple III
Many manuals and diagrams
Sara – Apple /// emulator
The Ill-Fated Apple III Low End Mac
Apple III Chaos: Apple’s First Failure Low End Mac
Apple II family
Computer-related introductions in 1980
Discontinued Apple Inc. products
8-bit computers
IBM System/370
The IBM System/370 (S/370) is a model range of IBM mainframe computers announced on June 30, 1970 as the successors to the System/360 family. The series mostly maintains backward compatibility with the S/360, allowing an easy migration path for customers; this, plus improved performance, were the dominant themes of the product announcement. In September 1990, the System/370 line was replaced with the System/390.
Evolution
The original System/370 line was announced on June 30, 1970 with first customer shipment of the Models 155 and 165 planned for February 1971 and April 1971 respectively. The 155 first shipped in January 1971. System/370 underwent several architectural improvements during its roughly 20-year lifetime.
The following features, mentioned in Principles of Operation, are either optional on S/360 but standard on S/370, introduced with S/370, or added to S/370 after announcement.
Branch and Save
Channel Indirect Data Addressing
Channel-Set Switching
Clear I/O
Command Retry
Commercial Instruction Set
Conditional Swapping
CPU Timer and Clock Comparator
Dual-Address Space (DAS)
Extended-Precision Floating Point
Extended Real Addressing
External Signals
Fast Release
Floating Point
Halt Device
I/O Extended Logout
Limited Channel Logout
Move Inverse
Multiprocessing
PSW-Key Handling
Recovery Extensions
Segment Protection
Service Signal
Start-I/O-Fast Queuing
Storage-Key-Instruction Extensions
Storage-Key 4K-Byte Block
Suspend and Resume
Test Block
Translation
Vector
31-Bit IDAWs
Initial models
The first System/370 machines, the Model 155 and the Model 165, incorporated only a small number of changes to the System/360 architecture. These changes included:
13 new instructions, among which were
MOVE LONG (MVCL);
COMPARE LOGICAL LONG (CLCL);
thereby permitting operations on up to 2^24-1 bytes (16 MB), vs. the 256-byte limits on the 360's MVC and CLC;
SHIFT AND ROUND DECIMAL (SRP), which multiplied or divided a packed decimal value by a power of 10, rounding the result when dividing;
optional 128-bit (hexadecimal) floating point arithmetic, introduced in the System/360 Model 85
a new higher-resolution time-of-day clock
support for the block multiplexer channel introduced in the System/360 Model 85.
All of the emulator features were designed to run under the control of the standard operating systems. IBM documented the S/370 emulator programs as integrated emulators.
These models had core memory and did not include support for virtual storage.
Logic technology
All models of the System/370 used IBM's form of monolithic integrated circuits called MST (Monolithic System Technology) making them third generation computers. MST provided System/370 with four to eight times the circuit density and over ten times the reliability when compared to the previous second generation SLT technology of the System/360.
Monolithic memory
On September 23, 1970, IBM announced the Model 145, a third model of the System/370, which was the first model to feature semiconductor main memory made from monolithic integrated circuits and was scheduled for delivery in the late summer of 1971. All subsequent S/370 models used such memory.
Virtual storage
In 1972, a very significant change was made when support for virtual storage was introduced with IBM's "System/370 Advanced Function" announcement. IBM had initially (and controversially) chosen to exclude virtual storage from the S/370 line. The August 2, 1972 announcement included:
address relocation hardware on all S/370s except the original models 155 and 165
the new S/370 models 158 and 168, with address relocation hardware
four new operating systems: DOS/VS (DOS with virtual storage), OS/VS1 (OS/360 MFT with virtual storage), OS/VS2 (OS/360 MVT with virtual storage) Release 1, termed SVS (Single Virtual Storage), and Release 2, termed MVS (Multiple Virtual Storage) and planned to be available 20 months later (at the end of March 1974), and VM/370 – the re-implemented CP/CMS
Virtual storage had in fact been delivered on S/370 hardware before this announcement:
In June 1971, on the S/370-145 (one of which had to be "smuggled" into Cambridge Scientific Center to prevent anybody noticing the arrival of an S/370 at that hotbed of virtual memory development – since this would have signaled that the S/370 was about to receive address relocation technology). (Varian 1997:p29) The S/370-145 had an associative memory used by the microcode for the DOS compatibility feature from its first shipments in June 1971; the same hardware was used by the microcode for DAT. Although IBM famously chose to exclude virtual storage from the S/370 announcement, that decision was being reconsidered during the completion of the 145 engineering, partly because of virtual memory experience at CSC and elsewhere. The 145 microcode architecture simplified the addition of virtual storage, allowing this capability to be present in early 145s without the extensive hardware modifications needed in other models. However, IBM did not document the 145's virtual storage capability, nor annotate the relevant bits in the control registers and PSW that were displayed on the operator control panel when selected using the roller switches. The Reference and Change bits of the Storage-protection Keys, however, were labeled on the rollers, a dead giveaway to anyone who had worked with the earlier 360/67. Existing S/370-145 customers were happy to learn that they did not have to purchase a hardware upgrade in order to run DOS/VS or OS/VS1 (or OS/VS2 Release 1 – which was possible, but not common because of the limited amount of main storage available on the S/370-145).
Shortly after the August 2, 1972 announcement, DAT box (address relocation hardware) upgrades for the S/370-155 and S/370-165 were quietly announced, but were available only for purchase by customers who already owned a Model 155 or 165. After installation, these models were known as the S/370-155-II and S/370-165-II. IBM wanted customers to upgrade their 155 and 165 systems to the widely sold S/370-158 and -168. These upgrades were surprisingly expensive ($200,000 and $400,000, respectively) and had long ship date lead times after being ordered by a customer; consequently, they were never popular with customers, the majority of whom leased their systems via a third-party leasing company. This led to the original S/370-155 and S/370-165 models being described as "boat anchors". The upgrade, required to run OS/VS1 or OS/VS2, was not cost effective for most customers by the time IBM could actually deliver and install it, so many customers were stuck with these machines running MVT until their lease ended. It was not unusual for this to be another four, five or even six years for the more unfortunate ones, and turned out to be a significant factor in the slow adoption of OS/VS2 MVS, not only by customers in general, but for many internal IBM sites as well.
Subsequent enhancements
Later architectural changes primarily involved expansions in memory (central storage) – both physical memory and virtual address space – to enable larger workloads and meet client demands for more storage. This was the inevitable trend as Moore's Law eroded the unit cost of memory. As with all IBM mainframe development, preserving backward compatibility was paramount.
Operating-system-specific assists, such as Extended Control Program Support (ECPS), extended facilities, and extension features were added for OS/VS1, MVS and VM. Levels of these operating systems that exploit them, e.g., MVS/System Extensions (MVS/SE), reduce the path length of some frequent functions.
The Dual Address Space (DAS) facility allows a privileged program to move data between two address spaces without the overhead of allocating a buffer in common storage, moving the data to the buffer, scheduling an SRB in the target address space, moving the data to their final destination and freeing the buffer. IBM introduced DAS in 1981 for the 3033, but later made it available for some 43xx, 3031 and 3032 processors. MVS/System Product (MVS/SP) Version 1 exploited DAS if it was available.
In October 1981, the 3033 and 3081 processors added "extended real addressing", which allowed 26-bit addressing for physical storage (but still imposed a 24-bit limit for any individual address space). This capability appeared later on other systems, such as the 4381 and 3090.
The System/370 Extended Architecture (S/370-XA), first available in early 1983 on the 3081 and 3083 processors, provided a number of major enhancements, including: expansion of the address space from 24-bits to 31-bits; facilitating movement of data between two address spaces; and a complete redesign of the I/O architecture. The cross-memory services capability which facilitated movement of data between address spaces was actually available just prior to S/370-XA architecture on the 3031, 3032 and 3033 processors.
In February 1988, IBM announced the Enterprise Systems Architecture/370 (ESA/370) for enhanced (E) 3090 and 4381 models. It added sixteen 32-bit access registers, more addressing modes, and various facilities for working with multiple address spaces simultaneously.
On September 5, 1990, IBM announced the Enterprise Systems Architecture/390 (ESA/390), upward compatible with ESA/370.
Expanding the address space
As described above, the S/370 product line underwent a major architectural change: expansion of its address space from 24 to 31 bits.
The evolution of S/370 addressing was always complicated by the basic S/360 instruction set design, and its large installed code base, which relied on a 24-bit logical address. (In particular, a heavily used machine instruction, "Load Address" (LA), explicitly cleared the top eight bits of the address being placed in a register. This created enormous migration problems for existing software.)
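As a rough illustration (a hypothetical C sketch, not IBM code), the following shows why the 24-bit assumption was so hard to shed: software routinely reused the high-order byte of an address word for flags, such as the calling convention's end-of-parameter-list marker, which is exactly the byte that LA cleared. Once addresses grew past 24 bits, those flag bits became part of the address.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: the constants and names here are hypothetical.    */
#define ADDR24_MASK   0x00FFFFFFu   /* bits significant in 24-bit mode  */
#define LAST_PARM_BIT 0x80000000u   /* high bit reused as an end marker */

/* Mimics the effect of the LA instruction in 24-bit mode:
 * the top eight bits of the result are cleared.                        */
static uint32_t load_address_24(uint32_t word)
{
    return word & ADDR24_MASK;
}

int main(void)
{
    uint32_t parm_word = 0x80123456u;  /* address 0x123456, flagged as last */

    /* Harmless in a 24-bit world: the flag never reaches the hardware. */
    printf("24-bit view: address %06X, last parameter: %s\n",
           (unsigned)load_address_24(parm_word),
           (parm_word & LAST_PARM_BIT) ? "yes" : "no");

    /* In a wider address space the same word names a different, probably
     * invalid, location -- the migration problem described above.      */
    printf("31-bit view: address %08X\n",
           (unsigned)(parm_word & 0x7FFFFFFFu));
    return 0;
}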
The strategy chosen was to implement expanded addressing in three stages:
first at the physical level (to enable more memory hardware per system)
then at the operating system level (to let system software access multiple address spaces and utilize larger address spaces)
finally at the application level (to let new applications access larger address spaces)
Since the core S/360 instruction set remained geared to a 24-bit universe, this third step would require a real break from the status quo; existing assembly language applications would of course not benefit, and new compilers would be needed before non-assembler applications could be migrated. Most shops thus continued to run their 24-bit applications in a higher-performance 31-bit world.
This evolutionary implementation (repeated in z/Architecture) had the characteristic of solving the most urgent problems first: relief for real memory addressing being needed sooner than virtual memory addressing.
31 versus 32 bits
IBM's choice of 31-bit (versus 32-bit) addressing for 370-XA involved various factors. The System/360 Model 67 had included a full 32-bit addressing mode, but this feature was not carried forward to the System/370 series, which began with only 24-bit addressing. When IBM later expanded the S/370 address space in S/370-XA, several reasons are cited for the choice of 31 bits:
The desire to retain the high-order bit as a "control or escape bit." In particular, the standard subroutine calling convention marked the final parameter word by setting its high bit.
Interaction between 32-bit addresses and two instructions (BXH and BXLE) that treated their arguments as signed numbers (and which was said to be the reason TSS used 31-bit addressing on the Model 67). (Varian 1997:p26, note 85)
Input from key initial Model 67 sites, which had debated the alternatives during the initial system design period, and had recommended 31 bits (instead of the 32-bit design that was ultimately chosen at the time). (Varian 1997:pp8–9, note 21, includes other comments about the "Inner Six" Model 67 design disclosees)
Series and models
Models sorted by date introduced (table)
The following table summarizes the major S/370 series and models. The second column lists the principal architecture associated with each series. Many models implemented more than one architecture; thus, 308x processors initially shipped as S/370 architecture, but later offered XA; and many processors, such as the 4381, had microcode that allowed customer selection between S/370 or XA (later, ESA) operation.
Note also the confusing term "System/370-compatible", which appeared in IBM source documents to describe certain products. Outside IBM, this term would more often describe systems from Amdahl Corporation, Hitachi Ltd., and others, that could run the same S/370 software. This choice of terminology by IBM may have been a deliberate attempt to ignore the existence of those plug compatible manufacturers (PCMs), because they competed aggressively against IBM hardware dominance.
Models grouped by Model number (detailed)
IBM used the name System/370 to announce the following eleven (3 digit) offerings:
System/370 Model 115
The IBM System/370 Model 115 was announced March 13, 1973 as "an ideal System/370 entry system for users of IBM's System/3, 1130 computing system and System/360 Models 20, 22 and 25."
It was delivered with "a minimum of two (of IBM's newly announced) directly-attached IBM 3340 disk drives." Up to four 3340s could be attached.
The CPU could be configured with 65,536 (64K) or 98,304 (96K) bytes of main memory. An optional 360/20 emulator was available.
The 115 was withdrawn on March 9, 1981.
System/370 Model 125
The IBM System/370 Model 125 was announced Oct 4, 1972.
Two, three or four directly attached IBM 3333 disk storage units provided "up to 400 million bytes online."
Main memory was either 98,304 (96K) or 131,072 (128K) bytes.
The 125 was withdrawn on March 9, 1981.
System/370 Model 135
The IBM System/370 Model 135 was announced Mar 8, 1971. Options for the 370/135 included a choice of four main memory sizes; IBM 1400 series (1401, 1440 and 1460) emulation was also offered.
A "reading device located in the Model 135 console" allowed updates and adding features to the Model 135's microcode.
The 135 was withdrawn on October 16, 1979.
System/370 Model 138
The IBM System/370 Model 138 which was announced Jun 30, 1976 was offered with either 524,288 (512K) or 1,048,576 (1 MB) of memory. The latter was "double the maximum capacity of the Model 135," which "can be upgraded to the new computer's internal performance levels at customer locations."
The 138 was withdrawn on November 1, 1983.
System/370 Model 145
The IBM System/370 Model 145 was announced Sep 23, 1970, three months after the 155 and 165 models. It first shipped in June 1971.
The first System/370 to use monolithic main memory, the Model 145 was offered in six memory sizes. A portion of the main memory, the "Reloadable Control Storage" (RCS) was loaded from a prewritten disk cartridge containing microcode to implement, for example, all needed instructions, I/O channels, and optional instructions to enable the system to emulate earlier IBM machines.
The 145 was withdrawn on October 16, 1979.
System/370 Model 148
The IBM System/370 Model 148 had the same announcement and withdrawal dates as the Model 138.
As with the option to field-upgrade a 135, a 370/145 could be field-upgraded "at customer locations" to 148-level performance. The upgraded 135 and 145 systems were "designated the Models 135-3 and 145-3."
System/370 Model 155
The IBM System/370 Model 155 and the Model 165 were announced Jun 30, 1970, the first of the 370s introduced. Neither had a DAT box; they were limited to running the same non-virtual-memory operating systems available for the System/360. The 155 first shipped in January 1971.
The OS/DOS (DOS/360 programs under OS/360), 1401/1440/1460 and 1410/7010 and 7070/7074 compatibility features were included, and the supporting integrated emulator programs could operate concurrently with standard System/370 workloads.
In August 1972 IBM announced, as a field upgrade only, the IBM System/370 Model 155 II, which added a DAT box.
Both the 155 and the 165 were withdrawn on December 23, 1977.
System/370 Model 158
The IBM System/370 Model 158 and the 370/168 were announced Aug 2, 1972.
It included dynamic address translation (DAT) hardware, a prerequisite for the new virtual memory operating systems (DOS/VS, OS/VS1, OS/VS2).
A tightly coupled multiprocessor (MP) model was available, as was the ability to loosely couple this system to another 360 or 370 via an optional channel-to-channel adapter.
The 158 and 168 were withdrawn on September 15, 1980.
System/370 Model 165
The IBM System/370 Model 165 was described by IBM as "more powerful" compared to the "medium-scale" 370/155. It first shipped in April 1971.
Compatibility features included emulation for 7070/7074, 7080, and 709/7090/7094/7094 II.
Some have described the 360/85's use of microcode, as opposed to hardwired logic, as a bridge to the 370/165.
In August 1972 IBM announced, as a field upgrade only, the IBM System/370 Model 165 II which added a DAT box.
The 165 was withdrawn on December 23, 1977.
System/370 Model 168
The IBM System/370 Model 168 included "up to eight megabytes" of main memory, double the maximum of 4 megabytes on the 370/158.
It included dynamic address translation (DAT) hardware, a pre-requisite for the new virtual memory operating systems.
Although the 168 served as IBM's "flagship" system, a 1975 news brief said that IBM boosted the power of the 370/168 again "in the wake of the Amdahl challenge... only 10 months after it introduced the improved 168-3 processor."
The 370/168 was not withdrawn until September 1980.
System/370 Model 195
The IBM System/370 Model 195 was announced Jun 30, 1970 and, at that time, it was "IBM's most powerful computing system."
Its introduction came about 14 months after the announcement of the 360/195. Both 195 machines were withdrawn Feb. 9, 1977.
System/370-compatible
Beginning in 1977, IBM began to introduce new systems, using the description "A compatible member of the System/370 family."
IBM 303X
The first of the initial high end machines, IBM's 3033, was announced March 25, 1977 and was delivered the following March, at which time a multiprocessor version of the 3033 was announced. IBM described it as "The Big One."
IBM noted about the 3033, looking back, that "When it was rolled out on March 25, 1977, the 3033 eclipsed the internal operating speed of the company's previous flagship the System/370 Model 168-3 ..."
The IBM 3031 and IBM 3032 were announced Oct. 7, 1977 and withdrawn Feb. 8, 1985.
IBM 308X
Three systems comprised the next series of high end machines, IBM's 308X systems:
The 3081 (announced Nov 12, 1980) had 2 CPUs
The 3083 (announced Mar 31, 1982) had 1 CPU
The 3084 (announced Sep 3, 1982) had 4 CPUs
Despite the numbering, the least powerful was the 3083, which could be field-upgraded to a 3081; the 3084 was the top of the line.
These models introduced IBM's Extended Architecture's 31-bit address capability and a set of backward compatible MVS/Extended Architecture (MVS/XA) software replacing previous products and part of OS/VS2 R3.8.
All three 308x systems were withdrawn on August 4, 1987.
IBM 3090
The next series of high-end machines, the IBM 3090, began with models 200 and 400. They were announced Feb. 12, 1985, and were configured with two or four CPUs respectively. IBM subsequently announced models 120, 150, 180, 300, 500 and 600 with lower, intermediate and higher capacities; the first digit of the model number gives the number of central processors.
Starting with the E models, and continuing with the J and S models, IBM offered Enterprise Systems Architecture/370 (ESA/370), Processor Resource/System Manager (PR/SM) and a set of backward compatible MVS/Enterprise System Architecture (MVS/ESA) software replacing previous products.
IBM's offering of an optional vector facility (VF) extension for the 3090 came at a time when vector and array processing were associated with names like Cray and Control Data Corporation (CDC).
The 200 and 400 were withdrawn on May 5, 1989.
IBM 4300
The first pair of IBM 4300 processors were mid/low-end systems announced Jan 30, 1979 as "compact (and) ... compatible with System/370."
The 4331 was subsequently withdrawn on November 18, 1981, and the 4341 on February 11, 1986.
Other models were the 4321, 4361 and 4381.
The 4361 has "Programmable Power-Off -- enables the user to turn off the processor under program control"; "Unit power off" is (also) part of the 4381 feature list.
IBM offered many Model Groups and models of the 4300 family, ranging from the entry level 4331 to the 4381, described as "one of the most powerful and versatile intermediate system processors ever produced by IBM."
The 4381 Model Group 3 was dual-CPU.
IBM 9370
This low-end system, announced October 7, 1986, was "designed to satisfy the computing requirements of IBM customers who value System/370 affinity" and "small enough and quiet enough to operate in an office environment."
IBM also noted its sensitivity to "entry software prices, substantial reductions in support and training requirements, and modest power consumption and maintenance costs."
Furthermore, it stated its awareness of the needs of small-to-medium size businesses to be able to respond, as "computing requirements grow," adding that "the IBM 9370 system can be easily expanded by adding additional features and racks to accommodate..."
This came at a time when Digital Equipment Corporation (DEC) and its VAX systems were strong competitors in both hardware and software; the media of the day carried IBM's alleged "VAX Killer" phrase, albeit often skeptically.
Clones
In the 360 era, a number of manufacturers had already standardized upon the IBM/360 instruction set and, to a degree, 360 architecture. Notable computer makers included Univac with the UNIVAC 9000 series, RCA with the RCA Spectra 70 series, English Electric with the English Electric System 4, and the Soviet ES EVM. These computers were not perfectly compatible, nor (except for the Russian efforts) were they intended to be.
That changed in the 1970s with the introduction of the IBM/370 and Gene Amdahl's launch of his own company. About the same time, Japanese giants began eyeing the lucrative mainframe market both at home and abroad. One Japanese consortium focused upon IBM, and two others upon members of the BUNCH (Burroughs/Univac/NCR/Control Data/Honeywell) group of IBM's competitors. The latter efforts were abandoned and eventually all Japanese efforts focused on the IBM mainframe lines.
Some of the era's clones included:
Architecture details
IBM documentation numbers the bits from high order to low order; the most significant (leftmost) bit is designated as bit number 0.
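For readers used to numbering the least significant bit as 0, a small hypothetical C helper (not from IBM documentation) makes the convention concrete for a 32-bit word:

#include <stdint.h>

/* IBM bit n of a 32-bit word is conventional (LSB-0) bit 31 - n,
 * because IBM counts from the most significant bit downwards.    */
static int ibm_bit(uint32_t word, int ibm_bit_number)
{
    return (int)((word >> (31 - ibm_bit_number)) & 1u);
}

/* Example: ibm_bit(0x80000000u, 0) == 1, since IBM bit 0 is the
 * leftmost (most significant) bit of the word.                   */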
S/370 also refers to a computer system architecture specification, and is a direct and mostly backward compatible evolution of the System/360 architecture from which it retains most aspects. This specification does not make any assumptions on the implementation itself, but rather describes the interfaces and the expected behavior of an implementation. The architecture describes mandatory interfaces that must be available on all implementations and optional interfaces which may or may not be implemented.
Some of the aspects of this architecture are:
Big endian byte ordering
One or more processors with:
16 32-bit General purpose registers
16 32-bit Control registers
4 64-bit Floating-point registers
A 64-bit Program status word (PSW) which describes (among other things)
Interrupt masks
Privilege states
A condition code
A 24-bit instruction address
Timing facilities (Time of day clock, interval timer, CPU timer and clock comparator)
An interruption mechanism, maskable and unmaskable interruption classes and subclasses
An instruction set. Each instruction is wholly described and also defines the conditions under which an exception is recognized in the form of program interruption.
A memory (called storage) subsystem with:
8 bits per byte
A special processor communication area starting at address 0
Key controlled protection
24-bit addressing
Manual control operations that provide:
A bootstrap process (a process called Initial Program Load or IPL)
Operator-initiated interrupts
Resetting the system
Basic debugging facilities
Manual display and modifications of the system's state (memory and processor)
An Input/Output mechanism, which doesn't describe the devices themselves
Some of the optional features are:
A Dynamic Address Translation (DAT) mechanism that can be used to implement a virtual memory system
Floating point instructions
IBM took great care to ensure that changes to the architecture would remain compatible for unprivileged (problem state) programs; some new interfaces did not break the initial interface contract even for privileged (supervisor mode) programs. Some examples are:
ECPS:MVS
A feature to enhance performance for the MVS/370 operating systems
ECPS:VM
A feature to enhance performance for the VM operating systems
Other changes were compatible only for unprivileged programs, although the changes for privileged programs were of limited scope and well defined. Some examples are:
ECPS:VSE
A feature to enhance performance for the DOS/VSE operating system.
S/370-XA
A feature to provide a new I/O interface and to support 31-bit computing
Great care was taken in order to ensure that further modifications to the architecture would remain compatible, at least as far as non-privileged programs were concerned. This philosophy predates the definition of the S/370 architecture and started with the S/360 architecture. If certain rules are adhered to, a program written for this architecture will run with the intended results on the successors of this architecture.
One such example is that the S/370 architecture specifies that bit number 32 of the 64-bit PSW register has to be set to 0, and that doing otherwise leads to an exception. Subsequently, when the S/370-XA architecture was defined, it was stated that this bit would instead indicate whether the program expected a 24-bit or a 31-bit address architecture. Thus, most programs that ran on the 24-bit architecture can still run on 31-bit systems; the 64-bit z/Architecture has an additional mode bit for 64-bit addresses, so that those programs, and programs that ran on the 31-bit architecture, can still run on 64-bit systems.
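A minimal sketch of this idea, assuming deliberately simplified semantics rather than actual hardware behavior, is the following C fragment: the mode bit simply controls how many low-order address bits are honored, so a program built around 24-bit addresses keeps working as long as it runs with the bit set to 0.

#include <stdint.h>

/* Simplified model: one flag selects 24-bit or 31-bit address handling.
 * Real S/370-XA behavior is more involved; this only shows why old
 * 24-bit programs remain well behaved under the compatibility mode.    */
static uint32_t effective_address(uint32_t raw, int amode31)
{
    return amode31 ? (raw & 0x7FFFFFFFu)   /* 31-bit addressing mode */
                   : (raw & 0x00FFFFFFu);  /* 24-bit addressing mode */
}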
However, not all of the interfaces could remain compatible. Emphasis was put on having non-control programs (called problem state programs) remain compatible. Thus, operating systems had to be ported to the new architecture because the control interfaces could be (and were) redefined in an incompatible way. For example, the I/O interface was redesigned in S/370-XA, making S/370 programs that issue I/O operations unusable as-is.
S/370 replacement
IBM replaced the System/370 line with the System/390 in the 1990s, and similarly extended the architecture from ESA/370 to ESA/390. This was a minor architectural change, and was upwards compatible.
In 2000, the System/390 was replaced with the zSeries (now called IBM System z). The zSeries mainframes introduced the 64-bit z/Architecture, the most significant design improvement since the 31-bit transition. All have retained essential backward compatibility with the original S/360 architecture and instruction set.
GCC and Linux on the S/370
The GNU Compiler Collection (GCC) had a back end for S/370, but it became obsolete over time and was finally replaced with the S/390 backend. Although the S/370 and S/390 instruction sets are essentially the same (and have been consistent since the introduction of the S/360), GCC operability on older systems has been abandoned. GCC currently works on machines that have the full instruction set of System/390 Generation 5 (G5), the hardware platform for the initial release of Linux/390. However, a separately maintained version of GCC 3.2.3 that works for the S/370 is available, known as GCCMVS.
I/O evolutions
I/O evolution from original S/360 to S/370
The block multiplexer channel, previously available only on the 360/85 and 360/195, was a standard part of the architecture. For compatibility it could operate as a selector channel. Block multiplexer channels were available in single byte (1.5 MB/s) and double byte (3.0 MB/s) versions.
I/O evolution since original S/370
As part of the DAT announcement, IBM upgraded channels to have Indirect Data Address Lists (IDALs), a form of I/O MMU.
Data streaming channels had a speed of 3.0 MB/s over a single byte interface, later upgraded to 4.5 MB/s.
Channel set switching allowed one processor in a multiprocessor configuration to take over the I/O workload from the other processor if it failed or was taken offline for maintenance.
System/370-XA introduced a channel subsystem that performed I/O queuing previously done by the operating system.
The System/390 introduced the ESCON channel, an optical fiber, half-duplex, serial channel with a maximum distance of 43 kilometers. Originally operating at 10 Mbyte/s, it was subsequently increased to 17 Mbyte/s.
Subsequently, FICON became the standard IBM mainframe channel; FIbre CONnection (FICON) is the IBM proprietary name for the ANSI FC-SB-3 Single-Byte Command Code Sets-3 Mapping Protocol for Fibre Channel (FC) protocol used to map both IBM's antecedent (either ESCON or parallel Bus and Tag) channel-to-control-unit cabling infrastructure and protocol onto standard FC services and infrastructure at data rates up to 16 Gigabits/sec at distances up to 100 km. Fibre Channel Protocol (FCP) allows attaching SCSI devices using the same infrastructure as FICON.
See also
IBM System/360
IBM ESA/390
IBM System z
PC-based IBM-compatible mainframes
Hercules emulator
Notes
References
Further reading
Chapter 4 (pp. 111-166) describes the System/370 architecture; Chapter 5 (pp. 167-206) describes the System/370 Extended Architecture.
External links
Hercules System/370 Emulator A software implementation of IBM System/370
370
Computing platforms
Computer-related introductions in 1970
1990s disestablishments
32-bit computers
Xbox (console)
The Xbox is a home video game console and the first installment in the Xbox series of video game consoles manufactured by Microsoft. It was released as Microsoft's first foray into the gaming console market on November 15, 2001, in North America, followed by Australia, Europe and Japan in 2002. It is classified as a sixth-generation console, competing with Sony's PlayStation 2 and Nintendo's GameCube. It was also the first major console produced by an American company since the release of the Atari Jaguar in 1993.
The console was announced in March 2000. With the release of the PlayStation 2, which featured the ability to play back CD-ROMs and DVDs in addition to playing games, Microsoft became concerned that game consoles would threaten the personal computer as an entertainment device for living rooms. Whereas most game consoles to that point were built from custom hardware components, the Xbox was built around standard personal computer components, using variations of Microsoft Windows and DirectX as its operating system to support games and media playback. The Xbox was technically more powerful than its rivals, featuring a 733 MHz Intel Pentium III processor of the kind found in standard PCs. It was the first console to feature a built-in hard disk. The console was also built with direct support for broadband connectivity to the Internet via an integrated Ethernet port, and with the release of Xbox Live, a fee-based online gaming service, a year after the console's launch, Microsoft gained an early foothold in online gaming and made the Xbox a strong competitor in the sixth generation of consoles. The popularity of blockbuster titles such as Bungie's Halo 2 contributed to the popularity of online console gaming, and in particular first-person shooters.
The Xbox had a record-breaking launch in North America, selling 1.5 million units before the end of 2001, aided by the popularity of one of the system's launch titles, Halo: Combat Evolved, which sold a million units by April 2002. The system went on to sell a worldwide total of 24 million units, including 16 million in North America; however, Microsoft was unable to make a steady profit off the console, which cost considerably more to manufacture than its retail price, and despite its popularity the company lost over $4 billion during its market life. The system outsold the GameCube and the Sega Dreamcast, but was vastly outsold by the PlayStation 2, which had sold over 100 million units by the system's discontinuation in 2005. It also underperformed outside of the Western market; in particular, it sold poorly in Japan due to its large size and an overabundance of games marketed towards American audiences instead of Japanese-developed titles. Production of the system was discontinued in 2005. The Xbox was the first in an ongoing brand of video game consoles developed by Microsoft, with a successor, the Xbox 360, launching in 2005, followed by the Xbox One in 2013 and the Xbox Series X and Series S consoles in 2020.
History
Creation and development
Before the Xbox, Microsoft had found success publishing video games for its Windows PCs, releasing popular titles such as Microsoft Flight Simulator and the massively successful Age of Empires after the creation of DirectX, the application programming interface (API) that allowed for direct access of the computer hardware and bypassing Windows. However, the company had not delved into the home console market of video games, which was dominated at the time by Sony's PlayStation. Sony was working on its next video game console, the PlayStation 2 (PS2), announced officially to the public on March 2, 1999, and intended for the system to act as a gateway for all types of home entertainment. Sony presented a vision where the console would ultimately replace the desktop computer in the home. Microsoft CEO Bill Gates saw the upcoming PS2 as a threat to Microsoft's line of Windows PCs, worrying that the all-encompassing system could eliminate consumer interests in PCs and drive them out of the market. With video games rapidly growing into a massive industry, Gates decided that Microsoft needed to venture into the console gaming market to compete with Sony. Previously, Sega had developed a version of Windows CE for its Dreamcast console to be used by game developers. Additionally, Gates had directly approached Sony CEO Nobuyuki Idei before the public announcement of the PS2 regarding letting Microsoft develop programming software for the console. However, the offer was declined by Idei in favor of having Sony create proprietary software. Microsoft had also attempted to meet with Hiroshi Yamauchi and Genyo Takeda of Nintendo to potentially acquire the company, but Nintendo declined to go further.
In 1998, four engineers from Microsoft's DirectX team—Kevin Bachus, Seamus Blackley, Ted Hase and DirectX team leader Otto Berkes—began discussing ideas for a new console which would run off Microsoft's DirectX technology. Nat Brown, the Windows Software Architect at Microsoft, would also become a regular contributor to the project after meeting Hase in November 1998. The project was codenamed "Midway," in reference to the Battle of Midway during World War II in which Japan was decisively defeated by American forces, as a representation of Microsoft's desire to surpass Sony in the console market. The DirectX team held their first development meeting on March 30, 1999, in which they discussed issues such as getting a PC to boot at a quicker pace than usual. The console would run off Windows 2000 using DirectX 8.1, which would allow PC developers to easily transition into making games for the console while also granting it a larger processing power than that of most other home consoles. According to Blackley, using PC technology as the foundation for a video game console would eliminate the technological barriers of most home consoles, allowing game creators to expand further on their own creativity without having to worry about hardware limitations.
The four DirectX team members encountered disagreements with the Silicon Valley engineering team behind WebTV, which had joined Microsoft after it purchased the rights to the device. Microsoft executive Craig Mundie wanted the project to be led by the WebTV team, who believed the console should be built from the ground up as an appliance running off Windows CE; however, the DirectX team were adamant about the idea of repurposing PC hardware components, such as a hard disk drive, arguing that they were cheaply manufactured and could easily be updated every year. The four developers gained the support of Ed Fries, the head of Microsoft's gaming division, who believed the use of a hard drive, in particular, would give the console a technical edge among competitors despite its high manufacturing cost. The two opposing teams pitched their arguments to Gates on May 5, 1999, at a meeting attended by over twenty different people. WebTV's team, among whom were Nick Baker, Dave Riola, Steve Perlman, and Tim Bucher, and their sponsor, Craig Mundie, made the case that creating an appliance would be far cheaper, highlighting that most consoles were generally sold at around $300. They also wanted to use a custom-made graphics chip, which could be shared across several different home devices. Conversely, Fries, vouching for the DirectX team, argued that using a PC hard drive would set Microsoft's console apart from competitors by allowing for the direct implementation of online access, an argument Gates sided with. When Gates questioned whether PC games could be effectively ported to the new console, Blackley explained that the machine would utilize DirectX hardware, meaning that they could be converted easily. Gates heavily favored this proposition over WebTV's, whose concept relied on Windows CE, a heavily stripped-down Windows variant that was not compatible with DirectX. As such, Gates sided with the DirectX concept and gave Berkes' team permission to create a new video game console. Despite this, WebTV would still play a part in the Xbox's initial launch.
Rick Thompson and Robert J. Bach were responsible for overseeing the Xbox's design. The DirectX team began constructing prototype consoles, purchasing several Dell computers and using their internal parts. Initially, it was envisioned that after designing the console, Microsoft would work with a third-party computer manufacturer to mass-produce the units. However, the early work showed that this would need to be something Microsoft produced itself, making the prospect a far more costly operation; the name "Coffin Box" became associated with the project as there were fears the project would end careers at Microsoft. Further, as a gaming console, it could not provide the direct Windows interface to users. While Thompson and Bach had warned Gates and Steve Ballmer about these large-scale changes from the initial proposal in late 1999, the matter came to a head at a February 14, 2000, meeting, informally referred to as the Valentine's Day Massacre, in which Gates furiously vented about the new cost proposal and the massive changes in the console from what had previously been presented, since the Xbox appeared to marginalize Windows. However, after being reminded that this was a product to compete against Sony, Gates and Ballmer gave the project the go-ahead along with the necessary marketing budget. Another contentious point of design was the addition of Ethernet connectivity rather than simple support for dial-up networking. At this point, most consumer homes had access to Internet connectivity, but the social networks that would later demonstrate the viability of this decision had yet to be established. The Xbox leads argued that, with the planned Xbox Live functionality, the Ethernet port would let friends keep playing together after they had graduated from schools and colleges and moved across the country.
Throughout the console's prototyping, Microsoft was working with AMD for the CPU on the system. According to Blackley, just prior to the system's reveal in January 2001, the Microsoft engineers opted to switch to an Intel CPU, a fact that had not yet been communicated to AMD prior to the reveal.
Among the names considered for the new console were a number of acronyms, including "Windows Entertainment Project" (WEP), "Microsoft Total Gaming" (MTG), "Microsoft Interactive Network Device" (MIND), and "Microsoft Interactive Center" (MIC). Also among the names considered was "DirectX Box", referring to the system's reliance on DirectX. At one point, Hase jokingly came up with the names "XXX-Box" and "DirectXXX-Box" as a nod to the system's higher volume of adult content compared to Sony or Nintendo's consoles. "DirectX Box" was quickly shortened to "Xbox" through an e-mail conversation, and was ultimately favored by the development team, though a number of spelling variants were tossed around, such as xBox, XboX, and X-box. Microsoft's marketing department did not like this name, suggesting "11-X" or "Eleven-X" as alternatives. During focus testing, the company put the name "Xbox" on the list of possible names simply to prove how unpopular it would be with consumers. However, "Xbox" proved to be the more popular name on the list during consumer testing and was thus selected as the official name of the product.
When the physical design of the controller began, circuit boards for the controller had already been manufactured. Microsoft had asked Sony's supplier, Mitsumi Electric, for a similar folded and stacked circuit board design used in Sony's DualShock 2 controller, but the company refused to manufacture such a design for Microsoft. This led to the controller being bulky and nearly three times the size of Sony's controller. This initial controller design was never launched in Japan. The console instead launched with a smaller, redesigned version named "Controller S" that did use the more compact circuit board design.
As the development team began to tighten down the design of the Xbox, they got help from Flextronics not only in revising the design but in mass production, creating a factory in Guadalajara, Mexico, for this purpose. Early production units had a high failure rate of around 25%, which Flextronics repaired. Later iterations of the hardware design worked to eliminate these failures.
Initial announcement and content acquisitions
Gates first publicly mentioned the Xbox in an interview in late 1999, stating that he wanted the system "to be the platform of choice for the best and most creative game developers in the world". It was later announced officially by Gates in a keynote presentation at the Game Developers Conference in San Jose on March 10, 2000, showing off an early prototype build of the system and a series of demos showcasing its hardware. The presentation and the new system were well-received, impressing developers with both the hard drive and the Ethernet port and appealing to them with the notion of easy-to-use development tools.
Microsoft began looking at a series of acquisitions and partnerships to secure content for the console at this time. In early 2000, Sega's Dreamcast sales were diminishing, in part due to Electronic Arts' decision to bypass the console, and Sony's PlayStation 2 was just going on sale in Japan. Gates was in talks with Sega's late chairman Isao Okawa about the possibility of Xbox compatibility with Dreamcast games, but negotiations fell apart over whether the Dreamcast's SegaNet online service should be implemented. Microsoft also looked to acquire Electronic Arts, Nintendo, Square Enix, and Midway without success. The company did achieve success in convincing developers at Bethesda Game Studios and Tecmo about the power of the Xbox over the PS2, lining up The Elder Scrolls III: Morrowind and Dead or Alive 3 as Xbox console-exclusives.
Around this same time, Microsoft announced it was rebranding its Games Group, which had been focused on developing games for Windows, to the Microsoft Games division to make titles for both Windows and the Xbox. Microsoft began acquiring a number of studios to add to the division, notably Bungie in June 2000, shortly after their announcement of Halo: Combat Evolved. With Microsoft's acquisition, Halo switched from being a release for personal computers to being an Xbox-exclusive launch title intended to help drive sales of the console.
Formal announcement and release
The Xbox was officially unveiled to the public by Gates and guest professional wrestler Dwayne "The Rock" Johnson at CES 2001 in Las Vegas on January 3, 2001. Microsoft announced Xbox's release dates and prices at E3 2001 in May. Most Xbox launch titles were unveiled at E3, most notably Halo and Dead or Alive 3.
The unit's release in November 2001 was partially hampered by the impact of the September 11 attacks on travel, as Microsoft could not travel to the Guadalajara facility to help test units. They were able to arrange to ship the units locally instead of testing at Microsoft facilities to have them ready for launch.
The system was officially launched at midnight on November 15, 2001, three days before the subsequent launch of the Nintendo GameCube. A special event was held on the prior night as part of the grand opening of the flagship store of Toys 'R' Us at Times Square in New York City, in which 1,000 systems were shipped to the store to kick off sales. Bill Gates was present at the event, personally selling the very first Xbox console and greeting people in line and playing games with them at the numerous display units present.
Promotion
In 2002, the Independent Television Commission (ITC) banned a television advertisement for the Xbox in the United Kingdom after complaints that it was "offensive, shocking and in bad taste." It depicted a mother giving birth to a baby boy, fired like a projectile through a window, aging rapidly as he flies through the air. The advertisement ends with an old man crash-landing into his own grave and the slogan, "Life is short. Play more."
Discontinuation and successors
The Xbox's successor, the Xbox 360, was officially announced on May 12, 2005 on MTV. It was the first next generation system to be announced. It was released in North America on November 22, 2005. Nvidia ceased production of the Xbox's GPU in August 2005, which marked the end of brand-new Xbox production. The last game for the Xbox in Japan was The King of Fighters Neowave released in March 2006, the last Xbox game in Europe was Xiaolin Showdown released in June 2007, and the last game in North America was Madden NFL 09 released in August 2008. Support for out-of-warranty Xbox consoles was discontinued on March 2, 2009. Support for Xbox Live on the console ended on April 15, 2010.
The Xbox 360 supports a limited number of the Xbox's game library if the player has an official Xbox 360 Hard Drive. Xbox games were added up until November 2007. Xbox game saves cannot be transferred to Xbox 360, and the ability to play Xbox games through Xbox LIVE has been discontinued since April 15, 2010. It is still possible to play Xbox games with System Link functionality online via both the original console and the Xbox 360 with tunneling software such as XLink Kai. It was announced at E3 2017 that the Xbox One would be gaining support for a limited number of the Xbox's game library.
Hardware
The Xbox was the first video game console to feature a built-in hard disk drive, used primarily for storing game saves and content downloaded from Xbox Live. This eliminated the need for separate memory cards (although some older consoles, such as the Amiga CD32, used internal flash memory, and others, such as the TurboGrafx-CD, Sega CD, and Sega Saturn, had featured built-in battery backup memory prior to 2001). An Xbox user could rip music from standard audio CDs to the hard drive, and these songs were used for the custom soundtracks in some games.
Unlike the PlayStation 2, which could play movie DVDs without the need for a remote control (although an optional remote was available), the Xbox required an external IR adapter to be plugged into a controller port in order to play movie DVDs. If DVD playback is attempted without the IR sensor plugged in, an error screen informs the user of the need for the Xbox DVD Playback Kit. The kit included the IR sensor and a remote control (unlike the PS2, the Xbox controller could not control DVD playback). Said remote was manufactured by Thomson (which also manufactured optical drives for the console) and went on sale in late 2002; it used a modified version of the remote design found on RCA, GE and ProScan consumer electronics of the era, so users wishing to use a universal remote were instructed to use RCA DVD remote codes.
The Xbox was the first gaming product to feature Dolby Interactive Content-Encoding Technology, which allows real-time Dolby Digital encoding in game consoles. Previous game consoles could only use Dolby Digital 5.1 during non-interactive "cut scene" playback.
The Xbox is based on commodity PC hardware and is much larger and heavier than its contemporaries. This is largely due to a bulky tray-loading DVD-ROM drive and the standard-size 3.5-inch hard drive. The Xbox has also pioneered safety features, such as breakaway cables for the controllers to prevent the console from being pulled from the surface upon which it rests.
Several internal hardware revisions have been made in an ongoing battle to discourage modding (hackers continually updated modchip designs in an attempt to defeat them), to cut manufacturing costs, and to make the DVD-ROM drive more reliable (some of the early units' drives gave disc-reading errors due to the unreliable Thomson DVD-ROM drives used). Later-generation units that used the Thomson TGM-600 DVD-ROM drives and the Philips VAD6011 DVD-ROM drives were still vulnerable to failure that, respectively, either rendered the consoles unable to read newer discs or caused them to halt the console with an error code usually indicating a PIO/DMA identification failure. These units were not covered under the extended warranty.
In 2002, Microsoft and Nvidia entered arbitration over a dispute on the pricing of Nvidia's chips for the Xbox. Nvidia's filing with the SEC indicated that Microsoft was seeking a $13 million discount on shipments for Nvidia's fiscal year 2002. Microsoft alleged violations of the agreement the two companies had entered, sought reduced chipset pricing, and sought to ensure that Nvidia fulfill Microsoft's chipset orders without limits on quantity. The matter was privately settled on February 6, 2003.
The Xbox included a standard AV cable, which provides composite video and monaural or stereo audio to TVs equipped with RCA inputs. European Xboxes also included an RCA-to-SCART converter block in addition to the standard AV cable.
An 8 MB removable solid-state memory card can be plugged into the controllers, onto which game saves can either be copied from the hard drive when in the Xbox dashboard's memory manager or saved during a game. Most Xbox game saves can be copied to the memory unit and moved to another console, but some Xbox saves are digitally signed. It is also possible to save an Xbox Live account on a memory unit, to simplify its use on more than one Xbox. The ports at the top of the controllers could also be used for other accessories, primarily headsets for voice chat via Xbox Live.
Technical specifications
The Xbox CPU is a 32-bit 733 MHz, custom Intel Pentium III Coppermine-based processor. It has a 133 MHz 64-bit GTL+ front-side bus (FSB) with a 1.06 GB/s bandwidth. The system has 64 MB unified DDR SDRAM, with a 6.4 GB/s bandwidth, of which 1.06 GB/s is used by the CPU and 5.34 GB/s is shared by the rest of the system.
Its GPU is Nvidia's 233 MHz NV2A. It has a floating-point performance of 7.3 GFLOPS, capable of geometry calculations for up to a theoretical 115 million vertices/second. It has a peak fillrate of 932 megapixels/second, capable of rendering a theoretical 29 million 32-pixel triangles/second. With bandwidth limitations, it has a realistic fillrate of 250–700 megapixels/second, with Z-buffering, fogging, alpha blending, and texture mapping, giving it a real-world performance of 7.8–21 million 32-pixel triangles/second.
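As a rough consistency check, the bandwidth and triangle figures quoted above follow from the other numbers in this section (a back-of-the-envelope derivation, not an additional specification):

133 MHz × 64 bit ÷ 8 = 1.064 GB/s (front-side bus bandwidth)
932 megapixels/s ÷ 32 pixels per triangle ≈ 29 million triangles/s (theoretical peak)
250–700 megapixels/s ÷ 32 pixels per triangle ≈ 7.8–22 million triangles/s, consistent with the quoted real-world range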
Controllers
The Xbox controller features two analog sticks, a pressure-sensitive directional pad, two analog triggers, a Back button, a Start button, two accessory slots and six 8-bit analog action buttons (A/Green, B/Red, X/Blue, Y/Yellow, and Black and White buttons). The standard Xbox controller (also nicknamed the "Fatty" and later, the "Duke") was originally the controller bundled with Xbox systems for all territories except Japan. The controller has been criticized for being bulky compared to other video game controllers; it was awarded "Blunder of the Year" by Game Informer in 2001, a Guinness World Record for the biggest controller in Guinness World Records Gamer's Edition 2008, and was ranked the second-worst video game controller ever by IGN editor Craig Harris.
The "Controller S" (codenamed "Akebono"), a smaller, lighter Xbox controller, was originally the standard Xbox controller only in Japan, designed for users with smaller hands. The "Controller S" was later released in other territories by popular demand and by 2002 replaced the standard controller in the Xbox's retail package, with the larger original controller remaining available as an accessory.
Software
Operating system
The Xbox runs a custom operating system which is based on a heavily modified version of Windows 2000. It exports APIs similar to those found in Microsoft Windows, such as Direct3D. Its source code was leaked in 2020.
The user interface for the Xbox is called the Xbox Dashboard. It features a media player that can be used to play music CDs, rip CDs to the Xbox's built-in hard drive and play music that has been ripped to the hard drive; it also lets users manage game saves, music, and downloaded content from Xbox Live, and lets Xbox Live users sign in, customize, and manage their account. The dashboard is only available when the user is not watching a movie or playing a game. It uses many shades of green and black for the user interface to be consistent with the physical Xbox color scheme. When the Xbox was released in 2001, the Live service was not online, so the dashboard's Live sections and the network settings sub-menu were not present yet.
Xbox Live was released in November 2002, but in order to access it, users had to buy the Xbox Live starter kit containing a headset and a subscription. While the Xbox was still being supported by Microsoft, the Xbox Dashboard was updated via Live several times to reduce cheating and add features.
Games
The Xbox launched in North America on November 15, 2001. Popular launch games included Halo: Combat Evolved, Project Gotham Racing, and Dead or Alive 3. All three of these games would go on to sell over a million copies in the US.
Although the console gained strong third-party support from its inception, many early Xbox games did not fully use its powerful hardware until a full year after its release. Xbox versions of cross-platform games sometimes came with only a few additional features and/or graphical improvements to distinguish them from the PS2 and GameCube versions of the same game, thus negating one of the Xbox's main selling points. Sony countered the Xbox for a short time by temporarily securing PlayStation 2 exclusivity for highly anticipated games such as the Grand Theft Auto series and the Metal Gear Solid series, and Nintendo did the same for the Resident Evil series. Notable third-party support came from Sega, who announced an 11-game exclusivity deal at Tokyo Game Show. Sega released exclusives such as Panzer Dragoon Orta and Jet Set Radio Future, which met with a strong reception among critics.
In 2002 and 2003, several high-profile releases helped the Xbox gain momentum and distinguish itself from the PS2. Microsoft purchased Rare, responsible for many Nintendo 64 hit games, to expand its first-party portfolio. The Xbox Live online service was launched in late 2002 alongside pilot titles MotoGP, MechAssault and Tom Clancy's Ghost Recon. Several best-selling and critically acclaimed titles for the Xbox soon followed, such as Tom Clancy's Splinter Cell and Star Wars: Knights of the Old Republic. Take-Two Interactive's exclusivity deal with Sony was amended to allow Grand Theft Auto III and its sequels to be published for the Xbox. Many other publishers followed the trend of releasing the Xbox version alongside the PS2 version, instead of delaying it for months.
2004 saw the release of the highly rated exclusives Fable and Ninja Gaiden; both games became big hits for the Xbox. Later that year, Halo 2 was released and became the highest-grossing release in entertainment history, making over $125 million in its first day, and became the best-selling Xbox game worldwide. Halo 2 became Xbox Live's third killer app after MechAssault and Tom Clancy's Rainbow Six 3. That year Microsoft made a deal to put Electronic Arts' popular titles on Xbox Live to boost the popularity of its service.
By 2005, despite notable first party releases in Conker: Live & Reloaded and Forza Motorsport, Microsoft began phasing out the Xbox in favor of their next console, the Xbox 360. Games such as Kameo: Elements of Power and Perfect Dark Zero, which were originally to be developed for the Xbox, became Xbox 360 launch titles instead. The last game released on the Xbox was Madden NFL 09, on August 12, 2008.
Services
On November 15, 2002, Microsoft launched its Xbox Live online gaming service, allowing subscribers to play online Xbox games with other subscribers around the world and download new content directly to the system's hard drive. The online service works only with a broadband Internet connection. In its first week of operation, Xbox Live received 100,000 subscriptions, and further grew to 250,000 subscribers within two months of the service's launch. In July 2004, Microsoft announced that Xbox Live had reached one million subscribers; in July 2005, membership reached two million, and by July 2007 there were more than three million subscribers. By May 2009, the number had grown to twenty million current subscribers. On February 5, 2010, it was reported that Xbox Live support for original Xbox games would be discontinued as of April 14, 2010. Services were discontinued on schedule, but a group of users later known as the "Noble 14" continued to play for almost a month afterwards by simply leaving their consoles on and connected to Halo 2.
Sales
Prior to launching, anticipation for the Xbox was high, with Toys 'R' Us and Amazon reporting that online preorders had sold out within just 30 minutes. Microsoft stated that it planned to ship 1–1.5 million units to retailers by the end of the year, followed by weekly shipments of 100,000 units.
The launch was one of the most successful in video game history, with unit sales surpassing 1 million after just 3 weeks and rising further to 1.5 million by the end of 2001. The system also attained one of the highest-ever attachment rates at launch, with over 3 games selling per unit according to the NPD Group. Strong sales were tied in large part to the highly anticipated launch title, Halo: Combat Evolved, which had surpassed sales of 1 million units by April 2002 and attained a 50% attach rate for the console. By July 2004, the system had sold 15.5 million units worldwide—10.1 million in North America, 3.9 million in Europe, and 1.5 million in Asia-Pacific—and had a 33% market share in the US.
Despite strong sales in North America, Microsoft struggled to make a profit from the Xbox due to its high manufacturing cost. With its initial retail price of $299, Microsoft lost about $125 for every system sold, which cost $425 to manufacture, meaning that the company would have to rely on software sales in order to make any money. According to Robbie Bach, "Probably six months after we shipped, you could see the price curve and do the math and know that we were going to lose billions of dollars." These losses were further exacerbated in April 2002, when Microsoft lowered the retail price of the Xbox to $199 in order to further drive hardware sales. Microsoft also struggled to compete with Sony's more popular PlayStation 2 console, which generally saw far higher sales numbers, although the Xbox outsold the PS2 in the U.S. in April 2004. By its manufacturing discontinuation in 2005, the Xbox had sold a total of 24 million units worldwide, 16 million of which had been sold in North America. These numbers fell short of Microsoft's predicted 50 million units, and failed to match the PlayStation 2's lifetime sales of 106 million units at the time, although it did surpass the GameCube's and Dreamcast's lifetime sales of 21 million and 10.6 million units, respectively. Ultimately, Microsoft lost a cumulative total of $4 billion on the Xbox, only managing to turn a profit at the end of 2004. While the Xbox represented an overall loss for Microsoft, Gates, Ballmer, and other executives still saw it as a positive result for the company, as it brought them into the console marketplace against doubts raised by the industry and led to Microsoft's further development of other consoles in the Xbox family.
Japan
Prior to its Japanese launch in February 2002, many analysts estimated that the Xbox would have trouble competing with the PS2 and the GameCube, its local counterparts in the region, noting its comparatively high price tag, lack of exclusives, and larger size which would not fit as well in Japan's smaller living spaces. Microsoft hoped to ship six million Japanese Xbox consoles by June 2002; however, the system had only sold a total of 190,000 units in the region by April of that year, two months after the system's launch in February. For the week ending April 14, 2002, the Xbox sold only 1,800 units, considerably less than the PS2 and GameCube, and failed to see a single title reach the top 50 best-selling video games in Japan. In November 2002, the Xbox chief in Japan stepped down, leading to further consultations about Xbox's future, which by that point had only sold 278,860 units in the country since its February launch. For the week ending July 18, 2004, the Xbox sold just 272 units, even fewer than the PSOne had sold in the same week. The Xbox did, however, outsell the GameCube for the week ending May 26, 2002. Ultimately, the Xbox had only sold 450,000 units as of November 2011. Factors believed to have contributed to the console's poor market presence included its large physical size, which contrasted with the country's emphasis on more compact designs, and a lack of Japanese-developed games to aid consumer interest.
Modding
The popularity of the Xbox, as well as (in the United States) its comparatively short 90-day warranty, inspired efforts to circumvent the built-in hardware and software security mechanisms, a practice informally known as modding.
References
External links
at Xbox.com
2001 in video gaming
Computer-related introductions in 2001
Discontinued Microsoft products
Home video game consoles
Microsoft video game consoles
Products introduced in 2001
Products and services discontinued in 2009
Sixth-generation video game consoles
X86-based game consoles
ISO 10303
ISO 10303 is an ISO standard for the computer-interpretable representation and exchange of product manufacturing information. Its exchange file format, the STEP-File, is ASCII-based. Its official title is: Automation systems and integration — Product data representation and exchange. It is known informally as "STEP", which stands for "STandard for the Exchange of Product model data". ISO 10303 can represent 3D objects in computer-aided design (CAD) and related information.
Overview
The international standard's objective is to provide a mechanism capable of describing product data throughout the life cycle of a product, independent of any particular system. The nature of this description makes it suitable not only for neutral file exchange, but also as a basis for implementing and sharing product databases and for archiving.
Typically STEP can be used to exchange data between CAD, computer-aided manufacturing, computer-aided engineering, product data management/enterprise data modeling and other CAx systems.
STEP addresses product data from mechanical and electrical design, geometric dimensioning and tolerancing, analysis and manufacturing, as well as additional information specific to various industries such as automotive, aerospace, building construction, ship, oil and gas, process plants and others.
STEP is developed and maintained by the ISO technical committee TC 184, Automation systems and integration, sub-committee SC 4, Industrial data. Like other ISO and IEC standards, STEP is copyrighted by ISO and is not freely available. However, the 10303 EXPRESS schemas are freely available, as are the recommended practices for implementers.
Other standards developed and maintained by ISO TC 184/SC 4 are:
ISO 13584 PLIB - Parts Library
ISO 15531 MANDATE - Industrial manufacturing management data
ISO 15926 Process Plants including Oil and Gas facilities Life-Cycle data
ISO 18629 PSL - Process specification language
ISO 18876 IIDEAS - Integration of industrial data for exchange, access, and sharing
ISO 22745 Open technical dictionaries and their application to master data
ISO 8000 Data quality
STEP is closely related with PLIB (ISO 13584, IEC 61360).
History
The basis for STEP was the Product Data Exchange Specification (PDES), a data definition effort intended to improve interoperability between manufacturing companies and thereby improve productivity. It was initiated during the mid-1980s and was submitted to ISO in 1988.
The evolution of STEP can be divided into four release phases. The development of STEP started in 1984 as a successor of IGES, SET and VDA-FS. The initial plan was that "STEP shall be based on one single, complete, implementation-independent Product Information Model, which shall be the Master Record of the integrated topical and application information models". But because of the complexity, the standard had to be broken up into smaller parts that could be developed, balloted and approved separately. In 1994/95 ISO published the initial release of STEP as international standards (IS) with the parts 1, 11, 21, 31, 41, 42, 43, 44, 46, 101, AP 201 and AP 203. Today AP 203 Configuration controlled 3D design is still one of the most important parts of STEP and is supported by many CAD systems for import and export.
In the second phase the capabilities of STEP were widely extended, primarily for the design of products in the aerospace, automotive, electrical, electronic, and other industries. This phase ended in the year 2002 with the second major release, including the STEP parts AP 202, AP 209, AP 210, AP 212, AP 214, AP 224, AP 225, AP 227, AP 232. Basic harmonization between the APs especially in the geometric areas was achieved by introducing the Application Interpreted Constructs (AIC, 500 series).
A major problem with the monolithic APs of the first and second releases is that they are too big, have too much overlap with each other, and are not sufficiently harmonized. These deficits led to the development of the STEP modular architecture (400 and 1000 series). This activity was primarily driven by new APs covering additional life-cycle phases such as early requirements analysis (AP 233) and maintenance and repair (AP 239), and also new industrial areas (AP 221, AP 236). New editions of the previous monolithic APs on a modular basis have been developed (AP 203, AP 209, AP 210). The publication of these new editions coincided with the release in 2010 of the new ISO product SMRL, the STEP Module and Resource Library, which contains all STEP resource parts and application modules on a single CD. The SMRL is revised frequently and is available at a much lower cost than purchasing all the parts separately.
In December 2014, ISO published the first edition of a new major Application Protocol, AP 242 Managed model based 3d engineering, that combined and replaced the following previous APs in an upward compatible way:
AP 201, Explicit draughting. Simple 2D drawing geometry related to a product. No association, no assembly hierarchy.
AP 202, Associative draughting. 2D/3D drawing with association, but no product structure.
AP 203, Configuration controlled 3D designs of mechanical parts and assemblies.
AP 204, Mechanical design using boundary representation
AP 214, Core data for automotive mechanical design processes
AP 242 was created by merging the following two Application protocols:
AP 203, Configuration controlled 3D designs of mechanical parts and assemblies (as used by the Aerospace Industry).
AP 214, Core data for automotive mechanical design processes (used by the Automotive Industry).
In addition AP 242 edition 1 contains extensions and significant updates for:
Geometric dimensioning and tolerancing
Kinematics
Tessellation
Two APs have been modified to be directly based on AP 242, and thus became supersets of it:
AP 209, Composite and metallic structural analysis and related design
AP 210, Electronic assembly, interconnect and packaging design. This is the most complex and sophisticated STEP AP.
AP 242 edition 2, published in April 2020, extends the edition 1 domain with the description of electrical wire harnesses and introduces an extension of STEP modeling and implementation methods based on SysML and systems engineering, with an optimized XML implementation method.
This new edition also contains enhancements to 3D dimensioning and tolerancing and to composite design. New functionality is also introduced, such as:
curved triangles
textures
levels of detail (LODs)
color on vertex
3D scanner data support
persistent IDs on geometry
additive manufacturing
Structure
STEP is divided into many parts, grouped into
Environment
Parts 1x: Description methods: EXPRESS, EXPRESS-X
Parts 2x: Implementation methods: STEP-File, STEP-XML, SDAI
Parts 3x: Conformance testing methodology and framework
Integrated data models
The Integrated Resources (IR), consisting of
Parts 4x and 5x: Integrated generic resources
Parts 1xx: Integrated application resources
PLIB ISO 13584-20 Parts library: Logical model of expressions
Parts 5xx: Application Interpreted Constructs (AIC)
Parts 1xxx: Application Modules (AM)
Top parts
Parts 2xx: Application Protocols (AP)
Parts 3xx: Abstract Test Suites (ATS) for APs
Parts 4xx: Implementation modules for APs
In total, STEP consists of several hundred parts, and every year new parts are added or new revisions of older parts are released. This makes STEP the biggest standard within ISO.
Each part has its own scope and introduction.
The APs are the top parts. They cover a particular application and industry domain and hence are most relevant for users of STEP. Every AP defines one or several Conformance Classes, suitable for a particular kind of product or data exchange scenario. To provide a better understanding of the scope, information requirements and usage scenarios an informative application activity model (AAM) is added to every AP, using IDEF0.
STEP is primarily defining data models using the EXPRESS modeling language. Application data according to a given data model can be exchanged either by a STEP-File, STEP-XML or via shared database access using SDAI.
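For illustration only, here is a minimal sketch of what these artifacts look like: a tiny EXPRESS entity definition and a matching STEP-File (ISO 10303-21, "Part 21") instance. The schema name, entity name and file contents are invented for this example and are not taken from any published application protocol.

A minimal EXPRESS schema:

SCHEMA example_schema;
ENTITY point_3d;
  x : REAL;
  y : REAL;
  z : REAL;
END_ENTITY;
END_SCHEMA;

A corresponding STEP-File (Part 21) exchange structure:

ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('minimal example'), '2;1');
FILE_NAME('example.stp', '2024-01-01T00:00:00', ('author'), ('organization'), 'preprocessor', 'originating system', '');
FILE_SCHEMA(('EXAMPLE_SCHEMA'));
ENDSEC;
DATA;
#10 = POINT_3D(0.0, 0.0, 0.0);
ENDSEC;
END-ISO-10303-21;

In a real exchange the entity types come from the AIM/MIM schema of the AP in use, the FILE_SCHEMA header names that schema, and the DATA section typically contains thousands of interlinked instances.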
Every AP defines a top data model to be used for data exchange, called the Application Interpreted Model (AIM) or, in the case of a modular AP, the Module Interpreted Model (MIM). These interpreted models are constructed by choosing generic objects defined in lower-level data models (4x, 5x, 1xx, 5xx) and adding the specializations needed for the particular application domain of the AP. The common generic data models are the basis for interoperability between APs for different kinds of industries and life cycle stages.
In APs with several Conformance Classes the top data model is divided into subsets, one for each Conformance Class.
The requirements of a conformant STEP application are:
implementation of either a preprocessor or a postprocessor or both,
using one of the STEP implementation methods STEP-File, STEP-XML or SDAI for the AIM/MIM data model and
supporting one or several conformance classes of an AP.
Originally every AP was required to have a companion Abstract Test Suite (ATS) (e.g. ATS 303 for AP 203), providing test purposes, verdict criteria and abstract test cases together with example STEP-Files. But because the development of an ATS was very expensive and inefficient, this requirement was dropped and replaced by the requirement to have an informal validation report and recommended practices for how to use the AP. Today the recommended practices are a primary source for those going to implement STEP.
The Application Reference Model (ARM) is the mediator between the AAM and the AIM/MIM. Originally its purpose was only to document high-level application objects and the basic relations between them. IDEF1X diagrams documented the ARM of early APs in an informal way. The ARM objects, their attributes and relations are mapped to the AIM so that it is possible to implement an AP. As APs got more and more complex, formal methods were needed to document the ARM, and so EXPRESS, which was originally developed only for the AIM, was also used for the ARM. Over time these ARM models got very detailed, to the point that some implementations preferred to use the ARM instead of the formally required AIM/MIM. Today a few APs have ARM-based exchange formats standardized outside of ISO TC184/SC4:
PLM-Services within the OMG for AP 214
ISO 14649 Data model for computerized numerical controllers for AP 238
PLCS-DEXs within OASIS for AP 239
There is significant overlap between APs because they often need to refer to the same kinds of products, product structures, geometry and more. Because APs are developed by different groups of people, ensuring interoperability between APs at a higher level has always been an issue. The Application Interpreted Constructs (AIC) solved this problem for common specializations of generic concepts, primarily in the geometric area. To address the problem of harmonizing the ARM models and their mapping to the AIM, the STEP modules were introduced. They contain a piece of the ARM, the mapping, and a piece of the AIM, called MIM. Modules are built on each other, resulting in an (almost) directed graph with the AP and conformance class modules at the very top. The modular APs are:
AP 209, Composite and metallic structural analysis and related design
AP 210, Electronic assembly, interconnect and packaging design
AP 221, Functional data and schematic representation of process plants
AP 236, Furniture product data and project data
AP 239, Product life cycle support
AP 242, Managed model based 3d engineering
The modular editions of AP 209 and 210 are explicit extensions of AP 242.
Coverage of STEP Application Protocols (AP)
The STEP APs can be roughly grouped into three main areas: design, manufacturing and life cycle support.
Design APs:
Mechanical:
AP 207, Sheet metal die planning and design
AP 209, Composite and metallic structural analysis and related design
AP 235, Materials information for the design and verification of products
AP 236, Furniture product data and project data
AP 242, Managed model based 3d engineering
Connectivity oriented electric, electronic and piping/ventilation:
AP 210, Electronic assembly, interconnect and packaging design. The most complex and sophisticated STEP AP.
AP 212, Electrotechnical design and installation.
AP 227, Plant spatial configuration
Ship:
AP 215, Ship arrangement
AP 216, Ship moulded forms
AP 218, Ship structures
Others:
AP 225, Building elements using explicit shape representation
AP 232, Technical data packaging core information and exchange
AP 233, Systems engineering data representation
AP 237, Fluid dynamics has been cancelled and the functionality included in AP 209
Manufacturing APs:
AP 219, Dimensional inspection information exchange
AP 223, Exchange of design and manufacturing product information for cast parts
AP 224, Mechanical product definition for process plans using machining features
AP 238 - Application interpreted model for computer numeric controllers
AP 240, Process plans for machined products
Life cycle support APs:
AP 239, Product life cycle support
AP 221, Functional data and schematic representation of process plants
AP 241, Generic Model for Life Cycle Support of AEC Facilities (planned)
The AP 221 model is very similar to the ISO 15926-2 model, but AP 221 follows the STEP architecture whereas ISO 15926-2 has a different architecture. They both use ISO 15926-4 as their common reference data library or dictionary of standard instances. A further development of both standards resulted in Gellish English as a general product modeling language that is application-domain independent and that has been proposed as a new work item (NWI) for a new standard.
The original intent of STEP was to publish one integrated data model for all life cycle aspects. But due to the complexity, the different groups of developers and the different speeds of the development processes, a split into several APs was needed. This splitting, however, made it difficult to ensure that APs are interoperable in overlapping areas. Main areas of harmonization are:
AP 212, 221, 227 and 242 for technical drawings with extension in AP 212 and 221 for schematic functionality
AP 224, 238 and 242 for machining features and for Geometric dimensioning and tolerancing
For complex areas it is clear that more than one AP is needed to cover all major aspects:
AP 212 and 242 for electro-mechanical products such as a car or a transformer; this is addressed by the second edition of AP 242
AP 242, 209 and 210 for electro/electronic-mechanical products
AP 212, 215, 216, 218, 227 for ships
AP 203/214, 224, 240 and 238 for the complete design and manufacturing process of piece parts.
See also
Boundary representation
Geometric dimensioning and tolerancing
ISO 16739 Industry Foundation Classes, widely used instead of ISO 10303-225
Notes
References
External links
Standardization group ISO TC184/SC4
List of STEP parts
STEP Ship team ISO TC 184/SC 4/WG 3/T 23
STEP AP242 Project
The STEP Module Repository on SourceForge
CAx Implementor Forum - information on existing implementations and testing activities
WikiSTEP - tutorial and overview information about STEP and recommended practices
PDES, Inc. - recommended practices and links
Korea STEP Center
Product Life Cycle Support (PLCS) Resources
Application Protocol 224 implementation
Introducing STEP
PDM schema - a common subset extracted from AP 203 and AP 214
BRLCAD and STEP
STEP File Analyzer and Viewer - generate a spreadsheet from a STEP file, also a viewer
STEP programs
STEP File Viewers
Online Step File Viewer
ISO 10303 STEP Standards – STEP Tools, Inc.
STP viewer 2.3 - download
10303
CAD file formats
Modeling languages
COSMOS (telecommunications)
COSMOS (Computer System for Mainframe Operations) was a record-keeping system for main distribution frames (MDFs) in the Bell System, the American Bell Telephone Company–led and, subsequently, AT&T–led system that provided telephone services to much of the United States and Canada from 1877 to 1984.
COSMOS was introduced in the 1970s after MDFs were found to be congested in large urban telephone exchanges. It assigns terminals so jumpers need not be so long, thus leaving more space on the shelves. COSMOS also converts customer service orders into printed work orders for staff who connect the jumpers. COSMOS orders are usually coordinated with RCMAC to ensure that translations match wiring. With good computer records, jumpers are often left in place for reuse when one customer replaces another, resulting in a great reduction in labor.
More modern, modular MDFs called COSMIC (Common System Main Interconnecting) frames were developed around the same time.
See also
Operations support systems
Telephony
Parabola GNU/Linux-libre
Parabola GNU/Linux-libre is an operating system for the i686, x86-64 and ARMv7 architectures. It is based on many of the packages from Arch Linux and Arch Linux ARM, but distinguishes itself from them by offering only free software. It includes the GNU operating system components common to many Linux distributions and the Linux-libre kernel instead of the generic Linux kernel. Parabola is listed by the Free Software Foundation as a completely free operating system, true to its Free System Distribution Guidelines.
Parabola uses a rolling release model like Arch, such that a regular system update is all that is needed to obtain the latest software. Development focuses on system simplicity, community involvement and use of the latest free software packages.
History
Parabola was originally proposed by members of the gNewSense IRC channel in 2009. Members of different Arch Linux communities, especially Spanish-speaking members, started the development and maintenance of the project software and documentation.
On May 20, 2011, the Parabola distribution was recognized as a completely free project by GNU, making it part of the FSF list of free distributions.
In February 2012 Dmitrij D. Czarkoff reviewed Parabola for OSNews. Czarkoff reported that on his test computer a number of hardware problems surfaced due to the lack of free firmware.
Czarkoff also criticized the lack of documentation available for Parabola. He concluded "The overall impression of the Parabola GNU/Linux user experience exactly matches the one of Arch: a system with easy and flexible installation and configuration process and good choice of free software packages. Though the lack of documentation spoils the user experience, the Arch Linux resources can be used to further configure and extend the distribution. If my hardware would allow, I would probably stick with Parabola."
Parabola used to have a mips64el port to provide support for the Chinese Loongson processor used in the Lemote Yeeloong laptop. It was discontinued due to a lack of resources and interest, and the final activity was seen in July 2014.
Robert Rijkhoff reviewed Parabola GNU/Linux for DistroWatch in September 2017.
Differences from Arch and Arch ARM
The project uses only 100% free software from the official Arch repositories for the i686 and x86-64 architectures and official Arch ARM repositories (except [alarm] and [aur]) for the ARMv7. It uses free replacements when possible, such as the Linux-libre kernel instead of the generic Linux kernel.
The filtering process removes around 700 software packages from the repositories that do not meet the requirements of the Free Software Definition for each architecture.
Social contract
Parabola has established a social contract. The Parabola Social Contract commits the project to the free software community (viewing itself as competing only against nonfree systems), to free culture, to democracy, and to following Arch's philosophy. The covenant also incorporates the GNU Free System Distribution Guidelines.
Installation
There are two ways to install Parabola, either from scratch using installable ISO images, or by migrating from an existing Arch-based system. The latter process is almost as simple as switching to the Parabola repositories list.
TalkingParabola
TalkingParabola is a derivative install CD based on TalkingArch. It is a respin of the Parabola ISO modified to include speech and braille output for blind and visually impaired users. TalkingParabola retains all the features of the Parabola live image, but adds speech and braille packages to make it possible for blind and visually impaired users to install Parabola eyes-free.
Mascots
The Parabola community has created a number of cartoon characters for the project. The characters are a gnu and a cat named "Bola", conceived to reflect Parabola's main characteristics: "elegant, minimalist and lightweight".
See also
GNU/Linux naming controversy
GNU variants
List of Pacman-based Linux distributions
References
External links
Parabola GNU/Linux-libre appears on news section on SOLAR website (Software Libre Argentina)
2009 software
Arch-based Linux distributions
Free software only Linux distributions
Pacman-based Linux distributions
Rolling Release Linux distributions
Linux distributions
MOST Bus
MOST (Media Oriented Systems Transport) is a high-speed multimedia network technology optimized by the automotive industry. It can be used for applications inside or outside the car. The serial MOST bus uses a daisy-chain topology or ring topology and synchronous data communication to transport audio, video, voice and data signals via plastic optical fiber (POF) (MOST25, MOST150) or electrical conductor (MOST50, MOST150) physical layers.
MOST technology is used in almost every car brand worldwide, including Audi, BMW, General Motors, Honda, Hyundai, Jaguar, Lancia, Land Rover, Mercedes-Benz, Porsche, Toyota, Volkswagen, SAAB, SKODA, SEAT and Volvo. SMSC and MOST are registered trademarks of Standard Microsystems Corporation (“SMSC”), now owned by Microchip Technology.
Principles of communication
The MOST specification defines the physical and the data link layer as well as all seven layers of the ISO/OSI-Model of data communication. Standardized interfaces simplify the MOST protocol integration in multimedia devices.
For the system developer, MOST is primarily a protocol definition. It provides the user with a standardized interface (API) to access device functionality.
The communication functionality is provided by driver software known as MOST Network Services. MOST Network Services include Basic Layer System Services (Layer 3, 4, 5) and Application Socket Services (Layer 6). They process the MOST protocol between a MOST Network Interface Controller (NIC), which is based on the physical layer, and the API (Layer 7).
MOST networks
A MOST network is able to manage up to 64 MOST devices in a ring configuration. Plug and play functionality allows MOST devices to be easily attached and removed. MOST networks can also be set up in virtual star network or other topologies. Safety critical applications use redundant double ring configurations. Hubs or switches are also possible, but they are not well-established in the automotive sector.
In a MOST network, one device is designated the timing master. Its role is to continuously supply the ring with MOST frames. A preamble is sent at the beginning of the frame transfer. The other devices, known as timing followers, use the preamble for synchronization. The encoding, based on synchronous transfer, allows the timing followers to remain continuously synchronized.
MOST25
MOST25 provides a bandwidth of approximately 23 megabaud for streaming (synchronous) as well as packet (asynchronous) data transfer over an optical physical layer. It is separated into 60 physical channels. The user can select and configure the channels into groups of four bytes each. MOST25 provides many services and methods for the allocation (and deallocation) of physical channels.
MOST25 supports up to 15 uncompressed stereo audio channels with CD-quality sound or up to 15 MPEG-1 channels for audio/video transfer, each of which uses four bytes (four physical channels).
MOST also provides a channel for transferring control information. The system frequency of 44.1 kHz allows a bandwidth of 705.6 kbit/s, enabling 2670 control messages per second to be transferred. Control messages are used to configure MOST devices and configure synchronous and asynchronous data transfer. The system frequency closely follows the CD standard. Reference data can also be transferred via the control channel.
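These figures are mutually consistent; a back-of-the-envelope check using only the numbers given above:

705.6 kbit/s ÷ 44,100 frames/s = 16 bit, i.e. two control-channel bytes per MOST25 frame
705,600 bit/s ÷ 2670 messages/s ≈ 264 bit ≈ 33 bytes per control message, in line with the 32-byte control messages mentioned below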
Some limitations restrict MOST25's effective data transfer rate to about 10 kB/s. Because of the protocol overhead, the application can use only 11 of the 32 bytes in segmented transfer, and a MOST node can only use one third of the control channel bandwidth at any time.
MOST50
MOST50 doubles the bandwidth of a MOST25 system and increases the frame length to 1024 bits. The three established channels (control message channel, streaming data channel, packet data channel) of MOST25 remain the same, but the length of the control channel and the sectioning between the synchronous and asynchronous channels are flexible. Although MOST50 is specified to support both optical and electrical physical layers, the available MOST50 Intelligent Network Interface Controllers (INICs) only support electrical data transfer via a three-copper-conductor configuration, consisting of an unshielded twisted pair (UTP) set and a single additional control line. The control line carries signals sent from the master and is connected to each MOST50 network device in a parallel "single shared bus" configuration. Each MOST50 device in this configuration therefore has five copper-wire connections: the control line and two UTP sets (each containing D+ and D-), one set used for data input (output from the preceding device on the network ring) and the other used for data output to the next device on the ring. As with its fiber counterparts, closing or completing the ring (termination at the originating device) is required for any network operation.
MOST150
MOST150 was introduced in October 2007 and provides a physical layer to implement Ethernet in automobiles. It increases the frame length to 3072 bits, about six times the bandwidth of MOST25. It also integrates an Ethernet channel with adjustable bandwidth in addition to the three established channels (control message channel, streaming data channel, packet data channel) of the other grades of MOST. MOST150 also permits isochronous transfer on the synchronous channel, which makes it possible to transport streaming data whose frequency differs from the one specified by the MOST frame rate.
MOST150’s advanced functions and enhanced bandwidth will enable a multiplex network infrastructure capable of transmitting all forms of infotainment data, including video, throughout an automobile.
Physical layer
The optical transmission layer has been widely used in automotive applications for a number of years. It uses plastic optical fibers (POF) with a core diameter of 1 mm as transmission medium, in combination with light emitting diodes (LEDs) in the red wavelength range as transmitters. MOST25 only uses an optical physical layer. MOST50 and MOST150 support both optical and electrical physical layers.
Main advantages of POF:
high data rate transmission
lighter and more flexible compared to shielded electric data lines
meets strict EMC requirements
does not cause any interference radiation
insensitive to electromagnetic interference irradiation
MOST Cooperation
The MOST Cooperation, a partnership of carmakers, setmakers, system architects, and key component suppliers, was founded in 1998. Their objective was to define and adopt a common multimedia network protocol and application object model. As a result of their efforts, MOST technology has emerged as a global standard for implementing current and future requirements for multimedia networking in automobiles.
Infrastructure
The MOST Cooperation has published specifications for the MOST Bus for a number of years. However, these specifications do not include details on the Data Link Layer. In March 2008, SMSC (formerly OASIS SiliconSystems) - inventor of the first MOST NIC - and Harman/Becker announced that they would open and license their proprietary Data Link Layer intellectual property to other semiconductor companies on a royalty-bearing basis.
At this time MOST chip solutions are available from SMSC, Analog Devices and some FPGA core companies. Development tools are offered by K2L, Ruetz System Solutions, SMSC, Vector Informatik GmbH and Telemotive AG.
Competing standards
BroadR-Reach has taken a share of the automotive communication network market for infotainment, first with 100 Mbit/s, then 1 Gbit/s, and now 10 Gbit/s for domain-controller backbone links.
IEEE 1355 has a slice (combination of network medium and speed) TS-FO-02, for polymer optical fiber operating at 200 megabits/second. The specification is faster than MOST, well tested, and open. However, it lacks industry advocates.
Ethernet is more standard, higher-speed and equally noise-immune, being differential and isolated by transformers. However, Cat 5 cable may be too expensive for automotive applications. Also, standard Cat 5 plugs do not resist vibration: the thin layers of gold rapidly rub off, and corrosion then causes failure. Ruggedized "standard" connectors exist, which hold the connectors steady, but are more expensive. Because of the restrictions of Cat 5, Ethernet over fiber seems to be a possible solution, but Ethernet is asynchronous while MOST is synchronous.
CAN (Controller Area Network), LIN (Local Interconnect Network) and other automotive OBD standards are not suitable because they are too slow to carry video.
FlexRay, also an automotive bus standard, though faster than CAN, is intended for timing critical applications such as drive by wire rather than media.
Notes
References
External links
MOST Cooperation Website
Microchip's MOST Products Page
K2L Products Page
Vector
Ruetz System Solutions Website
Automotive software
Computer buses
Automotive standards
OSGi
The OSGi Alliance (formerly known as the Open Services Gateway initiative) is an open standards organization for computer software founded in March 1999. It originally specified and continues to maintain the OSGi standard. The OSGi specification describes a modular system and a service platform for the Java programming language that implements a complete and dynamic component model, something that does not exist in standalone Java/VM environments.
Description
Applications or components, coming in the form of bundles for deployment, can be remotely installed, started, stopped, updated, and uninstalled without requiring a reboot; management of Java packages/classes is specified in great detail. Application life cycle management is implemented via APIs that allow for remote downloading of management policies. The service registry allows bundles to detect the addition of new services, or the removal of services, and adapt accordingly.
The OSGi specifications have evolved beyond the original focus of service gateways, and are now used in applications ranging from mobile phones to the open-source Eclipse IDE. Other application areas include automobiles, industrial automation, building automation, PDAs, grid computing, entertainment, fleet management and application servers.
In October 2020, the OSGi Alliance announced the transition of the standardization effort to the Eclipse Foundation, subsequent to which it would shut down.
Specification process
The OSGi specification is developed by the members in an open process and made available to the public free of charge under the OSGi Specification License. The OSGi Alliance has a compliance program that is open to members only. As of November 2010, there were seven certified OSGi framework implementations. A separate page lists both certified and non-certified OSGi Specification Implementations, which include OSGi frameworks and other OSGi specifications.
Architecture
OSGi is a Java framework for developing and deploying modular software programs and libraries. Each bundle is a tightly coupled, dynamically loadable collection of classes, jars, and configuration files that explicitly declare their external dependencies (if any).
The framework is conceptually divided into the following areas:
Bundles: Bundles are normal JAR components with extra manifest headers.
Services: The services layer connects bundles in a dynamic way by offering a publish-find-bind model for plain old Java interfaces (POJIs) or plain old Java objects (POJOs).
Services Registry: The application programming interface for management services.
Life-Cycle: The application programming interface for life cycle management (install, start, stop, update, and uninstall) for bundles.
Modules: The layer that defines encapsulation and declaration of dependencies (how a bundle can import and export code).
Security: The layer that handles the security aspects by limiting bundle functionality to pre-defined capabilities.
Execution Environment: Defines what methods and classes are available in a specific platform. There is no fixed list of execution environments, since it is subject to change as the Java Community Process creates new versions and editions of Java. However, the following set is currently supported by most OSGi implementations:
CDC-1.0/Foundation-1.0
CDC-1.1/Foundation-1.1
OSGi/Minimum-1.0
OSGi/Minimum-1.1
JRE-1.1
From J2SE-1.2 up to J2SE-1.6
Bundles
A bundle is a group of Java classes and additional resources equipped with a detailed manifest (MANIFEST.MF file) describing all of its contents, as well as additional services needed to give the included group of Java classes more sophisticated behaviors, to the extent of deeming the entire aggregate a component.
Below is an example of a typical MANIFEST.MF file with OSGi Headers:
Bundle-Name: Hello World
Bundle-SymbolicName: org.wikipedia.helloworld
Bundle-Description: A Hello World bundle
Bundle-ManifestVersion: 2
Bundle-Version: 1.0.0
Bundle-Activator: org.wikipedia.Activator
Export-Package: org.wikipedia.helloworld;version="1.0.0"
Import-Package: org.osgi.framework;version="1.3.0"
The meaning of the contents in the example is as follows:
Bundle-Name: Defines a human-readable name for this bundle; it simply assigns a short name to the bundle.
Bundle-SymbolicName: The only required header, this entry specifies a unique identifier for a bundle, based on the reverse domain name convention (also used by Java packages).
Bundle-Description: A description of the bundle's functionality.
Bundle-ManifestVersion: Indicates the OSGi specification to use for reading this bundle.
Bundle-Version: Assigns a version number to the bundle.
Bundle-Activator: Indicates the class name to be invoked once a bundle is activated.
Export-Package: Expresses which Java packages contained in a bundle will be made available to the outside world.
Import-Package: Indicates which Java packages will be required from the outside world to fulfill the dependencies needed in a bundle.
Life-cycle
A Life Cycle layer adds bundles that can be dynamically installed, started, stopped, updated and uninstalled. Bundles rely on the module layer for class loading but add an API to manage the modules in run time. The life cycle layer introduces dynamics that are normally not part of an application. Extensive dependency mechanisms are used to assure the correct operation of the environment. Life cycle operations are fully protected with the security architecture.
Below is an example of a typical Java class implementing the BundleActivator interface:
package org.wikipedia;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
public class Activator implements BundleActivator {

    private BundleContext context;

    @Override
    public void start(BundleContext context) throws Exception {
        System.out.println("Starting: Hello World");
        this.context = context;
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        System.out.println("Stopping: Goodbye Cruel World");
        this.context = null;
    }
}
Services
Standard services
The OSGi Alliance has specified many services. Services are specified by a Java interface. Bundles can implement this interface and register the service with the Service Registry. Clients of the service can find it in the registry, or react to it when it appears or disappears.
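As an illustration of the publish-find-bind model, the following minimal sketch registers a service and then looks it up again through the service registry. The Greeter interface is invented for this example, the lambda requires Java 8 or later, and the generically typed registerService and getServiceReference methods assume a relatively recent release of the core framework API (roughly R4.3 or later); in practice the publishing and consuming code would usually live in different bundles, or use a ServiceTracker or Declarative Services component instead of a manual lookup in start().

package org.wikipedia;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class GreeterActivator implements BundleActivator {

    // Hypothetical service interface, used only for this example.
    public interface Greeter {
        String greet(String name);
    }

    @Override
    public void start(BundleContext context) throws Exception {
        // Publish an implementation of Greeter in the service registry.
        context.registerService(Greeter.class, name -> "Hello, " + name, null);

        // Find the service again through the registry and use it.
        ServiceReference<Greeter> reference = context.getServiceReference(Greeter.class);
        Greeter greeter = context.getService(reference);
        System.out.println(greeter.greet("OSGi"));
        context.ungetService(reference);
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        // Services registered by this bundle are unregistered automatically when it stops.
    }
}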
The specified services fall into three groups: system services, protocol services, and miscellaneous services.
Organization
The OSGi Alliance was founded by Ericsson, IBM, Motorola, Sun Microsystems and others in March 1999. Before incorporating as a nonprofit corporation, it was called the Connected Alliance.
Among its members are more than 35 companies from quite different business areas, for example Adobe Systems, Deutsche Telekom, Hitachi, IBM, Liferay, Makewave, NEC, NTT, Oracle, Orange S.A., ProSyst, Salesforce.com, Siemens, Software AG and TIBCO Software.
The Alliance has a board of directors that provides the organization's overall governance. OSGi officers have various roles and responsibilities in supporting the alliance. Technical work is conducted within Expert Groups (EGs) chartered by the board of directors, and non-technical work is conducted in various working groups and committees. The technical work conducted within Expert Groups includes developing specifications, reference implementations, and compliance tests. These Expert Groups have produced five major releases of the OSGi specifications.
Dedicated Expert Groups exist for the enterprise, mobile, vehicle and the core platform areas.
The Enterprise Expert Group (EEG) is the newest EG and is addressing Enterprise / Server-side applications.
In November 2007 the Residential Expert Group (REG) started to work on specifications to remotely manage residential/home-gateways.
In October 2003, Nokia, Motorola, IBM, ProSyst and other OSGi members formed a Mobile Expert Group (MEG) that would specify a MIDP-based service platform for the next generation of smart mobile phones, addressing some of the needs that CLDC cannot manage - other than CDC. MEG became part of OSGi with R4.
Specification versions
OSGi Release 1 (R1): May 2000
OSGi Release 2 (R2): October 2001
OSGi Release 3 (R3): March 2003
OSGi Release 4 (R4): October 2005 / September 2006
Core Specification (R4 Core): October 2005
Mobile Specification (R4 Mobile / JSR-232): September 2006
OSGi Release 4.1 (R4.1): May 2007 (AKA JSR-291)
OSGi Release 4.2 (R4.2): September 2009
Enterprise Specification (R4.2): March 2010
OSGi Release 4.3 (R4.3): April 2011
Core: April 2011
Compendium and Residential: May 2012
OSGi Release 5 (R5): June 2012
Core and Enterprise: June 2012
OSGi Release 6 (R6): June 2015
Core: June 2015
OSGi Release 7 (R7): April 2018
Core and Compendium: April 2018
OSGi Release 8 (R8): December 2020
Related standards
MHP / OCAP
Universal Plug and Play (UPnP)
DPWS
ITU-T G.hn
LonWorks
CORBA
CEBus
EHS (KNX) / CECED CHAIN
Java Management Extensions
Projects using OSGi
Adobe Experience Manager - an enterprise Content Management System
Apache Aries - Blueprint Container implementations and extensions of application-focused specifications defined by OSGi Enterprise Expert Group
Apache Sling - OSGi-based applications layer for JCR content repositories
Atlassian Confluence and JIRA - the plug-in architecture for this enterprise wiki and issue tracker uses OSGi
Business Intelligence and Reporting Tools (BIRT) Project - Open source reporting engine
Cytoscape - an open source bioinformatics software platform (as of version 3.0)
DataNucleus - open source data services and persistence platform in service-oriented architectures
DDF - Distributed Data Framework provides free and open-source data integration
Dotcms - open source Web Content Management
EasyBeans - open source EJB 3 container
Eclipse - open source IDE and rich client platform
iDempiere - an OSGi implementation of the open source ERP branch GlobalQSS Adempiere361, originally started by Low Heng Sin
Eclipse Virgo - open source microkernel-based server constructed of OSGi bundles and supporting OSGi applications
GlassFish (v3) - application server for Java EE
Fuse ESB - a productized and supported release of ServiceMix 4.
Integrated Genome Browser - an open source, desktop GUI for visualizing, exploring, and analyzing genome data
IntelliJ - Java IDE and rich client platform with free community edition
JBoss - Red Hat's JBoss Application Server
JOnAS 5 - open source Java EE 5 application server
Joram - open source messaging server (JMS, MQTT, AMQP, etc.)
JOSSO 2 - Atricore's open source standards-based Identity and Access Management Platform
Liferay DXP - open source and commercial enterprise portal platform that uses OSGi from version 7.x onward.
Lucee 5 - open source CFML Web Application Server
NetBeans - open source IDE and rich client platform
Nuxeo - open source ECM Service Platform
Open Daylight Project - Project with the goal of accelerating the adoption of software-defined networking
OpenEJB - open source OSGi-enabled EJB 3.0 container that can be run both in standalone or embedded mode
openHAB - open source home automation software
OpenWorm - open source software simulation of C. elegans, via the dedicated Geppetto modular platform
Akana - API Gateway, Portal and Analytics server from Akana (formerly SOA Software)
SpringSource dm Server - open source microkernel-based server constructed of OSGi bundles and supporting OSGi applications
Weblogic - Oracle Weblogic Application Server
WebSphere - IBM Websphere JEE Application Server
WebMethods - SoftwareAG WebMethods
WSO2 Carbon - Base platform for WSO2's enterprise-grade Open source middleware stack
Current framework implementations
See also
OSGi Specification Implementations
References
Further reading
External links
Oredev 2008 - Architecture - OSGi Now and Tomorrow
Eclipse Equinox Article Index - Articles on an open source OSGi implementation
Standards organizations in the United States
Articles with example Java code
Free software programmed in Java (programming language)
1999 establishments in the United States
Embedded systems
Organizations based in California
Application software
An application program (application or app for short) is a computer program designed to carry out a specific task other than one relating to the operation of the computer itself, typically to be used by end-users. Word processors, media players, and accounting software are examples. The collective noun "application software" refers to all applications collectively. The other principal classifications of software are system software, relating to the operation of the computer, and utility software ("utilities").
Applications may be bundled with the computer and its system software or published separately and may be coded as proprietary, open-source, or projects. The term "app" often refers to applications for mobile devices such as phones.
Terminology
In information technology, an application (app), application program or application software is a computer program designed to help people perform an activity. Depending on the activity for which it was designed, an application can manipulate text, numbers, audio, graphics, and a combination of these elements. Some application packages focus on a single task, such as word processing; others, called integrated software include several applications.
User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, audio, graphics, and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is.
The delineation between system software such as operating systems and application software is not exact, however, and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separable piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player, or microwave oven. The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an app: see Application Portfolio Management.
Metonymy
The word "application" used as an adjective is not restricted to the "of or pertaining to application software" meaning. For example, concepts such as application programming interface (API), application server, application virtualization, application lifecycle management and portable application apply to all computer programs alike, not just application software.
Apps and killer apps
Some applications are available in versions for several different platforms; others only work on one and are thus called, for example, a geography application for Microsoft Windows, or an Android application for education, or a Linux game. Sometimes a new and popular application arises which only runs on one platform, increasing the desirability of that platform. This is called a killer application or killer app. For example, VisiCalc was the first modern spreadsheet software for the Apple II and helped sell the then-new personal computers into offices. For Blackberry it was their email software.
In recent years, the shortened term "app" (coined in 1981 or earlier) has become popular to refer to applications for mobile devices such as smartphones and tablets, the shortened form matching their typically smaller scope compared to applications on PCs. Even more recently, the shortened version is used for desktop application software as well.
Classification
There are many different ways to classify application software.
From a legal point of view, application software is mainly classified with a black-box approach, in relation to the rights of its final end-users or subscribers (with possible intermediate and tiered subscription levels).
Software applications are also classified with respect to the programming language in which the source code is written or executed, and with respect to their purpose and outputs.
By property and use rights
Application software is usually divided into two main classes: closed-source versus open-source software applications, and free versus proprietary software applications.
Proprietary software is placed under exclusive copyright, and a software license grants limited usage rights. The open-closed principle states that software may be "open only for extension, but not for modification". Such applications can only receive add-ons from third parties.
Free and open-source software may be run, distributed, sold, or extended for any purpose and, being open, may be modified or reverse-engineered in the same way.
FOSS software applications released under a free license may be perpetual and also royalty-free. However, the owner, the holder, or a third-party enforcer of any right (copyright, trademark, patent, or ius in re aliena) is entitled to add exceptions, limitations, time decays, or expiry dates to the license terms of use.
Public-domain software is a type of FOSS that is royalty-free and, openly or reservedly, can be run, distributed, modified, reverse-engineered, republished, or used to create derivative works without any copyright attribution and therefore without risk of revocation. It can even be sold, but without transferring the public-domain property to other single subjects. Public-domain software can be released under an (un)licensing legal statement, which enforces those terms and conditions for an indefinite duration (for a lifetime, or forever).
By coding language
Since the development and near-universal adoption of the web, an important distinction has emerged between web applications — written with HTML, JavaScript and other web-native technologies and typically requiring one to be online and running a web browser — and the more traditional native applications written in whatever languages are available for one's particular type of computer. There has been a contentious debate in the computing community regarding web applications replacing native applications for many purposes, especially on mobile devices such as smartphones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated.
By purpose and output
Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread, because they are general purpose, for example word processors or databases. Vertical applications are niche products, designed for a particular type of industry or business, or department within an organization. Integrated suites of software will try to handle every specific aspect possible of, for example, manufacturing or banking work, or accounting, or customer service.
There are many types of application software:
An application suite consists of multiple applications bundled together. They usually have related functions, features and user interfaces, and may be able to interact with each other, e.g. open each other's files. Business applications often come in suites, e.g. Microsoft Office, LibreOffice and iWork, which bundle together a word processor, a spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.
Enterprise software addresses the needs of an entire organization's processes and data flows, across several departments, often in a large distributed environment. Examples include enterprise resource planning systems, customer relationship management (CRM) systems and supply chain management software. Departmental Software is a sub-type of enterprise software with a focus on smaller organizations or groups within a large organization. (Examples include travel expense management and IT Helpdesk.)
Enterprise infrastructure software provides common capabilities needed to support enterprise software systems. (Examples include databases, email servers, and systems for managing networks and security.)
Application platform as a service (aPaaS) is a cloud computing service that offers development and deployment environments for application services.
Information worker software lets users create and manage information, often for individual projects within a department, in contrast to enterprise management. Examples include time management, resource management, analytical, collaborative and documentation tools. Word processors, spreadsheets, email and blog clients, personal information system, and individual media editors may aid in multiple information worker tasks.
Content access software is used primarily to access content without editing, but may include software that allows for content editing. Such software addresses the needs of individuals and groups to consume digital entertainment and published digital content. (Examples include media players, web browsers, and help browsers.)
Educational software is related to content access software, but has the content or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.
Simulation software simulates physical or abstract systems for either research, training or entertainment purposes.
Media development software generates print and electronic media for others to consume, most often in a commercial or educational setting. This includes graphic-art software, desktop publishing software, multimedia development software, HTML editors, digital-animation editors, digital audio and video composition, and many others.
Product engineering software is used in developing hardware and software products. This includes computer-aided design (CAD), computer-aided engineering (CAE), computer language editing and compiling tools, integrated development environments, and application programmer interfaces.
Entertainment Software can refer to video games, screen savers, programs to display motion pictures or play recorded music, and other forms of entertainment which can be experienced through use of a computing device.
By platform
Applications can also be classified by computing platform such as a desktop application for a particular operating system, delivery network such as in cloud computing and Web 2.0 applications, or delivery devices such as mobile apps for mobile devices.
The operating system itself can be considered application software when performing simple calculating, measuring, rendering, and word processing tasks not used to control hardware via command-line interface or graphical user interface. This does not include application software bundled within operating systems such as a software calculator or text editor.
Information worker software
Accounting software
Data management
Contact manager
Spreadsheet
Database software
Documentation
Document automation
Word processor
Desktop publishing software
Diagramming software
Presentation software
Email
Blog software
Enterprise resource planning
Financial software
Day trading software
Banking software
Clearing systems
Arithmetic software
Field service management
Workforce management software
Project management software
Calendaring software
Employee scheduling software
Workflow software
Reservation systems
Entertainment software
Screen savers
Video games
Arcade games
Console games
Mobile games
Personal computer games
Software art
Demo
64K intro
Educational software
Classroom management
Reference software
Sales readiness software
Survey management
Encyclopedia software
Enterprise infrastructure software
Artificial Intelligence for IT Operations (AIOps)
Business workflow software
Database management system (DBMS)
Digital asset management (DAM) software
Document management software
Geographic information system (GIS)
Simulation software
Computer simulators
Scientific simulators
Social simulators
Battlefield simulators
Emergency simulators
Vehicle simulators
Flight simulators
Driving simulators
Simulation games
Vehicle simulation games
Media development software
3D computer graphics software
Animation software
Graphic art software
Raster graphics editor
Vector graphics editor
Image organizer
Video editing software
Audio editing software
Digital audio workstation
Music sequencer
Scorewriter
HTML editor
Game development tool
Product engineering software
Hardware engineering
Computer-aided engineering
Computer-aided design (CAD)
Computer-aided manufacturing (CAM)
Finite element analysis
Software engineering
Compiler software
Integrated development environment
Compiler
Linker
Debugger
Version control
Game development tool
License manager
See also
Software development
Mobile app
Web application
References
External links
Classes of computers
Computers can be classified, or typed, in many ways. Some common classifications of computers are given below.
Classes by purpose
Microcomputers (personal computers)
Microcomputers became the most common type of computer in the late 20th century. The term “microcomputer” was introduced with the advent of systems based on single-chip microprocessors. The best-known early system was the Altair 8800, introduced in 1975. The term "microcomputer" has practically become an anachronism.
These computers include:
Desktop computers – A case put under or on a desk. The display may be optional, depending on use. The case size may vary, depending on the required expansion slots. Very small computers of this kind may be integrated into the monitor.
Rackmount computers – The cases of these computers fit into 19-inch racks, and may be space-optimized and very flat. A dedicated display, keyboard, and mouse may not exist, but a KVM switch or built-in remote control (via LAN or other means) can be used to gain console access.
In-car computers (carputers) – Built into automobiles, for entertainment, navigation, etc.
Laptops and notebook computers – Portable and all in one case.
Tablet computer – Like laptops, but with a touch-screen, entirely replacing the physical keyboard.
Smartphones, smartbooks, and palmtop computers – Small handheld personal computers with limited hardware specifications.
Programmable calculator – Like small handhelds, but specialized in mathematical work.
Video game consoles – Fixed computers built specifically for entertainment purposes.
Handheld game consoles – The same as game consoles, but small and portable.
Minicomputers (mid-range computers)
Minicomputers (colloquially, minis) are a class of multi-user computers that lie in the middle range of the computing spectrum, in between the smallest mainframe computers and the largest single-user systems (microcomputers or personal computers). The term supermini computer or simply supermini was used to distinguish more powerful minicomputers that approached mainframes in capability. Superminis (such as the DEC VAX or Data General Eclipse MV/8000) were usually 32-bit at a time when most minicomputers (such as the PDP-11 or Data General Eclipse or IBM Series/1) were 16-bit. These traditional minicomputers in the last few decades of the 20th century, found in small to medium-sized businesses, laboratories and embedded in (for example) hospital CAT scanners, often would be rack-mounted and connect to one or more terminals or tape/card readers, like mainframes and unlike most personal computers, but require less space and electrical power than a typical mainframe.
Mainframe computers
The term mainframe computer was created to distinguish the traditional, large, institutional computer intended to service multiple users from the smaller, single-user machines. These computers are capable of handling and processing very large amounts of data quickly. Mainframe computers are used in large institutions such as government, banks, and large corporations.
They are measured in MIPS (million instructions per second) and can respond to hundreds of millions of users at a time.
Supercomputers
A supercomputer is focused on performing tasks involving intense numerical calculations such as weather forecasting, fluid dynamics, nuclear simulations, theoretical astrophysics, and complex scientific computations. A supercomputer is a computer that is at the front-line of current processing capacity, particularly speed of calculation. The term supercomputer itself is rather fluid, and the speed of today's supercomputers tends to become typical of tomorrow's ordinary computer. Supercomputer processing speeds are measured in floating-point operations per second, or FLOPS. An example of a floating-point operation is the calculation of mathematical equations in real numbers. In terms of computational capability, memory size and speed, I/O technology, and topological issues such as bandwidth and latency, supercomputers are the most powerful, are very expensive, and are not cost-effective just for performing batch or transaction processing. These computers were developed in the 1970s and are the fastest and highest-capacity computers.
Classes by function
Servers
Server usually refers to a computer that is dedicated to providing one or more services. A server is expected to be reliable (e.g. error-correction of RAM; redundant cooling; self-monitoring, RAID), fit for running for several years, and giving useful diagnosis in case of an error. For even greater security, the server may be mirrored. Many smaller servers are actually personal computers that have been dedicated to providing services for other computers.
A database server is a server which uses a database application that provides database services to other computer programs or to computers. Database management systems (DBMSs) frequently provide database-server functionality, and some database management systems (such as MySQL) rely exclusively on the client–server model for database access while others (such as SQLite) are meant for using as an embedded database. Users access a database server either through a "front end" running on the user's computer – which displays requested data – or through the "back end", which runs on the server and handles tasks such as data analysis and storage.
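As a minimal sketch of the embedded case mentioned above, the following C program uses SQLite's C API (sqlite3_open, sqlite3_exec, sqlite3_close) to run SQL inside the application process itself, with no separate database server; the file name example.db and the notes table are invented for the example, and error handling is kept to a bare minimum.

#include <stdio.h>
#include <sqlite3.h>

/* Embedded-database sketch: the SQLite engine runs inside this process;
   there is no separate database server and no network connection. */
int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("example.db", &db) != SQLITE_OK) {  /* file name is illustrative */
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    const char *sql =
        "CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT);"
        "INSERT INTO notes(body) VALUES ('stored by the embedded engine');";

    if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}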
A file server does not normally perform computational tasks or run programs on behalf of its client workstations but manages and stores a large collection of computer files. The crucial function of a file server is storage. File servers are commonly found in schools and offices, where users use a local area network to connect their client computers and use Network-attached storage (NAS) systems to provide data access.
A web server is a server that can satisfy client requests on the World Wide Web. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols. The primary function of a web server is to store, process and deliver web pages to clients. The communication between client and server takes place using the Hypertext Transfer Protocol (HTTP). Pages delivered are most frequently HTML documents, which may include images, style sheets and scripts in addition to the text content.
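As a rough illustration of that request/response cycle (a toy sketch, not a real web server), the following C program assumes a POSIX socket API, listens on the arbitrarily chosen port 8080, reads whatever HTTP request arrives, and always answers with the same small HTML page.

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Toy HTTP responder: accept a connection, read the request,
   send back one fixed HTML page, close, repeat. */
int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                   /* port chosen arbitrarily */

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 8);

    for (;;) {
        int client = accept(srv, NULL, NULL);
        char request[4096];
        read(client, request, sizeof request - 1); /* request contents are ignored here */

        const char *reply =
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/html\r\n"
            "Content-Length: 20\r\n"
            "\r\n"
            "<h1>Hello, web!</h1>";
        write(client, reply, strlen(reply));       /* deliver the page */
        close(client);
    }
}

A production web server would add request parsing, concurrency, caching, logging, TLS and access control on top of this basic accept-read-respond loop.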
A terminal server enables organizations to connect devices with an RS-232, RS-422 or RS-485 serial interface to a local area network (LAN). Products marketed as terminal servers can be very simple devices that do not offer any security functionality, such as data encryption and user authentication. These provide GUI sessions that can be used by client PCs that work somewhat like remote controls. Only the screen (and audio) output is shown on the client. The GUI applications run on the server, and data (such as files) would be stored on the same LAN, thus avoiding problems should a client PC be damaged or stolen.
A server may run several virtual machines (VMs) for different activities, supplying the same environment to each VM as if it ran on dedicated hardware. Different operating systems (OS) can therefore be run at the same time. This technology approach needs special hardware support to be useful and was first the domain of mainframes and other large computers. Nowadays, most personal computers are equipped for this task, but for long-term operation or critical systems, specialized server hardware may be needed.
Another approach is to implement VMs on the operating system level, so all VMs run on the same OS instance (or incarnation), but are fundamentally separated to not interfere with each other.
Workstations
Workstations are computers that are intended to serve one user and may contain special hardware enhancements not found on a personal computer. By the mid-1990s, personal computers had reached the processing capabilities of minicomputers and workstations. Also, with the release of multi-tasking systems such as OS/2, Windows NT and Linux, the operating systems of personal computers could do the job of this class of machines. Today, the term is used to describe desktop PCs with high-performance hardware. Such hardware is usually aimed at a professional, rather than enthusiast, market (e.g. dual-processor motherboards, error-correcting memory, professional graphics cards).
Information appliances
Information appliances are computers specially designed to perform a specific "user-friendly" function—such as editing text, playing music, photography, videography etc. The term is most commonly applied to battery-operated mobile devices, though there are also wearable devices.
Embedded computers
Embedded computers are computers that are a part of a machine or device. Embedded computers generally execute a program that is stored in non-volatile memory and is only intended to operate a specific machine or device. Embedded computers are very common. The majority are microcontrollers. Embedded computers are typically required to operate continuously without being reset or rebooted, and once employed in their task the software usually cannot be modified. An automobile may contain a number of embedded computers; however, a washing machine or DVD player would contain only one microcontroller. Embedded computers are chosen to meet the requirements of the specific application, and most are slower and cheaper than CPUs found in a personal computer.
Classes by usage
Public computer
Public computers are open for public uses, possibly as an Interactive kiosk. There are many places where one can use them, such as cybercafes, schools and libraries.
They are normally fire-walled and restricted to run only their pre-installed software. The operating system is difficult to change and/or resides on a file server. For example, "thin client" machines in educational establishments may be reset to their original state between classes. Public computers are generally not expected to keep an individual's data files.
Personal computer
A personal computer has one user who may also be the owner (although the term has also come to mean any computer hardware somewhat like the original IBM PC, irrespective of how it is used). This user often may use all hardware resources, has complete access to any part of the computer and has rights to install/remove software. Personal computers normally store personal files, and often the owner/user is responsible for routine maintenance such as removing unwanted files and virus-scanning. Some computers in a business setting are for one user but are also served by staff with protocols to ensure proper maintenance.
Shared computer
These are computers where different people might log on at different times; unlike public computers, they would have usernames and passwords assigned on a long-term basis, with the files they see and the computer's settings adjusted to their particular account. Often the important data files will reside on a central file server, so a person could log onto different computers yet still see the same files. The computer (or workstation) might be a "thin client" or X terminal, otherwise it may have its own disk for some or all system files, but usually will need to be networked to the rest of the system for full functionality. Such systems normally require a system administrator to set up and maintain the hardware and software.
Display computer
Computers that are used just to display selected material (usually audio-visual, or simple slide shows) in a shop, meeting or trade show. These computers may have more capabilities than they are being used for; they are likely to have WiFi and so be capable of Internet access, but are rarely firewalled (although they may have restricted port access or be monitored in some way). Such computers are used and maintained as appliances, and not normally used as the primary store for important files.
Classed by generation of computer technology
The history of computing hardware is often used to reference the different generations of computing devices:
First generation computers (1940-1955): It used vacuum tubes such as the 6J6, or specially designed tubes, or even mechanical arrangements; these computers were relatively slow and energy-hungry, and the earliest ones were less flexible in their programmability.
Second generation computers (1956-1963): It used discrete transistors, and so were smaller and consumed less power.
Third generation computers (1964-1970): It used Integrated Circuits (ICs), the main difference between hardware in computers of the 1960s and today being the density of transistors in each IC (beginning with Small Scale Integration chips like the Transistor-transistor logic (TTL) SN7400 gates with 20 transistors, through Medium Scale Integration and Large Scale Integration to Very-large-scale integration (VLSI) with over ten billion transistors in a single silicon-based IC "chip").
Fourth generation computers (1971-present): It uses microprocessors, as millions of ICs were built onto a single silicon-based chip. Since then, the form factor of computers has shrunk, task processing and graphics rendering have improved, and computers have become more battery-powered with the advent of personal mobile devices such as laptops, tablets, and smartphones.
See also
List of computer size categories
Bell's law of computer classes
Analog computers
Feng's classification
Flynn's taxonomy
References
External links
Four types of Computers
Parrot OS
Parrot OS is a Linux distribution based on Debian with a focus on security, privacy, and development.
Core
Parrot is based on Debian's "testing" branch, with a Linux 5.10 kernel. It follows a rolling release development model.
The desktop environments are MATE and KDE, and the default display manager is LightDM.
The system is certified to run on devices which have a minimum of 256MB of RAM, and it is suitable for both 32-bit (i386) and 64-bit (amd64) processor architectures. Moreover, the project is available for ARMv7 (armhf) architectures.
In June 2017, the Parrot Team announced they were considering changing from Debian to Devuan, mainly because of problems with systemd.
As of January 21st, 2019, the Parrot team has begun to phase out the development of their 32-bit (i386) ISO.
In August 2020, Parrot OS added official support for the lightweight Xfce desktop.
Editions
Parrot has multiple editions that are based upon Debian, with various desktop environments available.
Parrot Security
Parrot is intended to provide a suite of penetration testing tools to be used for attack mitigation, security research, forensics, and vulnerability assessment.
It is designed for penetration testing, vulnerability assessment and mitigation, computer forensics and anonymous web browsing.
Parrot Home
Parrot Home is the base edition of Parrot designed for daily use, and it targets regular users who need a "lightweight" system on their laptops or workstations.
The distribution is useful for daily work. Parrot Home also includes programs to chat privately with people, encrypt documents, or browse the internet anonymously. The system can also be used as a starting point to build a system with a custom set of security tools.
Parrot ARM
Parrot ARM is a lightweight Parrot release for embedded systems. It is currently available for Raspberry Pi devices.
Parrot OS Tools
There are multiple tools in Parrot OS which are specially designed for security researchers and are related to penetration testing. A few of them are listed below; more can be found on the official website.
Tor
Tor, also known as The Onion Router, is a distributed network that anonymizes Internet browsing. It is designed in a way that the IP address of the client using Tor is hidden from the server that the client is visiting. Also, the data and other details are hidden from the client's Internet Service Provider (ISP). The Tor network uses hops to encrypt the data between the client and the server. The Tor network and Tor Browser are pre-installed and configured in Parrot OS.
OnionShare
OnionShare is an open-source utility that can be used to share files of any size over the Tor network securely and anonymously. OnionShare generates a long random URL that can be used by the recipient to download the file over the Tor network using Tor Browser.
AnonSurf
AnonSurf is a utility that makes the operating system's communication go over Tor or other anonymizing networks. According to Parrot, AnonSurf secures your web browser and anonymizes your IP address.
Release frequency
The development team has not specified an official release timeline, but based on release changelogs and the notes included in the official review of the distribution, releases appear on a monthly basis.
Releases
See also
BackBox
BlackArch
Devuan
Kali Linux
List of digital forensics tools
Security-focused operating system
Notes
External links
Official Website
Blog & Release Notes
DistroWatch
Debian Derivatives Census
Debian-based distributions
Computer security software
Pentesting software toolkits
Rolling Release Linux distributions
Linux distributions
ISO/IEC JTC 1/SC 22
ISO/IEC JTC 1/SC 22 Programming languages, their environments and system software interfaces is a standardization subcommittee of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) that develops and facilitates standards within the fields of programming languages, their environments and system software interfaces. ISO/IEC JTC 1/SC 22 is also sometimes referred to as the "portability subcommittee". The international secretariat of ISO/IEC JTC 1/SC 22 is the American National Standards Institute (ANSI), located in the United States.
History
ISO/IEC JTC 1/SC 22 was created in 1985, with the intention of creating a JTC 1 subcommittee that would address standardization within the field of programming languages, their environments and system software interfaces. Before the creation of ISO/IEC JTC 1/SC 22, programming language standardization was addressed by ISO TC 97/SC 5. Many of the original working groups of ISO/IEC JTC 1/SC 22 were inherited from a number of the working groups of ISO TC 97/SC 5 during its reorganization, including ISO/IEC JTC 1/SC 22/WG 2 – Pascal (originally ISO TC 97/SC 5/WG 4), ISO/IEC JTC 1/SC 22/WG 4 – COBOL (originally ISO TC 97/SC 5/WG 8), and ISO/IEC JTC 1/SC 22/WG 5 – Fortran (originally ISO TC 97/SC 5/WG 9). Since then, ISO/IEC JTC 1/SC 22 has created and disbanded many of its working groups in response to the changing standardization needs of programming languages, their environments and system software interfaces.
Scope and mission
The scope of ISO/IEC JTC 1/SC 22 is the standardization of programming languages (such as COBOL, Fortran, Ada, C, C++, and Prolog), their environments (such as POSIX and Linux), and systems software interfaces, such as:
Specification techniques
Common facilities and interfaces
ISO/IEC JTC 1/SC 22 also produces common language-independent specifications to facilitate standardized bindings between programming languages and system services, as well as greater interaction between programs written in different languages.
The scope of ISO/IEC JTC 1/SC 22 does not include specialized languages or environments within the program of work of other subcommittees or technical committees.
The mission of ISO/IEC JTC 1/SC 22 is to improve portability of applications, productivity and mobility of programmers, and compatibility of applications over time within high level programming environments. The three main goals of ISO/IEC JTC 1/SC 22 are:
To support the current global investment in software applications through programming languages standardization
To improve programming language standardization based on previous specification experience in the field
To respond to emerging technological opportunities
Structure
Although ISO/IEC JTC 1/SC 22 has had a total of 24 working groups (WGs), many have been disbanded when the focus of the working group was no longer applicable to the current standardization needs. ISO/IEC JTC 1/SC 22 is currently made up of eight (8) active working groups, each of which carries out specific tasks in standards development within the field of programming languages, their environments and system software interfaces. The focus of each working group is described in the group’s terms of reference. Working groups of ISO/IEC JTC 1/SC 22 are:
Collaborations
ISO/IEC JTC 1/SC 22 works in close collaboration with a number of other organizations or subcommittees, some internal to ISO, and others external to it. Organizations in liaison with ISO/IEC JTC 1/SC 22, internal to ISO are:
ISO/IEC JTC 1/SC 2, Coded character sets
ISO/IEC JTC 1/SC 7, Software and systems engineering
ISO/IEC JTC 1/SC 27, IT Security techniques
ISO/TC 37, Terminology and other language and content resources
ISO/TC 215, Health informatics
Organizations in liaison to ISO/IEC JTC 1/SC 22 that are external to ISO are:
Ecma International
Linux Foundation
Association for Computing Machinery Special Interest Group on Ada (ACM SIGAda)
Ada-Europe
MISRA
Member countries
Countries pay a fee to ISO to be members of subcommittees.
The 23 "P" (participating) members of ISO/IEC JTC 1/SC 22 are: Austria, Bulgaria, Canada, China, Czech Republic, Denmark, Finland, France, Germany, Israel, Italy, Japan, Kazakhstan, Republic of Korea, Netherlands, Poland, Russian Federation, Slovenia, Spain, Switzerland, Ukraine, United Kingdom, and United States of America.
The 21 "O" (observing) members of ISO/IEC JTC 1/SC 22 are: Argentina, Belgium, Bosnia and Herzegovina, Cuba, Egypt, Ghana, Greece, Hungary, Iceland, India, Indonesia, Islamic Republic of Iran, Ireland, Democratic People’s Republic of Korea, Malaysia, New Zealand, Norway, Portugal, Romania, Serbia, and Thailand.
Published standards and technical reports
ISO/IEC JTC 1/SC 22 currently has 98 published standards in programming languages, their environments and system software interfaces. Some standards published by ISO/IEC JTC 1/SC 22 within this field include:
See also
ISO/IEC JTC1
List of ISO standards
American National Standards Institute
International Organization for Standardization
International Electrotechnical Commission
References
External links
ISO/IEC JTC 1/SC 22 page at ISO
022
Programming language standards
Oz (programming language)
Oz is a multiparadigm programming language, developed in the Programming Systems Lab at Université catholique de Louvain, for programming language education. It has a canonical textbook: Concepts, Techniques, and Models of Computer Programming.
Oz was first designed by Gert Smolka and his students in 1991. In 1996, development of Oz continued in cooperation with the research group of Seif Haridi and Peter Van Roy at the Swedish Institute of Computer Science. Since 1999, Oz has been continually developed by an international group, the Mozart Consortium, which originally consisted of Saarland University, the Swedish Institute of Computer Science, and the Université catholique de Louvain. In 2005, the responsibility for managing Mozart development was transferred to a core group, the Mozart Board, with the express purpose of opening Mozart development to a larger community.
The Mozart Programming System is the primary implementation of Oz. It is released with an open source license by the Mozart Consortium. Mozart has been ported to Unix, FreeBSD, Linux, Windows, and macOS.
Language features
Oz contains most of the concepts of the major programming paradigms, including logic, functional (both lazy evaluation and eager evaluation), imperative, object-oriented, constraint, distributed, and concurrent programming. Oz has a simple formal semantics (see chapter 13 of the book mentioned below). Oz is a concurrency-oriented language, as the term was introduced by Joe Armstrong, the main designer of the Erlang language. A concurrency-oriented language makes concurrency easy to use and efficient. Oz supports a canonical graphical user interface (GUI) language QTk.
In addition to multi-paradigm programming, the major strengths of Oz are in constraint programming and distributed programming. Due to its factored design, Oz is able to successfully implement a network-transparent distributed programming model. This model makes it easy to program open, fault-tolerant applications within the language. For constraint programming, Oz introduces the idea of computation spaces, which allow user-defined search and distribution strategies orthogonal to the constraint domain.
Language overview
Data structures
Oz is based on a core language with very few datatypes that can be extended into more practical ones through syntactic sugar.
Basic data structures:
Numbers: floating point or integer (real integer)
Records: for grouping data : circle(x:0 y:1 radius:3 color:blue style:dots). Here the terms x,y, radius etc. are called features and the data associated with the features (in this case 0,1,3 etc.) are the values.
Tuples: Records with integer features in ascending order: circle(1:0 2:1 3:3 4:blue 5:dots) .
Lists: a simple linear structure
'|'(2 '|'(4 '|'(6 '|'(8 nil)))) % as a record.
2|(4|(6|(8|nil))) % with some syntactic sugar
2|4|6|8|nil % more syntactic sugar
[2 4 6 8] % even more syntactic sugar
Those data structures are values (constant), first class and dynamically type checked. Variable names in Oz start with an uppercase letter to distinguish them from literals which always begin with a lowercase letter.
Functions
Functions are first class values, allowing higher order functional programming:
fun {Fact N}
if N =< 0 then 1 else N*{Fact N-1} end
end

fun {Comb N K}
{Fact N} div ({Fact K} * {Fact N-K}) % integers can't overflow in Oz (unless no memory is left)
end
fun {SumList List}
case List of nil then 0
[] H|T then H+{SumList T} % pattern matching on lists
end
end
Functions may be used with both free and bound variables. Free variable values are found using static lexical scoping.
Higher-order programming
Functions are like other Oz objects. A function can be passed as an argument to other functions or can be returned from a function.
fun {Square N} % A general function
N*N
end
fun {Map F Xs} % F is a function here - higher order programming
case Xs
of nil then nil
[] X|Xr then {F X}|{Map F Xr}
end
end
%usage
{Browse {Map Square [1 2 3]}} %browses [1 4 9]
Anonymous functions
Like many other functional languages, Oz supports use of anonymous functions (i.e. functions which do not have a name) with higher order programming. The symbol $ is used to denote these.
In the following, the square function is defined anonymously and passed, causing [1 4 9] to be browsed.
{Browse {Map fun {$ N} N*N end [1 2 3]}}
Since anonymous functions don't have names, it is not possible to define recursive anonymous functions.
Procedures
Functions in Oz are supposed to return a value at the last statement encountered in the body of the function during its execution. In the example below, the function Ret returns 5 if X > 0 and -5 otherwise.
declare
fun {Ret X}
if X > 0 then 5 else ~5 end
end
But Oz also provides a facility in case a function must not return values. Such functions are called procedures. Procedures are defined using the construct "proc" as follows
declare
proc {Ret X}
if X > 0 then {Browse 5} else {Browse ~5} end
end
The above example doesn't return any value; it just prints 5 or -5 in the Oz browser depending on the sign of X.
Dataflow variables and declarative concurrency
When the program encounters an unbound variable it waits for a value. For example, below, the thread will wait until both X and Y are bound to a value before showing the value of Z.
thread
Z = X+Y
{Browse Z}
end
thread X = 40 end
thread Y = 2 end
The value of a dataflow variable cannot be changed once it is bound:
X = 1
X = 2 % error
Dataflow variables make it easy to create concurrent stream agents:
fun {Ints N Max}
if N == Max then nil
else
{Delay 1000}
N|{Ints N+1 Max}
end
end
fun {Sum S Stream}
case Stream
of nil then S
[] H|T then S|{Sum H+S T}
end
end
local X Y in
thread X = {Ints 0 1000} end
thread Y = {Sum 0 X} end
{Browse Y}
end
Because of the way dataflow variables work, it is possible to put threads anywhere in a program and be guaranteed that it will have the same result. This makes concurrent programming very easy. Threads are very cheap: it is possible to have 100,000 threads running at once.
Example: Trial division sieve
This example computes a stream of prime numbers using the trial division algorithm by recursively creating concurrent stream agents that filter out non-prime numbers:
fun {Sieve Xs}
case Xs of nil then nil
[] X|Xr then Ys in
thread Ys = {Filter Xr fun {$ Y} Y mod X \= 0 end} end
X|{Sieve Ys}
end
end
Laziness
Oz uses eager evaluation by default, but lazy evaluation is possible. Below, the factorial is only computed when the value of X is needed to compute the value of Y.
fun lazy {Fact N}
if N =< 0 then 1 else N*{Fact N-1} end
end
local X Y in
X = {Fact 100}
Y = X + 1
end
Lazy evaluation gives the possibility of storing truly infinite data structures in Oz. The power of lazy evaluation can be seen from the following code sample:
declare
fun lazy {Merge Xs Ys}
case Xs#Ys
of (X|Xr)#(Y|Yr) then
if X < Y then X|{Merge Xr Ys}
elseif X>Y then Y|{Merge Xs Yr}
else X|{Merge Xr Yr}
end
end
end
fun lazy {Times N Xs}
case Xs
of nil then nil
[] X|Xr then N*X|{Times N Xr}
end
end
declare H
H = 1 | {Merge {Times 2 H} {Merge {Times 3 H} {Times 5 H}}}
{Browse {List.take H 6}}
The code above elegantly computes all the Regular Numbers in an infinite list. The actual numbers are computed only when they are needed.
Message passing concurrency
The declarative concurrent model can be extended with message passing via simple semantics:
declare
local Stream Port in
Port = {NewPort Stream}
{Send Port 1} % Stream is now 1|_ ('_' indicates an unbound and unnamed variable)
{Send Port 2} % Stream is now 1|2|_
...
{Send Port n} % Stream is now 1|2| .. |n|_
end
With a port and a thread, asynchronous agents can be defined:
fun {NewAgent Init Fun}
Msg Out in
thread {FoldL Msg Fun Init Out} end
{NewPort Msg}
end
State and objects
It is again possible to extend the declarative model to support state and object-oriented programming with very simple semantics. To create a new mutable data structure called Cells:
local A X in
A = {NewCell 0}
A := 1 % changes the value of A to 1
X = @A % @ is used to access the value of A
end
With these simple semantic changes, the whole object-oriented paradigm can be supported. With a little syntactic sugar, OOP becomes well integrated in Oz.
class Counter
attr val
meth init(Value)
val:=Value
end
meth browse
{Browse @val}
end
meth inc(Value)
val :=@val+Value
end
end
local C in
C = {New Counter init(0)}
{C inc(6)}
{C browse}
end
Execution speed
The execution speed of a program produced by the Mozart compiler (version 1.4.0 implementing Oz 3) is very slow. On a set of benchmarks it averages about 50 times slower than that of the GNU Compiler Collection (GCC) for the C language when solving the benchmark tasks.
See also
Alice (programming language), a concurrent functional constraint language from Saarland University
Dataflow programming
Functional logic programming languages
Curry (programming language)
Mercury (programming language)
Visual Prolog, an object-oriented, functional, logic language
References
Peter Van Roy and Seif Haridi (2004). Concepts, Techniques, and Models of Computer Programming. MIT Press. There is online supporting material for this book. The book, an introduction to the principles of programming languages, uses Oz as its preferred idiom for examples.
External links
Tutorial of Oz
Programming Language Research at UCL: One of the core developers of Mozart/Oz, this group does research using Mozart/Oz as the vehicle
Multiparadigm Programming in Mozart/Oz: Proceedings of MOZ 2004: Conference which gives a snapshot of the work being done with Mozart/Oz
Programming in Oz
Oz Basics
Multi-paradigm programming languages
Functional logic programming languages
Logic programming languages
Dynamically typed programming languages
Prototype-based programming languages
Concurrent programming languages
Educational programming languages
Programming languages created in 1991
Ubuntu MATE
Ubuntu MATE is a free and open-source Linux distribution and an official derivative of Ubuntu. Its main differentiation from Ubuntu is that it uses the MATE desktop environment as its default user interface (based on GNOME 2), instead of the GNOME 3 desktop environment that is the default user interface for Ubuntu.
History
The Ubuntu MATE project was founded by Martin Wimpress and Alan Pope and began as an unofficial derivative of Ubuntu, using an Ubuntu 14.10 base for its first release; a 14.04 LTS release followed shortly. As of February 2015, Ubuntu MATE gained the official Ubuntu flavour status from Canonical Ltd. as per the release of 15.04 Beta 1. In addition to IA-32 and x86-64 which were the initial supported platforms, Ubuntu MATE also supports PowerPC and ARMv7 (on the Raspberry Pi 2 and 3 as well as the ODROID XU4).
In April 2015, Ubuntu MATE announced a partnership with British computer reseller Entroware, enabling Entroware customers to purchase desktop and laptop computers with Ubuntu MATE preinstalled with full support. Several other hardware deals were announced later.
In Ubuntu MATE 18.10, 32-bit support was dropped.
Releases
Reception
In a May 2016 review Jesse Smith of DistroWatch concluded, "despite my initial problems getting Ubuntu MATE installed and running smoothly, I came away with a positive view of the distribution. The project is providing a very friendly desktop experience that requires few hardware resources by modern standards. I also want to tip my hat to the default theme used on Ubuntu MATE."
Dedoimedo reviewed Ubuntu MATE in July 2018, and wrote that "[Ubuntu MATE offers] a wealth of visual and functional changes ... You really have the ability to implement anything and everything, and all of it natively, from within the system's interface."
See also
Comparison of Linux distributions
GTK+
Linux Mint
MATE
Ubuntu GNOME
Ubuntu Unity
Xubuntu
References
External links
IA-32 Linux distributions
Operating system distributions bootable from read-only media
Ubuntu derivatives
X86-64 Linux distributions
Linux distributions
Bare machine
In computer science, bare machine (or bare metal) refers to a computer executing instructions directly on logic hardware without an intervening operating system. Modern operating systems evolved through various stages, from elementary to the present-day complex, highly sensitive systems incorporating many services. After the development of programmable computers (which did not require physical changes to run different programs) but prior to the development of operating systems, sequential instructions were executed on the computer hardware directly using machine language without any system software layer. This approach is termed the "bare machine" precursor to modern operating systems. Today it is mostly applicable to embedded systems and firmware, generally with time-critical latency requirements, while conventional programs are run by a runtime system overlaid on an operating system.
Advantages
For a given application, in most of the cases, a bare-metal implementation will run faster, using less memory and so being more power efficient. This is because operating systems, as any program, need some execution time and memory space to run, and these are no longer needed on bare-metal.
For instance, any hardware feature that includes inputs and outputs is directly accessible on bare metal, whereas the same feature used through an OS must route the call to a subroutine, consuming running time and memory.
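A minimal C sketch of that difference is shown below; the memory-mapped GPIO register address and bit position are invented for illustration and depend entirely on the target hardware, while the hosted variant goes through the POSIX write() system call.

#include <stdint.h>
#include <unistd.h>

/* Hypothetical memory-mapped GPIO output register: the address and the
   bit position are placeholders and vary from one device to another. */
#define GPIO_OUT (*(volatile uint32_t *)0x40020014u)

void bare_metal_led_on(void)
{
    GPIO_OUT |= (1u << 5);   /* direct register write: no OS in the path */
}

void hosted_output(int fd, const char *msg, size_t len)
{
    /* Under an operating system, the equivalent output is routed through
       a system call, which the kernel forwards to a device driver. */
    write(fd, msg, len);
}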
Disadvantages
For a given application, bare-metal programming requires more effort to work properly and is more complex because the services provided by the operating system and used by the application have to be re-implemented according to the application's needs. These services can be:
System boot (mandatory)
Memory management: Storing location of the code and the data regarding the hardware resources and peripherals (mandatory)
Interrupt handling (if any)
Task scheduling, if the application can perform more than one task
Peripherals management (if any)
Error management, if wanted or needed
Debugging a bare-metal program is difficult since:
There are no software error notifications or fault management, unless they have been implemented and validated.
There is no standard output, unless it has been implemented and validated.
The machine where the program is written cannot be the same where the program is executed, so the target hardware is either an emulator / simulator or an external device. This forces one to set up a way to load the bare-metal program onto the target (flashing), start the program execution, and access the target's resources.
Bare-metal programming is generally done using a close-to-hardware language, such as Rust, C++, C, assembly language, or even, for small amounts of code or very new processors, machine code directly. All the previous issues inevitably mean that bare-metal programs are very rarely portable.
Examples
Early computers
Early computers, such as the PDP-11, allowed programmers to load a program, supplied in machine code, to RAM. The resulting operation of the program could be monitored by lights, and output derived from magnetic tape, print devices, or storage.
Embedded systems
Bare machine programming remains common practice in embedded systems, where microcontrollers or microprocessors often boot directly into monolithic, single-purpose software, without loading a separate operating system. Such embedded software can vary in structure, but the simplest form may consist of an infinite main loop, calling subroutines responsible for checking for inputs, performing actions, and writing outputs.
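A minimal sketch of such a "superloop" in C follows; the two register variables are placeholders standing in for real memory-mapped peripheral registers, which differ per microcontroller.

#include <stdbool.h>
#include <stdint.h>

/* Placeholder peripheral access: a real port would read and write
   memory-mapped registers specific to the microcontroller. */
static volatile uint32_t fake_input_reg;   /* stands in for a GPIO input register  */
static volatile uint32_t fake_output_reg;  /* stands in for a GPIO output register */

static void hardware_init(void)  { fake_input_reg = 0; fake_output_reg = 0; }
static bool button_pressed(void) { return (fake_input_reg & 1u) != 0; }
static void set_led(bool on)     { fake_output_reg = on ? 1u : 0u; }

int main(void)
{
    hardware_init();

    /* The infinite main loop: check inputs, act, write outputs.
       There is no operating system to return to. */
    for (;;) {
        bool pressed = button_pressed();  /* check inputs */
        set_led(pressed);                 /* perform action / write outputs */
    }
}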
Development
The approach of using bare machines paved the way for new ideas which accelerated the evolution of operating system development.
This approach highlighted a need for the following:
Input/output (I/O) devices to enter both code and data conveniently:
Input devices, such as keyboards, were created. These were necessary, as earlier computers often had unique, obtuse, and convoluted input devices.
For example, programs were loaded into the PDP-11 by hand, using a series of toggle switches on the front panel of the device. Keyboards are far superior to these vintage input devices, as it would be much faster to type code or data than to use toggle switches to input this into the machine. Keyboards would later become standard across almost every computer, regardless of brand or price.
Output devices such as computer monitors would later be widely used, and still are to this day. They proved themselves to be a huge convenience over earlier output devices, such as an array of lights on the Altair 8800, which would indicate the status of the computer.
Computer monitors can also easily display the output of a program in a user friendly manner. For example, one would have to be intimately knowledgeable about a specific early computer and its display system, consisting of an array of lights, to even begin to make sense of the status of the computer's hardware. In contrast, anybody who can read should be able to understand a well-designed user interface on a modern system, without having to know anything about the hardware of the computer on which the program is being run.
Faster, cheaper, more widely available secondary storage devices to store programs in non-volatile memory. This was needed because it was cumbersome to have to type code in by hand each time the computer was used, since code held only in volatile memory was lost upon every reboot.
The requirement for a convenient high-level language and a translator for such a high-level language to the corresponding machine code.
Linkers to link library modules, which may be written by the user or already available on the system.
Loaders to load executables into RAM from the secondary storage.
Suitable I/O devices, such as printers for producing a hard copy of the output generated by programs.
See also
Bare machine computing
Barebone computer
Standalone program
References
History of computing hardware
History of software
Operating systems
Macintosh Office
The Macintosh Office was an effort by Apple Computer to design an office-wide computing environment consisting of Macintosh computers, a local area networking system, a file server, and a networked laser printer. Apple announced Macintosh Office in January 1985 with a poorly received sixty-second Super Bowl commercial dubbed Lemmings. In the end, the file server would never ship and the Office project would be cancelled. However, the AppleTalk networking system and LaserWriter printer would be hugely successful in launching the desktop publishing revolution.
History
Previous efforts
Macintosh Office was the company's third attempt to enter into the business environment as a serious competitor to IBM.
Following the success of the Apple II personal computer, Apple first sought to enter into the lucrative professional business market with the Apple III. A high-end computer with features geared toward the business professional, it suffered from many technical problems which plagued the system during most of its production run. As a result, Apple's reputation suffered and it lost any advantage it had entering into the business market – a full year prior to the introduction of the IBM PC.
Apple's second attempt was with the introduction of the revolutionary Lisa, a high-end computer aimed at the business community, based on the graphical user interface that was to become the basis of the Macintosh. Unfortunately it proved far too expensive and offered too few features for most businesses to justify the cost. A year later when the much less expensive Macintosh debuted, Lisa's fate was sealed. After being renamed the Macintosh XL in an effort to revive sales, a year later production ended following less than three years of poor sales.
While Apple had a hit with the Macintosh, they still needed a way to make inroads into the professional world and the Mac was already being criticized as a toy by the business community.
Strategy
Apple had initially examined local area networking through an effort known as AppleNet, which used Ethernet-like coax cable to support a 1 Mbit/s network of up to 128 Apple IIs, Apple IIIs and Apple Lisas. This was first announced at the National Computer Conference in Los Angeles in June 1983, but quietly dropped four months later. At the time, Apple commented that they "realized that it's not in the business to create a network system", and instead announced they would be waiting for IBM to release its Token Ring system in 1984.
This left Apple with no networking system until IBM released Token Ring. Internal work continued throughout, greatly aided by a series of memos from Bob Belleville, who outlined what the system would need to do, covering the networking system, a networked laser printer, and a file server.
When the Macintosh had originally been designed it used the Zilog Z8530 serial driver chip, which had the capability of running simple networking protocols. The original aim was to produce a system known as AppleBus that would allow multiple devices to be plugged into a single port. The AppleBus concept had been dropped during development, but it left the systems with the hardware needed to support a local area network, all that was needed was the appropriate software. To address any short-term networking needs, Apple announced the development of a low-speed system running at 230 kbit/s. As the serial ports on the Macintosh were not connected in a ring, an external box (later known as LocalTalk) was used to provide "up" and "down" connections. The system was released in January 1985 as the "AppleTalk Personal Network".
Armed with the proper networking hardware, Apple set about developing the other key pieces of its business suite.
It would include a dedicated file server they code-named Big Mac. Essentially it was conceived as a fast Unix-based server which ran the Mac OS as an interface shell.
Also included was a networked hard drive intended to be plugged directly into the network.
Finally, a laser printer that would produce typeset-quality documents, also shared among all the users on the network.
By January 1985 Apple was ready to launch the LocalTalk network which would allow a small office to inexpensively share its newly introduced LaserWriter printer. But the dedicated file server was up to two years away. The networked hard disk was closer, but still over a year away. By early 1985, Apple did not even offer a hard drive that worked on the Macintosh, much less a networked one. Unfortunately, Apple's newly announced network could do little else but print. As a stopgap measure, Apple had re-branded the Lisa 2/10 as the Macintosh XL and dropped the price substantially. With its built-in 10MB hard drive, greater RAM and Macintosh System emulation software MacWorks, the XL was positioned to act as the file server until Apple could develop the replacement. However, there was no file-sharing software to take advantage of the Macintosh XL. Nevertheless, based on the premise promised by the Macintosh Office, the Macintosh XL sold well at its reduced price, so well that Apple ran out of parts, forcing it to be discontinued long before the replacement network server was ready.
In the meantime, third party developers working with Apple, such as Infosphere and Centram Systems West (later Sun Microsystems) created AppleTalk-based file sharing applications called XL/Serve (later MacServe) and TOPS respectively. The former was actually a hard disk sharing application that allowed a remote client Mac to log onto a hard drive connected to the host Mac and work on a file. However, this arrangement meant that only one user could access the file volume at a time. Nevertheless, it fulfilled one of the main features of the Macintosh Office: a networked hard drive. By contrast, TOPS was a true file sharing application. With TOPS a remote client could log onto a host Mac and access and work on any file simultaneously with another remote or the host user. In addition, TOPS did not require a dedicated host, rather every Mac could be a host, offering peer-to-peer file sharing. What's more, TOPS was not limited to the Macintosh, but could also share files across platforms with IBM PCs. Both of these products, as well as others, helped fulfill Apple's announcement of the Macintosh Office.
Nevertheless, none of the software available represented a unified solution fully supported by Apple. Following the early removal of the Macintosh XL, Apple finally delivered its first hard drive for the Macintosh: the Hard Disk 20, released nine months after it was announced, was a mere 20 MB drive. Though a welcome addition, it was slow and delivered none of the promise of a network file server. Though third-party products made good use of it, Apple would not offer another installment of the poorly implemented Macintosh Office for well over a year. Instead Apple canceled the Unix-based Big Mac file-server concept and chose to focus on the next-generation Macintosh II.
In January 1987, Apple finally introduced its file sharing application, AppleShare. Together with a faster SCSI hard drive, the Hard Disk 20SC released three months earlier, Apple at last offered an officially supported, unified, simple-to-use file sharing network. However, it failed to deliver on the promise of the initial announcement made two years earlier. At best, the Macintosh Office was a piecemeal solution run on relatively underpowered Macs, lacking many of the features offered by third-party applications before it. In fact, it would be almost five more years before AppleShare would offer peer-to-peer file sharing under System 7. It would take four more months for the release of expandable Macs that could accommodate the growing industry standard, Ethernet, along with larger, faster built-in hard drives powerful enough to realize AppleTalk's potential to serve a large office. IBM network compatibility was still unavailable.
Legacy
Though largely considered a failure, the Macintosh Office ushered in the era of desktop publishing through the LaserWriter, the low-cost network interface that made it affordable, and the software developers who took advantage of the Macintosh GUI and the printer's professional-looking PostScript output. More than anything, this cemented the Macintosh's reputation as a serious computer and its indispensable place in the office, particularly when compared to the capabilities of its DOS-based counterparts.
References
Products introduced in 1985
Macintosh platform
Apple Inc. hardware
Sord IS-11
The Sord IS-11 is an A4-size, lightweight, portable Z80-based computer. The IS-11 ('IS' stands for 'Integrated Software') had no operating system, but came with built-in word processing, spreadsheet, file manager and communication software.
The machine was manufactured by Sord Computer Corporation and released in 1983. It was later followed by the IS-11B and IS-11C.
Technical description
The IS-11 had a CMOS version of the Z80A running at 3.4 MHz, with 32-64 KiB of NVRAM and 64 KiB of ROM. The monochrome, non-backlit LCD screen allowed for 40 characters × 8 lines, or 256 × 64 pixels. Data was stored on a built-in microcassette recorder (128 kb, 2000 baud).
See also
Sord M23P
External links
Sord IS-11 at Obsolete Technology website
Sord IS-11 at old-computers.com
Sord IS-11 at The Machine Room
Sord IS-11C
Personal computers
Portable computers
Computer-related introductions in 1983
Computer for operations with functions
Within computer engineering and computer science, a computer for operations with (mathematical) functions (unlike the usual computer) operates with functions at the hardware level (i.e. without programming these operations).
History
A computing machine for operations with functions was presented and developed by Mikhail Kartsev in 1967. Among the operations of this computing machine were function addition, subtraction and multiplication, function comparison, the same operations between a function and a number, finding the maximum of a function, computing an indefinite integral, computing the definite integral of the derivative of two functions, the derivative of two functions, shifting a function along the X-axis, etc. By its architecture this computing machine was (to use modern terminology) a vector processor or array processor: a central processing unit (CPU) that implements an instruction set containing instructions that operate on one-dimensional arrays of data called vectors. It exploited the fact that many of these operations may be interpreted as known operations on vectors: addition and subtraction of functions as addition and subtraction of vectors, computing the definite integral of the derivative of two functions as computing the vector product of two vectors, shifting a function along the X-axis as vector rotation about axes, and so on. In 1966 Khmelnik proposed a function coding method, i.e. the representation of a function by a "uniform" (for the function as a whole) positional code. With such codes, the mentioned operations with functions are performed as unique computer operations on a "single" arithmetic unit.
Positional codes of one-variable functions
The main idea
The positional code of an integer number is a notation of its digits in a certain positional number system. Such a code may be called "linear". Unlike it, the positional code of a one-variable function is flat and "triangular", as the digits in it comprise a triangle.
The value of the linear positional code is the sum of its digits multiplied by the corresponding powers of the radix R of the said number system. The positional code of a one-variable function corresponds to a "double" code, where R is a positive integer, the quantity of values taken by a digit, and the second base is a certain function of the argument.
Addition of positional codes of numbers is associated with a carry transfer into the next higher digit, as in ordinary long addition.
Addition of positional codes of one-variable functions is also associated with carry transfers into higher digits, but here the same carry is transferred simultaneously into two higher digits.
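The ordinary carry mechanism can be illustrated with a short sketch. The following Python fragment is purely illustrative and covers only the linear (number) case described above; the triangular codes generalize it by sending each carry into two digit positions instead of one, following the table of one-digit addition.

# Illustrative sketch: digit-wise addition of two linear positional codes
# given as little-endian digit lists in radix R, with each carry passed
# into the next higher digit. (The triangular codes described above send
# the carry into two higher digits instead of one.)
def add_positional(a, b, R=10):
    result = []
    carry = 0
    for i in range(max(len(a), len(b))):
        s = carry
        if i < len(a):
            s += a[i]
        if i < len(b):
            s += b[i]
        result.append(s % R)   # digit written into the i-th position
        carry = s // R         # carry transferred into the (i+1)-th digit
    while carry:
        result.append(carry % R)
        carry //= R
    return result

# Example: 47 + 85 = 132 in radix 10; digits are stored little-endian.
print(add_positional([7, 4], [5, 8]))  # -> [2, 3, 1]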
R-nary triangular code
A triangular code is called R-nary if its digits take their values from a fixed set of values determined by the parameter R. For example, a triangular code is ternary if R = 3, and quaternary if R = 4.
For R-nary triangular codes certain equalities are valid that make digit-by-digit manipulation possible. In particular, an R-nary triangular code exists for an arbitrary real number, and also for certain functions of the argument.
Single-digit addition
in R-nary triangular codes consists in the following:
in the given digit, the sum of the digits being added and of the two carries transferred into this digit from the left is determined;
this sum is decomposed into a result digit and a carry;
the result digit is written into the corresponding digit of the summary code, and the carry from the given digit is carried into the two higher digits.
This procedure is described (as for one-digit addition of numbers) by a table of one-digit addition, in which all the values of the terms and all the values of the carries appearing at the decomposition of the sum must be present. Such a table may be synthesized for each particular R.
One-digit subtraction
in R-nary triangular codes differs from one-digit addition only in the formula by which the value in the given digit is determined.
One-digit division by the parameter R
in R-nary triangular codes is based on a relation from which it follows that the division of each digit causes carries into two lower digits. Hence, the result in a given digit of this operation is the sum of the quotient from the division of this digit by R and of two carries from two higher digits. Thus, in division by the parameter R:
in the given digit the corresponding sum is determined;
this sum is decomposed into a result digit and a carry;
the result digit is written into the corresponding digit of the resulting code, and the carry from the given digit is transferred into the two lower digits.
This procedure is described by a table of one-digit division by the parameter R, in which all the values of the terms and all the values of the carries appearing at the decomposition of the sum must be present. Such a table may be synthesized for each particular R.
Addition and subtraction
of R-nary triangular codes consist (as with positional codes of numbers) of successively performed one-digit operations. Note that the one-digit operations in all digits of each column are performed simultaneously.
Multiplication
of R-nary triangular codes. Multiplication of a code by a single digit of another code consists in a shift of the first code, i.e. its shift by k columns to the left and m rows up. Multiplication of two codes consists of successive shifts of one code and addition of the shifted code to the partial product (as with positional codes of numbers).
Derivation
of R-nary triangular codes. The derivative of a function defined above is expressed through the partial derivative with respect to the second base and the known derivative of that base. So derivation of the triangular code of a function consists in determining the triangular code of the partial derivative and multiplying it by the known triangular code of the derivative of the base. The determination of the triangular code of the partial derivative is based on a relation between neighbouring digits. The derivation method consists in organizing carries from the (m,k)-digit into the (m+1,k)-digit and into the (m-1,k)-digit; their summation in the given digit is performed in the same way as in one-digit addition.
Coding and decoding
of R-nary triangular codes. A function represented by a series with integer coefficients may be represented by an R-nary triangular code, since these coefficients and the terms of the series have R-nary triangular codes (as was mentioned at the beginning of the section). On the other hand, an R-nary triangular code may be represented by such a series, as any term in the positional expansion of the function corresponding to this code may be represented by a similar series.
Truncation
of R-nary triangular codes. This is the name of the operation of reducing the number of non-zero columns. The necessity of truncation arises when carries appear beyond the digit grid. Truncation consists in division by the parameter R. All coefficients of the series represented by the code are reduced R times, and the fractional parts of these coefficients are discarded. The first term of the series is also discarded. Such a reduction is acceptable if it is known that the series of functions converges. Truncation is performed as successively executed one-digit operations of division by the parameter R. The one-digit operations in all the digits of a row are performed simultaneously, and the carries from the lower row are discarded.
Scale factor
An R-nary triangular code is accompanied by a scale factor M, similar to the exponent of a floating-point number. The factor M permits all coefficients of the coded series to be represented as integer numbers. M is multiplied by R when the code is truncated. For addition, the factors M must be aligned; to do so, one of the added codes must be truncated. For multiplication, the factors M are multiplied as well.
Positional code for functions of many variables
The positional code of a function of two variables is depicted in Figure 1. It corresponds to a "triple" sum, where R is a positive integer, the number of values of a digit, and the remaining bases are certain functions of the respective arguments. In Figure 1 the nodes correspond to the digits, and the circles show the values of the indexes of the corresponding digit. The positional code of a function of two variables is called "pyramidal". Such a positional code is called R-nary if its digits assume values from the set determined by R. At the addition of the codes the carry extends to four digits.
A positional code of a function of several variables corresponds to a similar sum, where R is a positive integer, the number of values of a digit, and the remaining bases are certain functions of the respective arguments. A positional code of a function of several variables is called "hyperpyramidal". Figure 2 depicts, for example, a positional hyperpyramidal code of a function of three variables. In it the nodes correspond to the digits and the circles contain the values of the indexes of the corresponding digit. A positional hyperpyramidal code is called R-nary if its digits assume values from the set determined by R. At code addition the carry extends over an a-dimensional cube of digits.
See also
Hardware acceleration
Digital signal processor
References
Encodings
Central processing unit
Soviet inventions
One-of-a-kind computers
ODROID
The ODROID is a series of single-board computers and tablet computers created by Hardkernel Co., Ltd., located in South Korea. Even though the name ODROID is a portmanteau of open + Android, the hardware is not actually open because some parts of the design are retained by the company. Many ODROID systems are capable of running not only Android, but also regular Linux distributions.
Hardware
Several models of ODROIDs have been released by Hardkernel. The first generation was released in 2009, followed by higher-specification models.
C models feature an Amlogic system on a chip (SoC), while XU models feature a Samsung Exynos SoC. Both include an ARM central processing unit (CPU) and an on-chip graphics processing unit (GPU). CPU architectures include ARMv7-A and ARMv8-A, and on-board memory ranges from 1 GB to 4 GiB of RAM. Secure Digital (SD) cards, in either the SDHC or MicroSDHC size, are used to store the operating system and program memory. Most boards have between three and five USB 2.0 or 3.0 ports, HDMI output, and a 3.5 mm jack. Lower-level input/output is provided by a number of general-purpose input/output (GPIO) pins which support common protocols such as I²C. Current models have a Gigabit Ethernet (8P8C) port and an eMMC module socket.
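As a rough illustration of how the GPIO header's I²C support is typically used from Linux, the following Python sketch reads one register from an attached I²C device with the generic smbus2 library; the bus number, device address and register below are hypothetical placeholders that depend on the particular board and peripheral, and this is not ODROID-specific vendor code.

# Illustrative sketch: read a register from an I2C peripheral attached to a
# single-board computer's GPIO header, using the generic Linux smbus2 library.
# Bus number, device address and register are hypothetical placeholders.
from smbus2 import SMBus

I2C_BUS = 1          # e.g. /dev/i2c-1 (board-dependent)
DEVICE_ADDR = 0x48   # placeholder address of an attached sensor
REGISTER = 0x00      # placeholder register to read

with SMBus(I2C_BUS) as bus:
    value = bus.read_byte_data(DEVICE_ADDR, REGISTER)
    print("Register 0x%02X = 0x%02X" % (REGISTER, value))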
Specifications
Software
Operating systems
References
External links
Official Hardkernel website
ODROID official forum
ODROID Wiki
ODROID Magazine
Single-board computers
Android (operating system) devices
Handheld game consoles
Linux-based devices
Products introduced in 2009
Tablet computers
Linux Foundation
The Linux Foundation (LF) is a non-profit technology consortium founded in 2000 as a merger between Open Source Development Labs and the Free Standards Group to standardize Linux, support its growth, and promote its commercial adoption. Additionally, it hosts and promotes the collaborative development of open source software projects. It is a major force in promoting diversity and inclusion in both Linux and the wider open source software community.
The foundation was launched in 2000, under the Open Source Development Labs (OSDL) and became the organization it is today when OSDL merged with the Free Standards Group (FSG). The Linux Foundation sponsors the work of Linux creator Linus Torvalds and lead maintainer Greg Kroah-Hartman. Furthermore, it is supported by members, such as AT&T, Cisco, Fujitsu, Google, Hitachi, Huawei, IBM, Intel, Meta, Microsoft, NEC, Oracle, Orange S.A., Qualcomm, Samsung, Tencent, and VMware, as well as developers from around the world.
In recent years, the Linux Foundation has expanded its support programs through events, training and certification, as well as open source projects. Projects hosted at the Linux Foundation include the Linux kernel project, Kubernetes, Automotive Grade Linux, Open Network Automation Platform (ONAP), Hyperledger, Cloud Native Computing Foundation, Cloud Foundry Foundation, Xen Project, and many others.
Goals
The Linux Foundation is dedicated to building sustainable ecosystems around open source projects to accelerate technology development and commercial adoption. The foundation currently sponsors the work of Linux creator Linus Torvalds and lead maintainer Greg Kroah-Hartman, and aims to provide a neutral home where Linux kernel development can be protected and accelerated.
The foundation also hosts collaborative events among the Linux technical community, software developers, industry, and end users to solve pressing issues facing Linux and open source.
The Linux Foundation supports the Linux community by offering technical information and education through its annual events, such as Open Source Leadership Summit, Linux Kernel Developers Summit, and Open Source Summit (formerly known as LinuxCon, inaugurated in September 2009). A developer travel fund is available.
Initiatives
Community Data License Agreement (CDLA)
Introduced in October 2017, the Community Data License Agreement (CDLA) is a legal framework for sharing data. There are two initial CDLA licenses:
The CDLA-Sharing license was designed to embody the principles of copyleft in a data license. It puts terms in place to ensure that downstream recipients can use and modify that data, and are also required to share their changes to the data.
The CDLA-Permissive agreement is similar to permissive open source licenses in that the publisher of data allows anyone to use, modify and do what they want with the data with no obligations to share changes or modifications.
Linux.com
On March 3, 2009, the Linux Foundation announced that they would take over the management of Linux.com from its previous owners, SourceForge, Inc.
The site was relaunched on May 13, 2009, shifting away from its previous incarnation as a news site to become a central source for Linux tutorials, information, software, documentation and answers across the server, desktop/netbook, mobile, and embedded areas. It also includes a directory of Linux software and hardware.
Much like Linux itself, Linux.com plans to rely on the community to create and drive the content and conversation.
Linux Foundation Public Health (LFPH)
In 2020, amidst the COVID-19 pandemic, the Linux Foundation announced the LFPH, a program dedicated to advancing and supporting the virus contact-tracing work led by Google and Apple and their Bluetooth notification systems. The LFPH is focusing its efforts on public health applications, including the effort's first initiative: a notification app intended for governments wanting to launch their own privacy-focused exposure notification networks. As of today, LFPH hosts two contact-tracing apps.
LF Climate Finance Foundation
In September 2020, the Linux Foundation announced the LF Climate Finance Foundation (LFCF), a new initiative "to encourage investment in AI-enhanced open source analytics to address climate change." LFCF plans to build a platform that will utilize open source and open data to help the financial investment, NGO, and academic sectors better model companies' exposure to climate change. Allianz, Amazon, Microsoft, and S&P Global will be the initiative's founding members.
Training and certification
The Linux Foundation Training Program features instructors and content from the leaders of the Linux developer and open source communities.
Participants receive Linux training that is vendor-neutral and created with oversight from leaders of the Linux development community. The Linux Foundation's online and in-person training programs aim to deliver broad, foundational knowledge and networking opportunities.
In March 2014, the Linux Foundation and edX partnered to offer a free massive open online class titled Introduction to Linux. This was the first in a series of ongoing free offerings from both organizations whose current catalogue of MOOCs include Intro to DevOps, Intro to Cloud Foundry and Cloud Native Software Architecture, Intro to Apache Hadoop, Intro to Cloud Infrastructure Technologies, and Intro to OpenStack.
In December 2015, the Linux Foundation introduced a self-paced course designed to help prepare administrators for the OpenStack Foundation's Certified OpenStack Administrator exam.
As part of a partnership with Microsoft, it was announced in December 2015 that the Linux on Azure certification would be awarded to individuals who pass both the Microsoft Exam 70-533 (Implementing Microsoft Azure Infrastructure Solutions) and the Linux Foundation Certified System Administrator (LFCS) exam.
In early 2017, at the annual Open Source Leadership Summit, it was announced that the Linux Foundation would begin offering an Inclusive Speaker Orientation course in partnership with the National Center for Women & Information Technology. The free course is designed to give participants "practical skills to promote inclusivity in their presentations."
In September 2020, the Linux Foundation released a free serverless computing training course with CNCF. It is taught by Alex Ellis, founder of OpenFaaS.
Like many other organizations with similar offerings, The Linux Foundation reported a 40% increase in demand for its online courses in 2020 during the coronavirus pandemic and the resulting social-distancing measures.
Patent Commons Project
The patent commons consists of all patented software which has been made available to the open source community. For software to be considered to be in the commons the patent owner must guarantee that developers will not be sued for infringement, though there may be some restrictions on the use of the patented code. The concept was first given substance by Red Hat in 2001 when it published its Patent Promise.
The Patent Commons Project was launched on November 15, 2005, by the Open Source Development Labs (OSDL). The core of the project is an online patent commons reference library aggregating and documenting information about patent-related pledges and other legal solutions directed at the open-source software community. The project listed 53 patents.
Projects
Linux Foundation Projects (originally "Collaborative Projects") are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. More than 500 companies and thousands of developers from around the world contribute to these open source software projects.
The total lines of source code present in Linux Foundation Collaborative Projects number 115,013,302. The estimated total amount of effort required to retrace the steps of collaborative development for these projects is 41,192.25 person-years. In other words, it would take 1,356 developers 30 years to recreate the code bases. At that time, the total economic value of the development costs of Linux Foundation Collaborative Projects was estimated at $5 billion. Through continued investment in open source projects and growth in the number of projects hosted, this number rose to $15.6 billion by September 2017.
All Linux Foundation projects are covered by the Contributor Covenant code of conduct developed by Coraline Ada Ehmke, which is intended to ensure a safe and harassment-free environment for minorities.
Some of the projects include (alphabetical order):
ACRN
ACRN is a flexible, lightweight reference hypervisor, built with real-time and safety-criticality in mind, optimized to streamline embedded development through an open source platform.
AllJoyn
AllJoyn is an open source application framework for connected devices and services that was formed under the Allseen Alliance in 2013. The project is now sponsored as an independent Linux Foundation project by the Open Connectivity Foundation (OCF).
Automotive Grade Linux
Automotive Grade Linux (AGL) is a collaborative open source project developing a Linux-based, open platform for the connected car that can serve as the de facto standard for the industry. Although initially focused on In-Vehicle Infotainment (IVI), the AGL roadmap includes instrument cluster, heads up display, telematics and autonomous driving. The goals of AGL are to provide:
An automotive-focused core Linux operating system stack that meets common and shared requirements of the automotive ecosystem
A transparent, collaborative and open environment for Automotive OEMs, Tier One suppliers, and their semiconductor and software vendors to create in-vehicle software
A collective voice for working with other open source projects and developing new open source solutions
An embedded Linux distribution that enables rapid prototyping for developers new to Linux or teams with prior open source experience
AGL technology
On June 30, 2014, AGL announced their first release, which was based on Tizen IVI and was primarily for demo applications. AGL expanded the first reference platform with the Unified Code Base (UCB) distribution. The first UCB release, nicknamed Agile Albacore, was released in January 2016 and leverages software components from AGL, Tizen and GENIVI Alliance. UCB 2.0, nicknamed Brilliant Blowfish, was made available in July 2016 and included new features like rear seat display, video playback, audio routing and application framework. UCB 3.0, or Charming Chinook was released in January 2017. AGL plans to support additional use cases such as instrument clusters and telematics systems.
Carrier Grade Linux
The "CGL" Workgroup's main purpose is to "interface with network equipment providers and carriers to gather requirements and produce specifications that Linux distribution vendors can implement." It also serves to use unimplemented requirements to foster development projects that will assist in the upstream integration of these requirements.
CD Foundation
The Continuous Delivery Foundation serves as the vendor-neutral home of many of the fastest-growing projects for continuous delivery, including Jenkins, Jenkins X, Spinnaker, and Tekton. It supports DevOps practitioners with an open model, training, industry guidelines, and a portability focus.
Cloud Foundry
Cloud Foundry is an open source, multi cloud application platform as a service (PaaS) governed by the Cloud Foundry Foundation, a 501(c)(6) organization. In January 2015, the Cloud Foundry Foundation was created as an independent not-for-profit Linux Foundation Project. The foundation exists to increase awareness and adoption of Cloud Foundry, grow the contributor community, and create a cohesive strategy across all member companies. The Foundation serves as a neutral party holding all Cloud Foundry intellectual property.
Cloud Native Computing Foundation
Founded in 2015, the Cloud Native Computing Foundation (CNCF) exists to help advance container technology and align the tech industry around its evolution. It was announced with Kubernetes 1.0, an open source container cluster manager, which was contributed to the foundation by Google as a seed technology. Today, CNCF is backed by over 450 sponsors. Founding members include Google, CoreOS, Mesosphere, Red Hat, Twitter, Huawei, Intel, Cisco, IBM, Docker, Univa, and VMware.
CHAOSS
The Community Health Analytics Open Source Software (CHAOSS) project was announced at the 2017 Open Source Summit North America in Los Angeles. Overall, the project aims to provide transparency and health and security metrics for open-source projects.
Code Aurora Forum
Code Aurora Forum is a consortium of companies with projects serving the mobile wireless industry. Software projects it concerns itself with include Android for MSM, the Femto Linux Project, LLVM, MSM WLAN, and Linux-MSM.
Core Embedded Linux Project
Started in 2003, the Core Embedded Linux Project aims to provide a vendor-neutral place to establish core embedded Linux technologies beyond those of the Linux Foundation's Projects. From the start, any Linux Foundation member company has been allowed to apply for membership in the Core Embedded Linux Project.
Core Infrastructure Initiative
The Core Infrastructure Initiative was announced on 25 April 2014 in the wake of Heartbleed to fund and support free and open-source software projects that are critical to the functioning of the Internet.
Delta Lake
Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads.
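A minimal sketch of how this looks from PySpark, assuming a Spark session that already has the Delta Lake package configured and using a throwaway local path as the table location:

# Minimal Delta Lake sketch. Assumes PySpark with the Delta Lake package
# already configured on the Spark session; the table path is a placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

# Write a small DataFrame as a Delta table (an ACID-transactional write).
df = spark.createDataFrame([(1, "alpha"), (2, "beta")], ["id", "name"])
df.write.format("delta").mode("overwrite").save("/tmp/delta-table")

# Read the table back; readers see a consistent snapshot of the data.
spark.read.format("delta").load("/tmp/delta-table").show()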
DiaMon Workgroup
The DiaMon Workgroup works toward improving interoperability between open source tools and improve Linux-based tracing, profiling, logging, and monitoring features. According to the workgroup, DiaMon "aims to accelerate this development by making it easier to work together on common pieces."
DPDK
The Data Plane Development Kit consists of libraries to accelerate packet processing workloads running on a wide variety of CPU architectures. According to Intel, "DPDK can improve packet processing performance by up to ten times."
Dronecode
Started in 2014, Dronecode began as an open source, collaborative project to unite current and future open source drone initiatives under the auspices of the Linux Foundation. The goal is a common, shared open source stack for Unmanned Aerial Vehicles (UAVs). Chris Anderson (CEO of 3D Robotics & founder of DIY Drones) serves as the chairman of the board of directors. Lorenz Meier, creator of PX4, MAVLink, QGC, and Pixhawk serves as the community representative on the Board.
EdgeX Foundry
Founded in 2017, EdgeX Foundry acts as a vendor-neutral interoperability framework. It is hosted in a hardware and OS agnostic reference platform and seeks to enable an ecosystem of plug-and-play components, uniting the marketplace and accelerating IoT deployment. The project wants to enable collaborators to freely work on open and interoperable IoT solutions with existing and self-created connectivity standards.
ELISA
The ELISA (Enabling Linux In Safety Applications) project was started to make it easier for companies to build and certify Linux kernel-based safety-critical applications – systems whose failure could result in loss of human life, significant property damage or environmental damage. ELISA members are working together to define and maintain a common set of tools and processes that can help companies demonstrate that a Linux-based system meets the necessary safety requirements for certification.
ELISA was launched in 2019 and builds upon work done by SIL2LinuxMP and Real-Time Linux projects.
FD.io
The Fast Data Project, referred to as "Fido", provides an IO services framework for the next wave of network and storage software. In the stack, FD.io is the universal data plane. "FD.io runs completely in the user space," said Ed Warnicke, consulting engineer with Cisco and chair of the FD.io technical steering committee.
FinOps Foundation
The FinOps Foundation supports practitioners of FinOps, a discipline that helps finance and IT operations teams to work together to manage public cloud spending collaboratively, to get the maximum value out of cloud investments in a way that aligns to organizational goals. FinOps principles, best practices and framework allow for more accountability and predictability to the highly variable, self-service, consumption based billing models of public cloud.
FOSSology
FOSSology is primarily a project dedicated to an open source license compliance software system and toolkit. Users are able to run license, copyright and export control scans from the command line. A database and web UI provide a compliance workflow.
FRRouting
FRRouting (FRR) is an IP routing protocol suite for Unix and Linux platforms. It incorporates protocol daemons for BGP, IS-IS, LDP, OSPF, PIM, and RIP.
GraphQL Foundation
On 7 November 2018, the GraphQL project was moved from Facebook to the newly established GraphQL Foundation, hosted by the non-profit Linux Foundation.
Hyperledger
The Hyperledger project is a global, open source effort based around advancing cross-industry blockchain technologies. In addition to being hosted by the Linux Foundation, it is backed by finance, banking, IoT, supply chain, manufacturing and technology leaders. The project is the foundation's fastest growing to date, boasting over 115 members since founding in 2016. In May 2016, co-founder of the Apache Software Foundation, Brian Behlendorf, joined the project as its executive director.
IO Visor
IO Visor is an open source project and community of developers that will enable a new way to innovate, develop and share IO and networking functions. It will advance IO and networking technologies to address new requirements presented by cloud computing, the Internet of Things (IoT), Software-Defined Networking (SDN) and Network Function Virtualization (NFV).
IoTivity
IoTivity is an OSS framework enabling seamless device-to-device connectivity to aid the Internet of Things as it grows. While Allseen Alliance and Open Connectivity Foundation merged in October 2016, the IoT projects of each (AllJoyn and IoTivity, respectively) will continue operating under The Linux Foundation. The two projects will "collaborate to support future versions of the OCF specification with a single IoTivity implementation."
JanusGraph
JanusGraph aims to continue open source development of the TitanDB graph database. It is a fork of TitanDB, "the distributed graph database that was originally released in 2012 to enable users to find connections among large data sets composed of billions of vertices and edges."
JS Foundation
JS Foundation existed from 2016 to 2019. It was created in 2016 when the Dojo Foundation merged with the jQuery Foundation, which subsequently rebranded itself as the JS Foundation and became a Linux Foundation project. In 2019, the JS Foundation merged with the Node.js Foundation to form the new OpenJS Foundation, with a stated mission to foster healthy growth of the JavaScript and web ecosystem as a whole.
Kinetic Open Storage Project
The Kinetic Open Storage Project is dedicated to creating an open source standard around Ethernet-enabled, key/value Kinetic devices for accessing their drives. By creating this standard, it expands the available ecosystem of software, hardware, and systems developers. The project is the result of an alliance including major hard drive manufacturers - Seagate, Toshiba and Western Digital - in addition to Cisco, Cleversafe, Dell, DigitalSense, NetApp, Open vStorage, Red Hat and Scality.
Linux Standard Base
The Linux Standard Base, or LSB, is a joint project by several Linux distributions under the organizational structure of the Linux Foundation to standardize the software system structure, or filesystem hierarchy, used with the Linux operating system. The LSB is based on the POSIX specification, the Single UNIX Specification, and several other open standards, but extends them in certain areas.
LSB compliance may be certified for a product through a certification procedure.
The LSB specifies, for example: standard libraries, a number of commands and utilities that extend the POSIX standard, the layout of the file system hierarchy, run levels, the printing system (including spoolers such as CUPS and tools like Foomatic), and several extensions to the X Window System.
Long Term Support Initiative
LTSI is a project created/supported by Hitachi, LG Electronics, NEC, Panasonic, Qualcomm Atheros, Renesas Electronics, Samsung Electronics, Sony and Toshiba, hosted at The Linux Foundation. It aims to maintain a common Linux base for use in a variety of consumer electronics products.
MLflow
MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry.
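A minimal sketch of the tracking portion of that lifecycle with MLflow's Python API; the parameter and metric names are arbitrary examples, not part of any particular project.

# Minimal MLflow tracking sketch: log a parameter, a metric and a file
# artifact for one run. Names and values are arbitrary examples.
import mlflow

with mlflow.start_run(run_name="example-run"):
    mlflow.log_param("learning_rate", 0.01)   # hyperparameter of the run
    mlflow.log_metric("accuracy", 0.93)       # result metric of the run

    with open("notes.txt", "w") as f:         # any file can be attached
        f.write("example artifact\n")
    mlflow.log_artifact("notes.txt")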
Node.js Foundation
The Node.js Foundation existed from 2015 to 2019. In 2019, the Node.js Foundation merged with the JS Foundation to form the new OpenJS Foundation, with a stated mission to foster healthy growth of the JavaScript and web ecosystem as a whole.
ODPi
ODPi (Open Data Platform initiative) hosts open source projects that accelerate the development and delivery of big data solutions. It aims to deliver well-defined open source and open data technologies that run across distributed devices. It promotes these technologies worldwide through certification programs and other forms of marketing.
ONOS
ONOS (Open Network Operating System) is an open source community with a mission of bringing the promise of software-defined networking (SDN) to communications service providers in order to make networks more agile for mobile and data center applications with better economics for both users and providers.
OpenAPI Initiative (OAI)
OAI is committed to standardizing how REST APIs are described. SmartBear Software has donated the Swagger Specification directly to the initiative. The new name for the specification is OpenAPI Specification.
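As a rough illustration of what such a description looks like, the sketch below builds a minimal OpenAPI 3.0 document for a single made-up endpoint as a plain Python dictionary and serializes it to JSON; the endpoint and titles are invented examples, not material from the initiative.

# Minimal OpenAPI 3.0 document for a single made-up endpoint, built as a
# plain Python dict and printed as JSON. Purely illustrative.
import json

spec = {
    "openapi": "3.0.0",
    "info": {"title": "Example API", "version": "1.0.0"},
    "paths": {
        "/ping": {
            "get": {
                "summary": "Health check",
                "responses": {"200": {"description": "Service is up"}},
            }
        }
    },
}

print(json.dumps(spec, indent=2))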
OpenBMC
The OpenBMC project is a collaborative open-source project whose goal is to produce an open source implementation of the Baseboard Management Controllers (BMC) Firmware Stack.
OpenChain
The OpenChain Project aims to define effective open source software compliance in software supply chains. A key output is a reference specification for "good" open source compliance, which has become the ISO/IEC 5230:2020 standard. Another output is a simple self-certification scheme that companies can submit to test their conformance with the standard.
Open Container Initiative
In 2015, Docker & CoreOS launched the Open Container Initiative in partnership with The Linux Foundation to create a set of industry standards in the open around container formats and runtime.
OpenDaylight
OpenDaylight is the leading open SDN platform, which aims to accelerate the adoption of Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) in service provider, enterprise and research networks.
OpenJS Foundation
The OpenJS Foundation is made up of 29 open source JavaScript projects including Appium, Dojo, jQuery, Node.js, and webpack. Founding members included Google, Microsoft, IBM, PayPal, GoDaddy, and Joyent. It was founded in 2019 from a merger of the JS Foundation and the Node.js Foundation. Its stated mission is to foster healthy growth of the JavaScript and web ecosystem by providing a neutral organization to host projects and collaboratively fund activities that benefit the ecosystem as a whole.
Open Mainframe Project
The Open Mainframe Project aims to drive harmony across the mainframe community and to develop shared tool sets and resources. The project also endeavors to heighten participation of academic institutions in educating mainframe Linux engineers and developers.
OpenMAMA
OpenMAMA (Open Middleware Agnostic Messaging API) is a lightweight vendor-neutral integration layer for systems built on top of a variety of message-oriented middleware.
OpenMessaging
Announced in October 2017, the goal of OpenMessaging is to act as a vendor-neutral open standard for distributed messaging/stream. The project is supported by Alibaba, Verizon's Oath business unit, and others.
OpenPrinting
The OpenPrinting workgroup is a website belonging to the Linux Foundation which provides documentation and software support for printing under Linux. Formed as LinuxPrinting.org, in 2006 it became part of the Free Standards Group.
They developed a database that lists a wide variety of printers from various manufacturers. The database allows people to give a report on the support and quality of each printer, and they also give a report on the support given to Linux by each printer vendor. They have also created a foomatic (formerly cupsomatic) script which plugs into the Common Unix Printing System (CUPS).
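A small sketch of how an application can submit a job to CUPS from Python through the pycups bindings; the file path is a placeholder and the chosen queue must already be configured on the local CUPS server.

# Illustrative pycups sketch: list the configured print queues and submit a
# file to one of them via the local CUPS daemon. The file path is a placeholder.
import cups

conn = cups.Connection()         # connect to the local CUPS daemon
printers = conn.getPrinters()    # dict of configured print queues
print("Available printers:", list(printers))

if printers:
    queue = next(iter(printers))                    # pick the first queue
    conn.printFile(queue, "/tmp/test.txt",          # file must exist
                   "pycups example job", {})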
OpenSDS
OpenSDS is an open source software defined storage controller. As journalist Swapnil Bhartiya explained for CIO, it was formed to create "an industry response to address software-defined storage integration challenges with the goal of driving enterprise adoption of open standards." It is supported by storage users/vendors, including Dell, Huawei, Fujitsu, HDS, Vodafone and Oregon State University.
Open vSwitch
Originally created at Nicira before moving to VMware (and eventually the Linux Foundation), OvS is an open source virtual switch supporting standard management interfaces and protocols.
ONAP
The Open Network Automation Platform is the result of OPEN-O and Open ECOMP projects merging in April 2017. The platform allows end users to design, manage, and automate services and virtual functions.
OPNFV
The Open Platform for Network Function Virtualization (NFV) "aims to be a carrier-grade, integrated platform that introduces new products and services to the industry more quickly." In 2016, the project began an internship program and created a working group as well as an "End User Advisory Group" (founded by users and the board).
PNDA
PNDA (Platform for Network Data Analytics) is a platform for scalable network analytics, rounding up data from "multiple sources on a network and works with Apache Spark to crunch the numbers in order to find useful patterns in the data more effectively."
R Consortium
The R Consortium is dedicated to expanding the use of R language and developing it further. R Consortium works with the R Foundation and other organizations working to broaden the reach of the language. The consortium is supported by a collection of tech industry heavyweights including Microsoft, IBM, Oracle, Google, and Esri.
Real-Time Linux
Real-Time Linux has the overall goal of encouraging widespread adoption of real-time capabilities. It was formed to coordinate efforts to mainline PREEMPT_RT and to assist maintainers in "continuing development work, long-term support and future research of RT." Before 2004 there were research projects but no serious attempt at merging real-time support into the mainline kernel. In 2004 Ingo Molnar started work on a patchset, joined by Thomas Gleixner, who had been working with Douglas Niehaus, and by Steven Rostedt along with others. The patchset has since been rewritten several times, and parts of it have been merged into the mainline kernel.
The project is aimed at a deterministic RTOS, with work involving threaded interrupts and priority inheritance. There is a wide range of requirements and use cases for an RTOS, and while the project addresses a large portion of them, it does not aim at the most narrow and specialized cases.
Unrelated work is done by FSMLabs for RTLinux.
RethinkDB
After RethinkDB announced its shutdown as a business, the Linux Foundation announced that it had purchased the intellectual property under its Cloud Native Computing Foundation project, which was then relicensed under the Apache License (ASLv2). RethinkDB describes itself as "the first open-source, scalable JSON database built from the ground up for the realtime web."
RISC-V International
The RISC-V International association is chartered to standardize and promote the open RISC-V instruction set architecture together with its hardware and software ecosystem for use in all computing devices.
seL4
seL4 is the only microkernel in the world with formal verification. It belongs to the L4 microkernel family and, like the other L4 kernels, was designed to attain high security and performance.
Servo
Servo is a browser engine developed to take advantage of the memory safety properties and concurrency features of the Rust programming language. It was originally developed by Mozilla and later donated to the Linux Foundation.
SNAS.io
Streaming Network Analytics System (project SNAS.io) is an open source framework to collect and track millions of routing objects (routers, peers, prefixes) in real time. SNAS.io is a Linux Foundation project announced in May 2017.
SPDX
The Software Package Data eXchange (SPDX) project was started in 2010, to create a standard format for communicating the components, licenses and copyrights associated with software packages. As part of the project, there is a team that curates the SPDX License List, which defines a list of identifiers for commonly found licenses and exceptions used for open source and other collaborative software.
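In day-to-day development, the most visible part of SPDX is the short machine-readable license identifier placed at the top of source files; the sketch below shows such a header in a Python file, using "MIT" purely as an example identifier from the SPDX License List.

# SPDX-License-Identifier: MIT
# Example only: the line above declares this file's license in a
# machine-readable way using an identifier from the SPDX License List.

def hello():
    """Trivial placeholder function for the example file."""
    return "hello"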
Tizen
Tizen is a free and open-source, standards-based software platform supported by leading mobile operators, device manufacturers, and silicon suppliers for multiple device categories such as smartphones, tablets, netbooks, in-vehicle infotainment devices, and smart TVs.
TODO
TODO (Talk Openly, Develop Openly) is an open source collective housed under the Linux Foundation. It helps companies interested in open source collaborate better and more efficiently. TODO aims to reach companies and organizations that want to turn out the best open source projects and programs. "The TODO Group reaches across industries to collaborate with open source technical and business leaders to share best practices, tools and programs for building dependable, effective projects for the long term," said Jim Zemlin at Collaboration Summit 2016.
Xen Project
The Xen Project team is a global open source community that develops the Xen Hypervisor, contributes to the Linux PVOPS framework, the Xen® Cloud Platform and Xen® ARM.
Yocto Project
The Yocto Project is an open source collaboration project that provides templates, tools and methods to help create custom Linux-based systems for embedded products regardless of the hardware architecture. It was founded in 2010 as a collaboration among many hardware manufacturers, open-source operating systems vendors, and electronics companies to bring some order to the chaos of embedded Linux development.
Zephyr Project
Zephyr is a small real-time operating system for connected, resource-constrained devices supporting multiple architectures. It is developed as an open source collaboration project and released under the Apache License 2.0. Zephyr became a project of the Linux Foundation in February 2016.
Community stewardship
For the Linux kernel community, the Linux Foundation hosts its IT infrastructure and organizes conferences such as the Linux Kernel Summit and Linux Plumbers Conference. It also hosts a Technical Advisory Board made up of Linux kernel developers. One of these developers is appointed to sit on the Linux Foundation board.
Goodwill partnership
In January 2016, the Linux Foundation announced a partnership with Goodwill Central Texas to help hundreds of disadvantaged individuals from underserved communities and a variety of backgrounds get the training they need to start new and lucrative careers in Linux IT.
Community Developer Travel Fund
To fund deserving developers to accelerate technical problem solving and collaboration in the open source community, the Linux Foundation launched the Community Developer Travel Fund. Sponsorships are open to elite community developers with a proven track record of open source development achievement who cannot get funding to attend technical events from employers.
Community Specification
In July 2020, the Linux Foundation announced an initiative allowing open source communities to create Open Standards using tools and methods inspired by open source developers.
Core Infrastructure Initiative
The Core Infrastructure Initiative (CII), a project managed by the Linux Foundation that enables technology companies, industry stakeholders and esteemed developers to collaboratively identify and fund critical open source projects in need of assistance. In June 2015, the organization announced financial support of nearly $500,000 for three new projects to better support critical security elements of the global information infrastructure. In May 2016, CII launched its Best Practice Badge program to raise awareness of development processes and project governance steps that will help projects have better security outcomes. In May 2017, CII issued its 100th badge to a passing project.
Open Compliance Program
The Linux Foundation's Open Compliance Program provides an array of programs for open source software compliance. The focus in this initiative is to educate and assist developers (and their companies) on license requirements in order to build programs without friction. The program consists primarily of self-administered training modules, but it is also meant to include automated tools to help programmatically identify license compliance issues.
Members
As of June 2018, there are over 1000 members who identify with the ideals and mission of the Linux Foundation and its projects.
Corporate members
Affiliates
Blockchain at Columbia
Clemson University
Indiana University
Fondazione Inuit
ISA
Konkuk University
NXT
Seneca College
Trace Research and Development Center at University of Maryland, College Park
Turbot
University of Rome Tor Vergata
University of Wisconsin–Madison
Zhejiang University
Funding
Funding for the Linux Foundation comes primarily from its Platinum Members, who pay US$500,000 per year according to Schedule A in LF's bylaws, adding up to US$4 million. The Gold Members contribute a combined total of US$1.6 million, and smaller members less again.
As of April 2014, the foundation collected annual fees worth at least US$6,245,000.
Use of donations
Before early 2018, the Linux Foundation's website stated that it "uses [donations] in part to help fund the infrastructure and fellows (like Linus Torvalds) who help develop the Linux kernel."
Events
The Linux Foundation events are where the creators, maintainers, and practitioners of the most important open source projects meet. Linux Foundation events in 2017, for example, were expected to attract nearly 25,000 developers, maintainers, system administrators, thought leaders, business executives and other industry professionals from more than 4,000 organizations across 85 countries. Many open source projects also co-locate their events at the Linux Foundation events to take advantage of the cross-community collaboration with projects in the same industry.
2017 events covered various trends in open source, including big data, cloud native, containers, IoT, networking, security, and more.
Linux Foundation events are covered by a comprehensive code of conduct prohibiting inappropriate behavior, including harassment and offensive language. This applies at all times, either before, during or after the event, not only in person but also on social media and any other form of electronic communication. Persons witnessing such behavior are encouraged to report it to conference staff immediately and offenders may face penalties up to and including a lifetime ban from all future events. Additionally, to further improve diversity at events, all-male panels or speaker line-ups are specifically disallowed.
Due to the COVID-19 pandemic, the Linux Foundation has transitioned their events to a digital model until the virus has been successfully managed. The Foundation's largest event, Open Source Summit, was held remotely from June 29 to July 2, 2020 — it had originally been planned to take place in Austin, Texas.
References
External links
Lubuntu
Lubuntu ( ) is a lightweight Linux distribution based on Ubuntu and uses the LXQt desktop environment in place of Ubuntu's GNOME desktop. Lubuntu was originally touted as being "lighter, less resource hungry and more energy-efficient", but now aims to be "a functional yet modular distribution focused on getting out of the way and letting users use their computer".
Lubuntu originally used the LXDE desktop, but moved to the LXQt desktop with the release of Lubuntu 18.10 in October 2018, due to the slow development of LXDE and its loss of support for GTK 2, as well as the more active and stable development of LXQt, which has no GNOME dependencies.
The name Lubuntu is a portmanteau of LXQt and Ubuntu. The LXQt name derives from the merger of the LXDE and Razor-qt projects, while the word Ubuntu means "humanity towards others" in the Zulu and Xhosa languages.
Lubuntu received official recognition as a formal member of the Ubuntu family on 11 May 2011, commencing with Lubuntu 11.10, which was released on 13 October 2011.
History
The LXDE desktop was first made available for Ubuntu in October 2008, with the release of Ubuntu 8.10 Intrepid Ibex. These early versions of Lubuntu, including 8.10, 9.04 and 9.10, were not available as separate ISO image downloads, and could only be installed on Ubuntu as separate lubuntu-desktop packages from the Ubuntu repositories. LXDE can also be retroactively installed in earlier Ubuntu versions.
In February 2009, Mark Shuttleworth invited the LXDE project to become a self-maintained project within the Ubuntu community, with the aim of leading to a dedicated new official Ubuntu derivative to be called Lubuntu.
In March 2009, the Lubuntu project was started on Launchpad by Mario Behling, including an early project logo. The project also established an official Ubuntu wiki project page, that includes listings of applications, packages, and components.
In August 2009, the first test ISO was released as a Live CD, with no installation option.
Initial testing in September 2009 by Linux Magazine reviewer Christopher Smart showed that Lubuntu's RAM usage was about half of that of Xubuntu and Ubuntu on a normal installation and desktop use, and two-thirds less on live CD use.
In 2014, the project announced that the GTK+-based LXDE and Qt-based Razor-qt would be merging into the new Qt-based LXQt desktop and that Lubuntu would consequently be moving to LXQt. The transition was completed with the release of Lubuntu 18.10 in October 2018, the first regular release to employ the LXQt desktop.
Lenny became Lubuntu's mascot in 2014.
During the 2018 transition to becoming LXQt-based, the aim of Lubuntu was re-thought by the development team. It had previously been intended for users with older computers, typically ten years old or newer, but with the introduction of Windows Vista PCs, older computers gained faster processors and much more RAM, and by 2018, ten-year-old computers remained much more capable than had been the case five years earlier. As a result, the Lubuntu development team, under Simon Quigley, decided to change the focus to emphasize a well-documented distribution, based on LXQt "to give users a functional yet modular experience", that is lightweight by default and available in any language. The developers also decided to stop recommending minimum system requirements after the 18.04 LTS release.
Developer Simon Quigley announced in August 2018 that Lubuntu 20.10 will switch to the Wayland display server protocol by default.
In January 2019, the developers formed the Lubuntu Council, a new body to formalize their previous organization, with its own written constitution.
Releases
Lubuntu 10.04
On 30 December 2009 the first Alpha 1 "Preview" version ISO for Lubuntu 10.04 Lucid Lynx was made available for testing, with Alpha 2 following on 24 January 2010. The first Beta was released on 20 March 2010 and the stable version of Lubuntu 10.04 was released on 2 May 2010, four days behind the main Ubuntu release date of 28 April 2010.
Lubuntu 10.04 was only released as a 32-bit ISO file, but users could install a 64-bit version through the 64-bit Mini ISO and then install the required packages.
Lubuntu 10.04 was not intended to be a long-term support (LTS) release, unlike Ubuntu 10.04 Lucid Lynx, and was only going to be supported for 18 months. However, since the infrastructure of Ubuntu 10.10 Maverick Meerkat (and thus Lubuntu 10.10) dropped support for i586 processors, including VIA C3, AMD K6, and AMD Geode/National Semiconductor CPUs, the release of Lubuntu 10.10 prompted the community to extend support until April 2013 for Lubuntu 10.04, as if it were a long term support version.
In reviewing Lubuntu 10.04 Alpha 1 in January 2010, Joey Sneddon of OMG Ubuntu wrote, "Not having had many preconceptions regarding LXDE/Lubuntu i found myself presently surprised. It was pleasant to look at, pleasant to use and although I doubt I would switch from GNOME to LXDE, it can give excellent performance to those who would benefit from doing so." In writing about the final 10.10 release, on 10 October 2010 Sneddon termed it "A nimble and easy-to-use desktop".
Writing about Lubuntu 10.04 in May 2010 Damien Oh of Make Tech Easier said: "If you are looking for a lightweight alternative to install in your old PC or netbook, Lubuntu is a great choice. You won’t get any eye candy or special graphical effects, but what you get is fast speed at a low cost. It’s time to put your old PC back to work."
Also reviewing Lubuntu 10.04 in May 2010 Robin Catling of Full Circle magazine said: "The first thing that impresses on running Lubuntu on my modest Compaq Evo laptop (Pentium-M, 512 MB RAM) is the small memory footprint... It beats Karmic on Gnome, and Xubuntu on Xfce, by a mile. The Evo used to take 60 seconds-plus to boot to the desktop, LXDE takes exactly 30. Yet you're not restricted; gtk2 applications are well supported, and Synaptic hooks up to the Ubuntu repositories for package management (so you can pull down Open Office to replace the default Abi-Word without crippling the machine)." Catling did note of the file manager, "The PCManFM file manager needs a little more maturity to compete with Thunar, but it's a competent and robust application that doesn't hog resources like Nautilus or Dolphin."
In June 2010 Jim Lynch reviewed Lubuntu 10.04, saying, "One thing you’ll notice about using the Lubuntu desktop is that it’s fast. Very, very fast. Even on an underpowered machine, Lubuntu should perform very well. It’s one of the best things about this distro; it leaves behind the bloated eye candy that can sometimes bog down GNOME and KDE... I didn’t run into any noticeable problems with Lubuntu. It was very fast and stable, and I didn’t see noticeable bugs or problems. I hate it when this happens since it’s so much more interesting for my readers when I run into one nasty problem or another. Hopefully the next version of Lubuntu will be chock full of horrendous problems and bugs. Just kidding."
In September 2010 lead developer Julien Lavergne announced that the Lubuntu project had not been granted official status as a derivative of Ubuntu as part of the Ubuntu 10.04 release cycle, but that work would continue on this goal for Ubuntu 10.10. Lavergne explained the reasons as "there is still a resource problem on Canonical /Ubuntu infrastructure, which was not resolved during this cycle. Also, they are writing a real process to integrate new member in the Ubuntu family, but it’s still not finished."
Lubuntu 10.10
Lubuntu 10.10 was released on schedule on 10 October 2010, the same day as Ubuntu 10.10 Maverick Meerkat, but it was not built with the same underlying infrastructure as Ubuntu 10.10. Developer Julien Lavergne said about it, "Lubuntu is actually not part of the Ubuntu family, and not build with the current Ubuntu infrastructure. This release is considered as a "stable beta", a result that could be a final and stable release if we was included in the Ubuntu family." Version 10.10 introduced new artwork to the distribution, including new panel and menu backgrounds, a new Openbox theme, a new Lubuntu menu logo, splash images and desktop wallpaper. Lubuntu 10.10 was not accepted as an official Ubuntu derivative at this release point due to "a lack of integration with the infrastructure Canonical and Ubuntu", but work continued towards that goal.
Lubuntu 10.10 was only released as a 32-bit ISO file, but users could install a 64-bit version through the 64-bit Mini ISO and then install the required packages.
Developer Julien Lavergne wrote that while 10.10 incorporated many changes over 10.04, not all of the changes were considered improvements. The improvements included a new theme designed by Rafael Laguna, the incorporation of xpad for note taking, the Ace of Penguins games, LXTask (the LXDE task manager) in place of the Xfce equivalent, the replacement of the epdfview PDF reader with Evince due to a memory leak problem, and the removal of pyneighborhood. The minuses included a last-minute rewrite of the installer to integrate it properly, which resulted in some installation instability and the raising of the minimum installation RAM from 180 MB to 256 MB. The other issue was the incorporation of the Ubuntu Update Manager, which increased RAM usage by 10 MB; Lubuntu 10.04 had given no indication of updates being available, so this was deemed necessary.
The minimum system requirements for Lubuntu 10.10 were described by Mario Behling as "comparable to Pentium II or Celeron systems with a 128 MB RAM configuration, which may yield a slow yet usable system with lubuntu." Chief developer Julien Lavergne stated that the minimum RAM to install Lubuntu 10.10 is 256 MB.
In reviewing Lubuntu 10.10 right after its release in October 2010, Jim Lynch of Eye on Linux said "Lubuntu’s biggest appeal for me is its speed; and it’s no disappointment in that area. Applications load and open quickly, and my overall experience with Lubuntu was quite positive. I detected no stability problems, Lubuntu 10.10 was quite solid and reliable the entire time I used it." Lynch did fault the choice of Synaptic as the package manager: "One of the strange things about Lubuntu is that it only offers Synaptic as its package manager. Xubuntu 10.10, on the other hand, offers the Ubuntu Software Center as well as Synaptic. I’m not sure why the Ubuntu Software Center is missing from Lubuntu; it would make a lot of sense to include it since it is a much easier and more attractive way to manage software. Synaptic gets the job done, but it’s less friendly to new users and can’t match the Ubuntu Software Center in terms of usability and comfort."
By mid-December 2010, Lubuntu had risen to 11th place on DistroWatch's six-month list of most popular Linux distributions out of 319 distributions, right behind Puppy Linux and well ahead of Xubuntu, which was in 36th place. In reviewing Linux distribution rankings for DistroWatch in early January 2011 for the year 2010 versus 2009, Ladislav Bodnár noted, "Looking through the tables, an interesting thing is the rise of distributions that use the lightweight, but full-featured LXDE desktop or the Openbox window manager. As an example, Lubuntu now comfortably beats Kubuntu in terms of page hits..."
Lubuntu 11.04
The project announced the development schedule in November 2010 and Lubuntu 11.04 was released on time on 28 April 2011.
Lubuntu 11.04 was only released as a 32-bit ISO file, but users could install a 64-bit version through the 64-bit Mini ISO and then install the required packages. An unofficial 64-bit ISO of 11.04 was also released by Kendall Weaver of Peppermint OS.
Improvements in Lubuntu 11.04 included replacing Aqualung with Audacious as the default music player, elimination of the hardware abstraction layer, the introduction of movable desktop icons, use of the Ubuntu font by default, improved menu translations and reorganized menus. The release also introduced a new default theme and artwork designed by Rafael Laguna, known as Ozone, which is partly based on Xubuntu’s default Bluebird theme.
Lubuntu 11.04 can be run with as little as 128 MB of RAM, but requires 256 MB of RAM to install using the graphical installer.
While Lubuntu 11.04 had not completed the process for official status as a member of the Ubuntu family, Mario Behling stated: "The next goals of the project are clear. Apart from constantly improving the distribution, the lubuntu project aims to become an official flavour of Ubuntu."
Mark Shuttleworth remarked to the Lubuntu developers upon the release of 11.04:
In reviewing Lubuntu 11.04 just after its release, Joey Sneddon of OMG Ubuntu commented on its look: "Lubuntu’s 'traditional' interface will be of comfort to those agitated by the interface revolution heralded in GNOME 3 and Ubuntu Unity; it certainly won’t appeal to 'bling' fans! But that’s not to say attention hasn’t been paid to the appearance. The new default theme by Raphael Laguna and the use of the Ubuntu font helps to give the sometimes-basic-feeling OS a distinctly professional look." On the subject of official status Sneddon said, "Lubuntu has long sought official sanction from the Ubuntu Project family to be classed as an official 'derivative' of Ubuntu, earning a place alongside Kubuntu and Xubuntu. With such an accomplished release as Lubuntu 11.04 the hold out on acceptance remains disappointing if expected."
In a review on 12 May 2011 Jim Lynch of Desktop Linux Reviews faulted 11.04 for not using the Ubuntu Software Center, the lack of alternative wallpapers and the use of AbiWord in place of LibreOffice. He did praise Lubuntu, saying: "speed is one of the nice things about Lubuntu; even on a slow or older system it’s usually quite fast. It’s amazing what you can achieve when you cut out the unnecessary eye-candy and bloat."
Also on 12 May 2011, Koen Vervloesem writing in Linux User & Developer criticized the applications bundled with Lubuntu, saying "Some of the software choices are rather odd, however. For instance, Chromium is the default web browser, which is a sensible move for a distro aimed at low-end computers, but the developers also ship Firefox, so Lubuntu shows both web browsers in the Internet menu. Also, the default screenshot program is scrot, but this is a command-line program and it is not shown in the Accessories menu, so not everyone will find it. Another odd choice is that you install your applications with Synaptic: by default Lubuntu doesn’t have the Ubuntu Software Center, which has been the preferred software installation program in Ubuntu for a good few releases now. These are just minor inconveniences, though, since you get access to the full Ubuntu software repositories, meaning you can install your favourite applications in a blink of the eye."
One month after its release, Lubuntu 11.04 had risen to ninth place on the DistroWatch 30-day list of most popular distributions.
Lubuntu 11.10
Lubuntu 11.10 was the first version of Lubuntu with official sanction as a member of the Ubuntu family. As part of this status change Lubuntu 11.10 used the latest Ubuntu infrastructure and the ISO files were hosted by Ubuntu. The release did not include many new features as work focused on integration with Ubuntu instead.
11.10 was released on 13 October 2011, the same day that Ubuntu 11.10 was released.
In September 2011 it was announced that work on a Lubuntu Software Center was progressing. The Ubuntu Software Center was too resource-intensive for Lubuntu, so recent releases had shipped with the less user-friendly Synaptic package manager instead. The development of a new lightweight application manager for Lubuntu was intended to rectify this problem, although users could, of course, install the Ubuntu Software Center using Synaptic.
Changes in Lubuntu 11.10 included being built with the official Ubuntu build system using the current packages by default, the provision of alternate-install and 64-bit ISOs, the use of xfce4-power-manager, a new microblog client (pidgin-microblog) and a new theme by Rafael Laguna.
Lubuntu 11.10 requires a minimum of 128 MB of RAM to run and 256 MB of RAM to install with the graphic installer. The recommended minimum RAM to run a live CD session is 384 MB.
The Lubuntu 11.10 ISO file contains a known issue that causes it to fail to load a live CD session on some hardware, instead loading to a command prompt. Users are required to enter sudo start lxdm at the prompt to run the live CD session.
In a review of Lubuntu 11.10 on PC Mech, writer Rich Menga described it as "simple, rock-solid, reliable, trustworthy". He added "Ubuntu at this point is suffering from major bloat on the interface side of things, and you can even say that about Xubuntu at this point – but not Lubuntu, as it gets back to what a great Linux distro should be."
By the end of October 2011 Lubuntu had risen to seventh place on the DistroWatch one month popularity list.
In a review in Linux User and Developer in November 2011, Russell Barnes praised Lubuntu 11.10 for its low system hardware requirements and for providing an alternative to GNOME and KDE, saying that its "aesthetic appeal and functionality is minimally compromised in its effort to be as sleek and light as possible". Barnes noted that Mark Shuttleworth may have been wise to offer full status to Lubuntu for this release given the "fuss and bluster surrounding Unity". Of the aesthetics he stated "the now trademark pale blue of the desktop is almost hypnotic. It’s incredibly clean, clear and logically laid out – a user experience a million miles away from that of Ubuntu 11.10’s Unity or GNOME Shell counterparts. In comparison there’s an almost cleansing nature about its simplicity." Barnes rated it as 4/5 and concluded "While it’s not as flexible or pretty as [GNOME 2], Lubuntu 11.10 has certainly got everything you need to keep your computer happy and your desktop clean and clutter-free".
Igor Ljubuncic in Dedoimedo said about Lubuntu 11.10, "Lubuntu is meant to offer a valid alternative to the heavier KDE and Unity flavors. It tries bravely and fails heroically. The only advantage is the somewhat reduced system resource usage, but it is more than triply negatively compensated by the drawbacks of the desktop environment as well as the incomplete integration. Then, there you have Samba-related crashes, no laptop hotkeys, jumbled system tray icons, low battery life. If you want to be really mean, you could add the lack of customization, an average software arsenal, and a dozen other smaller things that get in the way... All in all, Lubuntu could work for you, but it's not exciting or spectacular in any way and packages a handsome bag of problems that you can easily avoid by using the main release... I would not recommend this edition... Grade: 6/10."
Lubuntu 12.04
Lubuntu 12.04 was released on 26 April 2012. Planning for this release took place at the Ubuntu Developer Summit held in early November 2011. Changes planned at that time for the release included the use of LightDM as the X display manager and of Blueman instead of gnome-bluetooth for managing bluetooth devices.
The Lubuntu Software Center was added with this release to provide a more user-friendly graphical interface for managing applications. Synaptic package manager is still installed by default and allows users to manage all packages in the system. GDebi allows the installation of downloaded .deb packages.
Lubuntu 12.04 was released with Linux kernel 3.2.14 and also introduced a large number of bug fixes, particularly in the LXPanel and in the PCManFM file manager. The Ubuntu Backports repository was enabled by default, meaning backport packages were not installed by default, but once installed were automatically upgraded to newer versions.
Lubuntu 12.10
Lubuntu 12.10 was released on 18 October 2012 and includes a new version of the session manager, with more customization and integration options. It also includes a new version of the PCMan File Manager, with external thumbnail support. This version has new artwork, including a new wallpaper, a new icon set entitled Box and adjusted GTK themes. The notification-daemon has been replaced by xfce4-notifyd on the default installation. Previous versions of Lubuntu did not have a GUI search function and so the Catfish search utility was added to the default installation.
This version of Lubuntu uses the Linux kernel 3.5.5, Python 3.2 and OpenJDK7 as the default Java implementation.
The installation requires a CPU with Physical Address Extension (PAE), which means an Intel Pentium Pro or newer CPU, with the exception of most 400 MHz-bus versions of the Pentium M. For PowerPC, it was tested on a PowerPC G4 running at 867 MHz with 640 MB RAM and will also run on all Intel-based Apple Macs. There is also a version that supports the ARM architecture, but at the time the developers only provided installation instructions for one ARM-based device (the Toshiba AC100 netbook).
This release of Lubuntu does not support UEFI Secure Boot, unlike Ubuntu 12.10, which would have allowed it to run on hardware designed for Windows 8. Lubuntu 12.10 could be run on UEFI secure boot hardware by turning off the secure boot feature.
Lubuntu 13.04
Lubuntu 13.04 was released on 25 April 2013.
This version only incorporated some minor changes over Lubuntu 12.10, including a new version of the PCManFM file manager which incorporates a built-in search utility. Due to this particular file manager update, the Catfish search utility was no longer required and was deleted. Lubuntu 13.04 also introduced some artwork improvements, with new wallpaper offerings, new icons and a new installation slideshow.
The minimum system requirements for Lubuntu 13.04 are a Pentium II or Celeron CPU with PAE support, 128 MB of RAM and at least 2 GB of hard-drive space. This release also still supports PowerPC architecture, requiring a G4 867 MHz processor and 640 MB of RAM minimum.
Lubuntu 13.10
Julien Lavergne announced in June 2013 that Lubuntu 13.10 would ship with Firefox as its default browser in place of Chromium. This release also used LightDM for screen locking and included zRam.
In reviewing the beta release in September 2013, Joey Sneddon of OMG Ubuntu said: "Lubuntu has never looked as good as it does in this latest beta." He noted that the new "box" icon theme had been expanded, progress bar colours softened and window controls enlarged along with a sharpened "start button".
The final release incorporated only minor changes over 13.04. It included a new version of PCManFM that includes a file search function, which allowed the Catfish desktop search to be removed. There was also new artwork included and bug fixes for gnome-mplayer and the gpicview image viewer.
In reviewing Lubuntu 13.10, Jim Lynch said "Sometimes less can be much, much more when it comes to Linux distributions. Lubuntu 13.10 offers some of the advantages of Ubuntu but in a much more minimalist package."
Lubuntu 14.04 LTS
Tentative plans were announced in April 2013 to make Lubuntu 14.04 a long term support release. In November 2013 it was confirmed that 14.04 would be the first Lubuntu LTS release with three years of support. This release also saw xscreensaver replaced by light-locker screen lock.
Released on 17 April 2014, Lubuntu 14.04 included just minor updates over version 13.10, along with a more featured file manager.
Download media for Lubuntu 14.04 is available for 32-bit PCs, 64-bit PCs, 64-bit Intel Macs and PowerPC Macs. For early Intel Macs with a 32-bit Core Solo processor, the 32-bit PC image is available.
In reviewing Lubuntu 14.04 LTS Silviu Stahie of Softpedia noted, "because it uses a similar layout with the one found on the old and defunct Windows XP, this OS is considered to be a very good and appropriate replacement for Microsoft's operating system."
On 1 June 2014 Jim Lynch reviewed Lubuntu 14.04 LTS and concluded, "Lubuntu 14.04 LTS performed very well for me. It was fast and quite stable while I was using it. I had no problems running any applications and the system as a whole lived up to its reputation as a great choice for Ubuntu minimalists... The LXDE desktop environment is very different than Unity for Ubuntu or GNOME 3 in Ubuntu GNOME. It’s a traditional desktop which means it’s very quick and easy to learn how to use. And if you are someone that doesn’t like Unity or GNOME then LXDE in Lubuntu 14.04 LTS might be just what the doctor ordered. You’ll get all the benefits of Ubuntu, but without the discomfort of the Unity interface."
Lubuntu 14.10
This release, on 23 October 2014, was originally intended to feature a version of LXDE based upon the Qt toolkit and called LXQt, but development of the latter was delayed and the feature was not implemented in time.
Lubuntu 14.10 incorporated general bug fixes in preparation for the implementation of LXQt, updated LXDE components and new artwork, including more icons and a theme update.
Silviu Stahie, writing for Softpedia, stated, "One of the main characteristics of Lubuntu is the fact that it's fast, even on older computers. Basically, Lubuntu is able to run on anything built in the last decade, and there are very few operating systems out there that can claim the same thing... Just like its Ubuntu base, Lubuntu 14.10 has seen very few important visual modifications, although many packages have been updated under the hood. The theme and the icons have been updated, but the developers are preparing to make the switch to LXQt, a project that is still in the works."
Igor Ljubuncic in Dedoimedo said about Lubuntu 14.10, "There's nothing functionally wrong with Lubuntu. It's not bad. It's simply not interesting. It's meat without flavor, it's a hybrid car, it's accounting lessons at the local evening school, it's morning news, it's a visit to Pompei while blindfolded. There's no excitement... I liked this desktop environment in the past, but it's stagnated. It hasn't evolved at all, and its competitors have left it far behind. And that reflects poorly on Lubuntu, which, despite a calm and stable record of spartan behavior, has left with me an absolute zero of emotional attachment toward it."
Lubuntu 15.04
Released on 23 April 2015, Lubuntu 15.04 consisted primarily of bug fixes, as the project prepared for the planned switch to LXQt in Lubuntu 15.10. The Lubuntu Box theme was updated and merged into the Ubuntu Light theme to incorporate the most recent GTK+ features, including new header bars for GNOME native applications, plus improved artwork and icons.
The minimum system requirements for this release include: 512 MB of RAM, with 1 GB recommended, plus a Pentium 4 or Pentium M or AMD K8 processor. The release notes indicated about graphics cards: "Nvidia, AMD/ATI/Radeon and Intel work out of the box".
Marius Nestor of Softpedia noted, "...the Lubuntu 15.04 operating system comes now with updated artwork, which includes an updated theme, more beautiful icons, and an updated GTK+ infrastructure for better compatibility with Qt applications."
Lubuntu 15.10
Released on 22 October 2015, Lubuntu 15.10 was originally planned to move to LXQt and its Qt libraries in place of the GTK+ libraries used by LXDE, but in June 2015 this was delayed to a future release. The release ended up as a minor bug fix and application version update.
Changes in this version included new artwork and the replacement of iBus with Fcitx, allowing fonts for Chinese, Japanese and Korean to be included. The lubuntu-extra-sessions package became optional instead of being installed by default.
The minimum system requirements for this release stated, "For advanced internet services like Google+, Youtube, Google Docs and Facebook, your computer needs about 1 GB RAM. For local programs like Libre Office and simple browsing habits, your computer needs about 512 MB RAM ... The minimum specification for CPU is Pentium 4 or Pentium M or AMD K8. Older processors are too slow and AMD K7 has problems with flash video ... Nvidia, AMD/ATI/Radeon and Intel work out of the box, or the system can be tweaked to work fairly easily."
Joey Sneddon of OMG Ubuntu humorously noted, "Lubuntu 15.10 is another highly minor bug fix release."
Lubuntu 16.04 LTS
Released on 21 April 2016, Lubuntu 16.04 is a Long Term Support (LTS) version, supported for three years until April 2019. It is the second Lubuntu LTS version, preceded by 14.04 in April 2014.
This release retains the LXDE desktop and did not make the transition to LXQt, to allow LXQt to be better tested in later non-LTS releases.
The release image is too large to fit on a CD and requires a DVD or USB flash drive for installation. Lubuntu 16.04 LTS is primarily a bug-fix release and includes few new features, although it does have updated artwork. The system requirements include 512 MB of RAM (1 GB recommended) and a Pentium 4, Pentium M, AMD K8 or newer CPU.
The first point release, 16.04.1, was released on 21 July 2016. The release of Lubuntu 16.04.2 was delayed a number of times, but it was eventually released on 17 February 2017. Lubuntu 16.04.3 was released on 3 August 2017. Lubuntu 16.04.4 was delayed from 15 February 2018 and was released on 1 March 2018. Lubuntu 16.04.5 was released on 2 August 2018.
On 8 March 2017 a new version of Firefox, 52.0, arrived through the update process. This version removed ALSA audio support from Firefox in favour of PulseAudio, something initially not mentioned in the Mozilla release notes. Since Lubuntu 16.04 LTS shipped with only ALSA audio, this broke the default Lubuntu audio system in the default Lubuntu browser. In response to a bug filed, Mozilla developers declined to fix the issue.
Lubuntu 16.10
Lubuntu 16.10 was released on 13 October 2016. It uses LXDE and not LXQt. The implementation of LXQt was delayed from this release until 17.04.
The release also features just small bug fixes, updated LXDE components and updated artwork, particularly the wallpaper.
The developers' recommended system requirements for this release were, "for advanced internet services like Google+, YouTube, Google Drive, and Facebook, your computer needs at least 1 GB of RAM. For local programs like LibreOffice and simple browsing habits, your computer needs at least 512 MB of RAM. The minimum specification for CPU is Pentium 4 or Pentium M or AMD K8. Older processors are too slow and the AMD K7 has problems with Flash video."
Joey Sneddon of OMG Ubuntu noted that there are very few new features in Lubuntu 16.10, but that it no longer uses the Lubuntu Software Centre, having switched to GNOME Software, as Ubuntu also has. Sneddon wrote, "Lubuntu 16.10 is largely the same as Lubuntu 16.04 LTS as work on switching to the LXQt desktop – expected next release – continues." In a July 2016 article, Sneddon singled out the new wallpaper design for Lubuntu 16.10, saying, "the jaggedy geometric layout of the new backdrop stands out as one of the more visually distinct to ship in recent years".
Marius Nestor of Softpedia wrote: "it appears that there are a lot of known issues for this release, so if you're using Lubuntu 16.04 LTS (Xenial Xerus), we don't recommend upgrading to Lubuntu 16.10, or at least read about them before attempting an upgrade operation."
Lubuntu 17.04
Lubuntu 17.04 was released on 13 April 2017. Like previous releases it uses LXDE and not LXQt, as the implementation of LXQt in Lubuntu was delayed once again, this time until 17.10.
This release incorporated Linux Kernel 4.10, updated LXDE components, general bug fixes and new artwork. The recommended system requirements included 1 GB of RAM (512 MB minimum) and a minimum of a Pentium 4, Pentium M or AMD K8 processor.
Joey Sneddon of OMG Ubuntu said of this release, that it is "compromised mainly of bug fixes and core app and system updates rather than screenshot-able new features."
Lubuntu 17.10
Lubuntu 17.10 was released on 19 October 2017.
This release was a general bug fix release as the project prepares for the implementation of LXQt. Also included were new versions of the LXDE components and new artwork. The minimum system requirements for this release remained 512 MB of RAM (with 1 GB recommended) and at least a Pentium 4, Pentium M or AMD K8 processor.
An alternate version entitled Lubuntu Next 17.10 was provided with the LXQt 0.11.1 desktop. "While this release is available to install... we do NOT recommend that people use it in production unless they are aware of the somewhat critical bugs associated (which are more than 10 at the point of writing this). It also wouldn’t be a bad idea to be in contact with us as well," wrote Lubuntu developer Simon Quigley.
Lubuntu 18.04 LTS
Lubuntu 18.04 is a long term support version that was released on 26 April 2018. Like all past releases, it uses the LXDE desktop, although work continued to move towards deployment of the LXQt desktop, referred to as Lubuntu Next. 18.04 was the last release of Lubuntu to use the LXDE desktop as 18.10 moved to using LXQt.
This release included new artwork, including a new star field wallpaper.
System requirements for Lubuntu 18.04 LTS included a minimum of 1 GB of RAM, although 2 GB was recommended for better performance, plus a Pentium 4, Pentium M, or AMD K8 CPU or newer. The RAM requirements increased from Lubuntu 17.10.
Point releases include 18.04.1 on 26 July 2018 and 18.04.2 on 14 February 2019.
Lubuntu 18.10
In a 14 May 2018 announcement the project developers confirmed that Lubuntu would transition to the LXQt desktop for Lubuntu 18.10. It was released on 18 October 2018 and included LXQt. This transition was planned for after the release of Lubuntu 18.04 LTS, to allow testing and development over three regular releases before the first long-term support version with LXQt, Lubuntu 20.04 LTS, was released. The project also changed its logo in early April 2018, in anticipation of this move.
In transitioning to LXQt this release uses LXQt 0.13.0, based upon Qt 5.11.1. Applications include the LibreOffice 6.1.1 office suite, the VLC 3.0.4 media player, the Discover Software Center 5.13.5 and the FeatherPad 0.9.0 text editor. KDE's Falkon 3.0.1 had been beta tested as the default web browser, but was found to lack stability and was replaced with Firefox 63.0.
The installer for 18.10 is the Calamares system installer, in place of the previous Ubiquity installer.
Starting with this release the developers no longer make recommendations for minimum system requirements.
In reviewing the beta version of 18.10 in May 2018, Marius Nestor of Softpedia wrote: "We took the first Lubuntu 18.10 daily build with LXQt for a test drive, and we have to say that we're impressed ... The layout is very simple, yet stylish with a sleek dark theme by default and a single panel at the bottom of the screen from where you can access everything you need ... we give it a five-star rating."
Writing after the official release on 20 October 2018, Marius Nestor of Softpedia noted: "After many trials and tribulations, and a lot of hard work, the Lubuntu team finally managed to ship a release with the LXQt desktop environment by default instead of LXDE (Lightweight X11 Desktop Environment), which was used by default on all Lubuntu releases from the beginning of the project. We also believe LXQt is the future of the LXDE desktop environment, which uses old and soon deprecated technologies, so we welcome Lubuntu 18.10 (Cosmic Cuttlefish) with its shiny LXQt 0.13.0 desktop environment by default, built against the latest Qt 5.11.1 libraries and patched with upstream's improvements."
In reviewing Lubuntu 18.10, DistroWatch's Jesse Smith wrote: "I have mixed feelings about this release of Lubuntu. On the one hand most of the features worked well. The distribution was easy to install, I liked the theme, and the operating system is pretty easy to use. There were a few aspects I didn't like, usually programs or settings modules I felt were overly complex or confusing compared to their counterparts on other distributions. For the most part though, Lubuntu does a nice job of being a capable, relatively lightweight distribution ... On the whole, I think the transition from LXDE to LXQt has gone smoothly. There are a few choices I didn't like, and a few I did, but mostly the changes were minor. I think most people will be able to make the leap between the two desktops fairly easily. I think a few settings modules still need polish and I'd like to see Discover replaced with just about any other modern software manager, but otherwise this felt like a graceful (and mostly positive) move from 18.04 to 18.10 and from LXDE to LXQt."
In a detailed review of Lubuntu 18.10 in Ubuntu Buzz, Mahmudin Asharin found only a few faults in the release, in particular the network manager. He concluded, "For most users, I recommend Lubuntu 18.04 LTS instead for the sake of usability and support duration. For first timer who installed/want to install 18.10 LXQt, go ahead and you will get beautiful user interface and very nice experience, but I recommend you to use Wicd instead of default network manager. For LXQt desktop pursuer, Lubuntu 18.10 is a great example of LXQt system. Try it first."
A review in Full Circle magazine noted, "Overall LXQt, as seen in Lubuntu 18.10, is ready for day-to-day use, while there is also still room for ongoing refinement. Introducing LXQt in Lubuntu 18.10 was a careful choice by the Lubuntu developers. Coming right after Lubuntu 18.04 LTS, the final LXDE release, it gives developers three "standard" releases to continue to polish LXQt before the first LTS release..."
Lubuntu 19.04
This standard release was made on schedule on 18 April 2019.
This release marked the first Lubuntu version without 32-bit support. Lubuntu developer Simon Quigley wrote in December 2018:
This release featured LXQt 0.14.1, based upon Qt 5.12.2. It included working full-disk encryption, easier customization of the Calamares installer configuration through XDG configuration variables, and Austrian keymapping. Minimum installation RAM was reduced to 500 MB. Other changes included Trash, Home, Computer and Network icons added to the desktop, split view in PCManFM-Qt, EXIF data display in the LXImage-Qt image viewer, and touchpad settings fixed relative to 18.10.
In a review Softpedia writer Marius Nestor described the use of LXQt 0.14.1, employed in Lubuntu 19.04, as "a much-improved and richer LXQt experience".
A review in Full Circle magazine concluded, "Lubuntu 18.10 wasn’t ready for prime time, but 19.04 is. LXQt looks fresh and new, and everything works right from the installation; it even runs fine from a DVD live session. I didn’t find anything that needs fixing in 19.04. If not for the nine-month support period for this regular release, it could have been a long term support release, at least for the quality of the user experience and the lack of bugs."
A review in FOSS Post by M. Hanny Subbagh in September 2019, entitled Lubuntu, A Once Great Distro, Is Falling Behind concluded "Most of the criticism you have seen in this article is coming from the LXQt desktop environment. It’s understandable that any new piece of software will have bugs/issues in the first few years of its life cycle, but the LXQt desktop still needs a long round of polished updated to make it match the other desktops such as GNOME, Cinnamon, XFCE and MATE. Meanwhile, if you are interested in trying Lubuntu, we recommend that you stick to the 18.04LTS version, which comes with LXDE."
A review by Igor Ljubuncic in Dedoimedo concluded, "Lubuntu 19.04 Disco Dingo feels ... raw. Unfinished. Half-baked. It has some perfectly decent functionality, like networking, media and phone support, but then it also comes with rudimentary package management, a jumbled arsenal of programs, a desktop that is too difficult to manage and tame, plus identity crisis ... I struggled with the overall purpose, though. As impressive as the speed and lightness are, they are only small improvements over what Plasma offers. But then, Plasma is much easier to customize and tweak, it offers a coherent, consistent experience, and it feels modern and relevant. With Lubuntu, I had no connection, and using the distro felt like a chore. I had to fight the weird defaults to try to create an efficient setup, and I wasn't able to do achieve that. So I always go back to the question of investment versus benefit. Lubuntu feels too pricey for what it gives .. With Lubuntu, there needs to be more order, more consistency in how it works. At the moment, it's just a collection of ideas mashed together. While perfectly functional, it's not really fun. 6/10 ..."
Lubuntu 19.10
This standard release was the last one before the 20.04 LTS release and arrived on 17 October 2019.
This release brought new artwork, including new wallpaper. It uses LXQt 0.14.1, based upon Qt 5.12.4.
A review in the February 2020 issue of Full Circle magazine, concluded, "Lubuntu 19.10 builds well upon the success of 19.04. The developers seem to be fixing things at a good clip and polishing it up for the next key release, the first LXQt LTS version, due out on 23 April 2020. The 19.10 release is bug-free enough to have been an LTS release itself and this bodes really well for the expected quality of the upcoming LTS."
Lubuntu 20.04 LTS
This release is the first Lubuntu long term support release that uses LXQt and was released on 23 April 2020. Lubuntu 20.04.1 LTS was released on 6 August 2020.
Lubuntu 20.04 LTS used LXQt 0.14.1, based upon Qt 5.12.8 LTS. This release did not introduce many changes. It included a new software update notifier application. Called Update Notifier, it was developed by Hans Möller. The release included new wallpaper artwork as a result of a community contest held for the release.
In a 27 April 2020 review in It's FOSS, Dimitrios Savvopoulos noted, "LXQt is not only for users with an older hardware but also for those who are seeking a simple and classic experience at their new machine." He added, "in daily use, Lubuntu 20.04 has proven to me completely trouble-free as every Ubuntu flavour in fact ... [the] Lubuntu team has successfully made the transition to a modern, still lightweight and minimal desktop environment. LXDE looks like [it has been] abandoned and it is a good thing to move away [from it] to an active project."
A 29 May 2020 review in Full Circle magazine concluded, "Lubuntu 20.04 LTS completes the two-year development cycle, consisting of three standard releases leading to this LTS release. Overall this represents the culmination of a development project that commenced in 2014 to create a new Qt-based desktop for Lubuntu: LXQt. The process has taken much longer than was forecast six years ago, but was worth the wait. This first LTS release is stable, smooth, elegant, and a real joy to use. This is the best Lubuntu release so far."
Lubuntu 20.10
This standard release was made available on 22 October 2020. On 16 August 2018, the Lubuntu development team announced plans to port Openbox to Mir in time for Lubuntu 20.10 to allow Lubuntu to move away from the X display server to an implementation of Wayland instead.
This release used LXQt 0.15.0, based on Qt 5.14.2. Improvements include adding a tree view to show pending updates in the Lubuntu update notifier and an updated plymouth theme, with the default wallpaper an alternate one from the 20.04 wallpaper competition.
In a rundown on the Ubuntu flavors, DeBugPoint noted, "Lubuntu 20.10 based on Groovy Gorilla ... is perfect for low-end hardware and lightweight systems while being stable [due to being based upon] Ubuntu".
A review published on Christmas Day 2020 in Full Circle magazine concluded, "Lubuntu 20.10 introduces very little that is new over 20.04 LTS. I actually think this is a good sign, as 20.04 LTS is a superb operating system and doesn’t really need much improvement. If this development cycle leads to the next Lubuntu LTS version having just a few minor improvements over 20.04, then, personally, I will be very happy with the results. An updated version of FeatherPad would be nice by then, however."
Lubuntu 21.04
Lubuntu 21.04 is a standard release, made on 22 April 2021.
This version introduced LXQt 0.16.0 based upon Qt 5.15.2. A new application, LXQt Archiver 0.3.0, based on Engrampa, was included. There was also a new version of the Lubuntu update notifier that includes a tree view with packages and versions for upgrade listed, plus new artwork.
A review in Full Circle magazine stated, "this second standard release in the development cycle leading to the next LTS includes just small, cautious changes. This is really how operating system development should be approached, particularly when you have a loyal user base who are happy with how everything works and are generally not demanding big changes."
Lubuntu 21.10
Lubuntu 21.10 is a standard release that was released on 14 October 2021.
This release uses the LXQt 0.17.0 desktop, based on Qt 5.15.2. Unlike Ubuntu 21.10, which moved to a snap package, Lubuntu retained the .deb package for the Firefox browser for this release.
Applications
Lubuntu LXDE
The LXDE versions of Lubuntu (18.04 LTS and earlier) included the following applications:
User applications:
Abiword – word processor
Audacious – music player
Evince – PDF reader
File-roller – archiver
Firefox – web browser
Galculator – calculator
GDebi – package installer
GNOME Software – package manager
Gnumeric – spreadsheet
Guvcview – webcam
LightDM – log-in manager
Light-Locker – screen locker
MPlayer – video player
mtPaint – graphics painting
Pidgin – instant messenger and microblogging
scrot – screenshot tool
Simple Scan – scanning
Sylpheed – email client
Synaptic and Lubuntu Software Center – package managers
Transmission – bittorrent client
Update Manager
Startup Disk Creator – USB ISO writer
Wget – command line webpage downloader
XChat – IRC
Xfburn – CD burner
Xpad – notetaking
From LXDE:
GPicView – image viewer
Leafpad – text editor
LXAppearance
LXDE Common
LXDM
LXLauncher
LXPanel
LXRandr
LXSession
LXSession Edit
LXShortCut
LXTask
LXTerminal
Menu-Cache
Openbox – window manager
PCManFM – file manager
Up to and including 18.04 LTS, Lubuntu also had access to the Ubuntu software repositories through the Lubuntu Software Center, the Synaptic package manager and APT allowing the installation of any applications available to Ubuntu.
Lubuntu LXQt
The LXQt versions of Lubuntu (18.10 and later) include the following applications:
Internet Applications
Firefox – web browser
Qtransmission – bit torrent client
Quassel – internet relay chat
Bluedevil – bluetooth connector
Trojitá – email client
Office Applications
LibreOffice – office suite
LibreOffice Calc – spread sheet
LibreOffice Impress – presentation
LibreOffice Math – math formula writer
LibreOffice Writer – word processor
qpdfview – PDF viewer
Graphics Applications
ImageMagick – image manipulation
lximage – image viewer
ScreenGrab – screenshot tool
Skanlite – scanning application
Accessories
2048-qt – a lightweight implementation of the 2048 video game
LXQt Archiver – archive utility
Discover Software Center
FeatherPad – lightweight text editor
Kcalc – scientific calculator
KDE partition manager
Muon package manager
Noblenote – note taking application
PCManFM-Qt – file manager
Qlipper – clipboard manager
Sound and Video
K3b – CD and DVD burning
PulseAudio Volume Control
VLC media player
From 18.10, Lubuntu also has access to the Ubuntu software repositories through the Discover Software Center, the Synaptic package manager and APT allowing the installation of any applications available to Ubuntu.
Table of releases
Timeline
See also
Similar Linux distributions
Manjaro Linux - a similar project based on Arch Linux with various desktops to choose from, including LXDE.
Peppermint Linux OS - based on Lubuntu with Linux Mint's utilities.
Other links
Computer technology for developing areas
Free-culture movement
Linux user group
List of Ubuntu-based distributions
Open-source software
Ubuntu Professional Certification
References
External links
Lubuntu Documentation
Lubuntu - Ubuntu Wiki
2010 software
LXQt
LXDE
Operating system distributions bootable from read-only media
Ubuntu derivatives
Linux distributions
Pica8
Pica8, Inc. is a computer networking company headquartered in Palo Alto, California, United States. Pica8 is a vendor of open-standards-based network operating systems for white box switches, delivering software-defined networking (SDN) solutions for datacenter and cloud computing environments and traditional L2/L3 solutions for large enterprise customers. The company's main product is PICOS, a Linux-based network operating system supporting L2/L3 switching and OpenFlow, shipped as standalone software that can be loaded onto a range of commodity ("white box") 1/10/40/100 Gigabit Ethernet switches purchased from original design manufacturers (ODMs).
The company's approach is to combine commodity network hardware (from manufacturers such as Accton, Foxconn and Quanta) with Debian Linux, L2/L3 protocol stacks, a full enterprise feature set, an OpenFlow controller and Open vSwitch (OVS) to create both more "democratic" SDN solutions, priced competitively against conventional embedded switches, and more flexible, scalable disaggregated white box networking solutions for enterprises.
History
The company was founded in 2009. It launched a family of OpenFlow-enabled Ethernet switches in August 2009 and has been selling products ever since.
In 2010, Pica8 was selling 48-port gigabit Ethernet and 10-gigabit Ethernet switches at half the price of comparable products from Force10 and Arista Networks. It achieved this by combining open source software with merchant ASICs (from companies like Broadcom, Marvell, and Intel/Fulcrum) on switches from "white-box vendors".
In July 2011, Pica8 added support for the open source "Indigo" OpenFlow stack from Big Switch Networks to its switches as an alternative stack. In November 2011 it embedded Open vSwitch (OVS), developed by Nicira, into its operating system PICOS to enable more sophisticated network management from inside the switch.
In October 2012 Pica8 raised $6.6m in Series A funding from VantagePoint Capital Partners to support its sales and product development. On 10 December 2012 the company exited stealth mode with the introduction of an SDN reference architecture aimed at cloud providers.
By 2013, Pica8 had about 100 customers, including large service providers and hosting companies such as Baidu, Yahoo! Japan and NTT Communications.
In December 2013, the company launched the Pica8 SDN Starter Kit, an "out-of-the-box" kit that includes an open-source network controller, a programmable network tap, an open-source network intrusion detection system, and other components meant to give customers a complete SDN solution, which would be quick to implement.
In April 2014 Pica8 claimed to be the first vendor to support the latest version 1.4 of OpenFlow and to have over 300 customers globally.
By 2018, Pica8 grew to over 1,000 customers in over 40 countries, announcing a broad push into the enterprise campus and branch office markets in January.
Products
PICOS
PICOS (formerly known as XorPlus) is a network operating system (NOS) that Pica8 has developed based on XORP, the eXtensible Open Router Platform. The operating system runs on an unmodified Linux kernel and is extended with a range of network and switching services.
PICOS includes a traditional Layer-2 / Layer-3 switching mode (L2/L3 Mode) and has support for OpenFlow protocol, standardized by the Open Networking Foundation (ONF), through Open vSwitch (OVS). OVS runs as a process on the Debian Linux distribution.
Additionally, PICOS has a hardware abstraction layer that lets it run atop networking ASICs from various switch silicon manufacturers. Therefore, the PICOS operating system can be unaware ("agnostic") about the underlying hardware and not tied to it.
PicaPilot
In addition to PICOS, Pica8 offers a second core technology solution called PicaPilot, which was announced in May 2018. PicaPilot is an automated white box switch configuration and management application that runs on Pica8-enabled switches alongside PICOS. Designed as a replacement for legacy Ethernet switch stacks and chassis switches, PicaPilot compresses dozens of access- and aggregation-layer leaf-spine topology switches into a single layer and allows them to be managed as a single logical switch with a single consolidated IP address.
CrossFlow
On 10 November 2014 Pica8 announced CrossFlow, a new feature in the PICOS NOS that enables network managers to integrate OpenFlow applications and business policies with existing layer 2/layer 3 networks. Users can run layer 2/layer 3 protocols and OpenFlow protocols on all the switch ports in a network at the same time. OpenFlow can be used for policy-driven applications to bring business logic to the network. The traditional network can optimize packet transport and performance with protocols, such as OSPF, Spanning Tree, and BGP.
Awards and recognitions
The 10 Coolest Networking Startups Of 2013 according to CRN (2013).
AlwaysOn OnDemand Companies to Watch (2013).
AlwaysOn OnDemand 100 Top Private Companies (2014).
AlwaysOn Global 250 Top Private Companies (2014, along with companies like Acquia, Couchbase, Dropbox, MongoDB).
See also
Linux
Network switch
Router
Packet switching
Circuit switching
Fully switched network
Cumulus Networks
Software-defined networking
Open Compute Project
Open-source computing hardware
References
Networking software companies
Networking hardware companies
Companies based in Palo Alto, California
Networking companies of the United States
Open Source Applications Foundation
The Open Source Applications Foundation (OSAF) was a non-profit organization founded in 2002 by Mitch Kapor whose purpose was to effect widespread adoption of free software/open-source software.
History
The foundation was established in 2002 by Mitch Kapor to effect widespread adoption of free and open-source software.
The 2007 book Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software documented the struggles of OSAF in building an open source calendar application, Chandler.
In January 2008 Mitch Kapor ended his involvement with the foundation, stepped down from the board, and provided transitional funding. In the restructure that followed, Katie Capps Parlante became acting president. There were at one time eleven employees with Sheila Mooney as president.
Projects
Chandler – a note-to-self organiser (personal information management (PIM) software) designed for personal and small-group task management and calendaring. Chandler Desktop and Chandler Hub (an instance of Chandler Server (Cosmo)) are complementary. Chandler is written in Python and runs on Linux, Mac OS X and Windows.
Cosmo (Chandler Server) – a Java-based content/calendar sharing server with a built-in rich web application client.
From 2005, OSAF participated in Google's Summer of Code programs by allowing several interns to work on OSAF projects during the summer.
OSAF Funds
Major contributors to OSAF included:
Mitch Kapor: US$5 million
Andrew W. Mellon Foundation: US$98,000
Andrew W. Mellon Foundation: US$1.5 million
Common Solutions Group: US$1.25 million
OSAF Mission
The mission of the OSAF was stated this way:
Create and gain wide adoption of open source application software of uncompromising quality.
Carry forward the vision of Vannevar Bush, Doug Engelbart, and Ted Nelson of the computer as a medium for communication, collaboration, and coordination.
Design a new application to manage personal information including notes, mail, tasks, appointments and events, contacts, documents and other personal resources.
Enable sharing with colleagues, friends and family. In particular, meet the unique and under-served needs of small group collaboration.
Demonstrate that open source software *can* serve a general audience in the consumer market.
Offer a choice of platforms and full interoperability amongst Windows, Macintosh, and Linux versions.
Leverage our resources by using an open source model of development.
External links
Open Source Applications Foundation Site
The Chandler Project Website
Free software project foundations in the United States
OpenStep
OpenStep is a defunct object-oriented application programming interface (API) specification for a legacy object-oriented operating system, with the basic goal of offering a NeXTSTEP-like environment on non-NeXTSTEP operating systems. OpenStep was principally developed by NeXT with Sun Microsystems, to allow advanced application development on Sun's operating systems, specifically Solaris. NeXT produced a version of OpenStep for its own Mach-based Unix, stylized as OPENSTEP, as well as a version for Windows NT. The software libraries that shipped with OPENSTEP are a superset of the original OpenStep specification, including many features from the original NeXTSTEP.
History
Early in Sun Microsystems' history, Sun machines had been programmed at a relatively low level, making calls directly to the underlying Unix operating system and producing a graphical user interface (GUI) using the X11 system. This led to complex programming even for simple projects. An attempt to address this with an object-oriented programming model was made in the mid-1980s with Sun's NeWS windowing system, but the combination of a complex application programming interface (API) and generally poor performance led to little real-world use and the system was eventually abandoned.
Sun then began looking for other options. Taligent was considered to be a competitor in the operating system and object markets, and Microsoft's Cairo was at least a consideration, even without any product releases from either. Taligent's theoretical newness was often compared to NeXT's older but mature and commercially established platform. Sun held exploratory meetings with Taligent before deciding upon building out its object application framework OpenStep in partnership with NeXT as a "preemptive move against Taligent and Cairo". Bud Tribble, a founding designer of the Macintosh and of NeXTStep, was now SunSoft's Vice President of Object Products to lead this decision. The 1993 partnership included a $10 million investment from Sun into NeXT. The deal was described as "the first unadulterated piece of good news in the NeXT community in the last four years".
The basic concept was to take a cut-down version of the NeXTSTEP operating system's object layers and adapt them to run on Sun's Solaris operating system, more specifically, Solaris on SPARC-based hardware. Most of the OpenStep effort was to strip away those portions of NeXTSTEP that depended on Mach or NeXT-specific hardware being present. This resulted in a smaller system that consisted primarily of Display PostScript, the Objective-C runtime and compilers, and the majority of the NeXTSTEP Objective-C libraries. Not included was the basic operating system, or the lower-level display system.
Steve Jobs said "We are ahead today, but the race is far from over. ... [In 1996,] Cairo will be very close behind, and Taligent will be very far behind." Sun's CEO Scott McNealy said, "We have no insurance policy. We have made a firm one-company, one-architecture decision, not like Taligent getting a trophy spouse by signing up HP."
The first draft of the API was published by NeXT in mid 1994. Later that year they released an OpenStep compliant version of NeXTSTEP as OPENSTEP, supported on several of their platforms as well as Sun SPARC systems. NeXT submitted the OpenStep specification to the industry's object standards bodies. The official OpenStep API, published in September 1994, was the first to split the API between Foundation and Application Kit and the first to use the "NS" prefix. Early versions of NeXTSTEP use an "NX" prefix and contain only the Application Kit, relying on standard Unix libc types for low-level data structures. OPENSTEP remained NeXT's primary operating system product until the company was purchased by Apple Computer in 1997. OPENSTEP was then combined with technologies from the existing classic Mac OS to produce Mac OS X. iPhone and iPad's iOS is also a descendant of OPENSTEP, but targeted at touch devices.
Sun originally adopted the OpenStep environment with the intent of complementing Sun's CORBA-compliant object system, Solaris NEO (formerly known as Project DOE), by providing an object-oriented user interface toolkit to complement the object-oriented CORBA plumbing. The port involved integrating the OpenStep AppKit with the Display PostScript layer of the Sun X11 server, making the AppKit tolerant of multi-threaded code (as Project DOE was inherently heavily multi-threaded), implementing a Solaris daemon to simulate the behavior of Mach ports, extending the SunPro C++ compiler to support Objective-C using NeXT's ObjC runtime, writing an X11 window manager to implement the NeXTSTEP look and feel as much as possible, and integrating the NeXT development tools, such as Project Builder and Interface Builder, with the SunPro compiler. In order to provide a complete end-user environment, Sun also ported the NeXTSTEP-3.3 versions of several end-user applications, including Mail.app, Preview.app, Edit.app, Workspace Manager, and the Dock.
The OpenStep and CORBA parts of the products were later split, and NEO was released in late 1995 without the OpenStep environment. In March 1996, Sun announced Joe, a product to integrate NEO with Java. Sun shipped a beta release of the OpenStep environment for Solaris on July 22, 1996, and made it freely available for download in August 1996 for non-commercial use, and for sale in September 1996. OpenStep/Solaris was shipped only for the SPARC architecture.
Description
OpenStep differs from NeXTSTEP in various ways:
NeXTSTEP is an operating system, whereas OpenStep is an API.
Unlike NeXTSTEP, OpenStep does not require the Mach kernel.
Each version of NeXTSTEP has a specific endianness: big endian for Motorola 68K processors, and little endian for x86 processors, for example. OpenStep is "endian-free".
OpenStep introduces new classes and memory management capabilities.
The OpenStep API specification defines three major components: Foundation Kit, the software framework; Application Kit, the GUI and graphics front-end; and Display PostScript, a 2D graphics system (for drawing windows and other graphics on the screen).
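The flavor of the Objective-C side of the API can be conveyed with a short sketch. The example below is not drawn from the specification itself; it is a minimal illustration, assuming only Foundation Kit classes that the OpenStep API defines (NSAutoreleasePool, NSArray, NSString, NSEnumerator) and the reference-counted memory management the API introduced, and it should build against OpenStep descendants such as GNUstep or Apple's Foundation.

#import <Foundation/Foundation.h>

int main(void)
{
    /* OpenStep-style reference counting: autoreleased objects are
       freed when the enclosing pool is released. */
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    NSArray *components = [NSArray arrayWithObjects:
        @"Foundation Kit", @"Application Kit", @"Display PostScript", nil];
    NSEnumerator *e = [components objectEnumerator];
    NSString *name;

    while ((name = [e nextObject]) != nil) {
        /* Logs each of the three major OpenStep components. */
        NSLog(@"OpenStep component: %@", name);
    }

    [pool release];
    return 0;
}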
Building on OpenStep
The standardization on OpenStep also allowed for the creation of several new library packages that were delivered on the OPENSTEP platform. Unlike the operating system as a whole, these packages were designed to run stand-alone on practically any operating system. The idea was to use OpenStep code as a basis for network-wide applications running across different platforms, as opposed to using CORBA or some other system.
Primary among these packages was Portable Distributed Objects (PDO). PDO was essentially an even more "stripped down" version of OpenStep containing only the Foundation Kit technologies, combined with new libraries to provide remote invocation with very little code. Unlike OpenStep, which defined an operating system that applications would run in, under PDO the libraries were compiled into the application itself, creating a stand-alone "native" application for a particular platform. PDO was small enough to be easily portable, and versions were released for all major server vendors.
In the mid-1990s, NeXT staff took to writing in to magazines with solutions to the problems posed in CORBA articles, implemented in a few lines of code, whereas the original articles would fill several pages. Even though using PDO required the installation of a considerable amount of supporting code (Objective-C and the libraries), PDO applications were nevertheless considerably smaller than similar CORBA solutions, typically about one-half to one-third the size.
The similar D'OLE provided the same types of services, but presented the resulting objects as COM objects, with the goal of allowing programmers to create COM services running on high-powered platforms, called from Microsoft Windows applications. For instance one could develop a high-powered financial modeling application using D'OLE, and then call it directly from within Microsoft Excel. When D'OLE was first released, OLE by itself only communicated between applications running on a single machine. PDO enabled NeXT to demonstrate Excel talking to other Microsoft applications across a network before Microsoft themselves were able to implement this functionality (DCOM).
Another package developed on OpenStep was Enterprise Objects Framework (EOF), a tremendously powerful (for the time) object-relational mapping product. EOF became very popular in the enterprise market, notably in the financial sector where OPENSTEP caused something of a minor revolution.
Implementations
OPENSTEP for Mach
NeXT's first operating system was NeXTSTEP, a sophisticated Mach-UNIX based operating system that originally ran only on NeXT's Motorola 68k-based workstations and that was then ported to run on 32-bit Intel x86-based "IBM-compatible" personal computers, PA-RISC-based workstations from Hewlett-Packard, and SPARC-based workstations from Sun Microsystems.
NeXT completed an implementation of OpenStep on their existing Mach-based OS and called it OPENSTEP for Mach 4.0 (July, 1996), 4.1 (December, 1996), and 4.2 (January, 1997). It was, for all intents and purposes, NeXTSTEP 4.0, and still retained flagship NeXTSTEP technologies (such as DPS, UNIX underpinnings, and user interface characteristics like the Dock and Shelf), along with the classic NeXTSTEP user interface and styles. OPENSTEP for Mach was further improved, in comparison to NeXTSTEP 3.3, with vastly improved driver support; however, the environment for actually writing drivers was changed with the introduction of the object-oriented DriverKit.
OPENSTEP for Mach supported Intel x86-based PCs, Sun's SPARC workstations, and NeXT's own 68k-based architectures, while the HP PA-RISC version was dropped. These versions continued to run on the underlying Mach-based OS used in NeXTSTEP. OPENSTEP for Mach became NeXT's primary OS from 1995 on, and was used mainly on the Intel platform. In addition to being a complete OpenStep implementation, the system was delivered with a complete set of NeXTSTEP libraries for backward compatibility. This was an easy thing to do in OpenStep due to library versioning, and OPENSTEP did not suffer from bloat because of it.
Solaris OpenStep
In addition to the OPENSTEP for Mach port for SPARC, Sun and NeXT developed an OpenStep compliant set of frameworks to run on Sun's Solaris operating system. After developing Solaris OpenStep, Sun lost interest in OpenStep and shifted its attention toward Java. As a virtual machine development environment, Java served as a direct competitor to OpenStep.
OPENSTEP Enterprise
NeXT also delivered an implementation running on top of Windows NT 4.0 called OPENSTEP Enterprise (often abbreviated OSE). This was an unintentional demonstration of the true nature of the portability of programs created under the OpenStep specification. Programs for OPENSTEP for Mach could be ported to OSE with little difficulty. This allowed their existing customer base to continue using their tools and applications, but running them on Windows, to which many of them were in the process of switching. Never a clean match from the UI perspective, probably due to OPENSTEP's routing of window graphics through the Display PostScript server—which was also ported to Windows—OSE nevertheless managed to work fairly well and extended OpenStep's commercial lifespan.
OPENSTEP and OSE had two revisions (and one major one that was never released) before NeXT was purchased by Apple in 1997.
Rhapsody, Mac OS X Server 1.0
After acquiring NeXT, Apple intended to ship Rhapsody as a reworked version of OPENSTEP for Mach for both the Mac and standard PCs. Rhapsody was OPENSTEP for Mach with a Copland appearance from Mac OS 8 and support for Java and Apple's own technologies, including ColorSync and QuickTime; it could be regarded as OPENSTEP 5. Two developer versions of Rhapsody were released, known as Developer Preview 1 and 2; these ran on a limited subset of both Intel and PowerPC hardware. Mac OS X Server 1.0 was the first commercial release of this operating system, and was delivered exclusively for PowerPC Mac hardware.
Darwin, Mac OS X 10.0 and later
After replacing the Display PostScript WindowServer with Quartz, and responding to developers by including better backward compatibility for classic Mac OS applications through the addition of Carbon, Apple released Mac OS X and Mac OS X Server, starting at version 10.0; Mac OS X is now named macOS.
macOS's primary programming environment is essentially OpenStep (with certain additions such as XML property lists and URL classes for Internet connections) with macOS ports of the development libraries and tools, now called Cocoa.
macOS has since become the single most popular desktop Unix-like operating system in the world, although macOS is no longer an OpenStep compliant operating system.
GNUstep
GNUstep, a free software implementation of the NeXT libraries, began at the time of NeXTSTEP, predating OPENSTEP. While OPENSTEP and OSE were purchased by Apple, who effectively ended the commercial development of implementing OpenStep for other platforms, GNUstep is an ongoing open source project aiming to create a portable, free software implementation of the Cocoa/OPENSTEP libraries.
GNUstep also features a fully functional development environment, reimplementations of some of the newer innovations from macOS's Cocoa framework, as well as its own extensions to the API.
See also
NeXT character set
Multi-architecture binary
References
External links
OpenStep Specification
SUNs Workshop OpenStep AnswerBook
Rich Burridge's Weblog on OpenStep at SUN
The NeXTonian
NeXTComputers.org
OpenMagic 1.0 for Sparc by Luke Th. Bullock
NeXTanswers archive
Application programming interfaces
Berkeley Software Distribution
macOS APIs
NeXT
Solaris software | Operating System (OS) | 881 |
SystemStarter
SystemStarter is a system program in Mac OS X, started by Mac OS X's BSD-style init prior to Mac OS X v10.4 and by launchd in Mac OS X v10.4 and later releases, that starts system processes specified by a set of property lists. SystemStarter was originally written by Wilfredo Sanchez for Mac OS X. In Mac OS X v10.4, it was deprecated in favor of launchd, and kept in the system only to start system processes not yet converted to use launchd.
SystemStarter appears to have been removed from OS X 10.10 and later.
References
External links
SystemStarter manual page
BSDCON 2002 Paper on SystemStarter
MacOS
Process (computing)
Unix process- and task-management-related software | Operating System (OS) | 882 |
Open Desktop Workstation
The Open Desktop Workstation, also referred to as the ODW, is a PowerPC-based computer by San Antonio-based Genesi. The ODW has an interchangeable CPU card allowing for a wide range of PowerPC microprocessors from IBM and Freescale Semiconductor.
It is a standardized version of the Pegasos II. It was the first open-source-based PowerPC computer and gave PowerPC a host/target development environment. Genesi has released the complete specification (design and component listing) free of charge. The ODW-derived Home Media Center won the Best in Show award at the Freescale Technology Forum in 2005. It also features an ATI certification and a "Ready for IBM Technology" certification.
It supports a variety of operating systems such as MorphOS, Linux, QNX and OpenSolaris. Manufacturing of the ODW has been discontinued in favour of the EFIKA.
Specification
Freescale 1.0 GHz MPC7447 processor
512 MB DDR RAM (two slots, up to 2 GB)
80 GB ATA100 hard disk
Dual-Layer DVD±RW Drive
Floppy disk support
3× PCI slots
AGP based ATI Radeon 9250 graphics (DVI, VGA and S-Video out)
4× USB
PS/2 mouse and keyboard support
3× FireWire 400 (two external)
2× Ethernet ports (100 Mbit/s and 1 Gbit/s)
AC'97 sound - in/out, analog and digital (S/PDIF)
PC game/MIDI-port
Parallel and serial ports (supporting IrDA)
MicroATX motherboard (236×172 mm)
Small Footprint Case - (92×310×400 mm)
References
External links
Genesi's ODW page
ODW specification at PowerDeveloper.org
Linux resources for ODW at Freescale
PowerPC mainboards | Operating System (OS) | 883 |
Cross-platform virtualization
Cross-platform virtualization is a form of computer virtualization that allows software compiled for a specific instruction set and operating system to run unmodified on computers with different CPUs and/or operating systems, through a combination of dynamic binary translation and operating system call mapping.
Since the software runs on a virtualized equivalent of the original computer, it does not require recompilation or porting, thus saving time and development resources. However, the processing overhead of binary translation and call mapping imposes a performance penalty, when compared to natively-compiled software. For this reason, cross-platform virtualization may be used as a temporary solution until resources are available to port the software. Alternatively, cross-platform virtualization may be used to support legacy code, which running on a newer and faster machine still maintains adequate performance even with virtualization overhead.
By creating an abstraction layer capable of running software compiled for a different computer system, cross-platform virtualization exemplifies the Popek and Goldberg virtualization requirements outlined by Gerald J. Popek and Robert P. Goldberg in their 1974 article "Formal Requirements for Virtualizable Third Generation Architectures". Cross-platform virtualization is distinct from simple emulation and binary translation - which involve the direct translation of one instruction set to another - since the inclusion of operating system call mapping provides a more complete virtualized environment. Cross-platform virtualization is also complementary to server virtualization and desktop virtualization solutions, since these are typically constrained to a single instruction set, such as x86 or Power ISA. Modern variants of cross-platform virtualization may employ hardware acceleration techniques to offset some of the cost incurred in the guest-to-host system translation.
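As an illustration of the operating-system call mapping half of this approach, the following C sketch shows how a translator might dispatch a guest system call to a host-side emulation routine. The guest syscall numbers, handler names, and register-argument convention here are entirely hypothetical and are not drawn from any particular virtualization product.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical guest system-call numbers; real numbers depend on the guest OS and ABI. */
enum { GUEST_SYS_EXIT = 1, GUEST_SYS_WRITE = 4 };

/* A host-side handler receives the guest's register arguments and emulates the call. */
typedef int64_t (*syscall_handler)(int64_t a0, int64_t a1, int64_t a2);

static int64_t emulate_write(int64_t fd, int64_t buf, int64_t len) {
    /* In a real translator, 'buf' would be a guest address that must be
       translated to a host pointer before calling the host's own write call. */
    printf("guest write(fd=%lld, len=%lld)\n", (long long)fd, (long long)len);
    return len;
}

static int64_t emulate_exit(int64_t code, int64_t unused1, int64_t unused2) {
    (void)unused1; (void)unused2;
    printf("guest exit(%lld)\n", (long long)code);
    return 0;
}

/* Mapping table from guest syscall number to host emulation routine. */
static syscall_handler handlers[64];

int main(void) {
    handlers[GUEST_SYS_WRITE] = emulate_write;
    handlers[GUEST_SYS_EXIT]  = emulate_exit;

    /* The binary translator would invoke the table entry whenever translated
       code executes the guest's system-call instruction. */
    handlers[GUEST_SYS_WRITE](1, 0x1000, 12);
    return 0;
}

The table-driven design keeps the binary translation engine independent of any particular guest operating system: supporting another guest amounts to installing a different set of handlers.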
See also
Instruction set simulator
Platform virtualization
Virtual machine
Emulator
Porting
Cross-platform
References
Hardware virtualization | Operating System (OS) | 884 |
Control Program Facility
Control Program Facility (CPF) was the operating system for the IBM System/38. CPF represented an independent line of development at IBM Rochester, and was unrelated to the earlier and more widely used System Support Program operating system. CPF evolved into the OS/400 operating system, which was originally known as XPF (Extended CPF).
While CPF is considered to be the operating system of the System/38, much of the hardware and resource management of the platform is implemented in the System/38's Horizontal and Vertical Microcode.
Description of the libraries
QGPL – general purpose library
QSYS – system library
QSPL – spooling library
QTEMP – temporary library
QSRV – system service library
QRECOVERY – system recovery library
Data storage
In most computers prior to the System/38, and in most modern ones, data on disk was stored in separate logical files. When data was added to a file, it was written to a sector dedicated to that file or, if the sector was full, to a new sector somewhere else.
The System/38 adopted the single-level store architecture, where main storage and disk storage are organized as one, from the abandoned IBM Future Systems project (FS). Every piece of data was stored separately and could be put anywhere on the system. There was no such thing as a physically contiguous file on disk, and the operating system managed the storage and recall of all data elements.
Capability-based addressing
CPF was an example of a commercially-available Capability-based operating system. System/38 was one of the few commercial computers with capability-based addressing. Capability-based addressing was removed in the follow-on OS/400 operating system.
Distributed Data Management
In 1986, support for Distributed Data Management Architecture (DDM) was announced for the System/38. Such middleware, in the context of a distributed system, is the software layer that lies between the operating system and applications. Distributed Data Management Architecture defines an environment for sharing data. This enables System/38 programs to create, manage, and access record-oriented files on remote System/36, System/38, and IBM mainframe systems running CICS. It also allows programs on remote System/36 and System/38 computers to create, manage, and access files of a System/38.
Programming languages
Languages supported on the System/38 included RPG III, COBOL, BASIC, and PL/I. CPF also implements the Control Language for System/38.
References
External links
Control Program Facility Concepts Manual (PDF file)
Control Program Facility Programmer's Guide (PDF file)
IBM operating systems
Computer-related introductions in 1978 | Operating System (OS) | 885 |
DOCS (software)
DOCS (Display Operator Console Support) was a software package for IBM mainframes by CFS Inc., enabling access to the system console using 3270-compatible terminals.
Computer operators communicated with IBM mainframe computers using an electro-mechanical typewriter-like console that came standard on most IBM 360 and 370 computers, except for a few upper-end models that offered video consoles and the Model 20, which came standard without a console.
The majority of smaller and less expensive IBM 360s and 370s came equipped with these ruggedized Selectric keyboard devices. The Selectric was a major step up from the teletypes (TTY) associated with Unix and smaller systems, but still clunky. The video consoles provided with certain models were not considered particularly user friendly, and they ignored two-thirds of IBM's mainframe market: DOS and its VSE descendants.
DOCS replaced or supplanted the typewriter interface with a video screen. In practice, it worked a little like present-day instant messenger programs (ICQ, QQ, AIM, Adium, iChat, etc.), with a data entry line at the bottom and messages scrolling in real time up the screen. The commands were otherwise identical.
DOCS was available for DOS, DOS/VS, DOS/VSE, and came packaged with third party operating systems, such as EDOS from The Computer Software Company, later acquired by Nixdorf.
Platforms
Software
The product ran under several DOS-related platforms:
DOS/VS
DOS/VSE
DOS, modified
EDOS
vDOS
Hardware
Several vendors offered DOCS as part of their OS:
Amdahl
Fujitsu
Hitachi
Magnuson
RCA
Development
DOCS was developed by CFS, Inc. of Brookline, Massachusetts at the Kayser-Roth data center in Whitman, Massachusetts. Dick Goran wrote the video interface. Leigh Lundin wrote the operating system interface and transcript recorder.
Fx
DOCS required a dedicated partition. With DOS having only three partitions and DOS/VS seven, giving up a partition to DOCS was a significant practical limitation.
Leigh Lundin designed Fx, a pseudo-partition that relieved the user from relinquishing a working partition. Fx appeared in the DOS/VS version of SDI's Grasp as F0.
Marketing
DOCS was sold in North America by CFS, Inc, Brookline, Ma.
For overseas sales, CFS sold through both mail order and local vendors. The product was also embedded in third-party operating system packages, such as EDOS and vDOS.
References
Device drivers
IBM mainframe software | Operating System (OS) | 886 |
OpenBSD Cryptographic Framework
The OpenBSD Cryptographic Framework (OCF) is a service virtualization layer for the uniform management of cryptographic hardware by an operating system. It is part of the OpenBSD Project, having been included in the operating system since OpenBSD 2.8 (December, 2000). Like other OpenBSD projects such as OpenSSH, it has been ported to other systems based on Berkeley Unix such as FreeBSD and NetBSD, and to Solaris and Linux. One of the Linux ports is supported by Intel for use with its proprietary cryptographic software and hardware to provide hardware-accelerated SSL encryption for the open source Apache HTTP Server.
Background
Cryptography is computationally intensive and is used in many different contexts. Software implementations often serve as a bottleneck to information flow or increase network latency. Specialist hardware such as cryptographic accelerators can mitigate the bottleneck problem by introducing parallelism. Certain kinds of hardware, hardware random number generators, can also produce randomness more reliably than a pseudo-random software algorithm by exploiting the entropy of natural events.
Unlike graphics applications such as games and film processing where similar hardware accelerators are in common use and have strong operating system support, the use of hardware in cryptography has had relatively low uptake. By the late 1990s, there was a need for a uniform operating system layer to mediate between cryptographic hardware and application software that used it. The lack of this layer led to the production of applications that were hard-coded to work with one or a very small range of cryptographic accelerators.
The OpenBSD Project, which has a history of integrating strong, carefully audited cryptography into its operating system's core, produced a framework for the provision of cryptographic hardware acceleration as an operating system service.
/dev/crypto
Application-level support is provided through the pseudo-device , which provides access to the hardware drivers through a standard ioctl interface. This simplifies the writing of applications and removes the need for the application programmer to understand the operational details of the actual hardware that will be used.
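A minimal C sketch of how an application might use this interface is shown below. It assumes the session and operation ioctls (CIOCGSESSION, CIOCCRYPT, CIOCFSESSION) and the session_op/crypt_op structures of the cryptodev interface; the exact header location, field types, and session setup differ between OpenBSD releases, FreeBSD, and the Linux port, so this should be read as an illustration rather than a portable program.

/* Illustrative use of /dev/crypto via the cryptodev ioctl interface (a sketch;
   details vary between OCF ports and releases). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <crypto/cryptodev.h>

int main(void) {
    unsigned char key[16] = { 0 };            /* a real key would go here */
    unsigned char iv[16]  = { 0 };
    unsigned char buf[16] = "attack at dawn!";
    unsigned char out[16];

    int fd = open("/dev/crypto", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Open a session; the kernel picks a hardware driver if one is available,
       otherwise it falls back to its software implementation. */
    struct session_op sess;
    memset(&sess, 0, sizeof(sess));
    sess.cipher = CRYPTO_AES_CBC;
    sess.keylen = sizeof(key);
    sess.key    = (void *)key;
    if (ioctl(fd, CIOCGSESSION, &sess) < 0) { perror("CIOCGSESSION"); return 1; }

    /* Encrypt one block using the session just created. */
    struct crypt_op op;
    memset(&op, 0, sizeof(op));
    op.ses = sess.ses;
    op.op  = COP_ENCRYPT;
    op.len = sizeof(buf);
    op.src = (void *)buf;
    op.dst = (void *)out;
    op.iv  = (void *)iv;
    if (ioctl(fd, CIOCCRYPT, &op) < 0) { perror("CIOCCRYPT"); return 1; }

    ioctl(fd, CIOCFSESSION, &sess.ses);       /* tear down the session */
    close(fd);
    return 0;
}

Because the application only names an algorithm and hands over buffers, the same code runs unchanged whether the work is done by an accelerator card or by the kernel's software fallback.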
Implications for other subsystems
The OpenBSD implementation of IPsec, the packet-level encryption protocol, was altered so that packets can be decoded in batches, which improves throughput. One rationale for this is to maximize efficiency of hardware usage—larger batches reduce the bus transmission overhead—but in practice the IPsec developers have found that this strategy improves the efficiency even of software implementations.
Many Intel firmware hubs on i386 motherboards provide a hardware random number generator, and where possible this facility is used to provide entropy in IPsec.
Because OpenSSL uses the OCF, systems with hardware that supports the RSA, DH, or DSA cryptographic protocols will automatically use the hardware without any modification of the software.
Backdoor allegations investigated
On 11 December 2010, a former government contractor named Gregory Perry sent an email to OpenBSD project leader Theo de Raadt alleging that the FBI had paid some OpenBSD ex-developers 10 years previously to compromise the security of the system, inserting "a number of backdoors and side channel key leaking mechanisms into the OCF". Theo de Raadt made the email public on 14 December by forwarding it to the openbsd-tech mailing list and suggested an audit of the IPsec codebase. De Raadt's response was skeptical of the report and he invited all developers to independently review the relevant code. In the weeks that followed, bugs were fixed but no evidence of backdoors was found.
Similarly named Solaris product
Oracle's proprietary operating system Solaris (originally developed by Sun) features an unrelated product called the Solaris Cryptographic Framework, a plug-in system for cryptographic algorithms and hardware.
See also
OpenBSD security features
Crypto API (Linux)
Microsoft CryptoAPI
References
External links
Cryptography in OpenBSD, overview document provided by the OpenBSD project.
OCF Linux Project.
Cryptographic Framework
Applications of cryptography
Cryptographic software | Operating System (OS) | 887 |
Computación y Sistemas
Computación y Sistemas is a peer-reviewed journal on Artificial Intelligence and Computing Science research, providing a recognized forum for research in the area of computer science in Latin America. It was established in 1997 by Professor Adolfo Guzmán Arenas and is published by Instituto Politécnico Nacional with the support of CONACyT.
Abstracting and indexing
The journal is abstracted and indexed in DBLP, Scielo, and Scopus.
Computación y Sistemas was included in the CONACyT registry of excellence until 2017.
Open Access Policy
Computación y Sistemas provides immediate open access to its peer-reviewed content.
Former Editors in Chief
Juan Humberto Sosa Azuela (IPN), Isaac Scherson (University of California, Irvine) & Ulises Cortés (KEMLg-UPC) (2009–2012)
Juan Luis Díaz de León (IPN), Jean Paul Frédéric Serra (Centre de Morphologie Mathématique) & Gerhard X. Ritter (University of Florida) (2004–2009)
George A. Bekey (USC), Juan Luis Díaz de León (IPN), Jean Paul Frédéric Serra (Centre de Morphologie Mathématique), Gerhard X. Ritter (University of Florida) & Adolfo Steiger Garçao (New University of Lisbon) (2003–2004)
George A. Bekey (USC), Adolfo Guzmán Arenas (IPN), Ramón López de Mántaras (CSIC) & Adolfo Steiger Garçao (New University of Lisbon) (1997 -2003)
References
External links
CONACyT registry of excellence magazines
Evaluation Criteria
http://scielo.unam.mx/scielo.php/script_sci_serial/pid_1405-5546/lng_en/nrm_iso
Computación y Sistemas
English-language journals
Quarterly journals | Operating System (OS) | 888 |
System integrity
In telecommunications, the term system integrity has the following meanings:
That condition of a system wherein its mandated operational and technical parameters are within the prescribed limits.
The quality of an AIS (automated information system) when it performs its intended function in an unimpaired manner, free from deliberate or inadvertent unauthorized manipulation of the system.
The state that exists when there is complete assurance that under all conditions an IT system is based on the logical correctness and reliability of the operating system, the logical completeness of the hardware and software that implement the protection mechanisms, and data integrity.
References
National Information Systems Security Glossary
Telecommunications systems
Technology systems
Systems engineering
Computer security
Reliability engineering | Operating System (OS) | 889 |
GNU Guix
GNU Guix () is a functional cross-platform package manager and a tool to instantiate and manage Unix-like operating systems, based on the Nix package manager. Configuration and package recipes are written in Guile Scheme. GNU Guix is the default package manager of the GNU Guix System distribution.
Differing from traditional package managers, Guix (like Nix) utilizes a purely functional deployment model where software is installed into unique directories generated through cryptographic hashes. Each hash accounts for all of a package's dependencies. This solves the problem of dependency hell, allows multiple versions of the same software to coexist, and makes packages portable and reproducible. Performing scientific computations in a Guix setup has been proposed as a promising response to the replication crisis.
The development of GNU Guix is intertwined with the GNU Guix System, an installable operating system distribution using the Linux-libre kernel and GNU Shepherd init system.
General features
Guix packages are defined through functional Guile Scheme APIs specifically designed for package management. Dependencies are tracked directly in this language through special values called "derivations" which are evaluated by the Guix daemon lazily. Guix keeps track of these references automatically so that installed packages can be garbage collected when no other package depends on them. At the cost of greater storage requirements, all upgrades in Guix are guaranteed to be both atomic and can be rolled back. The roll-back feature of Guix is inherited from the design of Nix and is not found in any of the native package managers of popular Linux distributions such as Debian and its derivatives, Arch Linux and its derivatives, or in other major distributions such as Fedora, CentOS or OpenSUSE. The Guix package manager can however be used in such distributions and is available for Debian and Parabola. This also enables multiple users to safely install software on the same system without administrator privileges.
Compared to traditional package managers, Guix package stores can grow considerably bigger and therefore require more bandwidth; although compared to container solutions (like Docker) that are also commonly employed to solve dependency hell, Guix is leaner and conforms to practices like Don't repeat yourself and Single source of truth. If the user chooses to build everything from source, even more storage space and bandwidth are required.
The store
Inherited from the design of Nix, most of the content of the package manager is kept in a directory /gnu/store where only the Guix daemon has write-access. This is achieved via specialised bind mounts, where the Store as a file system is mounted read only, prohibiting interference even from the root user, while the Guix daemon remounts the Store as read/writable in its own private namespace. Guix talks with this daemon to build things or fetch substitutes which are all kept in the store. Users are discouraged from ever manually touching the store by re-mounting it as writable since this defeats the whole purpose of the store.
Garbage collection
Guix - like Nix - has built-in garbage collection facilities to help prune dead store items and keep the live ones.
Package definitions
This is an example of a package definition for the hello-package:
(define-public hello
  (package
    (name "hello")
    (version "2.10")
    (source (origin
              (method url-fetch)
              (uri (string-append "mirror://gnu/hello/hello-" version
                                  ".tar.gz"))
              (sha256
               (base32
                "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"))))
    (build-system gnu-build-system)
    (synopsis "Hello, GNU world: An example GNU package")
    (description
     "GNU Hello prints the message \"Hello, world!\" and then exits. It
serves as an example of standard GNU coding practices. As such, it supports
command-line arguments, multiple languages, and so on.")
    (home-page "https://www.gnu.org/software/hello/")
    (license gpl3+)))
It is written using Guile. The package recipes can easily be inspected (running e.g. guix edit hello) and changed in Guix, making the system transparent and very easily hackable.
Transactional upgrades
Inherited from the design of Nix, all manipulation of store items is independent of each other, and the directories of the store begin with a base32-encoded hash of the source code of the derivation along with its inputs.
Profiles
Guix package uses profile generations, which are a collection of symlinks to specific store items that together comprise what the user has installed into the profile. Every time a package is installed or removed, a new generation is built.
E.g. the profile of a user who only installed GNU Hello contains links to the store item which holds the version of hello installed with the currently used guix.
E.g. on version c087a90e06d7b9451f802323e24deb1862a21e0f of guix, this corresponds to the following item: /gnu/store/md2plii4g5sk66wg9cgwc964l3xwhrm9-hello-2.10 (built from the recipe above).
In addition to symlinks, each profile guix builds also contains a union of all the info-manuals, man-pages, icons, fonts, etc. so that the user can browse documentation and have access to all the icons and fonts installed.
The default symlinks to profile generations are stored under /var/guix in the filesystem.
Multiple user profiles
The user can create any number of profiles by invoking guix package -p PROFILE-NAME COMMAND. A new directory with the profile-name as well as profile-generation-symlinks will then be created in the current directory.
Roll-back
Guix package enables instantaneous roll-back to a previous profile generation by changing the symlink to an earlier profile generation. Profiles are also stored in the store, e.g. this item is a profile containing hello above: /gnu/store/b4wipjlsapvnijmbawl7sh76087vpl4n-profile (built and activated when running guix install hello).
Environment
Guix environment enables the user to easily enter an environment where all the packages necessary for software development are present, without clogging up the user's default profile with dependencies for multiple projects.
E.g. running guix environment hello enters a throw-away environment where everything needed to compile hello on guix is present (gcc, guile, etc.).
Persistent development environment
A persistent, GC-rooted environment that is not garbage collected on the next run of guix gc can be created by registering a root:
E.g. running guix environment --root=hello-root hello enters an environment where everything needed to compile hello on guix is present (gcc, guile, etc.) and registered as a root in the current directory (by symlinking to the items in the store).
Pack
Guix pack enables the user to bundle together store items and output them as a Docker binary image, a relocatable tarball, or a SquashFS binary.
Graph
Guix graph enables the user to view different graphs of the packages and their dependencies.
Guix System (operating system)
GNU Guix System uses Guix as its package manager and configuration system, similar to how NixOS uses Nix.
History
The GNU Project announced in November 2012 the first release of GNU Guix, a functional package manager based on Nix that provides, among other things, Guile Scheme APIs. The project was started in June 2012 by Ludovic Courtès, one of the GNU Guile hackers. On August 20, 2015, it was announced that Guix had been ported to GNU Hurd.
Releases
The project has no fixed release schedule and has so far released a new version approximately every six months.
See also
GNU Guix System
Debian GNU/Hurd
Comparison of Linux distributions
NixOS – A similar operating system, which inspired GNU Guix
References
External links
List of Guix packages
Guix
GNU Project
Free package management systems
Free software programmed in Lisp
Functional programming
GNU Project software
Linux package management-related software | Operating System (OS) | 890 |
Memory management (operating systems)
In operating systems, memory management is the function responsible for managing the computer's primary memory.
The memory management function keeps track of the status of each memory location, either allocated or free. It determines how memory is allocated among competing processes, deciding which gets memory, when they receive it, and how much they are allowed. When memory is allocated it determines which memory locations will be assigned. It tracks when memory is freed or unallocated and updates the status.
This is distinct from application memory management, which is how a process manages the memory assigned to it by the operating system.
Memory management techniques
Single contiguous allocation
Single allocation is the simplest memory management technique. All the computer's memory, usually with the exception of a small portion reserved for the operating system, is available to a single application. MS-DOS is an example of a system that allocates memory in this way. An embedded system running a single application might also use this technique.
A system using single contiguous allocation may still multitask by swapping the contents of memory to switch among users. Early versions of the MUSIC operating system used this technique.
Partitioned allocation
Partitioned allocation divides primary memory into multiple memory partitions, usually contiguous areas of memory. Each partition might contain all the information for a specific job or task. Memory management consists of allocating a partition to a job when it starts and unallocating it when the job ends.
Partitioned allocation usually requires some hardware support to prevent the jobs from interfering with one another or with the operating system. The IBM System/360 used a lock-and-key technique. Other systems used base and bounds registers which contained the limits of the partition and flagged invalid accesses. The UNIVAC 1108 Storage Limits Register had separate base/bound sets for instructions and data. The system took advantage of memory interleaving to place what were called the i bank and d bank in separate memory modules.
Partitions may be either static, that is defined at Initial Program Load (IPL) or boot time, or by the computer operator, or dynamic, that is, automatically created for a specific job. IBM System/360 Operating System Multiprogramming with a Fixed Number of Tasks (MFT) is an example of static partitioning, and Multiprogramming with a Variable Number of Tasks (MVT) is an example of dynamic. MVT and successors use the term region to distinguish dynamic partitions from static ones in other systems.
Partitions may be relocatable using hardware typed memory, like the Burroughs Corporation B5500, or base and bounds registers like the PDP-10 or GE-635. Relocatable partitions are able to be compacted to provide larger chunks of contiguous physical memory. Compaction moves "in-use" areas of memory to eliminate "holes" or unused areas of memory caused by process termination in order to create larger contiguous free areas.
Some systems allow partitions to be swapped out to secondary storage to free additional memory. Early versions of IBM's Time Sharing Option (TSO) swapped users in and out of time-sharing partitions.
Paged memory management
Paged allocation divides the computer's primary memory into fixed-size units called page frames, and the program's virtual address space into pages of the same size. The hardware memory management unit maps pages to frames. The physical memory can be allocated on a page basis while the address space appears contiguous.
Usually, with paged memory management, each job runs in its own address space. However, there are some single address space operating systems that run all processes within a single address space, such as IBM i, which runs all processes within a large address space, and IBM OS/VS2 (SVS), which ran all jobs in a single 16MiB virtual address space.
Paged memory can be demand-paged when the system can move pages as required between primary and secondary memory.
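The address translation performed by the paging hardware can be illustrated with a short C sketch. The 4 KiB page size, the tiny flat single-level page table, and the frame numbers used here are simplifying assumptions for illustration only; real memory management units add valid and protection bits and usually use multi-level or hashed tables.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                    /* assumed page size (4 KiB) */
#define NUM_PAGES 16u                      /* tiny example address space */

/* A flat page table: entry i holds the physical frame number for virtual page i. */
static uint32_t page_table[NUM_PAGES] = { 7, 3, 12, 5 /* remaining entries zero */ };

static uint32_t translate(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;   /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset within the page */
    uint32_t frame  = page_table[page];    /* frame chosen by the operating system */
    return frame * PAGE_SIZE + offset;     /* resulting physical address */
}

int main(void) {
    uint32_t v = 2 * PAGE_SIZE + 123;      /* an address in virtual page 2 */
    printf("virtual 0x%x -> physical 0x%x\n", (unsigned)v, (unsigned)translate(v));
    return 0;
}

Because only the page table changes when the operating system moves a page, the program's view of a contiguous address space is preserved regardless of where its frames actually sit in physical memory.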
Segmented memory management
Segmented memory is the only memory management technique that does not provide the user's program with a "linear and contiguous address space." Segments are areas of memory that usually correspond to a logical grouping of information such as a code procedure or a data array. Segments require hardware support in the form of a segment table which usually contains the physical address of the segment in memory, its size, and other data such as access protection bits and status (swapped in, swapped out, etc.)
Segmentation allows better access protection than other schemes because memory references are relative to a specific segment and the hardware will not permit the application to reference memory not defined for that segment.
It is possible to implement segmentation with or without paging. Without paging support the segment is the physical unit swapped in and out of memory if required. With paging support the pages are usually the unit of swapping and segmentation only adds an additional level of security.
Addresses in a segmented system usually consist of the segment id and an offset relative to the segment base address, defined to be offset zero.
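A minimal C sketch of the segment-table lookup and bounds check described above is given below; the two-entry table, its base addresses, and the single "writable" flag are invented purely for illustration and do not model any specific hardware.

#include <stdint.h>
#include <stdio.h>

/* One segment-table entry: physical base, size, and an access-protection flag.
   Real hardware also records status bits (swapped in, swapped out, etc.). */
struct segment {
    uint32_t base;
    uint32_t size;
    int      writable;
};

static struct segment segment_table[] = {
    { 0x10000, 0x4000, 0 },   /* segment 0: code, read-only */
    { 0x20000, 0x1000, 1 },   /* segment 1: data, writable  */
};

/* Translate (segment id, offset); returns -1 on a bounds or protection fault. */
static int64_t translate(uint32_t seg, uint32_t offset, int write) {
    if (seg >= sizeof(segment_table) / sizeof(segment_table[0]))
        return -1;                              /* no such segment */
    if (offset >= segment_table[seg].size)
        return -1;                              /* outside the segment */
    if (write && !segment_table[seg].writable)
        return -1;                              /* protection violation */
    return (int64_t)segment_table[seg].base + offset;
}

int main(void) {
    printf("seg 1, offset 0x10 (write) -> %lld\n", (long long)translate(1, 0x10, 1));
    printf("seg 0, offset 0x10 (write) -> %lld\n", (long long)translate(0, 0x10, 1));
    return 0;
}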
The Intel IA-32 (x86) architecture allows a process to have up to 16,383 segments of up to 4GiB each. IA-32 segments are subdivisions of the computer's linear address space, the virtual address space provided by the paging hardware.
The Multics operating system is probably the best known system implementing segmented memory. Multics segments are subdivisions of the computer's physical memory of up to 256 pages, each page being 1K 36-bit words in size, resulting in a maximum segment size of 1MiB (with 9-bit bytes, as used in Multics). A process could have up to 4046 segments.
Rollout/rollin
Rollout/rollin (RO/RI) is a computer operating system memory management technique where the entire non-shared code and data of a running program is swapped out to auxiliary memory (disk or drum) to free main storage for another task. Programs may be rolled out "by demand end or…when waiting for some long event." Rollout/rollin was commonly used in time-sharing systems, where the user's "think time" was relatively long compared to the time to do the swap.
Unlike virtual storage (paging or segmentation), rollout/rollin does not require any special memory management hardware; however, unless the system has relocation hardware such as a memory map or base and bounds registers, the program must be rolled back in to its original memory locations. Rollout/rollin has been largely superseded by virtual memory.
Rollout/rollin was an optional feature of OS/360 Multiprogramming with a Variable number of Tasks (MVT)
Rollout/rollin allows the temporary, dynamic expansion of a particular job beyond its originally specified region. When a job needs more space, rollout/rollin attempts to obtain unassigned storage for the job's use. If there is no such unassigned storage, another job is rolled out—i.e., is transferred to auxiliary storage—so that its region may be used by the first job. When released by the first job, this additional storage is again available, either (1) as unassigned storage, if that was its source, or (2) to receive the job to be transferred back into main storage (rolled in).
In OS/360, rollout/rollin was used only for batch jobs, and rollin does not occur until the jobstep borrowing the region terminates.
See also
Memory overcommitment
Memory protection
x86 memory segmentation
Notes
References
Operating systems | Operating System (OS) | 891 |
Universal Access
Apple Universal Access is a component of the Mac OS X operating system that provides accessibility features for people with visual impairment, hearing impairment, or physical disability.
Components
Universal Access is a preference pane of the System Preferences application. It includes four sub-components, each providing different options and settings.
Seeing
Turn On/Off Screen Zooming
Inverse Colors (White on Black, also known as reverse colors)
Set Display to Greyscale (10.2 onwards)
Enhance Contrast
Enable Access for Assistive Devices
Enable Text-To-Speech for Universal Access Preferences
Disable unnecessary automatic animations
Hearing
Flash the screen when an alert sound occurs
Raise/Lower Volume
Keyboard
Sticky Keys (Treat a sequence of modifier keys as a key combo)
Slow keys (Delay between key press and key acceptance)
Mouse
Mouse Keys (Use the numeric keypad in place of the mouse)
Mouse Pointer Delay
Mouse Pointer Max Speed
Mouse Pointer enlarging
External links
Universal Access Webpage
Apple Accessibility webpage with more information
MacOS | Operating System (OS) | 892 |
Apple II
The Apple II (stylized as apple ][) is an 8-bit home computer and one of the world's first highly successful mass-produced microcomputer products. It was designed primarily by Steve Wozniak; Steve Jobs oversaw the development of Apple II's foam-molded plastic case and Rod Holt developed the switching power supply. It was introduced by Jobs and Wozniak at the 1977 West Coast Computer Faire, and marks Apple's first launch of a personal computer aimed at a consumer market—branded toward American households rather than businessmen or computer hobbyists.
Byte magazine referred to the Apple II, Commodore PET 2001, and TRS-80 as the "1977 Trinity". Apple II had the defining feature of being able to display color graphics, and this was why the Apple logo was redesigned to have a spectrum of colors.
The Apple II is the first model in the Apple II series, followed by Apple II+, Apple IIe, Apple IIc, and the 16-bit Apple IIGS—all of which remained compatible. Production of the last available model, Apple IIe, ceased in November 1993.
History
By 1976, Steve Jobs had convinced product designer Jerry Manock (who had formerly worked at Hewlett Packard designing calculators) to create the "shell" for the Apple II—a smooth case inspired by kitchen appliances that concealed the internal mechanics. The earliest Apple II computers were assembled in Silicon Valley and later in Texas; printed circuit boards were manufactured in Ireland and Singapore. The first computers went on sale on June 10, 1977, with an MOS Technology 6502 microprocessor running at 1.022727 MHz (2/7 of the NTSC color carrier), two game paddles (bundled until 1980, when they were found to violate FCC regulations), 4 KiB of RAM, an audio cassette interface for loading programs and storing data, and the Integer BASIC programming language built into ROMs. The video controller displayed 24 lines by 40 columns of monochrome, uppercase-only text on the screen (the original character set matches ASCII characters 20h to 5Fh), with NTSC composite video output suitable for display on a TV monitor or on a regular TV set (by way of a separate RF modulator). The original retail price of the computer with 4 KiB of RAM was $1,298 and $2,638 with the maximum 48 KiB of RAM. To reflect the computer's color graphics capability, the Apple logo on the casing had rainbow stripes, which remained a part of Apple's corporate logo until early 1998. Perhaps most significantly, the Apple II was a catalyst for personal computers across many industries; it opened the doors to software marketed at consumers.
Certain aspects of the system's design were influenced by Atari's arcade video game Breakout (1976), which was designed by Wozniak, who said: "A lot of features of the Apple II went in because I had designed Breakout for Atari. I had designed it in hardware. I wanted to write it in software now". This included his design of color graphics circuitry, the addition of game paddle support and sound, and graphics commands in Integer BASIC, with which he wrote Brick Out, a software clone of his own hardware game. Wozniak said in 1984: "Basically, all the game features were put in just so I could show off the game I was familiar with—Breakout—at the Homebrew Computer Club. It was the most satisfying day of my life [when] I demonstrated Breakout—totally written in BASIC. It seemed like a huge step to me. After designing hardware arcade games, I knew that being able to program them in BASIC was going to change the world."
Overview
In the May 1977 issue of Byte, Steve Wozniak published a detailed description of his design; the article began, "To me, a personal computer should be small, reliable, convenient to use, and inexpensive."
The Apple II used peculiar engineering shortcuts to save hardware and reduce costs, such as:
Taking advantage of the way the 6502 processor accesses memory: it occurs only on alternate phases of the clock cycle; video generation circuitry memory access on the otherwise unused phase avoids memory contention issues and interruptions of the video stream.
This arrangement simultaneously eliminated the need for a separate refresh circuit for DRAM chips, as video transfer accessed each row of dynamic memory within the timeout period. In addition, it did not require separate RAM chips for video RAM, while the PET and TRS-80 had SRAM chips for video.
Rather than use a complex analog-to-digital circuit to read the outputs of the game controller, Wozniak used a simple timer circuit whose period is proportional to the resistance of the game controller, and used a software loop to measure the timer (see the sketch after this list).
A single 14.31818 MHz master oscillator (fM) was divided by various ratios to produce all other required frequencies, including microprocessor clock signals (fM/14), video transfer counters, and color-burst samples (fM/4).
The text and graphics screens have a complex arrangement. For instance, the scanlines were not stored in sequential areas of memory. This complexity was reportedly due to Wozniak's realization that the method would allow for the refresh of dynamic RAM as a side effect (as described above). This method had no cost overhead to have software calculate or look up the address of the required scanline and avoided the need for significant extra hardware. Similarly, in high-resolution graphics mode, color is determined by pixel position and thus can be implemented in software, saving Wozniak the chips needed to convert bit patterns to colors. This also allowed for subpixel font rendering, since orange and blue pixels appear half a pixel-width farther to the right on the screen than green and purple pixels.
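As referenced above, the paddle-reading trick amounted to counting loop iterations until a one-shot timer, whose period tracks the paddle's resistance, expired. The sketch below is written in C for a cc65-style compiler targeting the Apple II and uses the commonly documented soft-switch addresses ($C070 to trigger the timers, $C064 for paddle 0); treat the addresses and the clamping constant as illustrative rather than authoritative, since the real ROM routine (PREAD) is cycle-counted 6502 assembly.

/* Reading an Apple II game paddle by timing, roughly as the ROM does. */
#include <stdint.h>

#define PTRIG  (*(volatile uint8_t *)0xC070)  /* reading this starts the paddle timers */
#define PADDL0 (*(volatile uint8_t *)0xC064)  /* bit 7 stays set while the timer runs  */

uint8_t read_paddle0(void) {
    uint8_t count = 0;
    (void)PTRIG;                  /* trigger the one-shot timers */
    while (PADDL0 & 0x80) {       /* wait for the timer to expire... */
        if (++count == 255)       /* ...counting iterations as we go */
            break;                /* clamp the reading at 255 */
    }
    return count;                 /* roughly proportional to the paddle's resistance */
}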
The Apple II at first used data cassette storage, like most other microcomputers of the time. In 1978, the company introduced an external 5¼-inch floppy disk drive, called Disk II (stylized as Disk ][), attached through a controller card that plugs into one of the computer's expansion slots (usually slot 6). The Disk II interface, created by Wozniak, is regarded as an engineering masterpiece for its economy of electronic components.
The approach taken in the Disk II controller is typical of Wozniak's designs. With a few small-scale logic chips and a cheap PROM (programmable read-only memory), he created a functional floppy disk interface at a fraction of the component cost of standard circuit configurations.
Case design
The first production Apple II computers had hand-molded cases; these had visible bubbles and other lumps in them from the imperfect plastic molding process, which was soon switched to machine molding. In addition, the initial case design had no vent openings, causing high heat buildup from the PCB and resulting in the plastic softening and sagging. Apple added vent holes to the case within three months of production; customers with the original case could have them replaced at no charge.
PCB revisions
The Apple II's printed circuit board (PCB) underwent several revisions, as Steve Wozniak made modifications to it. The earliest version was known as Revision 0, and the first 6,000 units shipped used it. Later revisions added a color killer circuit to prevent color fringing when the computer was in text mode, as well as modifications to improve the reliability of cassette I/O. Revision 0 Apple IIs powered up in an undefined mode and had garbage on-screen, requiring the user to press Reset. This was eliminated in later board revisions. Revision 0 Apple IIs could display only four colors in hi-res mode, but Wozniak was able to increase this to six hi-res colors on later board revisions.
The PCB had three RAM banks for a total of 24 RAM chips. Original Apple IIs had jumper switches to adjust RAM size, and RAM configurations could be 4, 8, 12, 16, 20, 24, 32, 36, or 48 KiB. The three smallest memory configurations used 4kx1 DRAMs, with larger ones using 16kx1 DRAMs, or a mix of 4-kilobyte and 16-kilobyte banks (the chips in any one bank have to be the same size). The early Apple II+ models retained this feature, but after a drop in DRAM prices, Apple redesigned the circuit boards without the jumpers, so that only 16kx1 chips were supported. A few months later, they started shipping all machines with a full 48 KiB complement of DRAM.
Unlike most machines, all integrated circuits on the Apple II PCB were socketed; although this cost more to manufacture and created the possibility of loose chips causing a system malfunction, it was considered preferable to make servicing and replacement of bad chips easier.
The Apple II PCB lacks any means of generating an IRQ, although expansion cards may generate one. Program code had to stop everything to perform any I/O task; like many of the computer's other idiosyncrasies, this was due to cost reasons and Steve Wozniak assuming interrupts were not needed for gaming or using the computer as a teaching tool.
Display and graphics
Color on the Apple II series uses a quirk of the NTSC television signal standard, which made color display relatively easy and inexpensive to implement. The original NTSC television signal specification was black and white. Color was added later by adding a 3.58-megahertz subcarrier signal that was partially ignored by black-and-white TV sets. Color is encoded based on the phase of this signal in relation to a reference color burst signal. The result is, that the position, size, and intensity of a series of pulses define color information. These pulses can translate into pixels on the computer screen, with the possibility of exploiting composite artifact colors.
The Apple II display provides two pixels per subcarrier cycle. When the color burst reference signal is turned on and the computer attached to a color display, it can display green by showing one alternating pattern of pixels, magenta with an opposite pattern of alternating pixels, and white by placing two pixels next to each other. Blue and orange are available by tweaking the pixel offset by half a pixel-width in relation to the color-burst signal. The high-resolution display offers more colors by compressing more (and narrower) pixels into each subcarrier cycle.
The coarse, low-resolution graphics display mode works differently, as it can output a pattern of dots per pixel to offer more color options. These patterns are stored in the character generator ROM, and replace the text character bit patterns when the computer is switched to low-res graphics mode. The text mode and low-res graphics mode use the same memory region and the same circuitry is used for both.
A single HGR page occupied 8 KiB of RAM; in practice this meant that the user had to have at least 12 KiB of total RAM to use HGR mode and 20 KiB to use two pages. Early Apple II games from the 1977–79 period often ran only in text or low-resolution mode in order to support users with small memory configurations; HGR was not near-universally supported by games until 1980.
Sound
Rather than a dedicated sound-synthesis chip, the Apple II has a toggle circuit that can only emit a click through a built-in speaker or a line-out jack; all other sounds (including two-, three- and, eventually, four-voice music and playback of audio samples and speech synthesis) are generated entirely by software that clicked the speaker at just the right times. Similar techniques are used for cassette storage: cassette output works the same as the speaker, and input is a simple zero-crossing detector that serves as a relatively crude (1-bit) audio digitizer. Routines in machine ROM encode and decode data in frequency-shift keying for the cassette.
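A sketch of the technique is given below, again in cc65-style C, using the widely documented $C030 speaker soft switch; the delay constants are arbitrary assumptions and would need tuning (or cycle-counted assembly, as real Apple II software used) to produce a specific pitch.

/* Producing a tone on the Apple II by toggling the speaker soft switch in software.
   Every access to $C030 clicks the speaker; evenly spaced clicks make a square wave. */
#include <stdint.h>

#define SPEAKER (*(volatile uint8_t *)0xC030)

void tone(unsigned int half_period, unsigned int cycles) {
    unsigned int i, j;
    for (i = 0; i < cycles; ++i) {
        (void)SPEAKER;                     /* toggle the speaker cone */
        for (j = 0; j < half_period; ++j)  /* busy-wait to set the pitch */
            ;
    }
}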
Programming languages
Initially, the Apple II was shipped with Integer BASIC encoded in the motherboard ROM chips. Written by Wozniak, the interpreter enabled users to write software applications without needing to purchase additional development utilities. Written with game programmers and hobbyists in mind, the language only supported the encoding of numbers in 16-bit integer format. Since it only supported integers between -32768 and +32767 (signed 16-bit integer), it was less suitable to business software, and Apple soon received complaints from customers. Because Steve Wozniak was busy developing the Disk II hardware, he did not have time to modify Integer BASIC for floating point support. Apple instead licensed Microsoft's 6502 BASIC to create Applesoft BASIC.
Disk users normally purchased a so-called Language Card, which had Applesoft in ROM and sat below the Integer BASIC ROM in system memory. The user could switch between either BASIC by typing FP or INT at the BASIC prompt. Apple also offered a different version of Applesoft for cassette users, which occupied low memory, and was started by using the LOAD command in Integer BASIC.
As shipped, Apple II incorporated a machine code monitor with commands for displaying and altering the computer's RAM, either one byte at a time, or in blocks of 256 bytes at once. This enabled programmers to write and debug machine code programs without further development software. The computer powers on into the monitor ROM, displaying a * prompt. From there, Ctrl+B enters BASIC, or a machine language program can be loaded from cassette. Disk software can be booted with Ctrl+P followed by 6, referring to Slot 6 which normally contained the Disk II controller.
A 6502 assembler was soon offered on disk, and later the UCSD compiler and operating system for the Pascal language were made available. The Pascal system requires a 16 KiB RAM card to be installed in the language card position (expansion slot 0) in addition to the full 48 KiB of motherboard memory.
Manual
The first 1,000 or so Apple IIs shipped in 1977 with a 68-page mimeographed "Apple II Mini Manual", hand-bound with brass paper fasteners. This was the basis for the Apple II Reference Manual, which became known as the Red Book for its red cover, published in January 1978. All existing customers who sent in their warranty cards were sent free copies of the Red Book. The Apple II Reference Manual contained the complete schematic of the entire computer's circuitry, and a complete source listing of the "Monitor" ROM firmware that served as the machine's BIOS.
Operating system
The original Apple II provided an operating system in ROM along with a BASIC variant called Integer BASIC. The only form of storage available was cassette tape, which was slow and, worse, unreliable. In 1977, when Apple decided against the popular but clunky CP/M operating system for Wozniak's innovative disk controller design, it contracted Shepardson Microsystems for $13,000 to write an Apple DOS for the Apple II series. At Shepardson, Paul Laughton developed the crucial disk drive software in just 35 days, a remarkably short deadline by any standard. Apple's Disk II 5¼-inch floppy disk drive was released in 1978. The final and most popular version of this software was Apple DOS 3.3.
Apple DOS was superseded by ProDOS, which supported a hierarchical filesystem and larger storage devices. With an optional third-party Z80-based expansion card, the Apple II could boot into the CP/M operating system and run WordStar, dBase II, and other CP/M software. With the release of MousePaint in 1984 and the Apple IIGS in 1986, the platform took on the look of the Macintosh user interface, including a mouse.
Apple released Applesoft BASIC in 1977, a more advanced variant of the language which users could run instead of Integer BASIC for more capabilities.
Some commercial Apple II software booted directly and did not use standard DOS disk formats. This discouraged the copying or modifying of the software on the disks, and improved loading speed.
Third-party devices and applications
When the Apple II initially shipped in June 1977, no expansion cards were available for the slots. This meant that the user did not have any way of connecting a modem or a printer. One popular hack involved connecting a teletype machine to the cassette output.
Wozniak's open-architecture design and Apple II's multiple expansion slots permitted a wide variety of third-party devices, including peripheral cards, such as serial controllers, display controllers, memory boards, hard disks, networking components, and real-time clocks. There were plug-in expansion cards—such as the Z-80 SoftCard—that permitted Apple II to use the Z80 processor and run programs for the CP/M operating system, including the dBase II database and the WordStar word processor. The Z80 card also allowed the connection to a modem, and thereby to any networks that a user might have access to. In the early days, such networks were scarce. But they expanded significantly with the development of bulletin board systems in later years. There was also a third-party 6809 card that allowed OS-9 Level One to be run. Third-party sound cards greatly improved audio capabilities, allowing simple music synthesis and text-to-speech functions. Apple II accelerator cards doubled or quadrupled the computer's speed.
Early Apple IIs were often sold with a Sup'R'Mod, which allowed the composite video signal to be viewed on a television.
The Soviet Union's radio-electronics industry designed the Apple II-compatible Agat computer. Roughly 12,000 Agat 7 and 9 models were produced and they were widely used in Soviet schools. The Agat 9 could run in both "Apple II" compatibility mode and a native mode. "Apple II" mode allowed it to run a wider variety of (presumably pirated) Apple II software, but at the expense of less available RAM. Because of this, Soviet developers preferred the native mode over the "Apple II" compatibility mode.
Reception
Jesse Adams Stein wrote, "As the first company to release a 'consumer appliance' micro-computer, Apple Computer offers us a clear view of this shift from a machine to an appliance." But the company also had "to negotiate the attitudes of its potential buyers, bearing in mind social anxieties about the uptake of new technologies in multiple contexts. The office, the home and the 'office-in-the-home' were implicated in these changing spheres of gender stereotypes and technological development." After seeing a crude, wire-wrapped prototype demonstrated by Wozniak and Steve Jobs in November 1976, Byte predicted in April 1977 that the Apple II "may be the first product to fully qualify as the 'appliance computer' ... a completed system which is purchased off the retail shelf, taken home, plugged in and used". The computer's color graphics capability especially impressed the magazine. The magazine published a favorable review of the computer in March 1978, concluding: "For the user that wants color graphics, the Apple II is the only practical choice available in the 'appliance' computer class."
Personal Computer World in August 1978 also cited the color capability as a strength, stating that "the prime reason that anyone buys an Apple II must surely be for the colour graphics". While mentioning the "oddity" of the artifact colors that produced output "that is not always what one wishes to do", it noted that "no-one has colour graphics like this at this sort of price". The magazine praised the sophisticated monitor software, user expandability, and comprehensive documentation. The author concluded that "the Apple II is a very promising machine" which "would be even more of a temptation were its price slightly lower ... for the moment, colour is an Apple II".
Although it sold well from the launch, the initial market was hobbyists and computer enthusiasts. Sales expanded exponentially into the business and professional market when the spreadsheet program VisiCalc was launched in mid-1979. VisiCalc is credited as the defining killer app in the microcomputer industry.
During the first five years of operations, revenues doubled about every four months. Between September 1977 and September 1980, annual sales grew from $775,000 to $118 million. During this period the sole products of the company were the Apple II and its peripherals, accessories, and software.
References
External links
Additional documentation in Bitsavers PDF Document archive
Apple II on Old-computers.com
Online Apple II Resource
Apple II computers
Computer-related introductions in 1977
6502-based home computers
8-bit computers
IBM System/390
The IBM System/390 is the discontinued fifth generation of the System/360 instruction set architecture. The first ESA/390 computers were the Enterprise System/9000 (ES/9000) family, introduced in 1990. These were followed by the 9672 CMOS System/390 mainframe family in the mid-1990s. These systems followed the IBM 3090, with over a decade of follow-ons. ESA/390 was succeeded by the 64-bit z/Architecture in 2000.
History
On February 15, 1988, IBM announced Enterprise Systems Architecture/370 (ESA/370) for 3090 enhanced ("E") models and for 4381 model groups 91E and 92E. In addition to the primary and secondary addressing modes that System/370 Extended Architecture (S/370-XA) supports, ESA has an access-register (AR) mode in which each use of general registers 1–15 as a base register uses an associated access register to select an address space. In addition to the normal address spaces that XA supports, ESA also allows data spaces, which contain no executable code.
On September 5, 1990, IBM published a group of hardware and software announcements, two of which included overviews of three announcements:
System/390 (S/390), as in 360 for the 1960s and 370 for the 1970s.
Enterprise System/9000 (ES/9000).
Enterprise Systems Architecture/390 (ESA/390) was IBM's last 31-bit-address/32-bit-data mainframe computing design, copied by Amdahl, Hitachi, and Fujitsu among other competitors. It was the successor of ESA/370 and, in turn, was succeeded by the 64-bit z/Architecture in 2000. Among other things, ESA/390 added fiber-optic channels, known as Enterprise Systems Connection (ESCON) channels, to the parallel (Bus and Tag) channels of ESA/370.
Although IBM mentioned the 9000 family first in some of the day's announcements, it was clear "by the end of the day" that it was "for System/390"; it was the shortened name, S/390, however, that was placed on some of the actual "boxes" later shipped.
The ES/9000 family included rack-mounted models, free-standing air-cooled models and water-cooled models. The low-end models were substantially less expensive than the 3090s previously needed to run MVS/ESA, and could also run VM/ESA and VSE/ESA, which IBM announced at the same time.
IBM periodically added named features to ESA/390 in conjunction with new processors; the ESA/390 Principles of Operation manual identifies them only by name, not by the processors supporting them.
Machines supporting the architecture have been sold under the System/390 (S/390) brand from the beginning of the 1990s. The 9672 implementations of System/390 were the first high-end IBM mainframes implemented with CMOS CPU electronics rather than the traditional bipolar logic.
The IBM z13 was the last z Systems server to support running an operating system in ESA/390 architecture mode. However, all 24-bit and 31-bit problem-state application programs originally written to run on the ESA/390 architecture readily run unaffected by this change.
ESA/390 architecture
The architecture (the Linux kernel architecture designation is "s390"; "s390x" designates the 64-bit z/Architecture) employs a channel I/O subsystem in the System/370 Extended Architecture (S/370-XA) tradition, offloading almost all I/O activity to specialized hardware more sophisticated than the S/360 and S/370 I/O channels. It also includes a standard set of CCW opcodes that new equipment is expected to support.
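Channel programs are built from channel command words (CCWs). The sketch below is an illustration only, not an excerpt from IBM documentation: it packs a format-0 CCW, commonly described as an 8-byte structure holding a one-byte command code, a 24-bit data address, a flags byte, a reserved zero byte, and a two-byte count. The command code 0x02 (a conventional "read") and the address and count values are assumptions chosen for the example.

import java.nio.ByteBuffer;

// Illustrative sketch of a format-0 channel command word (CCW):
// byte 0: command code; bytes 1-3: 24-bit data address;
// byte 4: flags; byte 5: reserved (zero); bytes 6-7: byte count.
public class CcwSketch {
    static byte[] formatZeroCcw(int commandCode, int dataAddress, int flags, int count) {
        if (dataAddress != (dataAddress & 0x00FFFFFF))
            throw new IllegalArgumentException("format-0 data address is limited to 24 bits");
        ByteBuffer ccw = ByteBuffer.allocate(8);        // CCWs are 8 bytes, big-endian
        ccw.put((byte) commandCode);                    // command code
        ccw.put((byte) (dataAddress >>> 16));           // data address, high byte
        ccw.putShort((short) (dataAddress & 0xFFFF));   // data address, low 16 bits
        ccw.put((byte) flags);                          // chaining and control flags
        ccw.put((byte) 0);                              // reserved, must be zero
        ccw.putShort((short) count);                    // number of bytes to transfer
        return ccw.array();
    }

    public static void main(String[] args) {
        // Hypothetical "read 80 bytes into storage at 0x020000".
        byte[] ccw = formatZeroCcw(0x02, 0x020000, 0x00, 80);
        for (byte b : ccw) System.out.printf("%02X ", b);
        System.out.println();
    }
}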
The architecture maintains problem state backward compatibility with the 24-bit-address/32-bit-data System/360 (1964) and subsequent 24/31-bit-address/32-bit-data architectures (System/370, System/370-XA, ESA/370 and ESA/390). However, the I/O subsystem is based on System/370 Extended Architecture (S/370-XA), not on the original S/370 I/O instructions.
ESA/390 is arguably a 32-bit architecture; as with System/360, System/370, 370-XA, and ESA/370, the general-purpose registers are 32 bits long and the arithmetic instructions support 32-bit arithmetic. Only the addressing of byte-addressable real memory (central storage) and virtual storage is limited to 31 bits. (IBM reserved the most significant bit to easily support applications expecting 24-bit addressing, as well as to sidestep a problem with extending two instructions to handle 32-bit unsigned addresses.)
In fact, total system memory is not limited to 31 bits (2 GB). While the virtual storage of a single address space cannot exceed 2 GB, ESA/390 supports multiple concurrent 2 GB address spaces. Further, each address space can have data spaces associated with it, each of which can have up to 2 GB of virtual storage. While central storage is limited to 2 GB, additional memory can be configured as expanded storage, and 4 KB pages can be moved between central storage and expanded storage. Expanded storage can be used for ultra-fast paging, for disk caching, and for virtual disks within the VM/CMS operating system. Under Linux/390 this memory cannot be used for disk caching; instead, it is supported by a block device driver, allowing it to be used as ultra-fast swap space and for RAM drives.
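As a back-of-the-envelope illustration of the address-space sizes discussed above (plain arithmetic, not an IBM interface), the sketch below shows why 24-bit addressing yields 16 MB and 31-bit addressing yields 2 GB per address space, and how the reserved most significant bit of a 32-bit address word would be masked off; the 8 GB expanded-storage figure is an arbitrary example value.

public class Esa390Addressing {
    public static void main(String[] args) {
        long space24 = 1L << 24;   // 24-bit addressing: 16 MB per address space
        long space31 = 1L << 31;   // 31-bit addressing: 2 GB per address space
        System.out.println("24-bit space: " + (space24 >> 20) + " MB");
        System.out.println("31-bit space: " + (space31 >> 30) + " GB");

        // The most significant bit of a 32-bit address word is reserved,
        // so a 31-bit effective address is obtained by masking it off.
        int rawWord = 0x8040_0000;                // example value with the high bit set
        int effective = rawWord & 0x7FFF_FFFF;    // 31-bit effective address
        System.out.printf("raw 0x%08X -> effective 0x%08X%n", rawWord, effective);

        // Expanded storage is moved in 4 KB pages rather than addressed by byte.
        long exampleExpanded = 8L << 30;          // an arbitrary 8 GB of expanded storage
        System.out.println("4 KB pages in 8 GB expanded storage: " + (exampleExpanded / 4096));
    }
}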
In addition, a machine may be divided into Logical Partitions (LPARs), each with its own system memory so that multiple operating systems may run concurrently on one machine.
An important capability to form a Parallel Sysplex was added to the architecture in 1994.
Some PC-based IBM-compatible mainframes providing ESA/390 processors in smaller machines have been released over time, but they are intended only for software development.
The Hercules emulator is a portable ESA/390 and z/Architecture machine emulator which supports enough devices to boot many ESA/390 operating systems. Since it is written in pure C, it has been ported to many platforms, including S/390 itself. A commercial emulation product for IBM xSeries with higher execution speed is also available.
Common I/O Device Commands
The standard commands expected of new equipment are listed in Chapter 2, "Specific I/O-Device Commands", of Enterprise Systems Architecture/390 Common I/O-Device Commands.
S/390 computers
New models were offered on an ongoing basis.
Initial ES/9000 models
Eighteen models were announced on September 5, 1990 for the ES/9000, the successor of the IBM 3090. The technology of all but two of the 18 models was similar to that of the 3090-J, but the models 900 and 820 (codenamed Summit) were greatly enhanced, featuring an on-board split 128+128 KB instruction and data L1 cache in addition to a 2×2 MB L2 cache with 11-cycle latency, more direct interconnects between the processors, multi-level TLBs, a branch target buffer and a 111 MHz clock frequency. The 900 and 820 were the first models with out-of-order execution since the System/370-195 of 1971. Models 820 and 900 shipped to customers a year later than the models with older technology, in September 1991. Later these new technologies were used in models 520, 640, 660, 740 and 860.
Cooling
Water-cooled ES/9000 models included the ES/9021-900, -820, -720, -620, -580, -500, -340 and -330. Air-cooled ES/9000 models included the standalone ES/9121-480, -440, -320, -260, -210 and -190, and the rack-mounted ES/9221-421, -211, -170, -150, -130 and -120.
In February 1993 an 8-processor 140 MHz model 982 became available, with models 972, 962, 952, 942, 941, 831, 822, 821 and 711 following in March. These models had 30% higher per-processor performance than the 520 to 900 model line. In April 1994, alongside the launch of the first CMOS-based 9672 models, IBM also launched its ultimate bipolar model, the 10-processor model 9X2, rated at 465 MIPS.
Competitive cooling
By the late 1970s and early 1980s, patented technology allowed Amdahl mainframes of this era to be completely air-cooled, unlike IBM systems that required chilled water and its supporting infrastructure. The 8 largest of the 18 models of the ES/9000 systems introduced in 1990 were water-cooled; the other ten were air-cooled.
ES/9000 features
ESCON fiber optic channels
Two of the initially announced models could be configured with as much as 9 gigabytes of main memory.
Optional vector facilities were available on 14 of the 18 models, the number of vector processors could be 1, 2, 3, 4 or 6.
Of the eighteen initial models, six were standalone air-cooled, four were rack-mounted (also air-cooled), and eight were water-cooled.
Logical partitioning
Logical partitioning is a standard function on ES/9000 processors, whereby IBM's Processor Resource/Systems Manager (PR/SM) hypervisor allows different operating systems to run concurrently in separate logical partitions (LPARs) with a high degree of isolation.
This was introduced as part of IBM's moving towards "lights-out" operation and increased control of multiple system configurations.
Vector facility
The System/390 vector facility was originally introduced with the IBM 3090 system, replacing the IBM 3838 array processor (first introduced in 1976 for System/370).
9672
Introduced in 1994, the six generations of the IBM 9672 "Parallel Enterprise Server" machines were the first CMOS, microprocessor-based systems intended for the high end. The initial generations were slower than the largest ES/9000 models sold in parallel, but the fifth and sixth generations were the largest and most powerful ESA/390 machines built.
Over successive generations, the CPUs added more instructions and increased performance. All 9672s were CMOS, but they were slower than the 9021 bipolar machines until the G5 models. The G5 operated at 500 MHz, making it at the time (September 1998 to early 1999) the second-highest-clocked microprocessor after the DEC Alpha. The G5 also added support for the IEEE 754 floating-point formats. In late May 1999 the G6 arrived, featuring copper interconnects and raising the frequency to 637 MHz, higher than DEC's fastest servers at the time. CMOS designs permitted much smaller mainframes, such as the Multiprise 3000 introduced in 1999, which was actually based on the 9672 G5. The 9672 G3 model and the Multiprise 2000 were the last versions to support pre-XA System/370 mode.
See also
IBM System/360
IBM System/370
IBM 30XX mainframe lines
IBM 303X
IBM 308X
IBM 3090
IBM Z
Notes
References
External links
IBM
IBM Z mainframe homepage
Current IBM Z mainframe servers
IBM System/390 Photo
Multiple links and references.
Exterior and interior images of the IBM 390.
Computing platforms
9000
32-bit computers
Ultrix
Ultrix (officially all-caps ULTRIX) is the brand name of Digital Equipment Corporation's (DEC) discontinued native Unix operating systems for the PDP-11, VAX, MicroVAX and DECstations.
History
The initial development of Unix occurred on DEC equipment, notably DEC PDP-7 and PDP-11 (Programmable Data Processor) systems. Later DEC computers, such as their VAX, also offered Unix. The first port to VAX, UNIX/32V, was finished in 1978, not long after the October 1977 announcement of the VAX, for which – at that time – DEC only supplied its own proprietary operating system, VMS.
DEC's Unix Engineering Group (UEG) was started by Bill Munson with Jerry Brenner and Fred Canter, both from DEC's Customer Service Engineering group, Bill Shannon (from Case Western Reserve University), and Armando Stettner (from Bell Labs). Other later members of UEG included Joel Magid, Bill Doll, and Jim Barclay recruited from DEC's marketing and product management groups.
Under Canter's direction, UEG released V7M, a modified version of Unix 7th Edition (q.v.).
In 1988, The New York Times reported that Ultrix was POSIX-compliant.
BSD
Shannon and Stettner worked on low-level CPU and device driver support, initially on UNIX/32V, but quickly moved to concentrate on working with the University of California, Berkeley's 4BSD. Berkeley's Bill Joy came to New Hampshire to work with Shannon and Stettner to wrap up a new BSD release. UEG's machine was the first to run the new Unix, labeled 4.5BSD, as was the tape Bill Joy took with him. The thinking was that 5BSD would be the next version, but university lawyers thought it would be better to call it 4.1BSD. After the completion of 4.1BSD, Bill Joy left Berkeley to work at Sun Microsystems. Shannon later moved from New Hampshire to join him. Stettner stayed at DEC and later conceived of and started the Ultrix project.
Shortly after IBM announced plans for a native UNIX product, Stettner and Bill Doll presented plans for DEC to make a native VAX Unix product available to its customers; DEC founder Ken Olsen agreed.
V7m
DEC's first native UNIX product was V7M (for "modified"), or V7M11, for the PDP-11, and was based on Bell Labs' UNIX 7th Edition. V7M was developed by DEC's original Unix Engineering Group (UEG) – Fred Canter, Jerry Brenner, Stettner, Bill Burns, Mary Anne Cacciola, and Bill Munson – but was primarily the work of Canter and Brenner. V7M contained many fixes to the kernel, including support for separate instruction and data spaces, significant work for hardware error recovery, and many device drivers. Much work was put into producing a release that would reliably bootstrap from many tape drives or disk drives. V7M was well respected in the Unix community. UEG evolved into the group that later developed Ultrix.
First release of Ultrix
The first native VAX UNIX product from DEC was Ultrix-32, based on 4.2BSD with some non-kernel features from System V, and was released in June 1984. Ultrix-32 was primarily the brainchild of Armando Stettner. It provided a Berkeley-based native VAX Unix on a broad array of hardware configurations without the need to access kernel sources. A further goal was to enable better support by DEC's field software and systems support engineers through better hardware support, system messages, and documentation. It also incorporated several modifications and scripts from Usenet/UUCP experience. Later, Ultrix-32 incorporated support for DECnet and other proprietary DEC protocols such as LAT. It did not support VAXclustering. Given Western Electric/AT&T Unix licensing, DEC (and others) were restricted to selling binary-only licenses. A significant part of the engineering work was in making the systems relatively flexible and configurable despite their binary-only nature.
DEC provided Ultrix on three platforms: PDP-11 minicomputers (where Ultrix was one of many available operating systems from DEC), VAX-based computers (where Ultrix was one of two primary OS choices) and the Ultrix-only DECstation workstations and DECsystem servers. Note that the DECstation systems used MIPS processors and predate the much later Alpha-based systems.
Later releases of Ultrix
The V7M product was later renamed Ultrix-11 to establish the family with Ultrix-32, but as the PDP-11 faded from view Ultrix-32 became known simply as Ultrix. When the MIPS version of Ultrix was released, the VAX and MIPS versions were referred to as VAX/ULTRIX and RISC/ULTRIX respectively. Much engineering emphasis was placed on supportability and reliable operation, including continued work on CPU and device driver support (which was, for the most part, also sent to UC Berkeley), hardware failure support and recovery with enhancements to error message text, documentation, and general work at both the kernel and systems program levels. Later Ultrix-32 incorporated some features from 4.3BSD and optionally included DECnet and SNA in addition to the standard TCP/IP, and both the SMTP and DEC's Mail-11 protocols.
Notably, Ultrix implemented the inter-process communication (IPC) facilities found in System V (named pipes, messages, semaphores, and shared memory). While the converged Unix from the Sun and AT&T alliance (which spawned the Open Software Foundation, or OSF), released in late 1986, put BSD features into System V, DEC, as described in Stettner's original Ultrix plans, took the best from System V and added it to a BSD base.
Originally, on the VAX workstations, Ultrix-32 had a desktop environment called UWS, Ultrix Workstation Software, which was based on X10 and the Ultrix Window Manager. Later, the widespread version 11 of the X Window System (X11) was added, using a window manager and widget toolkit named XUI (X User Interface), which was also used on VMS releases of the time. Eventually Ultrix also provided the Motif toolkit and Motif Window Manager.
Ultrix ran on multiprocessor systems from both the VAX and DECsystem families. Ultrix-32 supported SCSI disks and tapes and also proprietary Digital Storage Systems Interconnect and CI peripherals employing DEC's Mass Storage Control Protocol, although, lacking the OpenVMS distributed lock manager, it did not support concurrent access from multiple Ultrix systems. DEC also released a combination hardware and software product named Prestoserv, which accelerated NFS file serving to allow better performance for diskless workstations communicating with a file-serving Ultrix host. The kernel supported symmetric multiprocessing while not being fully multithreaded, based upon pre-Ultrix work by Armando Stettner and earlier work by George H. Goble at Purdue University. As such, there was liberal use of locking, and some tasks could only be done by a particular CPU (e.g., the processing of interrupts). This was not uncommon in other SMP implementations of that time (e.g., SunOS). Also, Ultrix was slow to support many then new or emerging Unix system capabilities found on competing Unix systems: it never supported shared libraries or dynamically linked executables, and it was late in implementing bind and the 4.3BSD system calls and libraries.
Last release
As part of DEC's commitment to the OSF, Armando Stettner went to DEC's Cambridge Research Labs to work on the port of OSF/1 to DEC's RISC-based DECstation 3100 workstation. Later, DEC replaced Ultrix as its Unix offering with OSF/1 for the Alpha, ending Unix development on the MIPS and VAX platforms. OSF/1 had previously shipped in 1991 with a Mach-based kernel for the MIPS architecture.
The last major release of Ultrix was version 4.5 in 1995, which supported all previously supported DECstations and VAXen. There were some subsequent Y2K patches.
Application software
WordMARC, a scientifically-oriented word processor, was among the application packages available for Ultrix.
The following shells were provided with Ultrix:
C Shell
BSD Bourne Shell
System V Bourne Shell
Korn Shell
See also
Comparison of BSD operating systems
Ultrix Window Manager
References
Further reading
Ultrix/UWS Release Notes V4.1, AA-ME85D-TE
Ultrix-32 Supplementary Documents, AA-MF06A-TE
The Little Gray Book: An ULTRIX Primer, AA-MG64B-TE
Guide to Installing Ultrix and UWS, AA-PBL0G-TE
External links
Ultrix FAQ
Info on Ultrix from OSdata (version as of Jan 11 2006)
Ultrix 2.0, 4.2, and 4.3 source code
Ultrix system manuals
Ultrix man pages
Berkeley Software Distribution
DEC operating systems
Discontinued operating systems
MIPS operating systems
Scsh
Scsh (a Scheme shell) is computer software, a type of shell for an operating system. It is a Portable Operating System Interface (POSIX) application programming interface (API) layered on the programming language Scheme, designed to make the most of Scheme's capabilities for scripting. Scsh is limited to 32-bit platforms, but there is a development version against the latest Scheme 48 that works in 64-bit mode. It is free and open-source software released under the BSD-3-Clause license.
Features
Scsh includes these notable features:
Library support for list, character, and string manipulations;
Regular expression manipulation support using scheme regular expressions, which take a domain-specific language (DSL), or "little languages", approach;
Strong networking support;
High-level support for awk-like scripts, integrated into the language as macros;
Abstractions supporting pseudo terminals;
A shell language, modeled using quasi-quotation.
Example
Print a list of all the executables available in the current PATH to the standard output:
#!/usr/local/bin/scsh -s
!#

;; Return the executable entries of a single directory.
(define (executables dir)
  (with-cwd dir
    (filter file-executable? (directory-files dir #t))))

;; Print one item per line.
(define (writeln x) (display x) (newline))

;; Split PATH on colons and list the executables found in each directory.
(for-each writeln
          (append-map executables ((infix-splitter ":") (getenv "PATH"))))
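The example splits the colon-delimited PATH with infix-splitter, uses with-cwd and directory-files to enumerate each directory, keeps only the entries that satisfy file-executable?, and prints each surviving name on its own line.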
"Acknowledgments"
The reference manual for Scsh includes a spoof Acknowledgments section written by Olin Shivers. It starts:
Who should I thank? My so-called "colleagues", who laugh at me behind my back, all the while becoming famous on my work? My worthless graduate students, whose computer skills appear to be limited to downloading bitmaps off of netnews? My parents, who are still waiting for me to quit "fooling around with computers," go to med school, and become a radiologist? My department chairman, a manager who gives one new insight into and sympathy for disgruntled postal workers?
and concludes with:
Oh, yes, the acknowledgements. I think not. I did it. I did it all, by myself.
See also
Unix shell
Comparison of command shells
References
External links
Sourceforge project page
Downloads
Unix shells
Scheme (programming language) interpreters
Scheme (programming language) implementations
Scripting languages
Software using the BSD license
Fdformat
Fdformat is the name of two unrelated programs:
A command-line tool for Linux that "low-level formats" a floppy disk.
A DOS tool written in Pascal by Christoph H. Hochstätter that allows users to format floppy disks to a higher than usual density, storing up to 300 kilobytes more data on a normal high-density 3.5" floppy disk. It also increases the speed of diskette I/O on these specially formatted disks using a technique called "sector sliding": the physical sectors on each track are ordered so that, when the drive advances to the next track, the next logical sector to be read arrives under the read head almost immediately (a rough sketch of such an ordering follows).
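The following sketch illustrates the idea of sector sliding under stated assumptions; it is not FDFORMAT's actual algorithm or parameters. It rotates each track's logical-to-physical sector order by a fixed skew per track (2 sectors here, an assumed value); the 18 sectors per track of a standard high-density layout is likewise only an example.

import java.util.Arrays;

// Sketch of "sector sliding" (track-to-track skew): each track's sector
// numbering is rotated so that, after the head steps to the next track,
// the next logical sector is about to pass under the head.
public class SectorSliding {
    static int[] trackLayout(int sectorsPerTrack, int track, int skewPerTrack) {
        int[] physicalSlots = new int[sectorsPerTrack];
        int offset = (track * skewPerTrack) % sectorsPerTrack;
        for (int logical = 0; logical < sectorsPerTrack; logical++) {
            // place logical sector (1-based) into its rotated physical slot
            physicalSlots[(logical + offset) % sectorsPerTrack] = logical + 1;
        }
        return physicalSlots;
    }

    public static void main(String[] args) {
        for (int track = 0; track < 3; track++) {
            System.out.println("track " + track + ": "
                    + Arrays.toString(trackLayout(18, track, 2)));
        }
    }
}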
See also
2M, a similar program that offers even higher capacity
DMF, a high-density diskette format used by Microsoft
VGA-Copy, a similar program that allowed higher floppy disk capacity
XDF, a high-density diskette format used by IBM
External links
FTP for the DOS tool, version 1.8, compiled .exe and Pascal sources
Usage of the Bash command-line utility fdformat
Floppy disk computer storage
Linux file system-related software
JavaPOS
JavaPOS (short for Java for Point of Sale Devices), is a standard for interfacing point of sale (POS) software, written in Java, with the specialized hardware peripherals typically used to create a point-of-sale system. The advantages are reduced POS terminal costs, platform independence, and reduced administrative costs. JavaPOS was based on a Windows POS device driver standard known as OPOS. JavaPOS and OPOS have since been folded into a common UnifiedPOS standard.
Types of hardware
JavaPOS can be used to access various types of POS hardware. A few of the hardware types that can be controlled using JavaPOS are
POS printers (for receipts, check printing, and document franking)
Magnetic stripe readers (MSRs)
Magnetic ink character recognition readers (MICRs)
Barcode scanners/readers
Cash drawers
Coin dispensers
Pole displays
PINpads
Electronic scales
Parts
In addition to referring to the standard, the term JavaPOS is used to refer to the application programming interface (API).
The JavaPOS standard includes definitions for "Control Objects" and "Service Objects". The POS software communicates with the Control Objects. The Control Objects load and communicate with appropriate Service Objects. The Service Objects are sometimes referred to as the "JavaPOS drivers."
Control objects
The POS software interacts with the control object to control the hardware device. A common JavaPOS library is published by the standards organization with an implementation of the Control Objects of the JavaPOS standard.
Service objects
Each hardware vendor is responsible for providing Service Objects, or "JavaPOS drivers", for the hardware it sells. Depending on the vendor, drivers may be available that communicate over USB, RS-232, RS-485, or even an Ethernet connection. Hardware vendors typically create JavaPOS drivers that work with Windows, and most also create drivers for at least one Linux distribution, though fewer than for Windows. Because Apple computers hold little market share as POS systems, only a few JavaPOS drivers can be expected to work with Mac OS X, and those often work more by happy circumstance than by careful design.
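To make the split between Control Objects and Service Objects concrete, the sketch below shows how application code might drive a receipt printer through the standard jpos.POSPrinter control object, which in turn loads the vendor's service object. The logical device name "ReceiptPrinter" is an assumption and would have to match an entry in the deployment's JavaPOS configuration (typically a jpos.xml file); error handling is reduced to a bare minimum.

import jpos.JposException;
import jpos.POSPrinter;
import jpos.POSPrinterConst;

// Minimal sketch: the application talks only to the POSPrinter control object;
// the vendor's service object ("JavaPOS driver") registered under the logical
// name performs the actual device I/O.
public class ReceiptDemo {
    public static void main(String[] args) {
        POSPrinter printer = new POSPrinter();      // control object from the common JavaPOS library
        try {
            printer.open("ReceiptPrinter");         // binds to the configured service object
            printer.claim(1000);                    // exclusive access, 1-second timeout
            printer.setDeviceEnabled(true);         // enable the physical device
            printer.printNormal(POSPrinterConst.PTR_S_RECEIPT, "Hello from JavaPOS\n");
        } catch (JposException e) {
            e.printStackTrace();                    // production code would inspect the error codes
        } finally {
            try { printer.release(); printer.close(); } catch (JposException ignored) { }
        }
    }
}

The same open, claim, and enable lifecycle applies to the other device categories listed above, such as cash drawers and barcode scanners.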
Historical background
The committee that initiated JavaPOS development consisted of Sun Microsystems, IBM, and NCR. The first meeting occurred in April 1997, and the first release, JavaPOS 1.2, occurred on 28 March 1998. The final release as a separate standard was version 1.6 in July 2001. Beginning with release 1.7, a single standards document was released by a UnifiedPOS committee. That standards document is then used to create the common JavaPOS libraries for the release.
See also
Point of sale
UnifiedPOS
EFTPOS
Point of sale display
Point-of-sale malware
References
External links
JavaPOS
Retail point of sale systems
Computer standards
PICAXE
PICAXE is a microcontroller system based on a range of Microchip PIC microcontrollers. PICAXE devices are Microchip PIC devices with pre-programmed firmware that enables bootloading of code directly from a PC, simplifying hobbyist embedded development (not unlike the Arduino and Parallax BASIC Stamp systems). PICAXE devices have been produced by Revolution Education (Rev-Ed) since 1999.
Hardware
There are currently six PICAXE variants with differing pin counts (8, 14, 18, 20, 28 and 40 pins), available in DIL and SMD packages.
PICAXE microcontrollers are pre-programmed with an interpreter similar to the BASIC Stamp's, but using internal EEPROM instead, thus reducing cost. This also allows downloads to be made with a simple serial connection, which eliminates the need for a PIC programmer. A PICAXE is programmed using an RS-232 serial cable or a USB cable that connects a computer to the download circuit, which normally uses a 3.5 mm jack and two resistors.
Programming language
PICAXE microcontrollers are programmed using BASIC.
The PICAXE interpreter features bit-banged communications:
Serial (asynchronous serial)
SPI (synchronous serial)
Infrared (using a 38 kHz carrier, seven data bits and five ID bits)
One-wire
The "readtemp" command reads the temperature from a DS18B20 temperature sensor and converts it into Celsius.
All current PICAXEs have commands for using hardware features of the underlying PIC microcontrollers:
Hardware asynchronous serial
Hardware synchronous serial
Hardware PWM
DAC
ADC
SR Latch
Timers (two on X2/X1 parts which have settable intervals, only one on M2 parts with a fixed interval, older parts have none)
Comparators
Internal temperature measurement
Program space
All current PICAXE chips have at least 2048 bytes of on-board program memory available for user programs:
08M2 - 2048 bytes
14M2 - 2048
18M2+ - 2048
20M2 - 2048
20X2 - 4096
28X1 - 4096
40X1 - 4096
28X2 - 4096 per slot with four slots for a total of 16 KiB
40X2 - 4096 per slot with four slots for a total of 16 KiB
Clock speeds
The default clock speed for all M2 and X1 parts is 4 MHz and for the X2 parts is 8 MHz.
The SETFREQ command allows speeds from 31 kHz up to 8 MHz for X1 parts, 31 kHz up to 32 MHz for M2 parts and 31 kHz up to 16 MHz for X2 parts (up to 64 MHz for the 20X2) using the internal resonator.
An external resonator can be used with the X1 parts for from 4 MHz to 20 MHz clock speeds and with the X2 parts for 16 MHz to 64 MHz clock speeds.
Project boards
Project boards for different applications are sold by Rev-Ed which contain the PICAXE, download circuit and may also contain a prototyping area or high power output drivers.
Software
Revolution Education develops software for writing programs for PICAXE.
PICAXE Programming Editor
PICAXE Programming Editor is a Windows-only IDE for writing PICAXE programs in BASIC code or a simple flowchart.
PICAXE Programming Editor features:
source code colour syntax highlighting
auto indentation
syntax check and program download
code explorer to show variable, label and constant values
full on screen simulation with animated chips and line by line code highlighting
simulation breakpoints by line number and variable value
debug and serial terminal windows
AXE027 download cable testing and port identification tools
various testing tools such as the analogue calibration wizard
various code generation wizards (pwmout, tune, RTC setting, etc.)
AXEpad
AXEpad is a cross-platform application recommended for Linux and Mac users. It lacks some of Programming Editor's wizards, simulation and MDI.
Logicator for PICAXE
Logicator is an easy-to-use shareware flowcharting program. The Logicator web page is out of date: the free version does support all commands, but it shows nag screens.
PICAXE Programming Editor 6, the successor to PICAXE Programming Editor 5, has Logicator flowcharting merged into it, so separate Logicator software is no longer required. Like PICAXE Programming Editor 5, PICAXE Programming Editor 6 is freeware.
Third-party software
Yenka
Yenka is a program developed by Crocodile Clips Ltd that offers flowcharting and simulation.
Others
Many companies and organizations have released their own editors with special features. Some include language translators or serial connection tools, so there is a wide variety of programming environments to choose from.
Support
Support is available at the Technical Support section of the PICAXE website and at the PICAXE Forum.
The PICAXE Forum has a finished-projects section where completed projects and PICAXE programs are posted, and there is a similar section on the PICAXE website.
See also
Arduino
ARM express BASICchip
BASIC Atom
BASIC Stamp
Maximite
OOPic
KodeKLIX - PICAXE chip based snap-together educational system
References
Further reading
External links
Official PICAXE Website
BASIC commands
web server/PICAXE interface
Distributors - PICAXE
Introducing the PICAXE System
Snap Electronics educational system using PICAXE
Microcontrollers
Microchip Technology hardware | Operating System (OS) | 899 |