BIOS boot partition
The BIOS boot partition is a partition on a data storage device that GNU GRUB uses on legacy BIOS-based personal computers in order to boot an operating system, when the actual boot device contains a GUID Partition Table (GPT). Such a layout is sometimes referred to as BIOS/GPT boot.
A BIOS boot partition is needed on GPT-partitioned storage devices to hold the second stages of GRUB. On traditional MBR-partitioned devices, the disk sectors immediately following the first are usually unused, as the partitioning scheme does not designate them for any special purpose and partitioning tools avoid them for alignment purposes. On GPT-based devices, the sectors hold the actual partition table, necessitating the use of an extra partition. On MBR-partitioned disks, boot loaders are usually implemented so the portion of their code stored within the MBR, which cannot hold more than 512 bytes, operates as a first stage that serves primarily to load a more sophisticated second stage, which is, for example, capable of reading and loading an operating system kernel from a file system.
Overview
When used, the BIOS boot partition contains the second stage of the boot loader program, such as GRUB 2; the first stage is the code that is contained within the Master Boot Record (MBR). Use of this partition is not the only way BIOS-based boot can be performed while using GPT-partitioned hard drives; however, complex boot loaders such as GRUB 2 cannot fit entirely within the confines of the MBR's 398 to 446 bytes of space, so they need ancillary storage space. On MBR disks, such boot loaders typically use the sectors immediately following the MBR for this storage; that space is usually known as the "MBR gap". No equivalent unused space exists on GPT disks, and the BIOS boot partition is a way to officially allocate such space for use by the boot loader.
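As a rough illustration of that first-stage layout, the boot code area, the partition table at offset 446, and the two-byte 0x55 0xAA signature that ends the first sector can be inspected directly. The following is a minimal C sketch, assuming a raw disk image saved under the hypothetical filename "disk.img":

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t mbr[512];
        FILE *f = fopen("disk.img", "rb");              /* hypothetical raw disk image */
        if (!f || fread(mbr, 1, sizeof mbr, f) != sizeof mbr) {
            perror("disk.img");
            return 1;
        }
        fclose(f);

        /* The last two bytes of the sector hold the boot signature. */
        printf("boot signature: %02X %02X (expected 55 AA)\n", mbr[510], mbr[511]);

        /* Four 16-byte partition entries start at offset 446 (0x1BE);
           everything before that offset is first-stage boot code and data. */
        for (int i = 0; i < 4; i++) {
            const uint8_t *entry = mbr + 446 + 16 * i;
            /* Byte 4 of each entry is the partition type; 0xEE marks the
               protective MBR that a GPT-partitioned disk carries. */
            printf("partition entry %d: type 0x%02X\n", i, entry[4]);
        }
        return 0;
    }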
The globally unique identifier (GUID) for the BIOS boot partition in the GPT scheme is 21686148-6449-6E6F-744E-656564454649 (which, when written to a GPT in the required little-endian fields, forms the ASCII string "Hah!IdontNeedEFI"). In the context of GPT on a BIOS-based computer, a BIOS boot partition is similar in some respects to the EFI system partition, which is used by systems based on UEFI. The EFI system partition holds a filesystem and files used by the UEFI firmware, while the BIOS boot partition is used by BIOS-based systems and holds raw binary boot loader code, accessed without a filesystem.
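The mixed-endian storage can be illustrated by laying that GUID out the way a GPT stores it on disk: the first three fields are written least significant byte first, and the remaining eight bytes are written as-is. The following minimal C sketch, using only the GUID quoted above, prints the resulting 16 bytes as text:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        /* BIOS boot partition type GUID: 21686148-6449-6E6F-744E-656564454649 */
        uint32_t data1    = 0x21686148;
        uint16_t data2    = 0x6449;
        uint16_t data3    = 0x6E6F;
        uint8_t  data4[8] = {0x74, 0x4E, 0x65, 0x65, 0x64, 0x45, 0x46, 0x49};

        uint8_t on_disk[17] = {0};
        /* A GPT stores the first three GUID fields little-endian;
           the final eight bytes are stored in order. */
        for (int i = 0; i < 4; i++) on_disk[i]     = (uint8_t)(data1 >> (8 * i));
        for (int i = 0; i < 2; i++) on_disk[4 + i] = (uint8_t)(data2 >> (8 * i));
        for (int i = 0; i < 2; i++) on_disk[6 + i] = (uint8_t)(data3 >> (8 * i));
        memcpy(on_disk + 8, data4, 8);

        printf("%s\n", on_disk);   /* prints: Hah!IdontNeedEFI */
        return 0;
    }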
The size requirements for a BIOS boot partition are quite low, so it can be as small as about 30 KiB; however, as future boot loaders might require more space, 1 MiB might be a reasonable BIOS boot partition size. Due to the 1 MiB partition alignment policies used by most modern disk partitioning tools to provide optimum performance with Advanced Format disks, SSD devices and certain RAID configurations, some room is left free, allowing the placement of a BIOS boot partition between the GPT and the first partition aligned that way. A BIOS boot partition created in that free space is not itself aligned to the 1 MiB policy, but this matters little, since the partition is written to very infrequently.
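The free space involved can be worked out directly. Assuming 512-byte sectors and the common default of 128 partition entries of 128 bytes each, the protective MBR, GPT header and entry array occupy LBA 0 through 33, leaving LBA 34 up to (but not including) the 1 MiB boundary at LBA 2048 available. A short C sketch of the arithmetic:

    #include <stdio.h>

    int main(void) {
        const long sector_size   = 512;                      /* bytes per sector (assumed) */
        const long entry_array   = 128L * 128;               /* 128 entries x 128 bytes = 16 KiB (common default) */
        const long first_usable  = 2 + entry_array / sector_size;   /* LBA 0 = protective MBR, LBA 1 = GPT header */
        const long aligned_start = (1024L * 1024) / sector_size;    /* first 1 MiB-aligned LBA */

        long gap_sectors = aligned_start - first_usable;
        printf("first usable LBA: %ld\n", first_usable);            /* 34 */
        printf("gap below LBA %ld: %ld sectors (about %ld KiB)\n",
               aligned_start, gap_sectors, gap_sectors * sector_size / 1024);  /* 2014 sectors, ~1007 KiB */
        return 0;
    }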
Creation
The following utilities are known to support BIOS boot partitions:
cfdisk
fdisk
GNU Parted (2.0 or later).
GParted, the front-end to GNU Parted.
gpt(8) partition editor in NetBSD (5.0 or later).
gdisk: GPT fdisk
See also
Unified Extensible Firmware Interface (UEFI)
Windows To Go
References
External links
BIOS installation, part of the GRUB2 documentation
The Funtoo Linux GUID Booting Guide
Booting from GPT, part of the GPT fdisk documentation
Legacy BIOS issues with GPT, February 22, 2014, by Rod Smith
BIOS
Disk partitions
IBM Personal Computer
The IBM Personal Computer (model 5150, commonly known as the IBM PC) is the first microcomputer released in the IBM PC model line and the basis for the IBM PC compatible de facto standard. Released on August 12, 1981, it was created by a team of engineers and designers directed by Don Estridge in Boca Raton, Florida.
The machine was based on an open architecture and used third-party peripherals; over time, an ecosystem of expansion cards and software grew up to support it.
The PC had a substantial influence on the personal computer market. The specifications of the IBM PC became one of the most popular computer design standards in the world. The only significant competition it faced from a non-compatible platform throughout the 1980s was from the Apple Macintosh product line. The majority of modern personal computers are distant descendants of the IBM PC.
History
Prior to the 1980s, IBM had largely been known as a provider of business computer systems. As the 1980s opened, their market share in the growing minicomputer market failed to keep up with competitors, while other manufacturers were beginning to see impressive profits in the microcomputer space. The market for personal computers was dominated at the time by Tandy, Commodore and Apple, whose machines sold for several hundred dollars each and had become very popular. The microcomputer market was large enough for IBM's attention, with $15 billion in sales by 1979 and projected annual growth of more than 40% during the early 1980s. Other large technology companies had entered it, such as Hewlett-Packard, Texas Instruments and Data General, and some large IBM customers were buying Apples.
As early as 1980 there were rumors of IBM developing a personal computer, possibly a miniaturized version of the IBM System/370, and Matsushita acknowledged publicly that it had discussed with IBM the possibility of manufacturing a personal computer in partnership, although this project was abandoned. The public responded to these rumors with skepticism, owing to IBM's tendency towards slow-moving, bureaucratic business practices tailored towards the production of large, sophisticated and expensive business systems. As with other large computer companies, its new products typically required about four to five years for development, and a well publicized quote from an industry analyst was, "IBM bringing out a personal computer would be like teaching an elephant to tap dance."
IBM had previously produced microcomputers, such as 1975's IBM 5100, but targeted them towards businesses; the 5100 had a price tag as high as $20,000. Their entry into the home computer market needed to be competitively priced.
In 1980, IBM president John Opel, recognizing the value of entering this growing market, assigned William C. Lowe to the new Entry Level Systems unit in Boca Raton, Florida. Market research found that computer dealers were very interested in selling an IBM product, but they insisted the company use a design based on standard parts, not IBM-designed ones so that stores could perform their own repairs rather than requiring customers to send machines back to IBM for service.
Atari proposed to IBM in 1980 that it act as original equipment manufacturer for an IBM microcomputer, a potential solution to IBM's known inability to move quickly to meet a rapidly changing market. The idea of acquiring Atari was considered but rejected in favor of a proposal by Lowe that by forming an independent internal working group and abandoning all traditional IBM methods, a design could be delivered within a year and a prototype within 30 days. The prototype worked poorly but was presented with a detailed business plan which proposed that the new computer have an open architecture, use non-proprietary components and software, and be sold through retail stores, all contrary to IBM practice. It also estimated sales of 220,000 computers over three years, more than IBM's entire installed base.
This swayed the Corporate Management Committee, which converted the group into a business unit named "Project Chess", and provided the necessary funding and authority to do whatever was needed to develop the computer in the given timeframe. The team received permission to expand to 150 people by the end of 1980, and one day more than 500 IBM employees called in asking to join.
Design process
The design process was kept under a policy of strict secrecy, with none of the other IBM divisions knowing what was going on.
Several CPUs were considered, including the Texas Instruments TMS9900, Motorola 68000 and Intel 8088. The 68000 was considered the best choice, but unlike the others it was not yet production-ready. The IBM 801 RISC processor was also considered, since it was considerably more powerful than the other options, but rejected due to the design constraint to use off-the-shelf parts.
IBM chose the 8088 over the similar but superior 8086 because Intel offered a better price for the former and could provide more units, and the 8088's 8-bit bus reduced the cost of the rest of the computer. The 8088 had the advantage that IBM already had familiarity with it from designing the IBM System/23 Datamaster. The 62-pin expansion bus slots were also designed to be similar to the Datamaster slots, and its keyboard design and layout became the Model F keyboard shipped with the PC, but otherwise the PC design differed in many ways.
The 8088 motherboard was designed in 40 days, with a working prototype created in four months, demonstrated in January 1981. The design was essentially complete by April 1981, when it was handed off to the manufacturing team. PCs were assembled in an IBM plant in Boca Raton, with components made at various IBM and third party factories. The monitor was an existing design from IBM Japan, the printer was manufactured by Epson. Because none of the functional components were designed by IBM, they obtained no patents on the PC.
Many of the designers were computer hobbyists who owned their own computers, including many Apple II owners, which influenced the decisions to design the computer with an open architecture and publish technical information so others could create software and expansion slot peripherals.
During the design process IBM avoided vertical integration as much as possible, choosing for example to license Microsoft BASIC despite having a version of BASIC of its own for mainframes, due to the better existing public familiarity with the Microsoft version.
Debut
The IBM PC debuted on August 12, 1981 after a twelve-month development. Pricing started at $1,565 for a configuration with 16 kB RAM, Color Graphics Adapter, and no disk drives. The price was designed to compete with comparable machines in the market. For comparison, the Datamaster, announced two weeks earlier as IBM's least expensive computer, cost $10,000.
IBM's marketing campaign licensed the likeness of Charlie Chaplin's character "The Little Tramp" for a series of advertisements based on Chaplin's movies, played by Billy Scudder.
The PC was IBM's first attempt to sell a computer through retail channels rather than directly to customers. Because IBM did not have retail experience, they partnered with the retail chains ComputerLand and Sears Roebuck, who provided important knowledge of the marketplace and became the main outlets for the PC. More than 190 ComputerLand stores already existed, while Sears was in the process of creating a handful of in-store computer centers for sale of the new product.
Reception was overwhelmingly positive, with sales estimates from analysts suggesting billions of dollars in sales over the next few years, and the IBM PC immediately became the talk of the entire computing industry. Dealers were overwhelmed with orders, including customers offering pre-payment for machines with no guaranteed delivery date. By the time the machine was shipping, the term "PC" was becoming a household name.
Success
Sales exceeded IBM's expectations by as much as 800%, shipping 40,000 PCs a month at one point. The company estimated that 50 to 70% of PCs sold in retail stores went to the home. In 1983 they sold more than 750,000 machines, while Digital Equipment Corporation, a competitor whose success among others had spurred them to enter the market, had sold only 69,000 machines in that period.
Software support from the industry grew rapidly, with the IBM PC nearly instantly becoming the primary target for most microcomputer software development. One publication counted 753 software packages available a year after the PC's release, four times as many as the Macintosh had a year after its release. Hardware support also grew rapidly, with 30–40 companies competing to sell memory expansion cards within a year.
By 1984, IBM's revenue from the PC market was $4 billion, more than twice that of Apple. A 1983 study of corporate customers found that two thirds of large customers standardizing on one computer chose the PC, compared to 9% for Apple. A 1985 Fortune survey found that 56% of American companies with personal computers used PCs, compared to Apple's 16%.
Almost as soon as the PC reached the market, rumors of clones began, and the first PC compatible clone was released in June 1982, less than a year after the PC's debut.
Hardware
For low cost and a quick design turnaround time, the hardware design of the IBM PC used entirely "off-the-shelf" parts from third party manufacturers, rather than unique hardware designed by IBM.
The PC is housed in a wide, short steel chassis intended to support the weight of a CRT monitor. The front panel is made of plastic, with an opening where one or two disk drives can be installed. The back panel houses a power inlet and switch, a keyboard connector, a cassette connector and a series of tall vertical slots with blank metal panels which can be removed in order to install expansion cards.
Internally, the chassis is dominated by a motherboard which houses the CPU, built-in RAM, expansion RAM sockets, and slots for expansion cards.
The IBM PC was highly expandable and upgradeable; its major components, as shipped from the factory, are described below.
Motherboard
The PC is built around a single large circuit board called a motherboard, which carries the processor, built-in RAM, expansion slots, keyboard and cassette ports, and the various peripheral integrated circuits that connect and control the components of the machine.
The peripheral chips include an Intel 8259 programmable interrupt controller (PIC), an Intel 8237 DMA controller, and an Intel 8253 programmable interval timer (PIT). The PIT provides the system clock "ticks" and dynamic memory refresh timing.
CPU and RAM
The CPU is an Intel 8088, a cost-reduced form of the Intel 8086 which largely retains the 8086's internal 16-bit logic, but exposes only an 8-bit bus. The CPU is clocked at 4.77 MHz, which would eventually become an issue when clones and later PC models offered higher CPU speeds that broke compatibility with software developed for the original PC. The single base clock frequency for the system was 14.31818 MHz, which when divided by 3, yielded the 4.77 MHz for the CPU (which was considered close enough to the then 5 MHz limit of the 8088), and when divided by 4, yielded the required 3.579545 MHz for the NTSC color carrier frequency.
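The clock relationships described above are simple divisions of the 14.31818 MHz base oscillator, as a few lines of C confirm:

    #include <stdio.h>

    int main(void) {
        const double base_mhz = 14.31818;                          /* system base oscillator */
        printf("CPU clock:          %.6f MHz\n", base_mhz / 3.0);  /* ~4.772727 MHz for the 8088 */
        printf("NTSC color carrier: %.6f MHz\n", base_mhz / 4.0);  /* 3.579545 MHz */
        return 0;
    }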
The PC motherboard included a second, empty socket, described by IBM simply as an "auxiliary processor socket", although the most obvious use was the addition of an Intel 8087 math coprocessor, which improved floating-point math performance.
From the factory the PC was equipped with either 16 kB or 64 kB of RAM. RAM upgrades were provided both by IBM and third parties as expansion cards, and could upgrade the machine to a maximum of 256 kB.
ROM BIOS
The BIOS is the firmware of the IBM PC, occupying 8 kB of ROM on the motherboard. It provides bootstrap code and a library of common routines that all software can use for many purposes, such as video output, keyboard input, disk access, interrupt handling, memory testing, and other functions. IBM shipped several versions of the BIOS throughout the PC's lifespan.
Display
While most home computers had built-in video output hardware, IBM took the unusual approach of offering two different graphics options, the MDA and CGA cards. The former provided high-resolution monochrome text, but could not display anything except text, while the latter provided medium- and low-resolution color graphics and text.
CGA used the same scan rate as NTSC television, allowing it to provide a composite video output which could be used with any compatible television or composite monitor, as well as a direct-drive TTL output suitable for use with any RGBI monitor using an NTSC scan rate. IBM also sold the 5153 color monitor for this purpose, but it was not available until March 1983.
MDA scanned at a higher frequency and required a proprietary monitor, the IBM 5151. The card also included a built-in printer port.
Both cards could also be installed simultaneously for mixed graphics and text applications. For instance, AutoCAD, Lotus 1-2-3 and other software allowed use of a CGA Monitor for graphics and a separate monochrome monitor for text menus. Third parties went on to provide an enormous variety of aftermarket graphics adapters, such as the Hercules Graphics Card.
The software and hardware of the PC, at release, was designed around a single 8-bit adaptation of the ASCII character set, now known as code page 437.
Storage
The two bays in the front of the machine could be populated with one or two 5.25″ floppy disk drives, storing 160 kB per disk side for a total of 320 kB of storage on one disk. The floppy drives require a controller card inserted in an expansion slot, and connect with a single ribbon cable with two edge connectors. The IBM floppy controller card provides an external 37-pin D-sub connector for attachment of an external disk drive, although IBM did not offer one for purchase until 1986.
As was common for home computers of the era, the IBM PC offered a port for connecting a cassette data recorder. Unlike the typical home computer however, this was never a major avenue for software distribution, probably because very few PCs were sold without floppy drives. The port was removed on the very next PC model, the XT.
At release, IBM did not offer any hard disk drive option, and adding one was difficult: the PC's stock power supply had inadequate power to run a hard drive, the motherboard did not support the BIOS expansion ROMs needed for a hard drive controller, and neither PC DOS nor the BIOS had any support for hard disks. After the XT was released, IBM altered the design of the 5150 to add most of these capabilities, except for the upgraded power supply. At that point adding a hard drive was possible, but it required the purchase of the IBM 5161 Expansion Unit, which contained a dedicated power supply and included a hard drive.
Although official hard drive support did not exist, the third party market did provide early hard drives that connected to the floppy disk controller, but required a patched version of PC DOS to support the larger disk sizes.
Human interface
The only option for human interface provided in the base PC was the built-in keyboard port, meant to connect to the included IBM Model F keyboard. The Model F was initially developed for the IBM Datamaster, and was substantially better than the keyboards provided with virtually all home computers on the market at that time in many regards: number of keys, reliability, and ergonomics. While some home computers of the time utilized chiclet keyboards or inexpensive mechanical designs, the IBM keyboard provided good ergonomics, reliable and positive tactile key mechanisms, and flip-up feet to adjust its angle.
Public reception of the keyboard was extremely positive, with some sources describing it as a major selling point of the PC and even as "the best keyboard available on any microcomputer."
At release, IBM provided a Game Control Adapter which offered a 15-pin port intended for the connection of up to two joysticks, each having two analog axes and two buttons.
Communications
Connectivity to other computers and peripherals was initially provided through serial and parallel ports.
IBM provided a serial card based on an 8250 UART. The BIOS supports up to two serial ports.
IBM provided two different options for connecting Centronics-compatible parallel printers. One was the IBM Printer Adapter, and the other was integrated into the MDA as the IBM Monochrome Display and Printer Adapter.
Expansion
The expansion capability of the IBM PC was very significant to its success in the market. Some publications highlighted IBM's uncharacteristic decision to publish complete, thorough specifications of the system bus and memory map immediately on release, with the intention of fostering a market of compatible third-party hardware and software.
The motherboard includes five 62-pin card edge connectors which are connected to the CPU's I/O lines. IBM referred to these as "I/O slots," but after the expansion of the PC clone industry they became retroactively known as the ISA bus. At the back of the machine is a metal panel, integrated into the steel chassis of the system unit, with a series of vertical slots lined up with each card slot.
Most expansion cards have a matching metal bracket which slots into one of these openings, serving two purposes. First, a screw inserted through a tab on the bracket into the chassis fastens the card securely in place, preventing the card from wiggling out of place. Second, any ports the card provides for external attachment are bolted to the bracket, keeping them secured in place as well.
The PC expansion slots can accept an enormous variety of expansion hardware, adding capabilities such as:
Graphics
Sound
Mouse support
Expanded memory
Additional serial or parallel ports
Networking
Connection to proprietary industrial or scientific equipment
The market reacted as IBM had intended, and within a year or two of the PC's release the available options for expansion hardware were immense.
5161 Expansion Unit
The expandability of the PC was important, but had significant limitations.
One major limitation was the inability to install a hard drive, as described above. Another was that there were only five expansion slots, which tended to get filled up by essential hardware: a PC with a graphics card, memory expansion, parallel card and serial card was left with only one open slot, for instance.
IBM rectified these problems in the later XT, which included more slots and support for an internal hard drive, but at the same time released the 5161 Expansion Unit, which could be used with either the XT or the original PC. The 5161 connected to the PC system unit using a cable and a card plugged into an expansion slot, and provided a second system chassis with more expansion slots and a hard drive.
Software
IBM initially announced intent to support multiple operating systems: CP/M-86, UCSD p-System, and IBM PC DOS, an IBM-branded product developed by Microsoft. In practice, IBM expected and intended the market to primarily use PC DOS; CP/M-86 was not available until six months after the PC's release and received extremely few orders once it was, and the p-System was also not available at release. PC DOS rapidly established itself as the standard OS for the PC and remained the standard for over a decade, with a variant being sold by Microsoft themselves as MS-DOS.
The PC included BASIC in ROM, a common feature of 1980s home computers. Its ROM BASIC supported the cassette tape interface, but PC DOS did not, limiting use of that interface to BASIC only.
PC DOS version 1.00 supported only 160 kB SSDD floppies, but version 1.1, which was released nine months after the PC's introduction, supported 160 kB SSDD and 320 kB DSDD floppies. Support for the slightly larger nine sector per track 180 kB and 360 kB formats was added in March 1983.
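The capacities quoted above follow directly from the disk geometry of these formats (40 tracks per side, 512-byte sectors, eight or nine sectors per track, one or two sides). A short C sketch of the arithmetic:

    #include <stdio.h>

    int main(void) {
        const long tracks_per_side = 40, bytes_per_sector = 512;
        const int  sectors_per_track[] = {8, 9};

        for (int s = 0; s < 2; s++) {
            for (long sides = 1; sides <= 2; sides++) {
                long bytes = tracks_per_side * sectors_per_track[s] * bytes_per_sector * sides;
                printf("%d sectors/track, %ld side(s): %ld kB\n",
                       sectors_per_track[s], sides, bytes / 1024);   /* 160, 320, 180, 360 kB */
            }
        }
        return 0;
    }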
Third-party software support grew extremely quickly, and within a year the PC platform was supplied with a vast array of titles for any conceivable purpose. (In the absence of a disk operating system, the machine could instead be operated from its ROM-resident BASIC, a character-based environment.)
Reception
Reception of the IBM PC was extremely positive. Even before its release reviewers were impressed by the advertised specifications of the machine, and upon its release reviews praised virtually every aspect of its design both in comparison to contemporary machines and with regards to new and unexpected features.
Praise was directed at the build quality of the PC, in particular its keyboard, IBM's decision to use open specifications to encourage third party software and hardware development, their speed at delivering documentation and the quality therein, the quality of the video display, and the use of commodity components from established suppliers in the electronics industry. The price was considered extremely competitive compared to the value per dollar of competing machines.
Two years after its release, BYTE Magazine retrospectively concluded that the PC had succeeded both because of its features – an 80-column screen, open architecture, and high-quality keyboard – and because of the failure of other computer manufacturers to achieve these features first.
Creative Computing that year named the PC the best desktop computer between $2000 and $4000, praising its vast hardware and software selection, manufacturer support, and resale value.
Many IBM PCs remained in service long after their technology became largely obsolete. For instance, as of June 2006 (23–25 years after release) IBM PC and XT models were still in use at the majority of U.S. National Weather Service upper-air observing sites, processing data returned from radiosondes attached to weather balloons.
Due to its status as the first entry in the extremely influential PC industry, the original IBM PC remains valuable as a collector's item, with systems changing hands for roughly $50 to $500.
Model line
IBM sold a number of computers under the "Personal Computer" or "PC" name throughout the 1980s. The name was not used for several years before being revived for the IBM PC Series in the 1990s and early 2000s.
As with all PC-derived systems, all IBM PC models are nominally software-compatible, although some timing-sensitive software will not run correctly on models with faster CPUs.
Clones
Because the IBM PC was based on commodity hardware rather than unique IBM components, and because its operation was extensively documented by IBM, creating machines that were fully compatible with the PC offered few challenges other than the creation of a compatible BIOS ROM.
Simple duplication of the IBM PC BIOS was a direct violation of copyright law, but soon into the PC's life the BIOS was reverse-engineered by companies like Compaq, Phoenix Software Associates, American Megatrends and Award, who either built their own computers that could run the same software and use the same expansion hardware as the PC, or sold their BIOS code to other manufacturers who wished to build their own machines.
These machines became known as IBM compatibles or "clones", and software was widely marketed as compatible with "IBM PC or 100% compatible". Shortly thereafter, clone manufacturers began to make improvements and extensions to the hardware, such as using faster processors like the NEC V20, which executed the same software as the 8088 at higher speeds of up to 10 MHz.
The clone market eventually became so large that it lost its associations with the original PC and became a set of de facto standards established by various hardware manufacturers.
References
Cited references
External links
IBM SCAMP
IBM 5150 information at www.minuszerodegrees.net
IBM PC 5150 System Disks and ROMs
IBM PC from IT Dictionary
IBM PC history and technical information
What a legacy! The IBM PC's 25 year legacy
CNN.com - IBM PC turns 25
IBM-5150 and collection of old digital and analog computers at oldcomputermuseum.com
IBM PC images and information
A brochure from November, 1982 advertising the IBM PC
A Picture of the XT/370 cards, showing the dual 68000 processors
Personal Computer
Computer-related introductions in 1981
Products introduced in 1981
16-bit computers
The Beginner's Guide to Computers
The Beginner's Guide to Computers is a book about microcomputers and general computing. It was published in 1982 as an accompaniment to the BBC Computer Literacy Project and The Computer Programme.
Its content covers the basics of the history of computing, programming languages, debugging, logic programming, semiconductor memory, printing, ADCs/DACs, flowcharts, as well as some technologies only found in Britain (such as Prestel, Ceefax, ORACLE). The possibilities of networks, robotics, electronic offices and publishing are also considered, with particular reference to the BBC Micro.
Reception
The book's square shape was described in The New York Times as "clumsy", although this does not stop it from being a "quite decent introduction" which is "easy to read". Those interested in actually using personal computers to "do something" were advised to look elsewhere. The World Yearbook of Education 1982/83: Computers and Education described it as "lucidly written and well laid out with profuse illustrations", noting the use of "appealing cartoons".
References
Handbooks and manuals
1982 non-fiction books
WSL
WSL may refer to:
Computing
Wide-spectrum language, a kind of programming language
Windows Subsystem for Linux, a part of Microsoft Windows 10 and Windows 11 which allows the installation of Linux distributions.
Organisations
Swiss Federal Institute for Forest, Snow and Landscape Research (Eidgenössische Forschungsanstalt für Wald, Schnee und Landschaft, WSL)
White Star Line, a shipping company, owner of the RMS Titanic
Workers' Socialist League, a UK Trotskyist party
Sport
FA WSL (FA Women's Super League), an English professional league for women's association football clubs
Women's Super League (rugby union), the top-level women's rugby union league in England
Women's Super League (basketball), Ireland
World Series Lights, a motor racing competition
World Surf League, a global professional competitive surfing league founded in 1976
Wrestling Superstars Live, a defunct wrestling promotion
Other uses
Weatherscan Local, the former name of 24-hour weather channel Weatherscan
Woluwe-Saint-Lambert, a district of Brussels
Pentium III
The Pentium III (marketed as Intel Pentium III Processor, informally PIII or P3, and stylized as pentium !!!) brand refers to Intel's 32-bit x86 desktop and mobile microprocessors based on the sixth-generation P6 microarchitecture introduced on February 26, 1999. The brand's initial processors were very similar to the earlier Pentium II-branded microprocessors. The most notable differences were the addition of the Streaming SIMD Extensions (SSE) instruction set (to accelerate floating point and parallel calculations), and the introduction of a controversial serial number embedded in the chip during manufacturing.
Even after the release of the Pentium 4 in late 2000, the Pentium III continued to be produced with new models introduced until early 2003, and were discontinued in April 2004 for desktop units, and May 2007 for mobile units.
Processor cores
Similarly to the Pentium II it superseded, the Pentium III was also accompanied by the Celeron brand for lower-end versions, and the Xeon for high-end (server and workstation) derivatives. The Pentium III was eventually superseded by the Pentium 4, but its Tualatin core also served as the basis for the Pentium M CPUs, which used many ideas from the P6 microarchitecture. Subsequently, it was the Pentium M microarchitecture of Pentium M branded CPUs, and not the NetBurst found in Pentium 4 processors, that formed the basis for Intel's energy-efficient Core microarchitecture of CPUs branded Core 2, Pentium Dual-Core, Celeron (Core), and Xeon.
Katmai
The first Pentium III variant was the Katmai (Intel product code 80525). It was a further development of the Deschutes Pentium II. The Pentium III saw an increase of 2 million transistors over the Pentium II. The differences were the addition of execution units and SSE instruction support, and an improved L1 cache controller (the L2 cache controller was left unchanged, as it would be fully redesigned for Coppermine anyway), which were responsible for the minor performance improvements over the "Deschutes" Pentium IIs. It was first released at speeds of 450 and 500 MHz in February 1999. Two more versions were released: 550 MHz on May 17, 1999 and 600 MHz on August 2, 1999. On September 27, 1999 Intel released the 533B and 600B, running at 533 and 600 MHz respectively. The 'B' suffix indicated that it featured a 133 MHz FSB, instead of the 100 MHz FSB of prior models.
The Katmai contains 9.5 million transistors, not including the 512 KB L2 cache (which adds 25 million transistors), and has dimensions of 12.3 mm by 10.4 mm (128 mm²). It is fabricated in Intel's P856.5 process, a 0.25 micrometre complementary metal–oxide–semiconductor (CMOS) process with five levels of aluminum interconnect. The Katmai used the same slot-based design as the Pentium II but with the newer Slot 1 Single Edge Contact Cartridge (SECC) 2 that allowed direct CPU core contact with the heat sink. There were some early 450 and 500 MHz models of the Pentium III packaged in the older SECC cartridge, intended for original equipment manufacturers (OEMs).
A notable stepping level for enthusiasts was SL35D. This version of Katmai was officially rated for 450 MHz, but often contained cache chips for the 600 MHz model and thus usually can run at 600 MHz.
Coppermine
The second version, codenamed Coppermine (Intel product code: 80526), was released on October 25, 1999, running at 500, 533, 550, 600, 650, 667, 700, and 733 MHz. From December 1999 to May 2000, Intel released Pentium IIIs running at speeds of 750, 800, 850, 866, 900, 933 and 1000 MHz (1 GHz). Both 100 MHz FSB and 133 MHz FSB models were made. For models that were already available with the same frequency, an "E" was appended to the model name to indicate cores using the new 0.18 μm fabrication process. An additional "B" was later appended to designate 133 MHz FSB models, resulting in an "EB" suffix. In overall performance, Coppermine had a small advantage over the Advanced Micro Devices (AMD) Athlons it was released against, which was reversed when AMD applied their own die shrink and added an on-die L2 cache to the Athlon. Athlon held the advantage in floating-point intensive code, while the Coppermine could perform better when SSE optimizations were used, but in practical terms there was little difference in how the two chips performed, clock-for-clock. However, AMD were able to clock the Athlon higher, reaching speeds of 1.2 GHz before the launch of the Pentium 4.
In performance, Coppermine arguably marked a bigger step than Katmai by introducing an on-chip L2 cache, which Intel names Advanced Transfer Cache (ATC). The ATC operates at the core clock rate and has a capacity of 256 KB, twice that of the on-chip cache formerly on Mendocino Celerons. It is eight-way set-associative and is accessed via a Double Quad Word Wide 256-bit bus, four times as wide as Katmai's. Further, latency was dropped to a quarter compared to Katmai. Another marketing term by Intel was Advanced System Buffering, which encompassed improvements to better take advantage of a 133 MHz system bus. These include 6 fill buffers (vs. 4 on Katmai), 8 bus queue entries (vs. 4 on Katmai) and 4 write-back buffers (vs. 1 on Katmai). Under competitive pressure from the AMD Athlon, Intel reworked the internals, finally removing some well-known pipeline stalls. As a result, applications affected by the stalls ran faster on Coppermine by up to 30%. The Coppermine contained 29 million transistors and was fabricated in a 0.18 μm process.
Although its codename could give the impression that it used copper interconnects, its interconnects were aluminium. The Coppermine was available in 370-pin FC-PGA or FC-PGA2 for use with Socket 370, or in SECC2 for Slot 1 (all speeds except 900 and 1100). FC-PGA and Slot 1 Coppermine CPUs have an exposed die, however most higher frequency SKUs starting with the 866 MHz model were also produced in FC-PGA2 variants that feature an integrated heat spreader (IHS). This in itself did not improve thermal conductivity, since it added another layer of metal and thermal paste between the die and the heatsink, but it greatly assisted in holding the heatsink flat against the die. Earlier Coppermines without the IHS made heatsink mounting challenging. If the heatsink was not situated flat against the die, heat transfer efficiency was greatly reduced. Some heatsink manufacturers began providing pads on their products, similar to what AMD did with the "Thunderbird" Athlon to ensure that the heatsink was mounted flatly. The enthusiast community went so far as to create shims to assist in maintaining a flat interface.
A 1.13 GHz version (S-Spec SL4HH) was released in mid-2000 but famously recalled after a collaboration between HardOCP and Tom's Hardware discovered various instabilities with the operation of the new CPU speed grade. The Coppermine core was unable to reliably reach the 1.13 GHz speed without various tweaks to the processor's microcode, effective cooling, higher voltage (1.75 V vs. 1.65 V), and specifically validated platforms. Intel only officially supported the processor on its own VC820 i820-based motherboard, but even this motherboard displayed instability in the independent tests of the hardware review sites. In benchmarks that were stable, performance was shown to be sub-par, with the 1.13 GHz CPU equalling a 1.0 GHz model. Tom's Hardware attributed this performance deficit to relaxed tuning of the CPU and motherboard to improve stability. Intel needed at least six months to resolve the problems using a new cD0 stepping and re-released 1.1 GHz and 1.13 GHz versions in 2001.
Microsoft's Xbox game console uses a variant of the Pentium III/Mobile Celeron family in a Micro-PGA2 form factor. The sSpec designator of the chips is SL5Sx, which makes it more similar to the Mobile Celeron Coppermine-128 processor. It shares with the Coppermine-128 Celeron its 128 KB L2 cache, and 180 nm process technology, but keeps the 8-way cache associativity from the Pentium III.
Coppermine T
This revision is an intermediate step between Coppermine and Tualatin, with support for lower-voltage system logic present on the latter but core power within previously defined voltage specs of the former so it could work in older system boards.
Intel used the latest FC-PGA2 Coppermines with the cD0 stepping and modified them so that they worked with low voltage system bus operation at 1.25 V AGTL as well as normal 1.5 V AGTL+ signal levels, and would auto detect differential or single-ended clocking. This modification made them compatible to the latest generation Socket 370 boards supporting Tualatin CPUs while maintaining compatibility with older Socket 370 boards. The Coppermine-T also had two way symmetrical multiprocessing capabilities, but only in Tualatin boards.
They can be distinguished from Tualatin processors by their part numbers, which include the digits "80533", e.g. the 1133 MHz SL5QK P/N is RK80533PZ006256, while the 1000 MHz SL5QJ P/N is RK80533PZ001256.
Tualatin
The third revision, Tualatin (80530), was a trial for Intel's new 0.13 μm process. Tualatin-based Pentium IIIs were released during 2001 until early 2002 at speeds of 1.0, 1.13, 1.2, 1.26, 1.33 and 1.4 GHz. A basic shrink of Coppermine, no new features were added, except for added data prefetch logic similar to Pentium 4 and Athlon XP for potentially better use of the L2 cache, although its use compared to these newer CPUs is limited due to the relatively smaller FSB bandwidth (FSB was still kept at 133 MHz). Variants with 256 and 512 KB L2 cache were produced, the latter being dubbed Pentium III-S; this variant was mainly intended for low-power consumption servers and also exclusively featured SMP support within the Tualatin line.
Although the Socket 370 designation was kept, the use of 1.25 V AGTL signaling in place of 1.5 V AGTL+ rendered prior motherboards incompatible. This confusion carried over to the chipset naming, where only the B-stepping of the i815 chipset was compatible with Tualatin processors. A new VRM guideline, version 8.5, was also designed by Intel; it required finer voltage steps and introduced a load-line Vcore (in place of the fixed voltage, regardless of current, of version 8.4). Some motherboard manufacturers marked the change with blue sockets (instead of white), and such boards were often also backwards compatible with Coppermine CPUs.
The Tualatin also formed the basis for the highly popular Pentium III-M mobile processor, which became Intel's front-line mobile chip (the Pentium 4 drew significantly more power, and so was not well-suited for this role) for the next two years. The chip offered a good balance between power consumption and performance, thus finding a place in both performance notebooks and the "thin and light" category.
The Tualatin-based Pentium III performed well in some applications compared to the fastest Willamette-based Pentium 4, and even the Thunderbird-based Athlons. Despite this, its appeal was limited due to the aforementioned incompatibility with existing systems, and Intel's only officially supported chipset for Tualatins, the i815, could only handle 512 MB RAM as opposed to 1 GB of registered RAM with the older, incompatible 440BX chipset. However, the enthusiast community found a way to run Tualatins on then-ubiquitous BX chipset based boards, although it was often a non-trivial task and required some degree of technical skills.
Tualatin-based Pentium III CPUs can usually be visually distinguished from Coppermine-based processors by the metal integrated heat-spreader (IHS) fixed on top of the package. However, the last models of Coppermine Pentium IIIs also featured the IHS — the integrated heat spreader is actually what distinguishes the FC-PGA2 package from the FC-PGA — both are for Socket 370 motherboards.
Before the addition of the heat spreader, it was sometimes difficult to install a heatsink on a Pentium III. One had to be careful not to put force on the core at an angle because doing so would cause the edges and corners of the core to crack and could destroy the CPU. It was also sometimes difficult to achieve a flat mating of the CPU and heatsink surfaces, a factor of critical importance to good heat transfer. This became increasingly challenging with the Socket 370 CPUs, compared with their Slot 1 predecessors, because of the force required to mount a socket-based cooler and the narrower, 2-sided mounting mechanism (Slot 1 featured 4-point mounting). As such, and because the 0.13 μm Tualatin had an even smaller core surface area than the 0.18 μm Coppermine, Intel installed the metal heatspreader on Tualatin and all future desktop processors.
The Tualatin core was named after the Tualatin Valley and Tualatin River in Oregon, where Intel has large manufacturing and design facilities.
Pentium III's SSE implementation
Since Katmai was built in the same 0.25 µm process as Pentium II "Deschutes", it had to implement Streaming SIMD Extensions (SSE) using minimal silicon. To achieve this goal, Intel implemented the 128-bit architecture by double-cycling the existing 64-bit data paths and by merging the SIMD-FP multiplier unit with the x87 scalar FPU multiplier into a single unit. To utilize the existing 64-bit data paths, Katmai issues each SIMD-FP instruction as two μops. To compensate partially for implementing only half of SSE's architectural width, Katmai implements the SIMD-FP adder as a separate unit on the second dispatch port. This organization allows one half of a SIMD multiply and one half of an independent SIMD add to be issued together bringing the peak throughput back to four floating point operations per cycle — at least for code with an even distribution of multiplies and adds.
The issue was that Katmai's hardware-implementation contradicted the parallelism model implied by the SSE instruction-set. Programmers faced a code-scheduling dilemma: "Should the SSE-code be tuned for Katmai's limited execution resources, or should it be tuned for a future processor with more resources?" Katmai-specific SSE optimizations yielded the best possible performance from the Pentium III family but was suboptimal for Coppermine onwards as well as future Intel processors, such as the Pentium 4 and Core series.
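The scheduling trade-off can be made concrete with SSE intrinsics. The sketch below is illustrative rather than tuned code: each loop iteration contains one packed multiply and one independent packed add, the even mix that lets Katmai's separate SIMD adder and its shared multiplier overlap as described above.

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics */

    int main(void) {
        float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, out[4];
        __m128 va   = _mm_loadu_ps(a);
        __m128 vb   = _mm_loadu_ps(b);
        __m128 prod = _mm_setzero_ps();
        __m128 sum  = _mm_setzero_ps();

        for (int i = 0; i < 1000; i++) {
            /* One packed multiply and one independent packed add per iteration.
               On Katmai each 128-bit operation is issued as two 64-bit uops, so
               pairing a multiply with an unrelated add keeps both the shared
               multiplier and the separate SIMD adder busy. */
            prod = _mm_mul_ps(va, vb);
            sum  = _mm_add_ps(sum, va);
        }

        _mm_storeu_ps(out, _mm_add_ps(prod, sum));
        printf("%f %f %f %f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }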
Core specifications
Controversy about privacy issues
The Pentium III was the first x86 CPU to include a unique, retrievable, identification number, called Processor Serial Number (PSN). A Pentium III's PSN can be read by software through the CPUID instruction if this feature has not been disabled through the BIOS.
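On a processor that exposes the feature, the serial number is assembled from two CPUID leaves: leaf 1 reports whether the PSN is enabled (EDX bit 18) and supplies the upper 32 bits in EAX, and leaf 3 returns the remaining 64 bits in EDX and ECX. The following is a rough C sketch using GCC's <cpuid.h> helper; on later processors it simply reports that the feature is absent:

    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx) || !(edx & (1u << 18))) {
            puts("Processor Serial Number not supported or disabled");
            return 0;
        }
        unsigned int top = eax;          /* upper 32 bits: the leaf-1 processor signature */

        __get_cpuid(3, &eax, &ebx, &ecx, &edx);
        /* Leaf 3 returns the middle 32 bits in EDX and the low 32 bits in ECX. */
        printf("PSN: %08X-%08X-%08X\n", top, edx, ecx);
        return 0;
    }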
On November 29, 1999, the Science and Technology Options Assessment (STOA) Panel of the European Parliament, following their report on electronic surveillance techniques, asked parliamentary committee members to consider legal measures that would "prevent these chips from being installed in the computers of European citizens."
Intel eventually removed the PSN feature from Tualatin-based Pentium IIIs, and the feature was absent in Pentium 4 and Pentium M.
A largely equivalent feature, the Protected Processor Identification Number (PPIN) was later added to x86 CPUs with little public notice, starting with Intel's Ivy Bridge architecture and compatible Zen 2 AMD CPUs. It is implemented as a set of model-specific registers and is useful for machine check exception handling.
Pentium III RNG (Random Number Generator)
A new feature was added to the Pentium III: a hardware-based random number generator. It has been described as "several oscillators combine their outputs and that odd waveform is sampled asynchronously." These numbers, however, were only 32 bit, at a time when export controls were on 56 bits and higher, so they weren't state of the art.
See also
Intel Pentium 4 microprocessor
List of Intel Pentium III microprocessors
List of Intel Celeron microprocessors
References
External links
Listing of various PII, PIII, and Celeron alphanumeric model designations
Comparison of 7th generation x86 CPU architectures
Intel FAQ about the pentium III processor serial number
Intel datasheets
Pentium III (Katmai)
Pentium III (Coppermine)
Pentium III (Tualatin)
Computer-related introductions in 1999
Pentium 3
Superscalar microprocessors
32-bit microprocessors
Opera (web browser)
Opera is a multi-platform web browser developed by its namesake company, Opera. It is a Chromium-based browser that distinguishes itself from other browsers through its user interface and other features.
Opera was initially released on April 10, 1995, making it one of the oldest desktop web browsers still actively developed today. It was commercial software for the first ten years and had its own proprietary layout engine, Presto. In 2013, Opera switched from the Presto engine to Chromium.
The web browser can be used on Microsoft Windows, Android, iOS, macOS, and Linux operating systems. There are also mobile versions called Opera Mobile and Opera Mini. Additionally, Opera users have access to a news app based on an AI-platform, Opera News.
The company released a gaming-oriented version of the browser called Opera GX in 2019.
History
In 1994, Jon Stephenson von Tetzchner and Geir Ivarsøy started developing the Opera web browser while working at Telenor, a Norwegian telecommunications company.
In 1995, they founded Opera Software AS. Opera was initially released on April 10, 1995, and first made publicly available in 1996 with version 2.10, which ran on Microsoft Windows 95. Opera began development of its first browser for mobile device platforms in 1998.
Opera 4.0, released in 2000, included a new cross-platform core that facilitated the creation of editions of Opera for multiple operating systems and platforms.
Up to this point, Opera was trialware and had to be purchased after the trial period ended. Version 5.0 (released in 2000) saw the end of this requirement. Instead, Opera became ad-sponsored, displaying advertisements to users who had not paid for it. Later versions of Opera gave the user the choice of seeing banner ads or targeted text advertisements from Google.
With version 8.5 (released in 2005) the advertisements were completely removed and the primary financial support for the browser came through revenue from Google (which is by contract Opera's default search engine).
Among the new features introduced in version 9.1 (released in 2006) was fraud protection using technology from GeoTrust, a digital certificate provider, and PhishTank, an organization that tracks known phishing web sites. This feature was further expanded in version 9.5, when GeoTrust was replaced with Netcraft, and malware protection from Haute Secure was added.
In 2006, Opera Software ASA released the Internet Channel and the Nintendo DS Browser for Nintendo's Wii and DS gaming systems, respectively.
A new JavaScript engine called Carakan, after the Javanese alphabet, was introduced with version 10.50. According to Opera Software, Carakan made Opera 10.50 more than seven times faster in SunSpider than Opera 10.10.
On December 16, 2010, Opera 11 was released, featuring extensions, tab stacking (where dragging one tab over another allows creating a group of tabs), visual mouse gestures and changes to the address bar. Opera 12 was released on June 14, 2012.
On February 12, 2013, Opera Software announced that it would drop its own Presto layout engine in favour of WebKit as implemented by Google's Chrome browser, using code from the Chromium project. Opera Software planned as well to contribute code to WebKit. On April 3, 2013, Google announced that it would fork components from WebKit to form a new layout engine known as Blink. The same day, Opera Software confirmed that it would follow Google in implementing the Blink layout engine.
On May 28, 2013, a beta release of Opera 15 was made available, the first version of which was based on the Chromium project. Many distinctive Opera features of the previous versions were dropped, and Opera Mail was separated into a standalone application derived from Opera 12.
Acquisition by Chinese consortium
In 2016, the company changed ownership when a group of Chinese investors purchased the web browser, consumer business, and brand of Opera Software ASA. On 18 July 2016, Opera Software ASA announced it had sold its browser, privacy and performance apps, and the Opera brand to Golden Brick Capital Private Equity Fund I Limited Partnership, a consortium of Chinese investors.
In January 2017, the source code of Opera 12.15, one of the last few versions that was still based on the Presto layout engine, was leaked.
To demonstrate how radically different a browser could look, Opera Neon, dubbed a "concept browser", was released in January 2017. PC World compared it to the demo models that automakers and hardware vendors release to show their visions of the future. Instead of a Speed Dial (explained in the "Features" section below), it displays frequently accessed websites in the manner of a desktop, with icons scattered across it in an artistic formation.
Features
Opera has originated features later adopted by other web browsers, including: Speed Dial, pop-up blocking, re-opening recently closed pages, private browsing, and tabbed browsing. Additional features include a built-in screenshot tool called Snapshot which also includes an image-markup tool, built-in ad blockers and tracking blockers.
Built-in messengers
Opera’s desktop browser includes access to social media messaging apps WhatsApp, Telegram, Facebook Messenger, Twitter, Instagram, and VKontakte.
Usability and accessibility
Opera includes a bookmarks bar and a download manager. It also has "Speed Dial" which allows the user to add an unlimited number of pages shown in thumbnail form in a page displayed when a new tab is opened.
Opera was one of the first browsers to support Cascading Style Sheets (CSS) in 1998.
Opera Turbo, a feature that compresses requested web pages (except HTTPS pages) before sending them to the users, is no longer available on the desktop browser. Opera Turbo is available in Opera Mini, the mobile browser.
Privacy and security
One security feature is the option to delete private data, such as HTTP cookies, browsing history, items in cache and passwords with the click of a button.
When visiting a site, Opera displays a security badge in the address bar which shows details about the website, including security certificates. Opera's fraud and malware protection warns the user about suspicious web pages and is enabled by default. It checks the requested page against several databases of known phishing and malware websites, called blacklists.
In 2016, a free virtual private network (VPN) service was implemented in the browser. Opera said that this would allow encrypted access to websites otherwise blocked, and provide security on public WiFi networks. It was later determined that the browser VPN operated as a web proxy rather than a VPN, meaning that it only secured connections made by the browser and not by any other apps on the computer.
Crypto wallet support
In 2018, Opera released a built-in cryptocurrency wallet for the Opera web browser, announcing that it would be the first browser with a built-in crypto wallet. On December 13, 2018, Opera released a video showing many decentralized applications, such as CryptoKitties, running on the Android version of the Opera web browser.
In March 2020, Opera updated its Android browser to access crypto domains, making it the first browser able to resolve names from a naming system outside the traditional domain name system (DNS) directly, without the need for a plugin or add-on. This was done through a collaboration with Unstoppable Domains, a San Francisco-based startup.
Other versions
Opera GX
Opera GX is a gaming-oriented counterpart of Opera. The browser was announced and released in early access for Windows on June 11, 2019, during E3 2019. The macOS version was released in December of the same year.
Opera GX adds features geared towards gamers on top of the regular Opera browser. The browser allows users to limit network, CPU, and memory usage to preserve system resources. It also adds integrations with other apps such as Twitch, Discord, Twitter, and Instagram. The browser also has a built-in page called the GX Corner, which collates gaming-related releases, deals, and news articles.
On May 20, 2021, Opera released a mobile version of Opera GX in beta for iOS and Android.
Development stages
Opera Software uses a release cycle consisting of three "streams," corresponding to phases of development, that can be downloaded and installed independently of each other: "developer," "beta," and "stable." New features are first introduced in the developer build, then, depending on user feedback, may progress to the beta version and eventually be released.
The developer stream allows early testing of new features, mainly targeting developers, extension creators, and early adopters. Opera developer is not intended for everyday browsing as it is unstable and is prone to failure or crashing, but it enables advanced users to try out new features that are still under development, without affecting their normal installation of the browser. New versions of the browser are released frequently, generally a few times a week.
The beta stream, formerly known as "Opera Next," is a feature complete package, allowing stability and quality to mature before the final release. A new version is released every couple of weeks.
Both streams can be installed alongside the official release without interference. Each has a different icon to help the user distinguish between the variants.
Market adoption
Integrations
In 2005, Adobe Systems integrated Opera's rendering engine, Presto, into its Adobe Creative Suite applications. Opera technology was employed in Adobe GoLive, Adobe Photoshop, Adobe Dreamweaver, and other components of the Adobe Creative Suite. Opera's layout engine is also found in Virtual Mechanics SiteSpinner Pro. The Internet Channel is a version of the Opera 9 web browser for use on the Nintendo Wii created by Opera Software and Nintendo. Opera Software is also implemented in the Nintendo DS Browser for Nintendo's handheld systems.
Opera is one of the top 5 browsers used around the world. As of April 2021, Opera's offerings had over 320 million active users.
Reception
The Opera browser has been listed as a “tried and tested direct alternative to Chrome.” It scores close to Chrome on the HTML5test, which scores browsers’ compatibility with different web standards.
Versions with the Presto layout engine have been positively reviewed, although they have been criticized for website compatibility issues. Because of this issue, Opera 8.01 and higher included workarounds to help certain popular but problematic websites display properly.
Versions with the Blink layout engine have been criticized by some users for missing features such as UI customization, and for abandoning Opera Software's own Presto layout engine. Despite that, versions with the Blink layout engine have been noted for being fast and stable, for handling the latest web standards, and for better website compatibility and a modern-style user interface.
See also
Opera browser platform variants:
Opera Mini: a browser for tablets and telephones
Opera Mobile: a browser for tablets and telephones
Related other browsers:
Otter Browser: an open-source browser that recreates some aspects of the classic Opera
Vivaldi: a freeware browser created by former Opera Software employees
Related topics:
Comparison of browser synchronizers
History of the web browser
List of pop-up blocking software
List of web browsers
Timeline of web browsers
References
External links
C++ software
Cross-platform web browsers
Embedded Linux
Freeware
Java device platform
OS/2 web browsers
MacOS web browsers
Pocket PC software
Portable software
POSIX web browsers
Proprietary cross-platform software
Proprietary freeware for Linux
Science and technology in Norway
Software based on WebKit
Software companies of Norway
Telenor
Windows web browsers
1994 software
1995 software
Computer-related introductions in 1995
BSD software
NetBIOS
NetBIOS () is an acronym for Network Basic Input/Output System. It provides services related to the session layer of the OSI model allowing applications on separate computers to communicate over a local area network. As strictly an API, NetBIOS is not a networking protocol. Older operating systems ran NetBIOS over IEEE 802.2 and IPX/SPX using the NetBIOS Frames (NBF) and NetBIOS over IPX/SPX (NBX) protocols, respectively. In modern networks, NetBIOS normally runs over TCP/IP via the NetBIOS over TCP/IP (NBT) protocol. This results in each computer in the network having both an IP address and a NetBIOS name corresponding to a (possibly different) host name. NetBIOS is also used to identify system names in TCP/IP networks on Windows. Put simply, it provides the session-layer services through which applications such as file and printer sharing communicate over a LAN.
History and terminology
NetBIOS is a non-routable OSI Session Layer 5 Protocol and a service that allows applications on computers to communicate with one another over a local area network (LAN). NetBIOS was developed in 1983 by Sytek Inc. as an API for software communication over IBM PC Network LAN technology. On IBM PC Network, as an API alone, NetBIOS relied on proprietary Sytek networking protocols for communication over the wire. Despite supporting a maximum of 80 PCs in a LAN, NetBIOS became an industry standard.
In 1985, IBM went forward with the Token Ring network scheme and a NetBIOS emulator was produced to allow NetBIOS-aware applications from the PC-Network era to work over this new design. This emulator, named NetBIOS Extended User Interface (NetBEUI), expanded the base NetBIOS API with, among other things, the ability to deal with the greater node capacity of Token Ring. A new networking protocol, NBF, was simultaneously produced to allow NetBEUI (NetBIOS) to provide its services over Token Ring – specifically, at the IEEE 802.2 Logical Link Control layer.
In 1985, Microsoft created a NetBIOS implementation for its MS-Net networking technology. As in the case of IBM's Token Ring, the services of Microsoft's NetBIOS implementation were provided over the IEEE 802.2 Logical Link Control layer by the NBF protocol. Until Microsoft adopted Domain Name System (DNS) resolution of hostnames, Microsoft operating systems used NetBIOS to resolve names in Windows client-server networks.
In 1986, Novell released Advanced NetWare 2.0 featuring the company's own NetBIOS emulator. Its services were encapsulated within NetWare's IPX/SPX protocol using the NetBIOS over IPX/SPX (NBX) protocol.
In 1987, a method of encapsulating NetBIOS in TCP and UDP packets, NetBIOS over TCP/IP (NBT), was published. It was described in RFC 1001 ("Protocol Standard for a NetBIOS Service on a TCP/UDP Transport: Concepts and Methods") and RFC 1002 ("Protocol Standard for a NetBIOS Service on a TCP/UDP Transport: Detailed Specifications"). The NBT protocol was developed in order to "allow an implementation [of NetBIOS applications] to be built on virtually any type of system where the TCP/IP protocol suite is available," and to "allow NetBIOS interoperation in the Internet."
After the PS/2 computer hit the market in 1987, IBM released the PC LAN Support Program, which included a driver for NetBIOS.
There is some confusion between the names NetBIOS and NetBEUI. NetBEUI originated strictly as the moniker for IBM's enhanced 1985 NetBIOS emulator for Token Ring. The name NetBEUI should have died there, considering that at the time, the NetBIOS implementations by other companies were known simply as NetBIOS regardless of whether they incorporated the API extensions found in that emulator. For MS-Net, however, Microsoft elected to name its implementation of the NBF protocol "NetBEUI" – naming its implementation of the transport protocol after IBM's second version of the API. Consequently, Microsoft file and printer sharing over Ethernet continues to be called NetBEUI, with the name NetBIOS commonly used only for file and printer sharing over TCP/IP. More accurately, the former is NetBIOS Frames (NBF), and the latter is NetBIOS over TCP/IP (NBT).
Since its original publishing in a technical reference book from IBM, the NetBIOS API specification has become a de facto standard.
Services
NetBIOS provides three distinct services:
Name service (NetBIOS-NS) for name registration and resolution.
Datagram distribution service (NetBIOS-DGM) for connectionless communication.
Session service (NetBIOS-SSN) for connection-oriented communication.
(Note: SMB, an upper layer, is a service that runs on top of the Session Service and the Datagram service, and is not to be confused as a necessary and integral part of NetBIOS itself. It can now run atop TCP with a small adaptation layer that adds a packet length to each SMB message; this is necessary because TCP only provides a byte-stream service with no notion of packet boundaries.)
Name service
In order to start sessions or distribute datagrams, an application must register its NetBIOS name using the name service. NetBIOS names are 16 octets in length and vary based on the particular implementation. Frequently, the 16th octet, called the NetBIOS Suffix, designates the type of resource, and can be used to tell other applications what type of services the system offers. In NBT, the name service operates on UDP port 137 (TCP port 137 can also be used, but rarely is).
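As a rough illustration of how such a 16-octet name is formed, the sketch below pads a host name with spaces to 15 characters and appends the suffix as the 16th octet, then applies the "first-level" encoding that RFC 1001 defines for carrying names in NBT name-service packets (each octet is split into two half-octets, each added to the letter 'A'). The class and method names are invented for the example; this is not an official API.

import java.nio.charset.StandardCharsets;

// Illustrative sketch of NetBIOS name formatting and RFC 1001 first-level encoding.
public class NetBiosName {

    // Pad the name with spaces to 15 octets and append the suffix as the 16th octet.
    static byte[] format(String name, int suffix) {
        byte[] out = new byte[16];
        byte[] src = name.toUpperCase().getBytes(StandardCharsets.US_ASCII);
        for (int i = 0; i < 15; i++) {
            out[i] = i < src.length ? src[i] : (byte) ' ';
        }
        out[15] = (byte) suffix;
        return out;
    }

    // First-level encoding: each octet becomes two letters in the range 'A'..'P'.
    static String encode(byte[] name16) {
        StringBuilder sb = new StringBuilder(32);
        for (byte b : name16) {
            sb.append((char) ('A' + ((b >> 4) & 0x0F)));
            sb.append((char) ('A' + (b & 0x0F)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] n = format("MYHOST", 0x20);   // 0x20 = File Service suffix
        System.out.println(encode(n));       // prints the 32-character encoded form
    }
}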
The name service primitives offered by NetBIOS are:
Add name – registers a NetBIOS name.
Add group name – registers a NetBIOS "group" name.
Delete name – un-registers a NetBIOS name or group name.
Find name – looks up a NetBIOS name on the network.
NetBIOS name resolution is not supported by Microsoft for Internet Protocol Version 6 (IPv6).
Datagram distribution service
Datagram mode is connectionless; the application is responsible for error detection and recovery. In NBT, the datagram service runs on UDP port 138.
The datagram service primitives offered by NetBIOS are:
Send Datagram – send a datagram to a remote NetBIOS name.
Send Broadcast Datagram – send a datagram to all NetBIOS names on the network.
Receive Datagram – wait for a packet to arrive from a Send Datagram operation.
Receive Broadcast Datagram – wait for a packet to arrive from a Send Broadcast Datagram operation.
Session service
Session mode lets two computers establish a connection, allows messages to span multiple packets, and provides error detection and recovery. In NBT, the session service runs on TCP port 139.
The session service primitives offered by NetBIOS are:
Call – opens a session to a remote NetBIOS name.
Listen – listen for attempts to open a session to a NetBIOS name.
Hang Up – close a session.
Send – sends a packet to the computer on the other end of a session.
Send No Ack – like Send, but doesn't require an acknowledgment.
Receive – wait for a packet to arrive from a Send on the other end of a session.
In the original protocol used to implement NetBIOS services on PC-Network, to establish a session, the initiating computer sends an Open request which is answered by an Open acknowledgment. The computer that started the session will then send a Session Request packet which will prompt either a Session Accept or Session Reject packet.
During an established session, each transmitted packet is answered by either a positive-acknowledgment (ACK) or negative-acknowledgment (NAK) response. A NAK will prompt retransmission of the data. Sessions are closed by the non-initiating computer by sending a close request. The computer that started the session will reply with a close response which prompts the final session closed packet.
NetBIOS name vs Internet host name
When NetBIOS is run in conjunction with Internet protocols (e.g., NBT), each computer may have multiple names: one or more NetBIOS name service names and one or more Internet host names.
NetBIOS name
The NetBIOS name is 16 ASCII characters; however, Microsoft limits the host name to 15 characters and reserves the 16th character as a NetBIOS Suffix. This suffix describes the service or name record type, such as a host record, master browser record, domain controller record, or other service. The host name (or short host name) is specified when Windows networking is installed/configured; the suffixes registered are determined by the individual services supplied by the host. In order to connect to a computer running TCP/IP via its NetBIOS name, the name must be resolved to a network address. Today this is usually an IP address (the NetBIOS name to IP address resolution is often done by either broadcasts or a WINS server – the NetBIOS Name Server). A computer's NetBIOS name is often the same as that computer's host name (see below), although truncated to 15 characters, but it may also be completely different.
NetBIOS names are a sequence of alphanumeric characters. The following characters are explicitly not permitted: \/:*?"<>|. Since Windows 2000, NetBIOS names have also had to comply with restrictions on DNS names: they cannot consist entirely of digits, and the hyphen ("-") or full-stop (".") characters may not appear as the first or last character. Since Windows 2000, Microsoft has advised against including any full-stop (".") characters in NetBIOS names, so that applications can use the presence of a full-stop to distinguish domain names from NetBIOS names.
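A minimal validity check reflecting the rules above might look like the following sketch; the helper name and the exact rule set are illustrative, not an official Microsoft validation routine.

// Hypothetical helper that applies the NetBIOS naming rules described above.
public class NetBiosNameCheck {
    private static final String FORBIDDEN = "\\/:*?\"<>|";

    static boolean isValid(String name) {
        if (name.isEmpty() || name.length() > 15) return false;       // 15 chars plus suffix
        if (name.chars().allMatch(Character::isDigit)) return false;  // not entirely digits
        if (name.startsWith("-") || name.endsWith("-")
                || name.startsWith(".") || name.endsWith(".")) return false;
        for (char c : name.toCharArray()) {
            if (FORBIDDEN.indexOf(c) >= 0) return false;               // \/:*?"<>| not allowed
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValid("FILESRV01"));  // true
        System.out.println(isValid("BAD*NAME"));   // false
    }
}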
The Windows LMHOSTS file provides a NetBIOS name resolution method that can be used for small networks that do not use a WINS server.
Internet host name
A Windows machine's NetBIOS name is not to be confused with the computer's Internet host name (assuming that the computer is also an Internet host in addition to being a NetBIOS node, which need not necessarily be the case). Generally a computer running Internet protocols (whether it is a Windows machine or not) usually has a host name (also sometimes called a machine name). Originally these names were stored in and provided by a hosts file but today most such names are part of the hierarchical Domain Name System (DNS).
Generally the host name of a Windows computer is based on the NetBIOS name plus the Primary DNS Suffix, which are both set in the System Properties dialog box. There may also be connection-specific suffixes which can be viewed or changed on the DNS tab in Control Panel → Network → TCP/IP → Advanced Properties. Host names are used by applications such as telnet, ftp, web browsers, etc. To connect to a computer running the TCP/IP protocol using its name, the host name must be resolved into an IP address, typically by a DNS server. (It is also possible to operate many TCP/IP-based applications, including the three listed above, using only IP addresses, but this is not the norm.)
Node types
Under Windows, the node type of a networked computer relates to the way it resolves NetBIOS names to IP addresses. This assumes that there are any IP addresses for the NetBIOS nodes, which is assured only when NetBIOS operates over NBT; thus, node types are not a property of NetBIOS per se but of interaction between NetBIOS and TCP/IP in the Windows OS environment. There are four node types.
B-node: 0x01 Broadcast
P-node: 0x02 Peer (WINS only)
M-node: 0x04 Mixed (broadcast, then WINS)
H-node: 0x08 Hybrid (WINS, then broadcast)
The node type in use is displayed by opening a command line and typing ipconfig /all.
A Windows computer registry may also be configured in such a way as to display "unknown" for the node type.
NetBIOS Suffixes
The NetBIOS Suffix, alternately called the NetBIOS End Character (endchar), is the 16th character of a NetBIOS name and indicates service type for the registered name. The number of record types is limited to 255; some commonly used values are:
For unique names:
00: Workstation Service (workstation name)
03: Windows Messenger service
06: Remote Access Service
20: File Service (also called Host Record)
21: Remote Access Service client
1B: Domain Master Browser – Primary Domain Controller for a domain
1D: Master Browser
For group names:
00: Workstation Service (workgroup/domain name)
1C: Domain Controllers for a domain (group record with up to 25 IP addresses)
1E: Browser Service Elections
See also
NetBIOS over TCP/IP (NBT)
NetBIOS Frames (NBF)
Server Message Block (SMB)
References
Further reading
Haugdahl, J. Scott (1990). Inside NetBIOS. Architecture Technology Corp.
Silberschatz, Abraham; Galvin, Peter Baer; Gagne, Greg (2004). Operating System Concepts. (7th Ed.). John Wiley & Sons.
Meyers, Michael (2004). "Managing and Troubleshooting Networks". McGraw-Hill.
Tamara Dean. Network+ Guide to Networks, pg. 206 (NetBEUI)
External links
LAN Technical Reference: 802.2 and NetBIOS APIs
Implementing CIFS (from the Samba team, published under the Open Publication License)
NetBIOS, NetBEUI, NBF, SMB, CIFS Networking
LMHOSTS File
NETBIOS End Characters / Suffixes – Microsoft Knowledge Base article describing list of NetBIOS Suffixes.
– Visual Basic 2010 NetBIOS API source code.
Network protocols
Java Platform, Standard Edition
Java Platform, Standard Edition (Java SE) is a computing platform for development and deployment of portable code for desktop and server environments. Java SE was formerly known as Java 2 Platform, Standard Edition (J2SE).
The platform uses the Java programming language and is part of the Java software-platform family. Java SE defines a range of general-purpose APIs—such as Java APIs for the Java Class Library—and also includes the Java Language Specification and the Java Virtual Machine Specification. OpenJDK is the official reference implementation since version 7.
Nomenclature, standards and specifications
The platform was known as Java 2 Platform, Standard Edition or J2SE from version 1.2, until the name was changed to Java Platform, Standard Edition or Java SE in version 1.5. The "SE" is used to distinguish the base platform from the Enterprise Edition (Java EE) and Micro Edition (Java ME) platforms. The "2" was originally intended to emphasize the major changes introduced in version 1.2, but was removed in version 1.6. The naming convention has been changed several times over the Java version history. Starting with J2SE 1.4 (Merlin), Java SE has been developed under the Java Community Process, which produces descriptions of proposed and final specifications for the Java platform called Java Specification Requests (JSR). JSR 59 was the umbrella specification for J2SE 1.4 and JSR 176 specified J2SE 5.0 (Tiger). Java SE 6 (Mustang) was released under JSR 270.
Java Platform, Enterprise Edition (Java EE) is a related specification that includes all the classes in Java SE, plus a number that are more useful to programs that run on servers as opposed to workstations.
Java Platform, Micro Edition (Java ME) is a related specification intended to provide a certified collection of Java APIs for the development of software for small, resource-constrained devices such as cell phones, PDAs and set-top boxes.
The Java Runtime Environment (JRE) and Java Development Kit (JDK) are the actual files downloaded and installed on a computer to run or develop Java programs, respectively.
General purpose packages
java.lang
The java.lang package contains fundamental classes and interfaces closely tied to the language and runtime system. This includes the root classes that form the class hierarchy, types tied to the language definition, basic exceptions, math functions, threading, security functions, as well as some information on the underlying native system. This package contains 22 of 32 Error classes provided in JDK 6.
The main classes and interfaces in java.lang are:
Object – the class that is the root of every class hierarchy.
Enum – the base class for enumeration classes (as of J2SE 5.0).
Class – the class that is the root of the Java reflection system.
Throwable – the class that is the base class of the exception class hierarchy.
Error, Exception, and RuntimeException – the base classes for each exception type.
Thread – the class that allows operations on threads.
String – the class for strings and string literals.
StringBuffer and StringBuilder – classes for performing string manipulation (StringBuilder as of J2SE 5.0).
Comparable – the interface that allows generic comparison and ordering of objects (as of J2SE 1.2).
Iterable – the interface that allows generic iteration using the enhanced for loop (as of J2SE 5.0).
ClassLoader, Process, Runtime, SecurityManager, and System – classes that provide "system operations" that manage the dynamic loading of classes, creation of external processes, host environment inquiries such as the time of day, and enforcement of security policies.
Math and StrictMath – classes that provide basic math functions such as sine, cosine, and square root (StrictMath as of J2SE 1.3).
The primitive wrapper classes that encapsulate primitive types as objects.
The basic exception classes thrown for language-level and other common exceptions.
Classes in java.lang are automatically imported into every source file.
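As a brief illustration of several of these types, all usable without an import statement, consider the following sketch; the Celsius class and its values are invented for the example.

// Object, String, StringBuilder, Comparable, Math and Thread are all in java.lang.
public class Celsius implements Comparable<Celsius> {
    private final double value;

    public Celsius(double value) { this.value = value; }

    @Override
    public int compareTo(Celsius other) {
        return Double.compare(this.value, other.value);   // generic ordering via Comparable
    }

    @Override
    public String toString() {
        return new StringBuilder().append(Math.round(value)).append(" C").toString();
    }

    public static void main(String[] args) throws InterruptedException {
        Celsius a = new Celsius(21.6), b = new Celsius(18.2);
        System.out.println(a.compareTo(b) > 0 ? a : b);   // prints the warmer reading
        Thread.sleep(10);                                 // java.lang.Thread
    }
}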
java.lang.ref
The package provides more flexible types of references than are otherwise available, permitting limited interaction between the application and the Java Virtual Machine (JVM) garbage collector. It is an important package, central enough to the language for the language designers to give it a name that starts with "java.lang", but it is somewhat special-purpose and not used by a lot of developers. This package was added in J2SE 1.2.
Java has an expressive system of references and allows for special behavior for garbage collection. A normal reference in Java is known as a "strong reference." The java.lang.ref package defines three other types of references—soft, weak, and phantom references. Each type of reference is designed for a specific use.
A soft reference (SoftReference) can be used to implement a cache. An object that is not reachable by a strong reference (that is, not strongly reachable), but is referenced by a soft reference is called "softly reachable." A softly reachable object may be garbage collected at the discretion of the garbage collector. This generally means that softly reachable objects are only garbage collected when free memory is low—but again, this is at the garbage collector's discretion. Semantically, a soft reference means, "Keep this object when nothing else references it, unless the memory is needed."
A weak reference (WeakReference) is used to implement weak maps. An object that is not strongly or softly reachable, but is referenced by a weak reference is called "weakly reachable". A weakly reachable object is garbage collected in the next collection cycle. This behavior is used in the WeakHashMap class. A weak map allows the programmer to put key/value pairs in the map and not worry about the objects taking up memory when the key is no longer reachable anywhere else. Another possible application of weak references is the string intern pool. Semantically, a weak reference means "get rid of this object when nothing else references it at the next garbage collection."
A phantom reference (PhantomReference) is used to reference objects that have been marked for garbage collection and have been finalized, but have not yet been reclaimed. An object that is not strongly, softly or weakly reachable, but is referenced by a phantom reference is called "phantom reachable." This allows for more flexible cleanup than is possible with the finalization mechanism alone. Semantically, a phantom reference means "this object is no longer needed and has been finalized in preparation for being collected."
Each of these reference types extends the Reference class, which provides the get() method to return a strong reference to the referent object (or null if the reference has been cleared or if the reference type is phantom), and the clear() method to clear the reference.
The java.lang.ref package also defines the ReferenceQueue class, which can be used in each of the applications discussed above to keep track of objects that have changed reference type. When a Reference is created, it is optionally registered with a reference queue. The application polls the reference queue to get references that have changed reachability state.
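A minimal sketch of these APIs, using an arbitrary byte-array payload; whether and when the object is actually collected remains at the garbage collector's discretion.

import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<byte[]> queue = new ReferenceQueue<>();
        byte[] payload = new byte[1024];                             // strongly reachable
        WeakReference<byte[]> ref = new WeakReference<>(payload, queue);

        payload = null;   // drop the strong reference; the object is now weakly reachable
        System.gc();      // request (not force) a collection

        Reference<? extends byte[]> enqueued = queue.remove(1000);   // wait up to one second
        System.out.println(enqueued == ref
                ? "reference enqueued after collection"
                : "object not collected yet: " + ref.get());
    }
}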
java.lang.reflect
Reflection is a constituent of the Java API that lets Java code examine and "reflect" on Java components at runtime and use the reflected members. Classes in the java.lang.reflect package, together with java.lang.Class, accommodate applications such as debuggers, interpreters, object inspectors, class browsers, and services such as object serialization and JavaBeans that need access to either the public members of a target object (based on its runtime class) or the members declared by a given class. This package was added in JDK 1.1.
Reflection is used to instantiate classes and invoke methods using their names, a concept that allows for dynamic programming. Classes, interfaces, methods, fields, and constructors can all be discovered and used at runtime. Reflection is supported by metadata that the JVM has about the program.
Techniques
There are two basic techniques involved in reflection:
Discovery – this involves taking an object or class and discovering the members, superclasses, implemented interfaces, and then possibly using the discovered elements.
Use by name – involves starting with the symbolic name of an element and using the named element.
Discovery
Discovery typically starts with an object and calling its getClass() method to get the object's Class. The Class object has several methods for discovering the contents of the class (see the sketch after this list), for example:
getMethods() – returns an array of Method objects representing all the public methods of the class or interface
getConstructors() – returns an array of Constructor objects representing all the public constructors of the class
getFields() – returns an array of Field objects representing all the public fields of the class or interface
getClasses() – returns an array of Class objects representing all the public classes and interfaces that are members (e.g. inner classes) of the class or interface
getSuperclass() – returns the Class object representing the superclass of the class or interface (null is returned for interfaces)
getInterfaces() – returns an array of Class objects representing all the interfaces that are implemented by the class or interface
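For example, a small discovery sketch over an arbitrary object (a String is used here purely for illustration):

import java.lang.reflect.Method;

public class DiscoveryDemo {
    public static void main(String[] args) {
        Object target = "hello";
        Class<?> cls = target.getClass();            // discovery starts from getClass()

        System.out.println("Class:      " + cls.getName());
        System.out.println("Superclass: " + cls.getSuperclass());
        for (Class<?> iface : cls.getInterfaces()) {
            System.out.println("Implements: " + iface.getName());
        }
        for (Method m : cls.getMethods()) {          // all public methods, incl. inherited ones
            System.out.println("Method:     " + m.getName());
        }
    }
}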
Use by name
The Class object can be obtained either through discovery, by using the class literal (e.g. MyClass.class) or by using the name of the class (e.g. Class.forName("MyClass")). With a Class object, member Method, Constructor, or Field objects can be obtained using the symbolic name of the member, as illustrated in the sketch following these lists. For example:
getMethod(String methodName, Class... parameterTypes) – returns the Method object representing the public method with the name "methodName" of the class or interface that accepts the parameters specified by the Class... parameters.
getConstructor(Class... parameterTypes) – returns the Constructor object representing the public constructor of the class that accepts the parameters specified by the Class... parameters.
getField(String fieldName) – returns the Field object representing the public field with the name "fieldName" of the class or interface.
Method, Constructor, and Field objects can be used to dynamically access the represented member of the class. For example:
Field.get(Object instance) – returns an Object containing the value of the field from the instance of the object passed to get(). (If the Field object represents a static field then the Object parameter is ignored and may be null.)
Method.invoke(Object instance, Object... parameters) – returns an Object containing the result of invoking the method for the instance of the first Object parameter passed to invoke(). The remaining Object... parameters are passed to the method. (If the Method object represents a static method then the first Object parameter is ignored and may be null.)
Constructor.newInstance(Object... parameters) – returns the new Object instance from invoking the constructor. The Object... parameters are passed to the constructor. (Note that the parameterless constructor for a class can also be invoked by calling the class's newInstance() method.)
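A short use-by-name sketch, using the standard java.util.ArrayList class purely as an example target:

import java.lang.reflect.Method;

public class UseByNameDemo {
    public static void main(String[] args) throws Exception {
        // Obtain the Class object from its name, then look up members by symbolic name.
        Class<?> listClass = Class.forName("java.util.ArrayList");
        Object list = listClass.getConstructor().newInstance();

        Method add = listClass.getMethod("add", Object.class);
        add.invoke(list, "first element");                    // dynamic invocation

        Method size = listClass.getMethod("size");
        System.out.println("size = " + size.invoke(list));    // prints: size = 1
    }
}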
Arrays and proxies
The java.lang.reflect package also provides an Array class that contains static methods for creating and manipulating array objects, and since J2SE 1.3, a Proxy class that supports dynamic creation of proxy classes that implement specified interfaces.
The implementation of a Proxy class is provided by a supplied object that implements the InvocationHandler interface. The InvocationHandler's invoke() method is called for each method invoked on the proxy object—the first parameter is the proxy object, the second parameter is the Method object representing the method from the interface implemented by the proxy, and the third parameter is the array of parameters passed to the interface method. The invoke() method returns an Object result that contains the result returned to the code that called the proxy interface method.
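A compact sketch of a dynamic proxy that logs each call before delegating to a real object; the Greeter interface and its names are invented for the example.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface Greeter { String greet(String name); }          // example interface

    public static void main(String[] args) {
        Greeter real = name -> "Hello, " + name;               // the real target object

        InvocationHandler handler = (proxy, method, params) -> {
            System.out.println("calling " + method.getName()); // cross-cutting logging
            return method.invoke(real, params);                 // delegate to the target
        };

        Greeter proxy = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);

        System.out.println(proxy.greet("world"));   // prints "calling greet", then "Hello, world"
    }
}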
java.io
The java.io package contains classes that support input and output. The classes in the package are primarily stream-oriented; however, a class for random access files is also provided. The central classes in the package are InputStream and OutputStream, which are abstract base classes for reading from and writing to byte streams, respectively. The related classes Reader and Writer are abstract base classes for reading from and writing to character streams, respectively. The package also has a few miscellaneous classes to support interactions with the host file system.
Streams
The stream classes follow the decorator pattern by extending the base subclass to add features to the stream classes. Subclasses of the base stream classes are typically named for one of the following attributes:
the source/destination of the stream data
the type of data written to/read from the stream
additional processing or filtering performed on the stream data
The stream subclasses are named using the naming pattern XxxStreamType where Xxx is the name describing the feature and StreamType is one of InputStream, OutputStream, Reader, or Writer.
The following table shows the sources/destinations supported directly by the java.io package:
Other standard library packages provide stream implementations for other destinations, such as the InputStream returned by the method or the Java EE class.
Data type handling and processing or filtering of stream data is accomplished through stream filters. The filter classes all accept another compatible stream object as a parameter to the constructor and decorate the enclosed stream with additional features. Filters are created by extending one of the base filter classes FilterInputStream, FilterOutputStream, FilterReader, or FilterWriter.
The Reader and Writer classes are really just byte streams with additional processing performed on the data stream to convert the bytes to characters. They use the default character encoding for the platform, which as of J2SE 5.0 is represented by the Charset returned by the static Charset.defaultCharset() method. The InputStreamReader class converts an InputStream to a Reader and the OutputStreamWriter class converts an OutputStream to a Writer. Both these classes have constructors that support specifying the character encoding to use. If no encoding is specified, the program uses the default encoding for the platform.
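A minimal sketch of decorating a byte stream and specifying the character encoding explicitly; the file name is a placeholder.

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class StreamDecorationDemo {
    public static void main(String[] args) throws IOException {
        // FileInputStream (byte source) -> InputStreamReader (bytes to chars, UTF-8)
        // -> BufferedReader (adds buffering and readLine()), following the decorator pattern.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream("notes.txt"),
                                      StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}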
The following table shows the other processes and filters that the java.io package directly supports. All these classes extend the corresponding Filter class.
Random access
The RandomAccessFile class supports random access reading and writing of files. The class uses a file pointer that represents a byte offset within the file for the next read or write operation. The file pointer is moved implicitly by reading or writing and explicitly by calling the seek() or skipBytes() methods. The current position of the file pointer is returned by the getFilePointer() method.
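A brief sketch of the file-pointer behaviour described above, using a placeholder file name.

import java.io.IOException;
import java.io.RandomAccessFile;

public class RandomAccessDemo {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("data.bin", "rw")) {
            raf.writeInt(42);                        // the pointer moves implicitly to offset 4
            System.out.println(raf.getFilePointer());

            raf.seek(0);                             // move the pointer explicitly
            System.out.println(raf.readInt());       // prints 42

            raf.skipBytes(4);                        // skipBytes() also advances the pointer
        }
    }
}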
File system
The File class represents a file or directory path in a file system. File objects support the creation, deletion and renaming of files and directories and the manipulation of file attributes such as read-only and last modified timestamp. File objects that represent directories can be used to get a list of all the contained files and directories.
The FileDescriptor class is a file descriptor that represents a source or sink (destination) of bytes. Typically this is a file, but can also be a console or network socket. FileDescriptor objects are used to create File streams. They are obtained from File streams and java.net sockets and datagram sockets.
java.nio
In J2SE 1.4, the java.nio package (NIO, or Non-blocking I/O) was added to support memory-mapped I/O, facilitating I/O operations closer to the underlying hardware with sometimes dramatically better performance. The java.nio package provides support for a number of buffer types. The java.nio.charset subpackage provides support for different character encodings for character data. The java.nio.channels subpackage provides support for channels, which represent connections to entities that are capable of performing I/O operations, such as files and sockets. The java.nio.channels package also provides support for fine-grained locking of files.
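A small sketch of memory-mapped I/O through a FileChannel; the file name and the value written are placeholders.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class NioMapDemo {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile("mapped.bin", "rw");
             FileChannel channel = file.getChannel()) {
            // Map the first 16 bytes of the file directly into memory.
            MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, 16);
            buffer.putInt(0, 0xCAFEBABE);            // write through the mapping
            System.out.println(Integer.toHexString(buffer.getInt(0)));
        }
    }
}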
java.math
The java.math package supports multiprecision arithmetic (including modular arithmetic operations) and provides multiprecision prime number generators used for cryptographic key generation. The main classes of the package, illustrated in the sketch after this list, are:
BigDecimal – provides arbitrary-precision signed decimal numbers. BigDecimal gives the user control over rounding behavior through RoundingMode.
BigInteger – provides arbitrary-precision integers. Operations on BigInteger do not overflow or lose precision. In addition to standard arithmetic operations, it provides modular arithmetic, GCD calculation, primality testing, prime number generation, bit manipulation, and other miscellaneous operations.
MathContext – encapsulates the context settings that describe certain rules for numerical operators.
RoundingMode – an enumeration that provides eight rounding behaviors.
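A short sketch showing BigDecimal with MathContext and RoundingMode, plus a few BigInteger operations:

import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.MathContext;
import java.math.RoundingMode;

public class BigMathDemo {
    public static void main(String[] args) {
        // Arbitrary-precision decimal division with explicit precision and rounding.
        BigDecimal third = BigDecimal.ONE.divide(
                new BigDecimal("3"), new MathContext(10, RoundingMode.HALF_UP));
        System.out.println(third);                              // 0.3333333333

        // Arbitrary-precision integers: no overflow, with modular arithmetic and GCD.
        BigInteger big = BigInteger.valueOf(2).pow(200);         // 2 to the power 200
        System.out.println(big.mod(BigInteger.valueOf(97)));
        System.out.println(big.gcd(BigInteger.valueOf(1024)));   // 1024
    }
}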
java.net
The package provides special IO routines for networks, allowing HTTP requests, as well as other common transactions.
java.text
The package implements parsing routines for strings and supports various human-readable languages and locale-specific parsing.
java.util
Data structures that aggregate objects are the focus of the java.util package. Included in the package is the Collections API, an organized data structure hierarchy influenced heavily by design pattern considerations.
Special purpose packages
java.applet
Created to support Java applet creation, the package lets applications be downloaded over a network and run within a guarded sandbox. Security restrictions are easily imposed on the sandbox. A developer, for example, may apply a digital signature to an applet, thereby labeling it as safe. Doing so allows the user to grant the applet permission to perform restricted operations (such as accessing the local hard drive), and removes some or all the sandbox restrictions. Digital certificates are issued by certificate authorities.
java.beans
Included in the package are various classes for developing and manipulating beans, reusable components defined by the JavaBeans architecture. The architecture provides mechanisms for manipulating properties of components and firing events when those properties change.
The APIs in java.beans are intended for use by a bean editing tool, in which beans can be combined, customized, and manipulated. One type of bean editor is a GUI designer in an integrated development environment.
java.awt
The java.awt package, or Abstract Window Toolkit, provides access to a basic set of GUI widgets based on the underlying native platform's widget set, the core of the GUI event subsystem, and the interface between the native windowing system and the Java application. It also provides several basic layout managers, a data-transfer package for use with the Clipboard and Drag and Drop, the interface to input devices such as mice and keyboards, as well as access to the system tray on supporting systems. This package, along with javax.swing, contains the largest number of enums (7 in all) in JDK 6.
java.rmi
The package provides Java remote method invocation to support remote procedure calls between two java applications running in different JVMs.
java.security
Support for security, including the message digest algorithm, is included in the package.
java.sql
An implementation of the JDBC API (used to access SQL databases) is grouped into the package.
javax.rmi
The package provides the support for the remote communication between applications, using the RMI over IIOP protocol. This protocol combines RMI and CORBA features.
Java SE Core Technologies - CORBA / RMI-IIOP
javax.swing
Swing is a collection of routines that build on java.awt to provide a platform independent widget toolkit. Swing uses the Java 2D drawing routines to render the user interface components instead of relying on the underlying native operating system GUI support.
This package contains the largest number of classes (133 in all) in JDK 6. This package, along with java.awt also contains the largest number of enums (7 in all) in JDK 6. It supports pluggable looks and feels (PLAFs) so that widgets in the GUI can imitate those from the underlying native system. Design patterns permeate the system, especially a modification of the model–view–controller pattern, which loosens the coupling between function and appearance. One inconsistency is that (as of J2SE 1.3) fonts are drawn by the underlying native system, and not by Java, limiting text portability. Workarounds, such as using bitmap fonts, do exist. In general, "layouts" are used and keep elements within an aesthetically consistent GUI across platforms.
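A minimal Swing sketch, including the pluggable look-and-feel switch mentioned above; the window contents are arbitrary.

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;

public class SwingDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            try {
                // Pluggable look and feel: imitate the underlying native system's widgets.
                UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
            } catch (Exception e) {
                // Fall back to the default cross-platform ("Metal") look and feel.
            }
            JFrame frame = new JFrame("Swing demo");
            frame.add(new JButton("Click me"));
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
        });
    }
}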
javax.swing.text.html.parser
The package provides the error tolerant HTML parser that is used for writing various web browsers and web bots.
javax.xml.bind.annotation
The package contains the largest number of Annotation Types (30 in all) in JDK 6. It defines annotations for customizing Java program elements to XML Schema mapping.
OMG packages
org.omg.CORBA
The package provides the support for the remote communication between applications using the General Inter-ORB Protocol and supports other features of the common object request broker architecture. Same as RMI and RMI-IIOP, this package is for calling remote methods of objects on other virtual machines (usually via network).
This package contains the largest number of Exception classes (45 in all) in JDK 6. From all communication possibilities CORBA is portable between various languages; however, with this comes more complexity.
These packages were deprecated in Java 9 and removed from Java 11.
org.omg.PortableInterceptor
The package contains the largest number of interfaces (39 in all) in JDK 6. It provides a mechanism to register ORB hooks through which ORB services intercept the normal flow of execution of the ORB.
Security
Several critical security vulnerabilities have been reported. Security alerts from Oracle announce critical security-related patches to Java SE.
References
External links
Oracle Technology Network's Java SE
Java SE API documentation
JSR 270 (Java SE 6)
1.8
1.7
1.6
Computing platforms
Platform, Standard Edition
Manjaro
Manjaro () is a free and open-source Linux distribution based on the Arch Linux operating system. Manjaro has a focus on user-friendliness and accessibility. It features a rolling release update model and uses Pacman as its package manager. Manjaro is mainly developed in Austria, France and Germany.
Official editions
Manjaro Xfce, which features Manjaro's own dark theme as well as the Xfce desktop.
Manjaro KDE, which features Manjaro's own dark Plasma theme as well as the latest KDE Plasma 5, apps and frameworks.
Manjaro GNOME became the third official version with the Gellivara release and offers the GNOME desktop along with a version of the Manjaro theme.
While not official releases, Manjaro Community Editions are maintained by members of the Manjaro community. They offer additional user interfaces over the official releases, including Budgie, Cinnamon, Deepin, i3, MATE, and Sway.
Manjaro also has editions for devices with ARM processors, such as single-board computers or Pinebook notebooks.
Features
Manjaro comes with both a CLI and a graphical installer. The rolling release model means that the user does not need to upgrade/reinstall the whole system to keep it up to date in line with the latest release. Package management is handled by Pacman via the command line (terminal), and by front-end GUI package manager tools like the pre-installed Pamac. It can be configured as either a stable system (default) or bleeding edge in line with Arch.
The repositories are managed with their own tool called BoxIt, which is designed like Git.
Manjaro includes its own GUI settings manager where options like language, drivers, and kernel version can be simply configured.
Certain commonly used Arch utilities such as the Arch Build System (ABS) are available but have alternate implementations in Manjaro.
Manjaro Architect is a CLI net installer that allows the user to choose their own kernel version, drivers, and desktop environment during the install process. Both the official and the community edition's desktop environments are available for selection. For GUI based installations, Manjaro uses the GUI installer Calamares.
Release history
The 0.8.x series releases were the last versions of Manjaro to use a version number. The desktop environments offered, as well as the number of programs bundled into each separate release, have varied in different releases.
Relation to Arch Linux
The main difference compared to Arch Linux is the repositories.
Manjaro uses three sets of repositories:
Unstable: contains the most up to date Arch Linux packages. Unstable is ~3 days behind Arch Linux.
Testing: contains packages from the unstable repositories after they have been tested by unstable users.
Stable: contains only packages that are deemed stable by the development team, which can mean a delay of a few weeks before getting major upgrades.
Non-security-related package updates derived from the Arch Linux stable branch typically reach the Manjaro stable branch with a lag of a few weeks.
History
Manjaro was first released on July 10, 2011. By mid 2013, Manjaro was in the beta stage, though key elements of the final system had all been implemented such as: a GUI installer (then an Antergos installer fork); a package manager (Pacman) with its choice of frontends; Pamac (GTK) for Xfce desktop and Octopi (Qt) for its Openbox edition; MHWD (Manjaro Hardware Detection, for detection of free & proprietary video drivers); and Manjaro Settings Manager (for system-wide settings, user management, and graphics driver installation and management).
GNOME Shell support was dropped with the release of version 0.8.3. in 2012. However, efforts within Arch Linux made it possible to restart the Cinnamon/GNOME edition as a community edition. An official release offering the GNOME desktop environment was reinstated in March 2017.
During the development of Manjaro 0.9.0 at the end of August 2015, the Manjaro team decided to switch to year and month designations for the Manjaro version scheme instead of numbers. This applies to both the 0.8.x series as well as the new 0.9.x series, renaming 0.8.13, released in June 2015, as 15.06 and so on. Manjaro 15.09, codenamed Bellatrix and formerly known as 0.9.0, was released on 27 September 2015 with the new Calamares installer and updated packages.
In September 2017, Manjaro announced that support for i686 architecture would be dropped because "popularity of this architecture is decreasing". However, in November 2017 a semi-official community project "manjaro32", based on archlinux32, continued i686 support.
In September 2019, the company Manjaro GmbH & Co. KG was founded. The It's FOSS website stated the company was formed '... to effectively engage in commercial agreements, form partnerships, and offer professional services'.
Derivatives
Netrunner Rolling, in addition to Blue Systems Netrunner, which is Debian-based. The first version of Netrunner Rolling was 2014.04, which was based on Manjaro 0.8.9 KDE. It was released in 2014. The last released version was Netrunner Rolling 2019.04.
The Sonar GNU/Linux project was aimed at providing a barrier-free Linux to people who required assistive technology for computer use, with support for GNOME and MATE desktop. The first version was released in February 2015, the latest release was in 2016. As of 2017, the Sonar project was discontinued.
Hardware
Although Manjaro can be installed on most systems, some vendors sell computers with Manjaro pre-installed on them. Suppliers of computers pre-installed with Manjaro include StarLabs Systems, Tuxedo Computers, manjarocomputer.eu and Pine64.
Manjaro is also the official default operating system used in the PinePhone, an ARM based smartphone released by Pine64.
Reception
Manjaro Linux is noted as an easy desktop to set up and use, suitable for both beginners and experienced users. It is recommended as an easy and friendly way to install, build and maintain a cutting-edge Arch-derived distribution. Some users will find appeal in the large range of contributed software available from the AUR, which has a reputation for being kept up to date from upstream resources.
Very early versions of Manjaro had a reputation for crashing and for installation difficulties, but this was reported to have improved with later versions, and by 2014 was, according to Jesse Smith of DistroWatch, "proving to be probably the most polished child of Arch Linux I have used to date. The distribution is not only easy to set up, but it has a friendly feel, complete with a nice graphical package manager, quality system installer and helpful welcome screen. Manjaro comes with lots of useful software and multimedia support."
Smith did a review of Manjaro 17.0.2 Xfce in July 2017, and observed that it did "a lot of things well". He went on to extol some of the notable features as part of his conclusion:
Notes
References
External links
2011 software
Arch-based Linux distributions
ARM Linux distributions
KDE
Linux distributions
Live USB
LXQt
Operating system distributions bootable from read-only media
Pacman-based Linux distributions
Rolling Release Linux distributions
X86-64 Linux distributions
Sprint (word processor)
Sprint is a text-based word processor for MS-DOS, first published by Borland in 1987.
History
Sprint originally appeared as the "FinalWord" application, developed by Jason Linhart, Craig Finseth, Scott Layson Burson, Brian Hess, and Bill Spitzak at Mark of the Unicorn - a company (headquartered in Cambridge, MA) which is now better known for its music software products. At the time MOTU sold MINCE and SCRIBBLE, a text editor package based on Emacs. As The FinalWord, the package met with some success: for example, the manuals of the Lotus software package were written on it, as was Marvin Minsky's book The Society of Mind.
FinalWord II was renamed Sprint when it was acquired by Borland, which added a new user interface, new manuals, and features to the application. The editor speed was considered blazing at the time, running with no delays on machines as slow as 8 megahertz.
This was the period of European development for Borland: Sidekick and Turbo Pascal had been developed in Denmark, and the management of the European subsidiary comprised former MicroPro France managers. (MicroPro was at the time the world leader in word-processing software with the famous WordStar line-up; it had success with the launch of WordStar 2000 – the first word-processor package with a spelling dictionary in French.)
This is why the development and marketing of the product was conducted in France. Sprint is one of the very few major products from an American software publisher that had a French version shipped before the American version.
Sprint v1.0 shipped in France with notable initial success, capturing a 30 percent market share and getting the jump on competing word processors. MicroPro was weakening with old Wordstar products and still-new WordStar 2000; WordPerfect was having problems with the translation and the user interface; and MS-Word was a decent but less polished or powerful product, and was also DOS and text-based.
The lack of beta testing, combined with pressure to ship in time for the back-to-school season, resulted in a Sprint 1.0 that had a number of minor glitches and bugs, which had to be corrected with version 1.01 and a whole new set of diskettes for every single registered user.
Version 1.0 (equivalent of French 1.01) shipped a few months later in the US and rest of world, with a mixed reception from customers. Traditional Borland fans who bought Sprint were happy with the editor, but wondered why the package included a sophisticated formatter, while business users who wanted a word processor just to write their memos and letters wondered what to do with the heavy manual and powerful features of the formatter language. In any event, word processing was shifting to WYSIWYG.
Version 1.5 did ship with a number of new features and real stability in France, but never made it elsewhere, although a number of localized versions had been built for various European countries. At this time, Borland Scandinavia had gone bankrupt, while Borland France had to be saved by massive financial help from the US. The developers who once worked in Europe had to move to the Scotts Valley CA premises. Version 1.5 was a reasonable success in France for some years, but Microsoft Word and Windows gained momentum and obscured all the other products.
In North America, Sprint never really gained traction in the marketplace, as it was overshadowed by WordPerfect and then Microsoft Word. It built up a small, but loyal and often enthusiastic, following among professional writers, researchers, academics, and programmers who appreciated its power, speed, and ability to handle large documents. Borland did not believe that there was enough of a market to warrant updating the product, and it eventually stopped supporting it.
Features
Crash-recovery: Sprint had incremental back-up, with its swap file updated every 3 seconds, enabling full recovery from crashes. At trade shows, demos were made with one person pulling out the power cord, and the typist resuming work as soon as the machine restarted.
Spell-as-you-type: With this feature, Sprint could beep at the user in real time when detecting a typo.
Multilingual editing: Sprint included dictionary switching, support for hyphenation, and spelling and thesaurus dictionaries that have yet to be matched by the competitors.
Separate formatter and programmable editor: These have been useful features for corporate environments aiming at standardizing documents or building "boilerplate" contracts. In France, for example, applications were built for Banques Populaires (loan contracts) or Conseil d'Etat, while some local government agencies created specific applications for tenders and contracts.
Powerful programming language: Programming in Sprint was done with the internal language of the word processor - a language that is much like C. Programmers have the ability to "get under the hood" and add modifications and extensions to an extent not possible with other word processors. Once written, Sprint programs are compiled into the interface, and run at full speed.
Interface switching: Modifications and extensions to Sprint can be saved into separate interfaces which can be easily and quickly switched. This is useful for people working in different languages, as the keys can be mapped to the accents and characters of each language, depending on the interface.
File handling: Users could work in up to 24 files at once. All open files could be saved on exit or not—and nevertheless automatically reopened as left, including each file's cursor position, cut and paste buffer contents, and spell check status. Because this behavior was accomplished using the crash-recovery swap file (see above), it allowed an "instant-on" behavior using the saved state from the previous run; this was unusual for its time.
Handling large documents: Sprint has the ability to publish large documents (hundreds of pages) with strict formatting consistency and automatic table of contents, index generation, tables of figures, and tables of authorities. These features made Sprint a leader in the production of technical documents - and Borland itself did all its manuals on Sprint, for years.
PostScript capabilities: Sprint could print in-line EPS images with dimensioning, and also had the ability to add in-line PostScript procedures. This made the product rather popular in the printing industry. For example, making a 200-page novel fit into 192 pages was simply a matter of changing the point size from 11 to 10.56. Sprint could size by 0.04 increment and scale the line spacing and kerning accordingly. (The 192 pages size is important in the printing industry, where the number of pages often has to be divisible by 32. A 200-page book would have to be printed using 224 pages, the extra 24 pages being empty.)
Consistency with familiar environments: The default editor key bindings were a subset of those provided by EMACS, and the mark-up language was a subset of Scribe, making it easy for people familiar with those tools to use Sprint.
Reception
BYTE in 1984 praised FinalWord 1.16's low memory requirements and many powerful features. Criticisms included great difficulty in learning how to use it and instability, including a serious bug that destroyed four days of work. The magazine in 1989 listed Sprint as among the "Distinction" winners of the BYTE Awards, stating that "if you can live without [WYSIWYG], Sprint may be all you need in word processing software".
See also
MINCE
References
External references
Manuals are on the "Wayback Machine", the Internet Archive in several formats, emobi, pdf, djvu, etc.
From https://archive.org/search.php?query=borland%20sprint (scanned documents from the bitsavers.org collection):
Borland Sprint Reference Guide, 1988
Borland Sprint Users Guide, 1988
Borland Sprint Advanced Users Guide, 1988
Borland Sprint Alternative User Interfaces, 1988
1987 software
DOS word processors
Borland software
Microsoft Surface
Microsoft Surface is a series of touchscreen-based personal computers, tablets and interactive whiteboards designed and developed by Microsoft, running the Microsoft Windows operating system, apart from the Surface Duo, which runs on Android. The devices are manufactured by original equipment manufacturers, including Pegatron, and are designed to be premium devices that set examples to Windows OEMs. It comprises seven generations of hybrid tablets, 2-in-1 detachable notebooks, a convertible desktop all-in-one, an interactive whiteboard, and various accessories all with unique form factors. The majority of the Surface lineup features Intel processors and are compatible with Microsoft's Windows 10 or Windows 11 operating system.
Devices
The Surface family features ten main lines of devices:
The Surface line of tabletop computers, which featured PixelSense technology to recognize objects placed on the screen.
The Surface Go line of hybrid tablets, with optional detachable keyboard accessories and optional stylus pen. The latest model is the Surface Go 3.
The Surface Pro line of hybrid tablets, with similar, optional detachable keyboard accessories and optional stylus pen. The two latest models are the Surface Pro 8 and the Surface Pro X, which has the Microsoft SQ2 ARM SoC (a custom version of the Snapdragon 8cx.)
The Surface Laptop Go, introduced by Microsoft in October 2020, the Laptop Go is marketed as a more affordable alternative to the brand's premium laptops.
The Surface Laptop, a notebook with a 13.5-inch or 15-inch non-detachable touchscreen. The original device runs Windows 10 S by default; however, it can be upgraded to Windows 10 Pro. Starting with the Surface Laptop 2, the regular Home and Pro editions are used.
The Surface Book, a notebook with a detachable tablet screen. The base is configurable with or without discrete graphics and an independently operable tablet screen, on which the optional stylus pen functions. The stylus pen is sold separately from the latest Surface Book model.
The Surface Laptop Studio, a laptop that can adjust into a digital drafting table.
The Surface Studio, a 28-inch all-in-one desktop that adjusts into a digital drafting table with stylus and on-screen Surface Dial support.
The Surface Hub, a touch screen interactive whiteboard designed for collaboration.
The Surface Neo, an upcoming dual-screen touch screen device of which both screens are nine inches, which was originally planned to run Windows 10X until this OS was discontinued by Microsoft. At this time, it is unknown which OS the Surface Neo will run.
The Surface Duo, a dual-screen touch screen device of which both screens are 5.6 inches and can be used as a phone that runs Android.
History
Microsoft first announced Surface at an event on June 18, 2012, presented by former CEO Steve Ballmer in Milk Studios Los Angeles. Surface was the first major initiative by Microsoft to integrate its Windows operating system with its own hardware, and is the first PC designed and distributed solely by Microsoft.
The first Surface device in the Surface line was marketed as "Surface for Windows RT" at the time and was announced by Steven Sinofsky, former President of Windows and Windows Live. The second Surface line, based on the Intel architecture, was spearheaded by the Surface Pro, marketed as "Surface for Windows 8 Pro" at the time, and was demoed by Michael Angiulo, a corporate VP.
Sinofsky initially stated that pricing for the first Surface would be comparable to other ARM devices and pricing for Surface Pro would be comparable to current ultrabooks. Later, Ballmer noted the "sweet spot" for the bulk of the PC market was $300 to $800. Microsoft revealed the pricing and began accepting preorders for the 2012 Surface tablet, on October 16, 2012 "for delivery by 10/26". The device was launched alongside the general availability of Windows 8 on October 26, 2012. Surface Pro became available the following year on February 9, 2013. The devices were initially available only at Microsoft Stores retail and online, but availability was later expanded into other vendors.
In November 2012, Ballmer described the distribution approach to Surface as "modest" and on November 29 of that year, Microsoft revealed the pricing for the 64 GB and 128 GB versions of Surface with Windows 8 Pro. The tablet would go on sale on February 9, 2013, in the United States and Canada. A launch event was set to be held on February 8, 2013, but was cancelled at the last minute due to the February 2013 nor'easter. The 128GB version of the tablet sold out on the same day as its release. Though there was less demand for the 64GB version because of the much smaller available storage capacity, supplies of the lower cost unit were almost as tight.
On September 23, 2013, Microsoft announced the Surface 2 and Surface Pro 2, which feature hardware and software updates from the original. The Surface 2 launched October 22, 2013, alongside the Surface Pro 2, four days after the general availability of Windows 8.1. Later, Microsoft launched a variation of the Surface 2 with LTE connectivity for the AT&T network on March 18, 2014. Microsoft then announced the redesigned Surface Pro 3 on May 20, 2014, which went on sale on June 20, 2014.
The following year, on March 30, 2015, it announced the Surface 3, a more compact version of the Surface Pro 3. On September 8, 2015, Microsoft announced the "Surface Enterprise Initiative", a partnership between Accenture, Avanade, Dell Inc., and HP, to "enable more customers to enjoy the benefits of Windows 10." As part of the partnership, Dell will resell Surface Pro products through its business and enterprise channels, and offer its existing enterprise services (including Pro Support, warranty, and Configuration and Deployment) for Surface Pro devices it sells.
Microsoft announced the next-generation Surface Pro 4 and the all-new Surface Book, a hybrid laptop, at its October 2015 event in New York on October 6, 2015. Microsoft began shipping Surface Hub devices on March 25, 2016. In June 2016, Microsoft confirmed production of the Surface 3 would stop in December of that year. No replacement product has been announced. Reports suggest this may be a consequence of Intel discontinuing the Broxton iteration of the Atom processor. On October 26, 2016, at Microsoft's event, the Surface Studio and the Surface Book with Performance Base were announced. A wheel accessory, the Surface Dial, was announced as well, and became available on November 10, 2016.
Immediately following the announcement of the Surface Laptop at the #MicrosoftEDU event on May 2, 2017, and the Microsoft Build 2017 developer conference, Microsoft announced the fifth-generation Surface Pro at a special event in Shanghai on May 23, 2017.
On October 17, 2017, Microsoft announced the Surface Book 2, adding a 15-inch model to the line and updating internal components.
On May 15, 2018, Microsoft announced the Surface Hub 2, featuring a new rotating hinge and the ability to link multiple Hubs together.
In June 2018, Microsoft announced the Surface Go, a $400 Surface tablet with a 10-inch screen and 64 or 128 GB of storage.
On October 2, 2019, Microsoft announced the Surface Pro 7, the Surface Laptop 3, and the Surface Pro X. Both the Surface Pro 7 and the Surface Laptop 3 come with a USB-C port. The Surface Pro X comes with the Microsoft SQ1 ARM processor. Microsoft also teased upcoming products: the Surface Neo, a dual screen tablet originally planned to run Windows 10X; and the Surface Duo, a dual screen mobile phone that runs Android. Both products were initially announced to be released in 2020, though reports suggest the release of the Surface Neo will be delayed until 2021. The Surface Duo was released on September 10, 2020.
On September 22, 2021, Microsoft announced the Surface Pro 8, the Surface Duo 2 and the Surface Laptop Studio. The Surface Pro 8 departs from the design established by the Surface Pro 3, instead resembling the Surface Pro X, with thinner bezels, rounded corners and two Thunderbolt 4 ports (replacing the USB-A ports in previous generations). The Surface Laptop Studio is seen as a successor to the Surface Book line, forgoing the detachable screen and opting for a new hinge design that allows the screen to be positioned in three different orientations, exactly like the Sony VAIO multi-flip line and VAIO Z Flip. All three products were released on October 5, 2021, coinciding with the release of Windows 11.
Hardware
Screen and input
The first two generations of both Surface lines feature a ClearType Full HD display with a 16:9 aspect ratio. With the release of the third-generation Surface and Surface Pro, Microsoft increased the screen sizes to 10.8 inches and 12 inches respectively, each with a 3:2 aspect ratio designed for comfortable use in a portrait orientation. The fourth generation increased the screen further to 12.3 inches. The screens feature multi-touch technology with 10 touch points and scratch-resistant Gorilla Glass. All generations of the Surface Pro and the third generation of the Surface also feature an active pen, but it is not included in the box with all models.
The display responds to other sensors: an ambient light sensor to adjust screen brightness and a 3-axis accelerometer to sense Surface orientation and switch between portrait and landscape orientation modes. The Surface's built-in applications support screen rotation in all four directions, including upside-down.
There are three buttons on the first three generations of the Surface, including a capacitive Windows button near the display that opens the Start screen, and two physical buttons on the sides: power and volume. The fourth generation removed the capacitive Windows button from the screen.
All Surfaces and Surface Pros have front and rear cameras as well as microphones.
Processor
The first-generation Surface uses a quad-core Nvidia Tegra 3 of the ARM architecture, as opposed to the Intel x64 architecture, and therefore shipped with Windows RT, which was written for the ARM architecture. The second-generation Surface 2 moved to an Nvidia Tegra 4. The architecture limited the Surface and Surface 2 to only apps from the Windows Store recompiled for ARM. With the release of the Surface 3, Microsoft switched the Surface line to the Intel x64 architecture, the same architecture found in the Surface Pro line. The Surface 3 uses the Cherry Trail Atom x7 processor.
With the Surface Pro line, Microsoft uses the Intel x64 architecture which can run most software designed for Microsoft Windows. Both Surface Pro and Surface Pro 2 had one processor variant, the Core i5, though the Surface Pro runs the Ivy Bridge iteration, and the Surface Pro 2 runs the Haswell iteration. The Surface Pro 3 added the Haswell Core i3 and Core i7 variants.
The 2019 Surface Pro X uses a custom ARM64 SOC, the Microsoft SQ1. The latest model uses an updated version of the SOC, known as Microsoft SQ2.
Storage
The Surface devices have been released in six internal storage capacities: 32, 64, 128, 256, 512 GB and 1 TB. With the release of the third generation, the 32 GB model was discontinued. All models except the Surface Pro X also feature a microSDXC card slot, located behind the kickstand, which allows for the use of memory cards up to 200 GB.
Surface devices have a different amount of non-replaceable RAM, ranging from 2 to 32 GB, attached to the motherboard.
Microsoft's Surface/Storage site revealed that the 32 GB Surface RT has approximately 16 GB of user-available storage and the 64 GB Surface RT has roughly 45 GB.
External ports
On the left or right side of most Surface tablets, there is a full-size USB Type-A port (except on the Surface Pro X and Surface Go) and a 3.5 mm headphone/microphone jack (except on the Surface Pro X). Older devices commonly had a Mini DisplayPort (or a micro-HDMI port on even older models); however, these ports have been replaced with USB-C ports since the seventh-generation Surface Pro and the Surface Go. All Surface devices except the Surface 3 have Microsoft's magnetic Surface Connect port for charging and data and come with Surface Connect power cables, although devices with USB-C ports can also be charged over those ports. All the devices have an accessory spine/Cover port along the bottom that has not changed in dimensions. The ports have been moved to different locations throughout the various generations of Surface Pros/Surfaces, and beginning with the Surface Pro 3, Microsoft moved to a fin-style Surface Connect port.
Cellular connectivity
While all Surface devices come in Wi-Fi-only models, some generations also feature variants with cellular support. The cellular variants, however, do not support circuit-switched voice calls and texts, allowing only data connectivity. The cellular models have a micro-SIM slot at the bottom of the device, next to the Type Cover connecting pins.
External color and kickstand
The exterior of the earlier generations of Surface (the 2012 tablet, Pro, and Pro 2) is made of VaporMg magnesium alloy, giving a semi-glossy, durable black finish that Microsoft calls "dark titanium". Originally, Surface was to feature a full "VaporMg" construction, but the production models dropped this in favor of a "VaporMg" coating. Later devices moved towards a matte gray finish showing the actual magnesium color through the semi-transparent top coating. The Surface Laptop is available in four colors: platinum, graphite gold, burgundy, and cobalt blue.
The Surface and Surface Pro lines feature a kickstand which flips out from the back of the device to prop it up, allowing the device to be stood up at an angle hands-free. According to Microsoft, this is great for watching movies, video chatting, and typing documents. According to some reviewers, the kickstand is uncomfortable to use in one's lap and means the device will not fit on shallow desks. The first generation has a kickstand that can be set to a 22-degree angle. The second generation added a 55-degree position, which according to Microsoft makes the device more comfortable to type on in the lap. The Surface 3 features three angle positions: 22, 44, and 60 degrees. The Surface Pro 3 is the first device to have a continuous kickstand that can be set at any angle between 22 and 150 degrees. With the fifth-generation Surface Pro, Microsoft added an additional 15 degrees of rotation to the hinge, bringing the widest possible angle to 165 degrees, or what Microsoft calls "Studio Mode".
Surface Book
On October 6, 2015, Microsoft unveiled the Surface Book, a 2-in-1 detachable with a mechanically attached, durable hardware keyboard. It became the first Surface device to be marketed as a laptop instead of a tablet. The device has a teardrop design.
The Surface Book has what Microsoft calls a "dynamic fulcrum hinge" which allows the device to support the heavier notebook/screen portion.
Another unique aspect of the Surface Book is an available discrete graphics adapter, contained in the keyboard module. This module can then be detached while the Surface Book is running, in which case the system automatically switches to the integrated graphics in the tablet unit.
On October 26, 2016, Microsoft unveiled an additional configuration, called the Surface Book with Performance Base, which has an upgraded processor and a longer battery life.
The second generation Surface Book 2 was announced on October 17, 2017, introducing an upgraded ceramic hinge for stability, and lighter overall weight distribution. A 15-inch model was added to the line.
On May 6, 2020, the third generation Surface Book 3 was announced, featuring 10th-generation Intel processors, improved battery life, and faster SSD storage.
Surface Laptop
On May 3, 2017, Microsoft unveiled the Surface Laptop, a non-detachable version of the Surface Book claimed to have the thinnest touch-enabled LCD panel of its kind. Its permanently attached hardware keyboard comes in four colors and uses the same kind of fabric as the Type Cover accessories for the tablets. The device comes with the newly announced Windows 10 S operating system, which enables faster boot times at the expense of restricting software installation to apps from the Microsoft Store rather than downloads from the web. Users can switch to a fully enabled version of Windows 10 for free.
Surface Studio
On October 26, 2016, Microsoft announced a 28-inch all-in-one desktop PC, the Surface Studio. The device is claimed to have the thinnest LCD ever made in an all-in-one PC. All its components, including the processor and a surround-sound system, are located in a compact base on which the screen is mounted via a flexible, four-point hinge. The design allows the screen to fold down to a 20-degree angle for physical interaction with the user. It comes with the Windows 10 Anniversary Update preinstalled, but is optimized for the Windows 10 Creators Update released in April 2017.
Surface Hub
On January 21, 2015, Microsoft introduced a new device category under the Surface family: the Surface Hub. It is an 84-inch 120 Hz 4K or 55-inch 1080p multi-touch, multi-pen, wall-mounted all-in-one device, aimed at collaboration and videoconferencing use in businesses. The device runs a variant of the Windows 10 operating system.
Surface Neo
On October 2, 2019, Microsoft unveiled the Surface Neo, an upcoming dual-screen tablet. The device is a folio with two 9-inch displays that can be used in various configurations ("postures"), including a laptop-like form where a Bluetooth keyboard is attached to the bottom screen. Depending on its position, the remainder of the touchscreen can be used for different features; the keyboard can be attached at the top to use the bottom as a touchpad, or at the bottom to display a special area above the keyboard (the "wonderbar"), which can house tools such as emojis. The device was originally planned to run a new Windows 10 edition known as Windows 10X, which was designed specifically for this class of devices. However, Microsoft eventually discontinued Windows 10X. At this time, it is unknown which version of Windows it will run. It is possible it may run Windows 11.
Surface Duo
Alongside the Surface Neo, Microsoft also unveiled the Surface Duo, a dual-screen Android smartphone with a similar design.
Software
Surface devices sold since July 29, 2015 ship with the Windows 10 operating system. Also, up to July 2016, older models which shipped with Windows 8.1 were eligible for a free upgrade to Windows 10.
The original Surface and Surface 2 models use Windows RT, a special version of Windows 8 designed for devices with ARM processors, and cannot be upgraded to Windows 10. There were, however, several major updates made available after the initial release, including Windows RT 8.1, RT 8.1 Update 1, the RT 8.1 August update, and RT 8.1 Update 3. These older, ARM-based models of Surface also received several new features, including a new Start menu similar to that found in early preview builds of Windows 10.
From the Surface Pro 4 onward, all Surface devices support Windows Hello facial biometric authentication out of the box through their cameras and IR sensors. The Surface Pro 3 can utilize the Surface Pro 4 Type Cover with Fingerprint ID to gain Windows Hello support.
The Surface Duo runs Android.
Tablet mode
The Windows 10 user interface has two modes: desktop mode and tablet mode. When a keyboard is connected to the Surface, Windows 10 runs in desktop mode; when it is removed or folded around the back, Windows 10 runs in tablet mode.
When running in tablet mode, the Start menu and all apps run in full screen. All running apps are hidden from the taskbar and a back button appears. Swiping from the top closes the app, while swiping from the left invokes the Task View and swiping from the right invokes the Action Center.
Apps
Several of the included apps updated with Windows 10 are: Mail, People, Calendar, Camera, Microsoft Edge, Xbox app, OneNote, Photos, Voice Recorder, Phone Companion, Calculator, Scan, Alarms & Clock, and the Microsoft Store. Other apps include Maps, Movies & TV, Groove Music, Microsoft Solitaire Collection and the MSN apps: Money, News, Weather, and Sports.
Surface devices come preinstalled with the OneNote app for taking handwritten notes. Windows 10 also features a text input panel with handwriting recognition which automatically converts handwriting to text.
The Microsoft Edge browser features an inking function which allows handwritten annotations directly on webpages.
Microsoft has ported its Office suite for use on Windows 10 devices, including the Surface devices running Windows 10. As the screen size on these devices exceeds 10 inches, the apps require an Office 365 subscription to edit documents, although one is not needed to view and print them.
Surface devices have an internal microphone and speakers optimized for the Cortana personal assistant feature included on Windows 10 devices.
Third-party applications that have been designed with the pen and touch interaction of Surface in mind include Drawboard PDF and Sketchable.
Specialized software
Prior to the release of Windows 10, Microsoft made the Surface Hub app available on the Surface Pro 3, which allowed the adjustment of pen pressure sensitivity and button functions. The Surface Hub app was renamed "Surface" following the launch of the Surface Hub device. Additionally, toggles to control sound quality and to disable the capacitive Windows button on the Surface 3 and Pro 3 devices were included.
With the Surface Pro 3 and its Surface Pen based on N-Trig technology, Microsoft added the capability to launch OneNote from the lock screen without logging in by pressing the purple button at the top of the pen. Microsoft added sections to Windows 10 settings that control the functions of the buttons on the Surface Pen; one such function is to launch OneNote with a press of the top button of the Surface Pro 4 pen. With the introduction of the Surface Dial, Microsoft added a Wheel settings section to the Settings app in Windows 10 under Devices. The Windows 10 Anniversary Update added the ability to adjust the shortcuts performed by each of the Pen's buttons.
Accessories
Microsoft offers several Surface accessories, most of which are Bluetooth-connected devices. Among these are the Surface Pen, the keyboard covers, and the Surface Dial.
There are two main versions of the keyboard covers that connect via the accessory spine on the Surface tablets: the now-discontinued Touch Cover and the ever-evolving Type Cover. They feature a multi-touch touchpad and a full QWERTY keyboard (with pre-defined action keys in place of the function row, though the function row is still accessible via the function button). The covers are made of various soft-touch materials and connect to the Surface with a polycarbonate spine with pogo pins.
Microsoft sells the Surface Pen, an active-digitizer pen, separately from the Surface; it was included with all Surface tablets until the fifth-generation Surface Pro, with which it was no longer bundled. The Surface Pen is designed to integrate with inking capabilities on Windows, including OneNote and the Windows Ink Workspace.
The Surface Dial was introduced alongside the Surface Studio, and is a computer wheel designed to work on-screen with the Surface Studio and fifth-generation Surface Pro. Previous Surface Pro devices were updated to support it as well.
Remix project
In 2013, Microsoft announced that it was going to design other covers for the Surface accessory spine (code-named "blades") based on the Touch Cover 2's sensors. The only products that shipped were the Surface Music Cover and the Surface Music Kit app.
Model comparison
Surface and Surface Go line
Surface Pro line
Surface Book line
Surface Laptop line
Surface Studio line
Promotion
Television commercial
In October 2012, Microsoft aired its first commercial for the Surface product line, directed by Jon Chu. The 30-second spot, "Surface Movement", focused on the Windows RT version of the first-generation Surface with its detachable keyboard and kickstand, and first aired during a Dancing with the Stars commercial break.
Partnership with NFL
In 2014, Microsoft announced a five-year, $400 million deal with the National Football League, in which Surface became the official tablet computer brand of the NFL. As part of the partnership, special ruggedized Surface Pro 2 devices were issued to teams for use on the sidelines, allowing coaches and players to view and annotate footage of previous plays. The partnership was initially hampered by television commentators, who erroneously referred to the devices as iPads on several occasions. Microsoft has since stated that it "coached" commentators on properly referring to the devices on-air.
Designed on Surface
On January 11, 2016, Microsoft announced a collaboration with POW! WOW!, in which a group of artists from around the world used various Surface devices, such as the Surface Pro 4 and the Surface Book, to create a total of 17 murals. The artists were filmed using their Surface devices and explaining how they integrate Surface into their workflow. The finished pieces were then posted to YouTube, each accompanying a post on the Microsoft Devices blog.
United States Department of Defense
On February 17, 2016, Microsoft announced that, alongside the US Department of Defense's plans to upgrade to Windows 10, it had approved Surface devices and certified them for use through the Defense Information Systems Agency Unified Capabilities Approved Products List. The Surface Book, Surface Pro 4, Surface Pro 3, and Surface 3 have all been approved as Multifunction Mobile Devices, thus meeting the necessary requirements for security and compatibility with other systems.
Reception
Reviews of the first-generation Surface RT by critics ranged broadly. The hardware received mostly positive reviews, while the software and overall experience were mixed. Wired reviewer Mathew Honan stated that while "This is one of the most exciting pieces of hardware I’ve ever used. It is extremely well-designed; meticulous even," the tablets are "likely to confuse many of Microsoft’s longtime customers". TechCrunch, Matt Buchanan at BuzzFeed, and Gizmodo recommended against purchasing the tablet. Gizmodo mentioned issues such as the high price tag and described it as similar but inferior to the iPad, but also praised the hardware saying, "You'll appreciate it every time you pick it up and turn it on. It's a simple, joyful experience." David Pogue at The New York Times praised the hardware but criticized the software. The Verge described the technology as fulfilling the role of a laptop or tablet "half as well as other devices on the market," adding "the whole thing is honestly perplexing." Warner Crocker from Gotta Be Mobile described it as "frustratingly confusing." Farhad Manjoo of Slate noted that the "shortcomings are puzzling" given how much time Microsoft spent developing the device. Neil McAllister has noted the lack of a compelling case to switch from the iPad to a Windows RT device at the same price point, because Apple already has a strong network effect from their app developers and few Windows developers have ported their offerings over to the ARM processor. The Surface RT had worse battery life than similar devices. The first-generation Surface Pro has shorter battery life than the original ARM-based Surface due in part to its full HD screen and Intel Core i5 processor.
The Surface Pro 3 received positive reviews. David Pogue suggested, "The upshot is that, with hardly any thickness or weight penalty, the kickstand and the Type Cover let you transform your 1.8-pound tablet into an actual, fast, luxury laptop". Pogue said that the Surface Pro 3's form factor works well as a tablet, in contrast to the Surface Pro 2, whose bulk and weight limited its appeal as a tablet. Pogue also stated that the new multi-stage kickstand, 3:2 screen aspect ratio, and new Type Cover 3 detachable keyboard made it a competent laptop. Another advantage of the Surface Pro 3 is that it is considered a tablet by the FAA and TSA, despite hardware capable of running all x86 Windows programs. This is advantageous in air travel, since a tablet can be used during takeoff or landing, and a tablet can be left in a bag when going through a TSA scanner machine, neither of which applies to a laptop. It has been suggested that the Surface Pro 3 comes closest to the Microsoft Tablet PC concept that company founder Bill Gates announced in 2001, being the first Surface to become a credible laptop replacement. Time magazine included the Microsoft Surface Pro 3 in its list of the 25 best inventions of 2014.
The Surface 3 (non-Pro) received generally positive reviews from computer critics. They praised Microsoft's shift from ARM architecture toward x86, and therefore from Windows RT to a regular Windows OS. Most noted a well designed chassis and accessories produced of quality materials, and overall premium feeling of use. While less powerful, the Surface 3 was a lighter and cheaper alternative to the Surface Pro 3. More importantly, the Surface 3 could compete at the high-end of Android and iPad tablets, with the advantage of being a device running a full desktop OS instead of a mobile OS for a similar price. Reviewers also note that 37 GB of the total storage space in the low-end Surface 3 is available to the user, while its close competitor, the low-end iPad Air 2, has only 12.5 GB of user-available storage space for the same price. The most common downsides are relatively low battery life, slower performance compared to devices with Intel Core processors and a high price since accessories like Surface Pen and Type Cover are not included.
Industry response
When Surface was first announced, critics noted that the device represented a significant departure for Microsoft, as the company had previously relied exclusively on third-party OEMs to produce devices running Windows, and began shifting towards a first-party hardware model with similarities to that of Apple. Steve Ballmer said that like Xbox, Surface was an example of the sort of hardware products Microsoft will release in the future.
Original equipment manufacturers (OEMs), whose products have traditionally run Microsoft operating systems, have had positive responses to the release of Surface. HP, Lenovo, Samsung, and Dell applauded Microsoft's decision to create its own Tablet PC and said that relationships with Microsoft have not changed. John Solomon, senior vice president of HP, said that "Microsoft was basically making a leadership statement and showing what's possible in the tablet space". Acer founder Stan Shih said that he believed Microsoft only introduced its own hardware in order to establish the market and would then withdraw in favor of its OEMs.
However, others believe that OEMs were left sidelined by the perception that Microsoft's new tablet would replace their products. Acer chairman JT Wang advised Microsoft to "please think twice". Microsoft has acknowledged that Surface may "affect their commitment" of partners to the Windows platform.
The need for the Surface to market an ARM-compatible version of Windows was questioned by analysts because of recent developments in the PC industry; both Intel and AMD introduced x86-based system-on-chip designs for Windows 8, Atom "Clover Trail" and "Temash" respectively, in response to the growing competition from ARM licensees. In particular, Intel claimed that Clover Trail-based tablets could provide battery life rivaling that of ARM devices; in a test by PC World, Samsung's Clover Trail-based Ativ Smart PC was shown to have battery life exceeding that of the first gen ARM-based Surface. Peter Bright of Ars Technica argued that Windows RT had no clear purpose, since the power advantage of ARM-based devices was "nowhere near as clear-cut as it was two years ago", and that users would be better off purchasing Office 2013 themselves because of the removed features and licensing restrictions of Office RT.
Sales
Sales of the first generation Surface did not meet Microsoft's expectations, which led to price reductions and other sales incentives.
In March 2013, Bloomberg reported from inside sources that Surface sales were behind expectations, particularly for the ARM-based Surface model. Microsoft had originally projected sales of 2 million Surface units during the final quarter of 2012, but a total of only 1.5 million Surface devices had been sold since launch, with Surface Pro accounting for 400,000 of these sales. The more expensive Surface Pro, with its Intel CPU that makes it a full-fledged Windows laptop PC despite its compromises, was successful compared to other OEMs' first-generation Windows 8 Ultrabook hybrids, which were larger and/or more expensive.
In July 2013, Steve Ballmer revealed that the Surface had not sold as well as he hoped. He reported that Microsoft had taken a substantial loss due to the lackluster Surface sales. Concurrently, Microsoft cut the price of the first-gen Surface RT worldwide by 30%, with its U.S. price reduced accordingly. This was followed by a further price cut in August after it was revealed that even the marketing costs had exceeded the sales. On August 4, 2013, the cost of Surface Pro was cut by $100, giving it an entry price of $799. Several law firms sued Microsoft, accusing the company of misleading shareholders about sales of the first-gen ARM-based Surface tablet, calling it an "unmitigated disaster". In the first two years of sales, Microsoft lost almost two billion dollars.
The poor sales of the ARM-based Surface tablet have been credited to the continuing market dominance of Microsoft's competitors in the tablet market. In particular, Apple's iPad retained its dominance due to its App Store offering the most tablet-optimized applications. Most OEMs opted to produce tablets running Google Android, which came in a wide variety of sizes and prices (albeit with mixed success among most OEMs), and Google Play had the second-largest selection of tablet applications. By contrast, there was a limited amount of software designed specifically for Surface's operating system, Windows RT, a selection even weaker than that of Windows Phone. Indeed, OEMs reported that most customers felt Intel-based tablets were more appropriate for use in business environments, as they were compatible with the much more widely available x86 programs while Windows RT was not. Microsoft's subsequent efforts focused on refining the Surface Pro and making it a viable competitor in the premium ultra-mobile PC category, against other Ultrabooks and the MacBook Air, while discontinuing development of ARM-powered Surface devices, as the Surface 3 (non-Pro) had an Intel x86 CPU (albeit with lower performance than the Surface Pro 3).
The resulting Surface Pro 3 succeeded in garnering great interest in the Surface line, making the Surface business profitable for the first time in the first quarter of fiscal year 2015. Later, in Q2, the Surface division's sales topped $1 billion. The Surface division took in $888 million for Q4 2015 despite an overall loss of $2.1 billion for Microsoft, 117% year-over-year growth thanks to the steady commercial performance of the Surface Pro 3 and the launch of the mainstream Surface 3. In the first quarter of fiscal year 2018, the Surface division posted its best earnings performance to date.
Reported issues
Users on Microsoft's support forum reported that some Touch Covers were splitting at the seam where the cover connects to the tablet, exposing its wiring. A Microsoft spokesperson stated that the company was aware of the issue and would offer free replacements for those affected by the defect. Other users reported issues with audio randomly stuttering or muting on the Surface tablet while in use. Wi-Fi connectivity issues were also reported. Firmware updates that attempted to fix the problem were released, but some users still reported problems such as blue screen errors while watching video and display driver crashes. Microsoft acknowledged a bug in the Windows key that does not always work and promised a fix, but the latest update, which promised to address the issue, did not resolve it.
With the original Surface Pro, Microsoft acknowledged issues encountered by some users with its stylus pen, including intermittent pen failures, and with older applications that do not have complete pen support due to the different APIs used by Surface Pro's stylus drivers. In the latter case, Microsoft has indicated that it is working with software vendors to ensure better compatibility. As for later models beginning with the Surface Pro 3, the N-Trig digital pen digitizer system has attained high pen compatibility with older applications thanks to a regularly updated, optional WinTab driver. Issues had also been experienced with slow Wi-Fi connectivity, and the device not properly returning from standby.
iFixit has awarded the Surface Pro its worst ever repairability rating, but CEO Kyle Wiens claims that it is due to incompetence rather than deliberate design choices.
Timeline
See also
Microsoft PixelSense, a product line launched in 2007 and formerly called Microsoft Surface
Comparison of tablet computers
Microsoft Lumia
References
External links
Building of Surface
Tablet computers introduced in 2012
Windows RT devices
Microsoft Surface
Green Hills Software
Green Hills Software is a privately owned company that builds operating systems and programming tools for embedded systems. The firm was founded in 1982 by Dan O'Dowd and Carl Rosenberg. Its world headquarters are in Santa Barbara, California.
History
Green Hills Software and Wind River Systems enacted a 99-year contract as cooperative peers in the embedded software engineering market throughout the 1990s, but the relationship ended in a series of lawsuits in the early 2000s. The two companies then went in opposite directions: Wind River publicly embraced Linux and open-source software, while Green Hills initiated a public relations campaign decrying its use in matters of national security.
In 2008, the Green Hills real-time operating system (RTOS) named Integrity-178 was the first system to be certified by the National Information Assurance Partnership (NIAP), composed of the National Security Agency (NSA) and the National Institute of Standards and Technology (NIST), to Evaluation Assurance Level (EAL) 6+.
In November 2008, it was announced that a commercialized version of Integrity-178B would be sold to the private sector by Integrity Global Security, a subsidiary of Green Hills Software.
On March 27, 2012, a contract was announced between Green Hills Software and Nintendo. This designates MULTI as the official integrated development environment and toolchain for Nintendo and its licensed developers to program the Wii U video game console.
On February 25, 2014, it was announced that the operating system Integrity had been chosen by Urban Aeronautics for their AirMule flying car unmanned aerial vehicle (UAV), since renamed the Tactical Robotics Cormorant.
Selected products
Real-time operating systems
Integrity is a POSIX real-time operating system (RTOS). An Integrity variant, named Integrity-178B, was certified to Common Criteria Evaluation Assurance Level (EAL) 6+, High Robustness in November 2008.
Micro Velosity (stylized as µ-velOSity) is a real-time microkernel for resource-constrained devices.
Compilers
Green Hills produces compilers for the programming languages C, C++, Fortran, and Ada. They are cross-platform, for 32- and 64-bit microprocessors, including ARM, Blackfin, ColdFire, MIPS, PowerPC, SuperH, StarCore, x86, V850, and XScale.
Integrated development environments
MULTI is an integrated development environment (IDE) for the programming languages C, C++, Embedded C++ (EC++), and Ada, aimed at embedded engineers.
TimeMachine is a set of tools for optimizing and debugging C and C++ software. TimeMachine (introduced 2003) supports reverse debugging, a feature that later also became available in the free GNU Debugger (GDB) 7.0 (2009).
References
Software companies based in California
Computer companies established in 1982
1982 establishments in California
Companies based in Santa Barbara County, California
Microkernels
Software companies of the United States
Hard disk drive interface
Hard disk drives are accessed over one of a number of bus types, including parallel ATA (PATA, also called IDE or EIDE; described before the introduction of SATA as ATA), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Bridge circuitry is sometimes used to connect hard disk drives to buses with which they cannot communicate natively, such as IEEE 1394, USB, SCSI and Thunderbolt.
Disk interface families
Disk drive interfaces have evolved from simple interfaces requiring complex controllers to attach to a computer into high level interfaces that present a consistent interface to a computer system regardless of the internal technology of the hard disk drive. The following table lists some common HDD interfaces in chronological order:
Early interfaces
The earliest hard disk drive (HDD) interfaces were bit serial data interfaces that connected an HDD to a controller with two cables, one for control and one for data. An additional cable was used for power, initially frequently AC but later usually connected directly to a DC power supply unit. The controller provided significant functions such as serial/parallel conversion, data separation, and track formatting, and required matching to the drive (after formatting) in order to assure reliability. Each control cable could serve two or more drives, while a dedicated (and smaller) data cable served each drive.
Examples of such early interfaces include:
Many early IBM drives, e.g., IBM 2311, had such an interface.
The SMD interface was popular on minicomputers in the 1970s.
ST-506 used MFM (Modified Frequency Modulation) for the data encoding method.
ST412, an ST-506 variant was available in either MFM or RLL (Run Length Limited) encoding variants.
Enhanced Small Disk Interface (ESDI) was an industry standard interface similar to ST412 supporting higher data rates between the processor and the disk drive.
In bit serial data interfaces, the data frequency, the data encoding scheme as written to the disk surface, and error detection all influenced the design of the supporting controller. Encoding schemes used included frequency modulation (FM), modified frequency modulation (MFM) and RLL encoding, at frequencies ranging, for example, from 0.156 MHz (FM on the 2311) to 7.5 MHz (RLL on the ST412). Thus each time the internal technology advanced there was a necessary delay as controllers were designed or redesigned to accommodate the advancement; this, along with the cost of controller development, led to the introduction of word serial interfaces.
Enhanced Small Disk Interface (ESDI) was an attempt to minimize controller design time by supporting multiple data rates with a standard data encoding scheme; this was usually negotiated automatically by the disk drive and controller; most of the time, however, 15 or 20 megabit ESDI disk drives were not downward compatible (i.e. a 15 or 20 megabit disk drive would not run on a 10 megabit controller). ESDI disk drives typically also had jumpers to set the number of sectors per track and (in some cases) sector size.
Word serial interfaces
Historical Word serial interfaces connect a hard disk drive to a bus adapter with one cable for combined data/control. (As for all early interfaces above, each drive also has an additional power cable, usually direct to the power supply unit.) The earliest versions of these interfaces typically had an 8 bit parallel data transfer to/from the drive, but 16-bit versions became much more common, and there are 32 bit versions. The word nature of data transfer makes the design of a host bus adapter significantly simpler than that of the precursor HDD controller.
CTL-I (Controller Interface) was an 8-bit word serial interface introduced by IBM for its mainframe hard disk drives beginning with the 3333 in 1972. The 3333 was the first unit in a string of up to eight 3330 type hard disk drives; it contained a CTL-I controller and two 3330 type disk drives. Subsequently, the first drive (containing a CTL-I controller) in a string of drives was designated by IBM as an A-unit. The drives within an A-unit and all other drives in a string had interfaces similar to the early interfaces, above. A-units connected to IBM Directors or integrated attachments.
Small Computer System Interface (SCSI), originally named SASI for Shugart Associates System Interface, is an early (circa 1978) industry standard interface explicitly deployed to minimize system integration efforts. SCSI disks became standard on servers and workstations. The Commodore Amiga and Apple Macintosh deployed SCSI drives through the mid-1990s, by which time most models had been transitioned to ATA (and later, SATA) family disks. Only in 2005 did the capacity of SCSI disks fall behind ATA disk technology, though the highest-performance disks are still available in SCSI, SAS and Fibre Channel only. The range limitations of the data cable allow for external SCSI devices. Originally SCSI data cables used single-ended (common mode) data transmission, but server-class SCSI could use differential transmission, either low voltage differential (LVD) or high voltage differential (HVD). ("Low" and "High" voltages for differential SCSI are relative to SCSI standards and do not meet the meaning of low voltage and high voltage as used in general electrical engineering contexts, as apply e.g. to statutory electrical codes; both LVD and HVD use low-voltage signals (3.3 V and 5 V respectively) in general terminology.)
Parallel ATA, originally IDE and then standardized under the name AT Attachment (ATA), with the alias P-ATA or PATA retroactively added upon introduction of the new variant Serial ATA. The original name (circa 1986) reflected the integration of the controller with the hard drive itself. (That integration was not new with IDE, having been done a few years earlier with SCSI drives.) Moving the HDD controller from the interface card to the disk drive helped to standardize the host/controller interface, reduce the programming complexity in the host device driver, and reduce system cost and complexity. The 40-pin IDE/ATA connection transfers 16 bits of data at a time on the data cable. The data cable was originally 40-conductor, but later higher speed requirements for data transfer to and from the hard drive led to an "ultra DMA" mode, known as UDMA. Progressively swifter versions of this standard ultimately added the requirement for an 80-conductor variant of the same cable, where half of the conductors provide the grounding necessary for enhanced high-speed signal quality by reducing crosstalk. The interface for the 80-conductor variant has only 39 pins, the missing pin acting as a key to prevent incorrect insertion of the connector into an incompatible socket, a common cause of disk and controller damage.
Bit serial interfaces
Modern bit serial interfaces connect a hard disk drive to a host bus interface adapter (today in a PC typically integrated into the "south bridge") with one data/control cable. Each drive also has an additional power cable, usually direct to the power supply unit.
DEC's Standard Disk Interconnect (SDI) was an early example of a modern bit serial interface.
Fibre Channel (FC) is a successor to parallel SCSI interface on enterprise market. It is a serial protocol. In disk drives usually the Fibre Channel Arbitrated Loop (FC-AL) connection topology is used. FC has much broader usage than mere disk interfaces, and it is the cornerstone of storage area networks (SANs). Recently other protocols for this field, like iSCSI and ATA over Ethernet have been developed as well. Confusingly, drives usually use copper twisted-pair cables for Fibre Channel, not fibre optics. The latter are traditionally reserved for larger devices, such as servers or disk array controllers.
Serial ATA (SATA). The SATA data cable has one data pair for differential transmission of data to the device, and one pair for differential receiving from the device, just like EIA-422. That requires that data be transmitted serially. A similar differential signaling system is used in RS485, LocalTalk, USB, FireWire, and differential SCSI.
Serial Attached SCSI (SAS). SAS is a newer-generation serial communication protocol for devices designed to allow for much higher speed data transfers and is compatible with SATA. SAS uses a mechanically identical data and power connector to standard 3.5-inch SATA1/SATA2 HDDs, and many server-oriented SAS RAID controllers are also capable of addressing SATA hard drives. SAS uses serial communication instead of the parallel method found in traditional SCSI devices but still uses SCSI commands.
Notes
References
External links
Computer History Museum's HDD Working Group Website
HDD Tracks and Zones
HDD from inside
Hard Disk Drives Encyclopedia
Video showing an opened HD working
Average seek time of a computer disk
What to consider before buying a hard disk drive
Hard disk drives
Comparison of command shells
A command shell is a command-line interface to interact with and manipulate a computer's operating system.
General characteristics
Interactive features
Background execution
Background execution allows a shell to run a command in the background. POSIX shells and other Unix shells allow background execution by placing the & character at the end of a command, while in PowerShell the Start-Process or Start-Job cmdlets can be used.
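As a minimal illustration of the PowerShell approach, the following sketch runs a command as a background job and collects its output afterwards (the script block contents are placeholders):

    # Run a long task in the background while the shell stays interactive
    $job = Start-Job -ScriptBlock { Get-ChildItem -Recurse $env:TEMP | Measure-Object }
    # ... other interactive work ...
    Wait-Job $job | Out-Null   # block until the job finishes
    Receive-Job $job           # collect the job's output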
Completions
Completion features assist the user in typing commands at the command line, by looking for and suggesting matching words for incomplete ones. Completion is generally requested by pressing the completion key (often the Tab key).
Command name completion is the completion of the name of a command. In most shells, a command can be a program in the command path (usually $PATH), a builtin command, a function or alias.
Path completion is the completion of the path to a file, relative or absolute.
Wildcard completion is a generalization of path completion, where an expression matches any number of files, using any supported syntax for file matching.
Variable completion is the completion of a variable name (environment variable or shell variable).
Bash, zsh, and fish have completion for all variable names. PowerShell has completions for environment variable names, shell variable names and — from within user-defined functions — parameter names.
Command argument completion is the completion of a specific command's arguments. There are two types of arguments, named and positional: Named arguments, often called options, are identified by their name or letter preceding a value, whereas positional arguments consist only of the value. Some shells allow completion of argument names, but few support completing values.
Bash, zsh and fish offer parameter name completion through a definition external to the command, distributed in a separate completion definition file. For command parameter name/value completions, these shells assume path/filename completion if no completion is defined for the command. Completion can be set up to suggest completions by calling a shell function. The fish shell additionally supports parsing of man pages to extract parameter information that can be used to improve completions/suggestions. In PowerShell, all types of commands (cmdlets, functions, script files) inherently expose data about the names, types and valid value ranges/lists for each argument. This metadata is used by PowerShell to automatically support argument name and value completion for built-in commands/functions, user-defined commands/functions as well as for script files. Individual cmdlets can also define dynamic completion of argument values where the completion values are computed dynamically on the running system.
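As a sketch of how a dynamic argument-value completer can be registered in PowerShell 5.0 or later (the chosen command and the completion logic are purely illustrative, not a definition shipped with the shell):

    # Offer matching service names when Tab is pressed after Get-Service -Name
    Register-ArgumentCompleter -CommandName Get-Service -ParameterName Name -ScriptBlock {
        param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameters)
        Get-Service "$wordToComplete*" | ForEach-Object {
            [System.Management.Automation.CompletionResult]::new($_.Name)
        }
    }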
Command history
A user of a shell may find that they are typing something similar to what they typed before. If the shell supports command history, the user can call the previous command into the line editor and edit it before issuing it again.
Shells that support completion may also be able to directly complete the command from the command history given a partial/initial part of the previous command.
Most modern shells support command history. Shells which support command history in general also support completion from history rather than just recalling commands from the history. In addition to the plain command text, PowerShell also records execution start and end time and execution status in the command history.
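For example, PowerShell's history entries, including the timing and status fields mentioned above, can be inspected as a brief sketch:

    # Show the last five history entries with timing and status information
    Get-History -Count 5 |
        Select-Object Id, CommandLine, StartExecutionTime, EndExecutionTime, ExecutionStatus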
Mandatory argument prompt
Mandatory arguments/parameters are arguments/parameters which must be assigned a value upon invocation of the command, function or script file. A shell that can determine ahead of invocation that there are missing mandatory values, can assist the interactive user by prompting for those values instead of letting the command fail. Having the shell prompt for missing values will allow the author of a script, command or function to mark a parameter as mandatory instead of creating script code to either prompt for the missing values (after determining that it is being run interactively) or fail with a message.
PowerShell allows commands, functions and scripts to define arguments/parameters as mandatory. The shell determines prior to invocation whether there are any mandatory arguments/parameters which have not been bound, and will then prompt the user for the value(s) before actual invocation.
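A minimal sketch of such a mandatory parameter in PowerShell; the function name and its body are hypothetical, and invoking it without -Path makes the shell prompt for the value:

    function Backup-Log {
        param(
            # The shell prompts for this value if it is omitted at invocation
            [Parameter(Mandatory = $true)]
            [string]$Path
        )
        Copy-Item -Path $Path -Destination "$Path.bak"
    }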
Automatic suggestions
Shells featuring automatic suggestions display optional command-line completions as the user types. The PowerShell and fish shells natively support this feature; pressing the key inserts the completion.
Implementations of this feature can differ between shells; for example, PowerShell and zsh use an external module to provide completions, and fish derives its completions from the user's command history.
Directory history, stack or similar features
Shells may record a history of directories the user has been in and allow for fast switching to any recorded location. This is referred to as a "directory stack". The concept had been realized as early as 1978 in the release of the C shell (csh).
PowerShell allows multiple named stacks to be used. Locations (directories) can be pushed onto/popped from the current stack or a named stack. Any stack can become the current (default) stack. Unlike most other shells, PowerShell's location concept allows location stacks to hold file system locations as well as other location types such as Active Directory organizational units/groups, SQL Server databases/tables/objects, and Internet Information Server applications/sites/virtual directories.
Command line interpreters 4DOS and its graphical successor Take Command Console also feature a directory stack.
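The following sketch illustrates PowerShell's named location stacks; the paths and the stack name are placeholders, and the registry location simply shows that non-filesystem providers work as well:

    Push-Location -Path C:\Windows\System32 -StackName work   # push current location, then move
    Push-Location -Path HKLM:\SOFTWARE -StackName work        # a non-filesystem location
    Get-Location -StackName work                               # inspect the named stack
    Pop-Location -StackName work                               # return to the last pushed location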
Implicit directory change
A directory name can be used directly as a command which implicitly changes the current location to the directory.
This must be distinguished from an unrelated load drive feature supported by Concurrent DOS, Multiuser DOS, System Manager and REAL/32, where the drive letter L: will be implicitly updated to point to the load path of a loaded application, thereby allowing applications to refer to files residing in their load directory under a standardized drive letter instead of under an absolute path.
Autocorrection
When a command line does not match a command or arguments directly, spell checking can automatically correct common typing mistakes (such as case sensitivity, missing letters). There are two approaches to this; the shell can either suggest probable corrections upon command invocation, or this can happen earlier as part of a completion or autosuggestion.
The tcsh and zsh shells feature optional spell checking/correction, upon command invocation.
Fish does the autocorrection upon completion and autosuggestion. The feature is therefore not in the way when typing out the whole command and pressing enter, whereas extensive use of the tab and right-arrow keys makes the shell mostly case insensitive.
The PSReadLine PowerShell module (which is shipped with version 5.0) provides the option to specify a CommandValidationHandler ScriptBlock which runs before submitting the command. This allows for custom correcting of commonly mistyped commands, and verification before actually running the command.
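A brief sketch of such a validation handler, assuming a recent PSReadLine version; the specific correction it applies (gti → git) is purely illustrative:

    Set-PSReadLineOption -CommandValidationHandler {
        param([System.Management.Automation.Language.CommandAst]$CommandAst)
        # Rewrite a commonly mistyped command name before the line is accepted
        if ($CommandAst.GetCommandName() -eq 'gti') {
            [Microsoft.PowerShell.PSConsoleReadLine]::Replace(
                $CommandAst.Extent.StartOffset, 3, 'git')
        }
    }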
Progress indicator
A shell script (or job) can report progress of long running tasks to the interactive user.
Unix/Linux systems may offer other tools that support progress indicators from scripts or as standalone commands, such as the program "pv". These are not integrated features of the shells, however.
PowerShell has a built-in command and API functions (to be used when authoring commands) for writing/updating a progress bar. Progress bar messages are sent separately from regular command output, and the progress bar is always displayed at the interactive user's console regardless of whether the progress messages originate from an interactive script, from a background job or from a remote session.
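A minimal sketch using PowerShell's built-in progress command; the file-processing loop is a placeholder for real work:

    $files = Get-ChildItem -File
    for ($i = 0; $i -lt $files.Count; $i++) {
        Write-Progress -Activity 'Processing files' -Status $files[$i].Name `
            -PercentComplete ((($i + 1) / $files.Count) * 100)
        # ... per-file work would go here ...
    }
    Write-Progress -Activity 'Processing files' -Completed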
Interactive table
Output from a command execution can be displayed in a table/grid which can be interactively sorted and filtered and/or otherwise manipulated after command execution ends.
The PowerShell Out-GridView cmdlet displays data in an interactive window with interactive sorting and filtering.
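For example (the second line shows how an interactive selection can be passed on to another command; -WhatIf keeps the sketch harmless):

    Get-Process | Out-GridView                                  # sortable, filterable grid window
    Get-Process | Out-GridView -PassThru | Stop-Process -WhatIf # act on the rows the user selects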
Colored directory listings
JP Software command-line processors provide user-configurable colorization of file and directory names in directory listings based on their file extension and/or attributes through an optionally defined %COLORDIR% environment variable.
For the Unix/Linux shells, this is a feature of the command and the terminal.
Text highlighting
The command line processors in DOS Plus, Multiuser DOS, REAL/32 and in all versions of DR-DOS support a number of optional environment variables to define escape sequences allowing control of text highlighting, reversion or colorization for display or print purposes in commands like TYPE. All mentioned command line processors support %$ON% and %$OFF%. If defined, these sequences will be emitted before and after filenames. A typical sequence for %$ON% would be \033[1m in conjunction with ANSI.SYS, \033p for an ASCII terminal or \016 for an IBM or ESC/P printer. Likewise, typical sequences for %$OFF% would be \033[0m, \033q, \024, respectively. The variables %$HEADER% and %$FOOTER% are only supported by COMMAND.COM in DR-DOS 7.02 and higher to define sequences emitted before and after text blocks in order to control text highlighting, pagination or other formatting options.
For the Unix/Linux shells, this is a feature of the terminal.
Syntax highlighting
An independent project offers syntax highlighting as an add-on to the Z Shell (zsh). This is not part of the shell, however.
PowerShell provides customizable syntax highlighting on the command line through the PSReadLine module. This module can be used with PowerShell v3.0+ and is included with v5.0. Additionally, it is loaded by default in the command line host "powershell.exe" in v5.0. The PowerShell ISE also includes syntax highlighting on the command line as well as in the script pane.
Take Command Console (TCC) offers syntax highlighting in the integrated environment.
Context sensitive help
4DOS, 4OS2, 4NT / Take Command Console and PowerShell (in PowerShell ISE) look up context-sensitive help information when F1 is pressed.
Zsh provides various forms of configurable context-sensitive help as part of its widget, command, or in the completion of options for some commands.
Command builder
A command builder is a guided dialog which assists the user in filling in a command. PowerShell has a command builder which is available in PowerShell ISE or which can be displayed separately through the Show-Command cmdlet.
Programming features
String processing and filename matching
Inter-process communication
Keystroke stacking
In anticipation of what a given running application may accept as keyboard input, the user of the shell instructs the shell to generate a sequence of simulated keystrokes, which the application will interpret as keyboard input from an interactive user. By sending keystroke sequences the user may be able to direct the application to perform actions that would be impossible to achieve through input redirection or that would otherwise require an interactive user: for example, if an application acts on keystrokes which cannot be redirected, distinguishes between normal and extended keys, flushes the queue before accepting new input on startup or under certain conditions, or does not read through standard input at all. Keystroke stacking typically also provides means to control the timing of simulated keys being sent or to delay new keys until the queue has been flushed, etc. It also allows simulating keys which are not present on a keyboard (because the corresponding keys do not physically exist or because a different keyboard layout is being used) and which would therefore be impossible for a user to type.
Security features
Secure prompt
Some shell scripts need to query the user for sensitive information such as passwords, private digital keys, PIN codes or other confidential information. Sensitive input should not be echoed back to the screen/input device where it could be gleaned by unauthorized persons. Plaintext memory representation of sensitive information should also be avoided as it could allow the information to be compromised, e.g., through swap files, core dumps etc.
The shells bash, zsh and PowerShell offer this as a specific feature. Shells which do not offer it as a specific feature may still be able to turn off echoing through some other means; shells executing on a Unix/Linux operating system can use the external command stty to switch echoing of input characters off and on. In addition to not echoing the characters back, PowerShell's secure-input option (Read-Host -AsSecureString) also encrypts the input character-by-character during the input process, ensuring that the string is never represented unencrypted in memory where it could be compromised through memory dumps, scanning, transcription etc.
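As a minimal sketch of the non-echoing part of this feature (it does not reproduce PowerShell's in-memory encryption), Python's standard getpass module switches off terminal echo for the duration of the prompt:

    import getpass

    # Echo is switched off while the user types; the returned value is still a
    # plain-text string in memory, unlike PowerShell's SecureString.
    password = getpass.getpass("Enter password: ")
    print("received", len(password), "characters")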
Encrypted variables/parameters
If a script reads a password into an environment variable it is in memory in plain text, and thus may be accessed via a core dump. It is also in the process environment, which may be accessible by other processes started by the script.
PowerShell can work with encrypted string variables/parameters. Encrypted variables ensure that values are not inadvertently disclosed through, for example, transcripts, echoing, log files, memory or crash dumps, or even malicious memory scanning. PowerShell also supports saving such encrypted strings in text files, protected by a key owned by the current user.
Execute permission
Some operating systems define an execute permission which can be granted to users/groups for a file.
On Unix systems, the execute permission controls access to invoking the file as a program, and applies both to executables and scripts.
As the permission is enforced in the program loader, no obligation is needed from either the invoking program or the invoked program to enforce the execute permission; this also goes for shells and other interpreter programs.
The behaviour is mandated by the POSIX C library that is used for interfacing with the kernel: POSIX specifies that the exec family of functions shall fail with EACCES (permission denied) if the file denies execution permission.
The execute permission only applies when the script is run directly. If a script is invoked as an argument to the interpreting shell, it will be executed regardless of whether the user holds the execute permission for that script.
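The difference between direct invocation and invocation via an interpreter can be shown with the following Python sketch for a Unix-like system (the throwaway script it creates is purely illustrative): without the execute bit, the loader refuses the direct call with EACCES, while handing the same file to /bin/sh still works because the shell merely reads it.

    import errno, os, stat, subprocess, tempfile

    # Create a throwaway shell script without the execute bit set.
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write("#!/bin/sh\necho hello\n")
        path = f.name
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)    # read/write only, no execute

    try:
        subprocess.run([path], check=True)         # direct invocation
    except PermissionError as exc:
        print("direct execution refused:", errno.errorcode[exc.errno])  # EACCES

    subprocess.run(["/bin/sh", path], check=True)  # the interpreter just reads the file
    os.unlink(path)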
Although Windows also specifies an execute permission, none of the Windows-specific shells block script execution if the permission has not been granted.
Untrusted script blocking
Some shells will block scripts determined to be untrustworthy, or refuse to run scripts if mandated by a system administrator.
Script origin execution restriction
PowerShell can be set to block execution of scripts which have been marked as obtained from an unknown/untrusted origin (e.g. the Internet). Internet-facing applications such as web browsers, IM clients and mail readers mark files downloaded from the Internet with the origin zone in an alternate data stream, which is understood by PowerShell.
Signed script restriction
Script/code signing policies can be used to ensure that an operations department only runs approved scripts/code which have been reviewed and signed by a trusted reviewer/approver. Signing regimes also protect against tampering: if a script is sent from a vendor to a client, the client can use signing to ensure that the script has not been tampered with in transit and that it indeed originates from the vendor rather than an attacker trying to social-engineer an operator into running an attack script.
PowerShell can be set to allow execution of otherwise blocked scripts (e.g. originating from an untrusted zone) if the script has been digitally signed using a trusted digital certificate.
Multilevel execution policies
A company may want to enforce execution restriction globally within the company and/or certain parts of the company. It may want to set a policy for running signed scripts but allow certain parts of the company to set their own policies for zoned restrictions.
PowerShell allows script blocking policies to be enforced at multiple levels, such as local machine and current user. A higher-level policy overrides a lower-level one: if a policy is defined for the local machine it is in effect for all users of that machine, and only if it is left undefined at the higher level can it be defined at the lower levels.
Restricted shell subset
Several shells can be started, or be configured to start, in a mode where only a limited set of commands and actions is available to the user. While not a security boundary (the command accessing a resource is blocked rather than the resource itself), this is nevertheless typically used to restrict users' actions before logging in.
A restricted mode is part of the POSIX specification for shells, and most of the Linux/Unix shells support such a mode where several of the built-in commands are disabled and only external commands from a certain directory can be invoked.
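As a small illustration (assuming GNU bash is installed; restricted behaviour varies between shells), the following Python sketch starts bash in restricted mode and shows that changing the working directory is refused:

    import subprocess

    # In restricted mode bash refuses cd, changing PATH, output redirection, etc.
    result = subprocess.run(["bash", "-r", "-c", "cd /tmp"],
                            capture_output=True, text=True)
    print(result.returncode, result.stderr.strip())   # non-zero status, "restricted" error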
PowerShell supports restricted modes through session configuration files or session configurations. A session configuration file can define visible (available) cmdlets, aliases, functions, path providers and more.
Safe data subset
Scripts that invoke other scripts can be a security risk as they can potentially execute foreign code in the context of the user who launched the initial script. Scripts will usually be designed to exclusively include scripts from known safe locations; but in some instances, e.g. when offering the user a way to configure the environment or loading localized messages, the script may need to include other scripts/files. One way to address this risk is for the shell to offer a safe subset of commands which can be executed by an included script.
PowerShell data sections can contain constants and expressions using a restricted subset of operators and commands. PowerShell data sections are used when, for example, localized strings need to be read from an external source while protecting against unwanted side effects.
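An analogous mechanism outside PowerShell, shown here only as a conceptual Python sketch, is to evaluate included files with a literal-only evaluator such as ast.literal_eval, which accepts data (strings, numbers, lists, dictionaries) but rejects anything executable; the message text below is a made-up example.

    import ast

    # Contents as they might be read from an external localization file.
    untrusted_text = '{"greeting": "Hallo", "farewell": "Tschüss"}'

    messages = ast.literal_eval(untrusted_text)   # raises ValueError if the text is code
    print(messages["greeting"])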
Notes
References
External links
Command shells
Shells | Operating System (OS) | 1,213 |
KDE
KDE is an international free software community that develops free and open-source software. As a central development hub, it provides tools and resources that allow collaborative work on this kind of software. Well-known products include the Plasma Desktop (the default desktop environment on many Linux distributions), Frameworks and a range of cross-platform applications like Krita or digiKam designed to run on Unix and Unix-like desktops, Microsoft Windows and Android.
Origins
KDE (back then called the K(ool) Desktop Environment) was founded in 1996 by Matthias Ettrich, a student at the University of Tübingen.
At the time, he was troubled by certain aspects of the Unix desktop. Among his concerns was that none of the applications looked or behaved alike. In his opinion, desktop applications of the time were too complicated for end users. In order to solve the issue, he proposed the creation of a desktop environment in which users could expect the applications to be consistent and easy to use. His initial Usenet post spurred significant interest, and the KDE project was born.
The name KDE was intended as a wordplay on the existing Common Desktop Environment, available for Unix systems. CDE was an X11-based user environment jointly developed by HP, IBM, and Sun through the X/Open consortium, with an interface and productivity tools based on the Motif graphical widget toolkit. It was supposed to be an intuitively easy-to-use desktop computer environment. The K was originally suggested to stand for "Kool", but it was quickly decided that the K should stand for nothing in particular. Therefore, the KDE initialism expanded to "K Desktop Environment" before it was dropped altogether in favor of simply KDE in a rebranding effort.
In the beginning Matthias Ettrich chose to use Trolltech's Qt framework for the KDE project. Other programmers quickly started developing KDE/Qt applications, and by early 1997, a few applications were being released. On 12 July 1998 the first version of the desktop environment, called KDE 1.0, was released. The original GPL licensed version of this toolkit only existed for platforms which used the X11 display server, but with the release of Qt 4, LGPL licensed versions are available for more platforms. This allowed KDE software based on Qt 4 or newer versions to theoretically be distributed to Microsoft Windows and OS X.
The KDE Marketing Team announced a rebranding of the KDE project components on November 24, 2009. Motivated by the perceived shift in objectives, the rebranding focused on emphasizing both the community of software creators and the various tools supplied by the KDE, rather than just the desktop environment.
What was previously known as KDE 4 was split into KDE Plasma Workspaces, KDE Applications, and KDE Platform (now KDE Frameworks) bundled as KDE Software Compilation 4. Since 2014, the name KDE no longer stands for K Desktop Environment, but for the community that produces the software.
Software releases
KDE Projects
The KDE community maintains multiple free-software projects. The project formerly referred to as KDE (or KDE SC (Software Compilation)) nowadays consists of three parts:
KDE Plasma, a graphical desktop environment with customizable layouts and panels, supporting virtual desktops and widgets. Written with Qt 5 and KDE Frameworks 5.
KDE Frameworks, a collection of libraries and software frameworks built on top of Qt (formerly known as 'kdelibs' or 'KDE Platform').
KDE Gear, utility applications (like Kdenlive or Krita) mostly built on KDE Frameworks and which are often part of the official KDE Applications release.
Other projects
KDE neon
KDE neon is a software repository that uses Ubuntu LTS as a core. It aims to provide the users with rapidly updated Qt and KDE software, while updating the rest of the OS components from the Ubuntu repositories at the normal pace. KDE maintains that it is not a "KDE distribution," but rather an up-to-date archive of KDE and Qt packages.
WikiToLearn
WikiToLearn, abbreviated WTL, is one of KDE's newer endeavors. It is a wiki (based on MediaWiki, like Wikipedia) that provides a platform to create and share open source textbooks. The idea is to have a massive library of textbooks for anyone and everyone to use and create. Its roots lie in the University of Milan, where a group of physics majors wanted to share notes and then decided that it was for everyone and not just their internal group of friends. They have become an official KDE project with several universities backing it.
Contributors
Developing KDE software is primarily a volunteer effort, although various companies, such as Novell, Nokia, or Blue Systems employ or employed developers to work on various parts of the project. Since a large number of individuals contribute to KDE in various ways (e.g. code, translation, artwork), organization of such a project is complex. A mentor program helps beginners to get started with developing and communicating within KDE projects and communities.
Communication within the community takes place via mailing lists, IRC, blogs, forums, news announcements, wikis and conferences. The community has a Code of Conduct for acceptable behavior within the community.
Development
Currently the KDE community uses the Git revision control system. The KDE GitLab instance (named Invent) gives an overview of all projects hosted in KDE's Git repository system. Phabricator is used for task management.
On 20 July 2009, KDE announced that the one millionth commit had been made to its Subversion repository. On October 11, 2009, Cornelius Schumacher, a main developer within KDE, wrote about the estimated cost (using the COCOMO model with SLOCCount) of developing the KDE software package: with 4,273,291 lines of code, it would be about US$175,364,716. This estimation does not include Qt, Calligra Suite, Amarok, Digikam, and other applications that are not part of KDE core.
The Core Team
The overall direction is set by the KDE core team, developers who have made significant contributions within KDE over a long period of time. This team communicates using the kde-core-devel mailing list, which is publicly archived and readable, though joining requires approval. KDE does not have a single central leader who can veto important decisions. Instead, the core team consists of several dozen contributors who make decisions not by formal vote but through discussion. Developers also organize into topical teams; for example, the KDE Edu team develops free educational software. These teams work mostly independently and do not all follow a common release schedule. Each team has its own messaging channels, both on IRC and on the mailing lists.
KDE Patrons
A KDE Patron is an individual or organization supporting the KDE community by donating at least 5000 Euro (depending on the company's size) to the KDE e.V.
As of October 2017, there are six such patrons: Blue Systems, Canonical Ltd., Google, Private Internet Access, SUSE, and The Qt Company.
Community structure
Mascot
The KDE community's mascot is a green dragon named Konqi. Konqi's appearance was officially redesigned with the coming of Plasma 5, with Tyson Tan's entry (seen on the right) winning the redesign competition on the KDE Forums.
Katie is a female dragon. She was presented in 2010, and is appointed as a mascot for the KDE women's community.
Other dragons with different colors and professions were added to Konqi as part of the Tyson Tan redesign concept. Each dragon has a pair of letter-shaped antlers that reflect their role in the KDE community. Kandalf the wizard was the former mascot for the KDE community during its 1.x and 2.x versions. Kandalf's similarity to the character of Gandalf led to speculation that the mascot was switched to Konqi due to copyright infringement concerns, but this has never been confirmed by KDE.
KDE e.V. organization
The financial and legal matters of KDE are handled by KDE e.V., a German non-profit organization. Among others, it owns the KDE trademark and the corresponding logo. It also accepts donations on behalf of the KDE community, helps to run the servers, assists in organizing and financing conferences and meetings, but does not influence software development directly.
Local communities
In many countries, KDE has local branches. These are either informal organizations (such as KDE India) or, like KDE e.V., given a legal form (such as KDE France). The local organizations host and maintain regional websites, and organize local events such as trade shows, contributor meetings and social community meetings.
Identity
KDE has community identity guidelines (CIG) for definitions and recommendations which help the community to establish a unique, characteristic, and appealing design. The KDE official logo displays the white trademarked K-Gear shape on a blue square with mitred corners. Copying of the KDE Logo is subject to the LGPL. Some local community logos are derivations of the official logo.
Many KDE applications have a K in the name, mostly as an initial letter. The K in many KDE applications is obtained by spelling a word which originally begins with C or Q differently, for example Konsole and Kaffeine, while some others prefix a commonly used word with a K, for instance KGet. However, the trend is not to have a K in the name at all, such as with Stage, Spectacle, Discover and Dolphin.
Collaborations with other organizations
Wikimedia
On 23 June 2005, the chairman of the Wikimedia Foundation announced that the KDE community and the Wikimedia Foundation had begun efforts towards cooperation. Fruits of that cooperation are MediaWiki syntax highlighting in Kate and access to Wikipedia content within KDE applications such as Amarok and Marble.
On 4 April 2008, the KDE e.V. and Wikimedia Deutschland opened shared offices in Frankfurt. In September 2009 KDE e.V. moved to shared offices with Free Software Foundation Europe in Berlin.
Free Software Foundation Europe
In May 2006, KDE e.V. became an Associate Member of the Free Software Foundation Europe (FSFE).
On 22 August 2008, KDE e.V. and FSFE jointly announced that after working with FSFE's Freedom Task Force for one and a half years KDE adopts FSFE's Fiduciary Licence Agreement. Using that, KDE developers can – on a voluntary basis – assign their copyrights to KDE e.V.
In September 2009, KDE e.V. and FSFE moved into shared offices in Berlin.
Commercial enterprises
Several companies actively contribute to KDE, like Collabora, Erfrakon, Intevation GmbH, Kolab Konsortium, Klarälvdalens Datakonsult AB (KDAB), Blue Systems, and KO GmbH.
Nokia used Calligra Suite as base for their Office Viewer application for Maemo/MeeGo. They have also been contracting KO GmbH to bring MS Office 2007 file format filters to Calligra. Nokia also employed several KDE developers directly – either to use KDE software for MeeGo (e.g. KCal) or as sponsorship.
The software development and consulting companies Intevation GmbH of Germany and the Swedish KDAB use Qt and KDE software – especially Kontact and Akonadi for Kolab – for their services and products, therefore both employ KDE developers.
Others
KDE participates in freedesktop.org, an effort to standardize Unix desktop interoperability.
In 2009 and 2011, GNOME and KDE co-hosted their conferences Akademy and GUADEC under the Desktop Summit label.
In December 2010 KDE e.V. became a licensee of the Open Invention Network.
Many Linux distributions and other free operating systems are involved in the development and distribution of the software, and are therefore also active in the KDE community. These include commercial distributors such as SUSE/Novell or Red Hat but also government-funded non-commercial organizations such as the Scientific and Technological Research Council of Turkey with its Linux distribution Pardus.
In October 2018, Red Hat declared that KDE Plasma was no longer supported in future updates of Red Hat Enterprise Linux, though it continues to be part of Fedora. The announcement came shortly after the announcement of the business acquisition of Red Hat by IBM for close to US$43 billion.
Activities
The two most important conferences of KDE are Akademy and Camp KDE. Each event is on a large scale, both thematically and geographically. Akademy-BR and Akademy-es are local community events.
Akademy
Akademy is the annual world summit, held each summer at varying venues in Europe. The primary goals of Akademy are to act as a community building event, to communicate the achievements of the community, and to provide a platform for collaboration with community and industry partners. Secondary goals are to engage local people and to provide space for getting together to write code. KDE e.V. assists with procedures, advice and organization. Akademy includes a conference, the KDE e.V. general assembly, marathon coding sessions, BoFs (birds-of-a-feather sessions) and a social program. BoFs meet to discuss specific sub-projects or issues.
The KDE community held its first conference, KDE One, in Arnsberg, Germany in 1997 to discuss the first KDE release. Initially, each conference was numbered after the release and not held regularly. Since 2003 the conferences have been held once a year, and since 2004 they have been named Akademy.
The yearly Akademy conference presents the Akademy Awards, which the KDE community gives to KDE contributors in recognition of outstanding contributions to KDE. There are three awards: best application, best non-application and the jury's award. The winners are chosen by the previous year's winners. The first winners received a framed picture of Konqi signed by all attending KDE developers.
Camp KDE
Camp KDE is another annual contributors' conference of the KDE community. The event provides a regional opportunity for contributors and enthusiasts to gather and share their experiences. It is free to all participants and is intended to ensure that KDE in the world is not simply seen as being Euro-centric. KDE e.V. helps with travel and accommodation subsidies for presenters, BoF leaders, organizers and core contributors. It has been held in North America since 2009.
In January 2008, the KDE 4.0 Release Event was held at the Google headquarters in Mountain View, California, USA to celebrate the release of KDE SC 4.0. The community realized that there was a strong demand for KDE events in the Americas, therefore Camp KDE was produced.
Camp KDE 2009, the premiere meeting of KDE Americas, was held at the Travellers Beach Resort in Negril, Jamaica, sponsored by Google, Intel, iXsystems, KDE e.V. and Kitware. The event included 1–2 days of presentations, BoF meetings and hackathon sessions. Camp KDE 2010 took place at the University of California, San Diego (UCSD) in La Jolla, USA. The schedule included presentations, BoFs, hackathons and a day trip. It started with a short introduction by Jeff Mitchell, the principal organizer of the conference, who covered a bit of the history of Camp KDE and some statistics about the KDE community. The talks were relatively well attended, with attendance increasing over the previous year to around 70 people. On 19 January, the social event was a tour of a local brewery. Camp KDE 2011 was held at Hotel Kabuki in San Francisco, USA, and was co-located with the Linux Foundation Collaboration Summit. The schedule included presentations, hackathons and a party at Noisebridge. The conference opened with an introduction given by Celeste Lyn Paul.
SoK (Season of KDE)
Season of KDE is an outreach program hosted by the KDE community. Students are appointed mentors from the KDE community who help bring their projects to fruition.
Other community events
conf.kde.in was the first KDE and Qt conference in India. The conference, organized by KDE India, was held at R.V. College of Engineering in Bangalore, India. The first three days of the event had talks, tutorials and interactive sessions; the last two days were a focused code sprint. The conference was opened by its main organizer, Pradeepto Bhattacharya, with over 300 people at the opening talks. The Lighting of the Auspicious Lamp ceremony was performed to open the conference. The first session was by Lydia Pintscher, who gave a talk titled "So much to do – so little time". At the event, Project Neon announced its return on 11 March 2011, providing nightly builds of the KDE Software Compilation. Closing the conference was keynote speaker and long-time KDE developer Sirtaj.
Día KDE (KDE Day) is an Argentinian event focused on KDE, featuring talks and workshops. The purposes of the event are to spread the free software movement among the population of Argentina, to introduce the KDE community and the environment it develops, to promote and strengthen KDE-AR, and generally to bring the community together to have fun. The event is free.
A Release party is a party, which celebrates the release of a new version of the KDE SC (twice a year). KDE also participates in other conferences that revolve around free software.
Notable uses
Brazil's primary school education system operates computers running KDE software, with more than 42,000 schools in 4,000 cities, thus serving nearly 52 million children. The base distribution is called Educational Linux, which is based on Kubuntu. Besides this, thousands more students in Brazil use KDE products in their universities. KDE software is also running on computers in Portuguese and Venezuelan schools, with respectively 700,000 and one million systems reached.
Through Pardus, a local Linux distribution, many sections of the Turkish government make use of KDE software, including the Turkish Armed Forces, Ministry of Foreign Affairs, Ministry of National Defence, Turkish Police, and the SGK (Social Security Institution of Turkey), although these departments often do not exclusively use Pardus as their operating system.
CERN (European Organization for Nuclear Research) is using KDE software.
Germany uses KDE software in its embassies around the world, representing around 11,000 systems.
NASA used the Plasma Desktop during the Mars Mission.
Valve Corporation's new handheld gaming computer, the Steam Deck, is reported to use the Plasma Desktop as part of its environment.
See also
KDE Projects
List of KDE applications
Free software community
Trinity Desktop Environment
References
External links
KDE.News, news announcements
KDE Wikis
1996 establishments in Germany
1996 software
Free and open-source software organizations
Free software projects | Operating System (OS) | 1,214 |
Nintendo Switch system software
The Nintendo Switch system software (also known by its codename Horizon) is an updatable firmware and operating system used by the Nintendo Switch video game console. Its main portion is the HOME screen, consisting of the top bar, the screenshot viewer ("Album"), and shortcuts to the Nintendo eShop, News, and Settings.
Technology
OS
Nintendo has released only limited information about the Switch's internals to the public. However, computer security researchers, homebrew software developers, and the authors of emulators have all analyzed the operating system in great depth.
Notable findings include that the Switch operating system is codenamed Horizon, that it is an evolution of the Nintendo 3DS system software, and that it implements a proprietary microkernel architecture. All drivers run in userspace, including the Nvidia driver, which the security researchers described as "kind of similar to the Linux driver". The graphics driver features an undocumented thin API layer, called NVN, which is "kind of like Vulkan" but exposes most hardware features, like the OpenGL compatibility profile with Nvidia extensions. All userspace processes use address space layout randomization (ASLR), a computer security technique that hinders exploitation of memory corruption vulnerabilities, and are sandboxed.
Nintendo made efforts to design the system software to be as minimalist as possible, with the home menu's graphical assets using less than 200 kilobytes. This minimalism is meant to improve system performance and launch games faster.
As early as July 2018, Nintendo began trying to counter Switch homebrewing and piracy. Measures include online bans and, on the hardware side, patching of the Tegra to prevent exploits. On 11 December 2018, Nintendo sued Mikel Euskaldunak for selling a Switch modification that can play pirated games. Since August 2019, the difficulty of homebrewing has gone up, as the new Mariko chip replaced the old Erista chip. After the release of the Lite in late 2019, tools for hacking all Switch consoles were announced. Gary Bowser was arrested in September 2020 in the Dominican Republic and afterwards appeared in court in the United States. The prosecution alleges that Bowser was a piracy group leader.
Open source components
Despite popular misconceptions to the contrary, Horizon is not largely derived from FreeBSD code, nor from Android, although the software licence and reverse engineering efforts have revealed that Nintendo does use some code from both in some system services and drivers. For example, the networking stack in the Switch OS is derived at least in part from FreeBSD code. Nintendo's use of FreeBSD networking code is legal as it is made available under the permissive BSD licence, and not even particularly unusual – notably, the Microsoft Windows TCP/IP stack was originally derived from BSD code in a similar fashion.
Components derived from Android code include the Stagefright multimedia framework, as well as components of the graphics stack including the display server (derived from SurfaceFlinger) and the graphics driver (which seems to be derived from Nvidia's proprietary Linux driver).
Although a full web browser intended for general browsing is not available on the console as of February 2022, several so-called 'applets' are included which utilise the WebKit rendering engine to display web content within a stripped back interface. A WebKit-powered applet is used to allow users to log in to captive portals when connecting to certain wireless networks, as well as for operating system features such as the Nintendo eShop, social media integrations, and digital manuals.
User interface
Home screen
The Nintendo Switch home screen has battery, internet and time information in the top right corner, and below it is a grid showing all software on the system, downloaded or physical. Underneath that it has shortcuts to OS functions such as Nintendo Switch Online, the News, eShop, Album, Controller settings, System Settings, and a Sleep Mode button. The Nintendo Switch home screen currently lacks an internet browser and a messaging system.
News
The News function of the Nintendo Switch software allows users to read gaming news and advertisements provided by Nintendo and third-party developers. News is also displayed when the system is locked.
The News interface was originally available in the 1.0.0 version of the software, however new headlines were not transmitted until the 2.0.0 update was released. The 3.0.0 update revamped the News system, adding multiple news "channels" for different games that users can subscribe to. The news headlines that appear depend on which channels are subscribed to. The 4.0.0 update further improved the News screen, updating its layout. The 9.0.0 update added search support to the News channel, allowing users to narrow the list via filters or free text. The 10.0.0 update added a "Bookmark" feature, allowing users to save their favorite News articles.
Nintendo eShop
The Nintendo eShop option on the Home menu opens a WebKit-based interface that allows games to be purchased and downloaded from the Nintendo eShop.
As well as games, the eShop offers select non-gaming apps. Niconico, a popular Japanese video service, launched for the Switch in Japan on 13 July 2017 and was the Switch's first third-party media app in any market. Hulu was the first video streaming application released for the Switch in the United States on 9 November 2017. In June 2018, Fils-Aimé said that conversations to bring Netflix to the Switch were "on-going". A YouTube application was released on 8 November 2018. On 4 November 2020, a trial version app of the Tencent Video streaming service was launched exclusively for Nintendo Switch consoles officially distributed by Tencent in mainland China. An official version app will be launched at a later date. Funimation launched their own streaming app for the Nintendo Switch, featuring a reworked layout and new functions. The app became available via eShop in the United States and Canada on 15 December 2020, and will launch in various other countries at a later date, such as the United Kingdom and Ireland on 22 March 2021. A version of the Twitch app launched for the Nintendo Switch on 11 November 2021 in most regions worldwide. The eShop version of the app allows users to watch or follow any live or recorded content on Twitch, but does not support any native ability for Switch players to contribute content.
Korg Gadget, a music production app, was released for the Nintendo Switch on 26 April 2018. InkyPen, a comics and manga subscription app, launched exclusively on the Switch worldwide in December 2018. Izneo, another comics and manga subscription service, was released for the Switch in February 2019. FUZE4, a text-based programming language app, was released in August 2019.
Album
The Album stores captured screenshots and videos. Pressing the "Capture" button on the controller, in supported software, will save a screenshot, either to the microSD card, or to the system memory. The Album allows users to view screenshots that have been taken. Screenshots can be edited by adding text, and they can be shared to Facebook or Twitter. In addition, in supported games, holding down the Capture button briefly will save the last 30 seconds of video to the Album. It can then be trimmed and posted online.
The 2.0.0 update added the ability to post screenshots to Facebook or Twitter from within the system UI, making it easier to share screenshots. The 4.0.0 update added support for saving 30-second videos in compatible games. The 11.0.0 update added the ability to download screenshots and videos to a PC via a USB cable, or to a mobile device via a web page hosting the files generated by the Switch.
Controllers
The Controllers menu allows controllers to be paired, disconnected, or reconnected. The 3.0.0 update added the "Find Controllers" option, which allows any nearby controllers that have been paired to be remotely turned on and vibrated, to help find lost controllers.
Settings
The Settings option allows for system settings to be changed, and includes other functionality, such as creating Miis.
History of updates
The initial version of the system software for Nintendo Switch on the launch day consoles was updated as a "day one" patch on 3 March 2017, the console's launch date. The update added online features that were previously missing from the original software before its official launch date. Some notable features of this update are access to the Nintendo eShop as well as the ability to add friends to a friends list, similar to that of the Nintendo 3DS. On 7 June 2021, patch 12.0.3 was released, but was removed 12 hours later for problems with network connections as well as issues with MicroSDXC cards.
The April 2021 firmware update was found by dataminers to have added rudimentary support for Bluetooth audio. This support was expanded and made available to regular users on September 14, 2021, when patch 13.0 was released. Patch 13.0 also added the ability to apply software updates to the Switch Dock (only applicable for docks released with the Switch OLED Model, which have a built-in LAN port), and a new setting for Sleep Mode that allows the Switch to maintain an Internet connection when the Switch is asleep to download updates. When disabled, the console will only connect to the Internet occasionally when asleep, in order to save power. Additionally, Patch 13.0 changed the method to initiate a control stick calibration and allowed users to view their wireless internet frequency band (2.4 GHz or 5 GHz) on the Internet Connection Status page.
In November 2021, the 13.1.0 version update added support for Nintendo Switch Online + Expansion Pack.
References
2017 software
Game console operating systems
Nintendo Switch
Proprietary operating systems
Microkernel-based operating systems
Microkernels | Operating System (OS) | 1,215 |
Android Pie
Android Pie (codenamed Android P during development), also known as Android 9, is the ninth major release and the 16th version of the Android mobile operating system. It was first released as a developer preview on March 7, 2018, and was released publicly on August 6, 2018.
On August 6, 2018, Google officially announced the final release of Android 9 under the title "Pie", with the update initially available for current Google Pixel devices, and releases for Android One devices and others to follow "later this year". The Essential Phone was the first third-party Android device to receive an update to Pie, notably coming day-and-date with its final release. The Sony Xperia XZ3 was the first device with Android Pie pre-installed.
13.9% of Android devices run Pie (API 28), making it the third most used version of Android.
History
Android Pie, then referred to as "Android P", was first announced by Google on March 7, 2018, and the first developer preview was released on the same day. The second preview, the first beta release, was released on May 8, 2018. The third preview, called Beta 2, was released on June 6, 2018. The fourth preview, called Beta 3, was released on July 2, 2018. The final beta of Android P was released on July 25, 2018.
Custom distributions
There are, as of August 2019, a handful of notable custom Android distributions (ROMs) of 9.0 Pie.
Features
User experience
Android Pie utilizes a refresh of Google's "material design" language, unofficially referred to as "Material Design 2.0". The revamp provides more variance in aesthetics, encouraging the creation of custom "themes" for the base guidelines and components rather than a standardized appearance. Bottom-aligned navigation bars are also more prominent. As applied to Android Pie's interface, rounded corners (influenced by the proprietary Google theme used by in-house software implementing Material Design 2.0) are more prominent. In addition, Pie contains official support for screen cutouts ("notches"), including APIs and system behaviors depending on their size and position. Android certification requirements restrict devices to two cutouts, which may only be along the top or bottom of the screen.
The most significant user interface change on Pie is a redesigned on-screen navigation bar. Unlike previous versions of Android, it only consists of a slim home button, and a back button rendered only when available. The bar utilizes gesture navigation: swiping up opens the "Overview" screen, a redesign of the existing recent apps menu. Swiping the handle to the right activates application switching. The gesture bar is used primarily on new devices such as the Pixel 3; existing devices may either use the previous navigation key setup or offer the ability to opt into gesture navigation. As opposed to the previous recent apps menu, Overview utilizes a horizontal layout rather than vertical, and text may also be selected and copied from apps appearing there (although this uses OCR rather than the native text as to conserve resources). The Pixel Launcher exclusively supports the ability to access the app drawer and most recently used apps from the overview as well. However, this integration is proprietary, as there are no current plans to offer the necessary integration to third-party software due to security concerns. In addition, when rotation lock is enabled, rotating the device causes a screen rotation button to appear on the navigation bar.
The notification area was redesigned, with the clock moved to the left, and the number of icons that may be displayed at once limited to four, in order to accommodate displays that may have "notch" cutouts in the center. The drop-down panels attached to quick settings items have been removed; long-pressing a toggle directs users to the relevant settings screen. Notifications for chats can now be threaded, displaying previous messages within (complementing the existing inline reply functionality). If a particular type of notification is frequently dismissed, the user will now be offered to disable it. The Do Not Disturb mode has been overhauled with a larger array of settings.
The power menu now contains a screenshot button (which itself now supports cropping an image after taking one), and an optional "lockdown" mode that disables biometric unlock methods. The volume pop-up now only controls media volume, as well as the choice of sound, vibrate, or silent modes for notifications. Users are directed to the settings menu to change the volume of notifications. A magnifier display has been added to text selection, and "smart linkify" offers access to relevant apps if particular types of text (such as phone numbers or addresses) are highlighted.
Platform
Android Pie introduces a major change to power management, using algorithms to prioritize background activity by apps based on long-term usage patterns and predictions, dividing apps into "Active", "Working Set" (run often), "Frequent", "Rare", and "Never". Similar "adaptive brightness" settings are adjusted automatically based on detected lighting conditions. Both of these features were developed in collaboration with DeepMind.
The "PrecomputedText" API (also available as a compatibility library compatible with Android 4.0 and newer) can be used to perform text display processing in a background thread as opposed to a UI thread to improve performance.
The fingerprint authentication API has also been revamped to account for different types of biometric authentication experiences (including face scanning and in-screen fingerprint readers).
Android Runtime can now create compressed bytecode files, and profiler data can be uploaded to Google Play servers to be bundled with apps when downloaded by users with a similar device.
Apps targeting older Android API levels (beginning with Android 4.2) display a warning when launched. Google Play Store is now requiring all apps to target an API level released within the past year, and will also mandate 64-bit support in 2019.
Android Pie supports IEEE 802.11mc, including Wi-Fi Round Trip Time for location positioning.
The camera API now supports accessing multiple cameras at once. Apps may no longer perform background audio or video recording unless they run a foreground service. There is support for the High Efficiency Image File Format (subject to patent licensing and hardware support) and VP9 Profile 2.
DNS over TLS is supported under the name "Private DNS".
Android Go for Android Pie uses less storage than the previous release, and has enhancements to security and storage tracking.
Reception
Shortly after its launch, several users on Pixel devices and the Essential Phone noted a decrease in battery life. As Android Pie became available to more phones, users of various devices reported similar problems.
See also
Android version history
iOS 12
macOS Mojave
Windows 10
References
External links
2018 software
Android (operating system) | Operating System (OS) | 1,216 |
Emerge Desktop
Emerge Desktop is a replacement shell for Windows XP (both Home and Professional editions), Windows Vista and Windows 7 written in C++, primarily developed with the MinGW compiler, and is licensed under the GNU General Public License, Version 3.
Applets
Most of the Emerge Desktop applets are capable of being run as both standalone as well as being integrated, however the core applet (emergeCore) must be running in order for each applet to communicate with another in the suite. Each applet is aimed at replacing or extending a functionality of the default Windows shell and offers various configuration and visual customization features.
Core Applets
emergeTasks
emergeTasks is the 'Tasks' applet for Emerge Desktop. It displays an icon for each running task in a movable, resizable window.
emergeTray
emergeTray is the 'system tray' applet of Emerge Desktop. It displays the system icons in a movable, resizable window.
emergeWorkspace
emergeWorkspace is the desktop component of Emerge Desktop. It provides the mouse RightClick and MiddleClick menus.
Additional Applets
emergeDesktop has several additional applets that can be used. These are all standalone applets and can be run independently of the three core applets above. They can also be run on top of Windows Explorer or any other Windows Shell Replacement.
emergeCommand
emergeCommand is a clock / command line launcher applet for Emerge Desktop. By default it displays the date and time in a configurable format.
When the left mouse button is clicked on the text, it allows for the typing in of a command to execute.
emergeHotkeys
emergeHotkeys is the hotkey applet of Emerge Desktop. It defines a set of hotkeys which allow quick access to Emerge Desktop functions and other applications.
emergeLauncher
emergeLauncher provides a "Quick Launch" applet for Emerge Desktop. It displays icons of applications in a movable, resizable window.
emergePower
emergePower reports the status of battery power for laptop computers, in a movable and resizable window.
emergeSysMon
emergeSysMon reports the CPU usage, the commit charge, the physical memory usage, and the pagefile usage in a movable, resizable window.
emergeVWM
emergeVWM is Emerge Desktop's "Virtual Window Manager" applet. It allows the user to switch monitor views between different virtual "desktops", and displays a grid of corresponding mini-windows in a movable and resizable window.
Versions
Version 5 added native 64-bit support, theming, dynamic positioning and support for 'Run As'. Several new applets were also added, such as emergeSysMon, a system resource monitor; emergePower, a battery charge monitor; and reg2xml, a settings converter to the new XML format. The shell is now capable of running self-contained. There are also numerous bug fixes.
See also
Shell (computing)
Windows shell replacement
External links
Official website
Emerge Desktop at Sourceforge.net
Emerge Desktop at DeviantArt.com
Emerge Desktop review at the Technology Cooler
Five Best Desktop Customization Tools (Lifehacker)
Versatile Underground - [How To] Use Emerge Desktop – A Novice’s Guide to Greatness
emergeDesktop on CNET
emergeDesktop on Customize.Org!
References
Desktop shell replacement
Windows-only free software | Operating System (OS) | 1,217 |
ZiiLABS
ZiiLABS is a global electronics company, producing a line of media-oriented application processors, reference platforms and enabling software, in a series of platforms named ZMS. Its products are found in low-power consumer electronics and embedded devices, including Android-based phones and tablets.
History
ZiiLABS was founded in 1994 as 3Dlabs, which became a wholly owned subsidiary of Creative Technology Ltd in 2002. In January 2009 the company re-branded as ZiiLABS. This re-branding reflected 3Dlabs' focus on supplying low-power, media-rich application processors, hardware platforms and middleware, rather than just 3D GPUs as had previously been the case.
The company announced its first applications/media processor, the DMS-02 in 2005 and this has been followed by the ZMS-05, ZMS-08 and most recently the ZMS-20 and ZMS-40. The ZMS processors combine ZiiLABS’ core asset, the "Stemcell Computing Array" with ARM cores and integrated peripheral functions to create a system on a chip (SoC).
As 3Dlabs the company developed the GLINT and Permedia GPUs used in both personal and workstation graphics cards. In 2002 the company acquired the Intense3D group to become a vertically integrated graphics board vendor supplying workstation graphics card under the RealiZm brand. 3Dlabs stopped developing graphics GPUs and cards in 2006 to focus on its media processor business.
In November 2012, Creative Technology Limited announced it has entered into an agreement with Intel Corporation for Intel to license certain technology and patents from ZiiLABS Inc. Ltd and acquire certain engineering resources and assets related to its UK branch as a part of a $50 million deal. ZiiLABS (still wholly owned by Creative) continues to retain all ownership of its StemCell media processor technologies and patents, and will continue to supply and support its ZMS series of chips to its customers.
Products
The company's products include a range of ARM-based ZMS processors that feature its so-called StemCell media processing architecture, plus a portfolio of tablet reference platforms based on its in-house Android board support package and application software. The most recent platform, the JAGUAR Android reference tablet, was announced in May 2011.
StemCell cores
The core asset of the ZiiLABS ZMS chips seems to be an array of processing units called StemCells that are programmed to perform media processing. These are described as 32-bit floating-point processing units and are likely some form of digital signal processor cores used to accelerate various operations. All video codec and 3D graphics handling in the ZMS processors is done by programming this array of coprocessors to do the job.
Processors
ZMS-40 (quad Cortex-A9 with 96 StemCell cores)
ZMS-20 (dual Cortex A9 with 48 StemCell cores)
ZMS-08 (single Cortex-A8 with 64 StemCell cores)
ZMS-05 (dual ARM9 with 24 StemCell cores)
Reference platforms
Zii EGG (this product is now end-of-life)
JAGUAR (Android 3.2 Reference Tablet)
JAGUAR3 (Slim Android 3.2 Reference Tablet)
Over the years a number of other development platforms have been introduced, including the Zii Development kits (traditional large form factor systems).
References
External links
Electronics companies established in 2009
ARM architecture
Creative Technology products
Creative Technology
Graphics processing units
Singaporean brands
Singaporean companies established in 2009
Intel acquisitions
2012 mergers and acquisitions | Operating System (OS) | 1,218 |
Pocket-sized computer
Pocket-sized computer describes the post-programmable-calculator, pre-smartphone pocket-sized portable-office devices, primarily of the 1980s through 2007, which included the earlier DOS-based palmtops, the later Windows CE handhelds, and devices known by a few other terms.
Sometimes called pocket-sized computing devices, they were a series of internally different devices, and included the Handheld ("pocket-sized handheld computing device") and the earlier-introduced Palmtop ("pocket-sized palmtop computing device" or "pocket-sized palmtop computer"). The New York Times used the term "palmtop/handheld."
The media called the Atari Portfolio, advertised as "the first computer that fits in your palm and weighs less than a pound", and its early competitors palmtops. Although the word "handheld" was used before Microsoft's 1996 introduction of Windows CE, a lawsuit by Palm, Inc. pushed Microsoft's use of the new term Handheld PC.
Timeline summary
1973 - The first portable computer, the MCM/70, was introduced. It weighed about 9 kg.
1975 - The second portable computer, the IBM 5100, was introduced. It weighed about 50 pounds (23 kg).
1977 - The original TRS-80 was introduced. It used an 8-bit Z-80 processor.
1980 - The term Pocket computer began in 1980 with the popular acceptance of the oddly-named TRS-80/Tandy Pocket Computer. It was not a TRS-80, and was the first of 8 models named PC-1 through PC-8. The TRS-80 Pocket Computer PC-1 was a rebadged Sharp PC-1211, which used two 4-bit processors.
1981 - The first IBM Personal Computer
1989 - The first Palmtop PC, using a 16-bit X86 processor
1996 - The first Handheld PC
Neither the Palmtop PC nor the Handheld PC was a pocket computer. As late as March 1981, a "computer small enough to fit in a coat pocket" had yet to be introduced.
Market acceptance
The first hand-held device compatible with desktop IBM personal computers of the time was the DIP Pocket PC aka Atari Portfolio in 1989. The term "Handheld PC" described the product first introduced in 1989 by Atari as "the first computer that fits in your palm and weighs less than a pound." The full version of the ad ran as eight pages and showed the device in actual size, including one page topped by a hand placing an Atari Portfolio(tm) into a suit inner lapel pocket.
Other early models were the Poqet PC of 1989 and the Hewlett Packard HP 95LX of 1991. Other DOS-compatible hand-held computers also existed. Some handheld PCs use Microsoft's Windows CE operating system, with the term also covering Windows CE devices released by the broader commercial market.
Despite the arrival in the early 2010s of devices lacking keyboards, demand for used pocket computers remained strong. Commercial offerings such as the PsiXpda Ultimate Pocket Computer from 2009, the GPD Win from 2016, the Gemini from 2018 and the eponymous GPD Pocket continue to supply this market, while the crowd-funded open-source-hardware Pandora and Pyra maintain small-scale production and ongoing development.
A combination of price and size makes them useful both for business and education; they also target the "games" market.
Nomenclature
By the mid 1990s, the New York Times referred to these portable office devices as:
Palmtop computer
Handheld computer
Pocket-size computer
Palmtop PC
Personal Digital Assistant (PDA)
Personal Intelligent Communicator (PIC)
Pocket computer was another term used. Subsequently, another publisher's "10 awesome handheld computers from yesteryear" included "1991 - HP 95LX pocket computer", even though HP called it a palmtop and the HP Museum called it a handheld PC. As recently as 2017, these terms were intermixed.
Comparison among alternatives
Early Palmtops, beginning with Atari's 1989 Portfolio, used Intel-compatible x86 processors and a mostly IBM-compatible PC architecture and BIOS. Their operating system was DOS-like.
By the late 1990s, non-Intel processors and other operating systems, notably Microsoft's Windows CE, were used for some devices, even as the term Handheld was growing in use.
The term PC was helpful, since many Palmtop PCs and Handheld PCs came with some personal-computer, PDA and office applications pre-installed in ROM, and most of them could also run generic, off-the-shelf PC software with minimal if any modifications. Some could also run other operating systems such as GEOS, MINIX 2.0, Windows 1.0-3.0 (in Real mode only), or Linux.
Most palmtop PCs were based on a static hardware design for low power consumption and instant-on/off without a need to reboot. Depending on the model, the battery could power the device from several hours up to several days while running, or between a week and a year in standby mode. Combined with the instant-on/off feature, a battery would typically last from a week up to several months in practical use as PDA.
Handheld computer, Palm PC, Palmtop and Personal Digital Assistant (PDA) were used concurrently and almost interchangeably to describe these pocket-sized computing devices. The acronym PIM referred to Personal Information Manager, a similar type of device that often came with a stylus interface instead of a keyboard. None of these, at the time, were intended to replace the PC.
Non-Wintel (Palm-top/Palm-size/Pocket computer)
Not all of the pocket-sized hardware was/is used for Windows/Intel systems.
At one point the Windows CE market share was less than 10%. Terms used included:
Internet tablets -or-
Tablet computers.
Not all Windows-running devices had a keyboard. If they matched all of the hardware requirements except for lacking a keyboard they were known as:
Windows Tablet PCs
Windows CE Tablet PCs
Some of them ran or run Linux (see the list of handheld/pocket Linux computers).
History
Each term had a role:
Palmtop PC
Palmtop PCs from 1989 through 1996 included:
DIP Pocket PC (DIP DOS 2.11, 1989)
Atari Portfolio (DIP DOS 2.11, 1989)
Poqet PC Classic (MS-DOS 3.3, 80C88, 1989)
Poqet PC Prime (MS-DOS 3.3, 80C88)
Poqet PC Plus (MS-DOS 5.0, NEC V30)
ZEOS Pocket PC (MS-DOS 5.0, 1991)
Sharp PC-3000 (MS-DOS 3.3, 1991)
Sharp PC-3100 (MS-DOS 3.3, 1991)
Hewlett-Packard:
95LX (1991) - MS-DOS 3.22, NEC V20
MS-DOS 5.0, 80186-compatible HP Hornet:
100LX (1993)
200LX (1994)
1000CX (1995)
700LX (1996)
Handheld PC
The Handheld PC was a late 1990s hardware design for personal digital assistant (PDA) devices running Windows CE. It provided the appointment calendar functions usual for any PDA.
The intent of Windows CE was to provide an environment for applications compatible with the Microsoft Windows operating system, on processors better suited to low-power operation in a portable device.
Originally announced in 1996, the Handheld PC was distinct from the Palm-size, Pocket PC, or smartphone in that the specification provided for larger screen sizes as well as a keyboard.
Personal Digital Assistant (PDA)
Psion's 1984-introduced handheld palmtop device was the first Personal Digital Assistant (PDA). Two years later the Psion Organizer was followed by the Psion Organizer II and other pocket-sized computers.
Other, less expensive devices of this type were Palm Inc.'s Palm Pilot and various Pocket PCs running Windows CE. Their main era was the 1990s, which also included the Apple Newton.
Personal Information Manager (PIM)
Both by goal and by marketing, the audience for the "Personal Information Manager (PIM)" was the individual, not the corporation. Market research showed that people "wanted a device that would straddle the telephone and computer."
Until the smartphones of the 2010s, the goal of what an AT&T study called "an intelligent cellphone" was still pending.
See also
Sub-notebook, IBM- and x86- compatible, clamshell design, but larger than palmtop PCs
Psion netBook, ARM-based clamshell design
generic Netbook, IBM- and x86- compatible, legacy-free, clamshell design typically much larger than a pocket
Ultra-mobile PC, IBM- and x86- compatible, legacy-free, not necessarily clamshell design
Pen computing, using a pen/stylus rather than a keyboard, joystick or mouse
ActiveSync, Application for synchronizing hand-held devices and Windows PCs
Smartbook
EPOC, operating system of Psion's x86 and ARM -based palmtops and pocket computers.
Windows CE, one operating system of Palm-sized PCs.
Windows Mobile, one operating system of Pocket PCs.
HP Jornada, A line of Handheld, Palm-size and Pocket PCs.
Atari Portfolio, the first (1989)
Palm (PDA)
References
External links
List of DOS based palmtop PCs]
History of computing hardware
Personal digital assistants
Mobile computers
Information appliances | Operating System (OS) | 1,219 |
Native (computing)
In computing, native software or data formats are those designed to run on a particular operating system. In a more technical sense, native code is code written specifically for a certain processor. In contrast, cross-platform software can be run on multiple operating systems and/or computer architectures.
For example, a Game Boy receives its software through a cartridge, which contains code that runs natively on the Game Boy. The only way to run this code on another processor is to use an emulator, which simulates an actual Game Boy. This usually comes at the cost of speed.
Applications
Software running natively on a computer runs without any external compatibility layer, and therefore with fewer software layers. For example, in Microsoft Windows the Native API is an application programming interface specific to the Windows NT kernel; it gives access to some kernel functions that cannot be accessed directly through the more universal Windows API.
Operating systems
The term is used to designate the lowest level of virtualization, or the absence of virtualization. For instance, the term “Native VM” is used to refer to the lowest-level operating system, the one that actually maintains direct control of the hardware when multiple levels of virtualization occur.
Machine code
Machine code, also known as native code, is program code written in machine language. It is usually considered the lowest level of code for a computer: in its rawest form it is binary (0s and 1s), though it is often written in hexadecimal or octal to make it a little easier to handle. The processor executes these instructions directly, so no further translation is needed. Because machine code is strictly numerical, programmers rarely write it by hand. It is also as close to the processor as a program can get, and it is specific to that processor, since the machine code for each processor family differs. Typically programmers write in high-level languages such as C, C++ or Pascal (or other directly compiled languages), which are translated into assembly code and then into machine code (in most cases the compiler generates machine code directly). Since each CPU architecture is different, programs must be recompiled or rewritten to run on a different CPU.
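As a minimal illustration, consider a small C function and the kind of machine code a typical compiler might generate for it (the exact instructions and byte encodings shown in the comment are assumptions; they depend on the compiler, optimization level and target architecture):

/* add.c - a trivial function used to illustrate native (machine) code.
   Compiling it, e.g. with "cc -O2 -c add.c", produces an object file whose
   text section contains raw machine code for the target processor. */
int add(int a, int b)
{
    return a + b;
}

/* On x86-64, an optimizing compiler commonly emits something like:
     8d 04 37    lea eax, [rdi + rsi]   ; eax = a + b
     c3          ret                    ; result is returned in eax
   The same C source compiled for ARM or RISC-V yields entirely different
   machine code, which is why native binaries are tied to one processor
   family; the portable C source only needs to be recompiled per target CPU. */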
Data
Applied to data, native data formats or communication protocols are those supported by a particular piece of computer hardware or software, with maximal consistency and a minimal number of additional components.
For example, EGA and VGA video adapters natively support code page 437. This does not preclude supporting other code pages, but doing so requires either uploading a font or using graphics modes.
References
Computer jargon
Logical partition
A logical partition (LPAR) is a subset of a computer's hardware resources, virtualized as a separate computer. In effect, a physical machine can be partitioned into multiple logical partitions, each hosting a separate instance of an operating system.
History
IBM developed the concept of hypervisors (virtual machines in CP-40 and CP-67) and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction (designed specifically for the execution of virtual machines) as part of the 370-XA architecture on the 3081, as well as VM/XA versions of VM to exploit it. PR/SM is a type-1 hypervisor based on the CP component of VM/XA that runs directly at the machine level and allocates system resources across LPARs to share physical resources. It is a standard feature only on IBM System z. IBM POWER systems use PHYP (the POWER Hypervisor) to provide LPAR functionality for System p and System i, beginning around 2000 with POWER4 systems.
The terms PR/SM and LPAR are often used interchangeably in IBM Z, including in IBM documentation. Formally, LPAR designates the logical partitioning function and mode of operation, whereas PR/SM is the commercial designation of the feature.
Amdahl Corporation's Multiple Domain Facility (MDF) was introduced in 1982. IBM began marketing its functionally similar PR/SM in 1988, implemented on its ESA/390 architecture released that year. MDF-based LPAR technology continued to be developed separately by Amdahl and Hitachi Data Systems, in part for their implementations of the new architecture, which introduced access registers that allowed multiple data spaces to be addressed by a single address space. IBM subsequently continued its LPAR development with its 64-bit System z and IBM AS/400 architectures. LPAR and PR/SM reconfigurations can be made without rebooting the computer, i.e., while some LPARs remain active. Reconfigurations can include changing channel path definitions and device definitions.
z/VM supports the z/Architecture HiperSockets function for high-speed TCP/IP communication among virtual machines and logical partitions (LPARs) within the same IBM zSeries server. This function uses an adaptation of the Queued-Direct Input/Output (QDIO) high-speed I/O protocol.
IBM later introduced LPARs to their iSeries and pSeries servers in 1999 and 2001, respectively, albeit with varying technical specifications. Multiple operating systems are compatible with LPARs, including z/OS, z/VM, z/VSE, z/TPF, AIX, Linux, and IBM i. In storage systems, such as the IBM TotalStorage DS8000, LPARs allow for multiple virtual instances of a storage array to exist within a single physical array.
In the first half of 2010, Fujitsu announced the availability of its x86-64 PRIMEQUEST line of servers, which support LPARs.
In the second half of 2011, Hitachi announced the availability of its CB2000 and CB320 blade systems, which support LPARs on x86-64 hardware.
Hardware partitioning
Logical partitioning divides hardware resources. Two LPARs may access memory from a common memory chip, provided that the ranges of addresses directly accessible to each do not overlap. It is possible for one partition to control memory managed by a second partition indirectly by communicating with a process on the partition with direct access, which acts as an intermediary. CPUs may be dedicated to a single LPAR or shared. While on Amdahl's MDF (Multiple Domain Facility) it was possible to configure an LPAR with both shared and dedicated CPUs, this is no longer possible with any mainframes currently in the market.
On IBM mainframes, LPARs are managed by the PR/SM facility or a related, optional, simplified facility called Dynamic Partition Manager (DPM). All 64-bit IBM mainframes, except for the first generation 64-bit models (z900 and z800), operate exclusively in LPAR mode, even when there is only one partition on a machine. Multiple LPARs running z/OS can form a Sysplex or Parallel Sysplex, whether on one machine or spread across multiple machines.
On IBM System p POWER hardware, LPARs are managed by PHYP (the POWER Hypervisor). PHYP acts as a virtual switch between the LPARs and also handles the virtual SCSI traffic between LPARs. Micro-Partitioning supports 10 times as many LPARs as processors with fractional allocations. It was introduced with the POWER5 processor. All IBM POWER5, POWER6, and successor systems may be partitioned. Note that a full system partition may be defined where all resources are consumed by a single partition. System P servers with PowerVM enabled allow LPARs with shared CPUs to delegate their unused cycles into the shared pool. Dedicated processors are not available for sharing. Unused cycles become available for other partitions and are governed by the parameters specified when the LPAR is defined. Changes to a running partition can be made dynamically up to the maximum value set, and down to the minimum value set in the active profile. The changing of resource allocations without restart of the logical partition is called dynamic logical partitioning. IBM PowerVM is the licensed/purchased feature that enables the virtualization features on p4, 5, 6, 7, and subsequent series servers.
Exploiting Intel hardware features such as vPro and non-uniform memory access (NUMA), there are also implementations of logical partitioning on Intel Xeon based hardware, e.g. by Hitachi Data Systems.
LPARs (with sufficient certification) safely allow combining multiple test, development, quality assurance, and production work on the same server, offering advantages such as lower costs, faster deployment, and more convenience. IBM mainframe LPARs are Common Criteria EAL 5+ certifiable, equivalent to physically unconnected servers, so they support the highest security requirements, including military use. Nearly all IBM mainframes run with multiple LPARs with the IBM System z9 and IBM System z10 supporting up to 60 LPARs and later models up to 85.
See also
VM (operating system)
Full virtualization
Dynamic Logical Partitioning (DLPAR)
Workload Partitions (WPAR)
HiperSocket, Hypervisor
Virtualization
PowerVM
Sun Solaris Containers
Sun LDOM
References
External links
Security on the Mainframe, December 2009, by Karan Singh, Chapter 4. Virtualization, page 24 and page 83.
System i and System p: Logical Partitioning Guide
IBM System p Virtualization — The most complete virtualization offering for UNIX and Linux
Hitachi Compute Blade LPARs
Fujitsu XPARs (SPARC) and "Flexible I/O and Partitioning" (x86_64)
AS/400
Hardware partitioning
IBM mainframe technology
IBM storage software
Virtualization software
I/O virtualization
Input/output (I/O) virtualization is a methodology to simplify management, lower costs and improve performance of servers in enterprise environments. I/O virtualization environments are created by abstracting the upper layer protocols from the physical connections.
The technology enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs). Virtual NICs and HBAs function as conventional NICs and HBAs, and are designed to be compatible with existing operating systems, hypervisors, and applications. To networking resources (LANs and SANs), they appear as normal cards.
In the physical view, virtual I/O replaces a server’s multiple I/O cables with a single cable that provides a shared transport for all network and storage connections. That cable (or commonly two cables for redundancy) connects to an external device, which then provides connections to the data center networks.
Background
Server I/O is a critical component to successful and effective server deployments, particularly with virtualized servers. To accommodate multiple applications, virtualized servers demand more network bandwidth and connections to more networks and storage. According to a survey, 75% of virtualized servers require 7 or more I/O connections per device, and are likely to require more frequent I/O reconfigurations.
In virtualized data centers, I/O performance problems are caused by running numerous virtual machines (VMs) on one server. In early server virtualization implementations, the number of virtual machines per server was typically limited to six or fewer. It was later found that servers could safely run seven or more applications per server, often using 80 percent of total server capacity, an improvement over the average 5 to 15 percent utilized with non-virtualized servers.
However, increased utilization created by virtualization placed a significant strain on the server’s I/O capacity. Network traffic, storage traffic, and inter-server communications combine to impose increased loads that may overwhelm the server's channels, leading to backlogs and idle CPUs as they wait for data.
Virtual I/O addresses performance bottlenecks by consolidating I/O to a single connection whose bandwidth ideally exceeds the I/O capacity of the server itself, thereby ensuring that the I/O link itself is not a bottleneck. That bandwidth is then dynamically allocated in real time across multiple virtual connections to both storage and network resources. In I/O intensive applications, this approach can help increase both VM performance and the potential number of VMs per server.
Virtual I/O systems that include quality of service (QoS) controls can also regulate I/O bandwidth to specific virtual machines, thus ensuring predictable performance for critical applications. QoS thus increases the applicability of server virtualization for both production server and end-user applications.
Benefits
Management agility: By abstracting upper layer protocols from physical connections, I/O virtualization provides greater flexibility, greater utilization and faster provisioning when compared to traditional NIC and HBA card architectures. Virtual I/O technologies can be dynamically expanded and contracted (versus traditional physical I/O channels that are fixed and static), and usually replace multiple network and storage connections to each server with a single cable that carries multiple traffic types. Because configuration changes are implemented in software rather than hardware, time periods to perform common data center tasks – such as adding servers, storage or network connectivity – can be reduced from days to minutes.
Reduced cost: Virtual I/O lowers costs and enables simplified server management by using fewer cards, cables, and switch ports, while still achieving full network I/O performance. It also simplifies data center network design by consolidating and better utilizing LAN and SAN network switches.
Reduced cabling: In a virtualized I/O environment, only one cable is needed to connect servers to both storage and network traffic. This can reduce data center server-to-network, and server-to-storage cabling within a single server rack by more than 70 percent, which equates to reduced cost, complexity, and power requirements. Because the high-speed interconnect is dynamically shared among various requirements, it frequently results in increased performance as well.
Increased density: I/O virtualization increases the practical density of I/O by allowing more connections to exist within a given space. This in turn enables greater utilization of dense 1U high servers and blade servers that would otherwise be I/O constrained.
Blade server chassis enhance density by packaging many servers (and hence many I/O connections) in a small physical space. Virtual I/O consolidates all storage and network connections to a single physical interconnect, which eliminates any physical restrictions on port counts. Virtual I/O also enables software-based configuration management, which simplifies control of the I/O devices. The combination allows more I/O ports to be deployed in a given space, and facilitates the practical management of the resulting environment.
See also
Intel VT-d and AMD-Vi
Intel VT-x
PCI-SIG I/O virtualization
x86 virtualization
References
Computer networking
Virtualization software
Desktop (word processor)
Desktop is a WYSIWYG word processor for the Sinclair ZX Spectrum and compatible computers (e.g. Didaktik). It is a word processor of Czech origin; its author is Tomáš Vilím, who published under the name Universum. The program was distributed by Proxima - Software.
Desktop is a very advanced word processor compared with other ZX Spectrum word processors: it uses proportional fonts and can use four different fonts in one document. However, it is not possible to apply bold or italic variants directly; every variant requires its own independent font.
The program was distributed with three supporting programs:
Convertor - a converter of text created in Tasword, D-Text, R-Text, D-Writer, and Textmachine into Desktop format,
Fonteditor - for editing fonts and writing headings; it can convert color images into gray-scale images,
Screen Top - for editing images up to dimensions of 512 by 384 pixels (2 by 2 screens of ZX Spectrum).
Several printers and plotters were supported for printing the text:
plotter Minigraf Aritma 0507,
plotters XY 4140, XY 4150, XY 4160,
plotter Merkur Alfi,
1-pin dot matrix printer BT100,
printer Gamacentrum 01,
thermoprinter Robotron K6304,
9-pin dot matrix printers Epson FX, RX, LX, EX and compatible,
24-pin dot matrix printers Epson LQ and compatible.
The Ultra LQ driver was developed for better printing on Epson-compatible printers; it prints letters in a matrix of 16 by 24 points instead of the original 8 by 12 points. An Ultra BT driver for better printing on the BT100 printer existed too. These drivers were distributed as independent software packages.
A special version of Desktop with a driver for the D-100 printer existed, because the D-100 was not compatible with Epson printers. A version supporting the PRT 42G printer existed as well.
Four sets of supporting programs, drivers, fonts and images were developed and distributed under the names Klub uživatelů Desktopu 1 - 4 (in English, Desktop user club 1 - 4).
Klub uživatelů Desktopu 1 contains the following:
Archives - for printing an overview of font appearance,
driver BT100-552 - allows printing of up to 552 points per line instead of the standard 480 points per line (it requires a modified BT100 printer; an unmodified one can be damaged),
Fonteditor - keypad - font editor directly runnable from Desktop word processor, controlled by keyboard,
Fonteditor - Kempston - font editor directly runnable from Desktop word processor, controlled by Kempston joystick,
Insert+Cat - allows creating a text file from a diskette directory and, similarly, a list of files on cassette. Additionally, it can convert sequential .Q files into text files (.Q files are sequential files used by the diskette units Didaktik 40 and Didaktik 80).
Keyboard View - for printing the current keyboard letter placement,
Pulldown Menus - pull-down menus for Desktop,
20 fonts,
2 font complets - each complet (set) contains four fonts,
3 big fonts,
60 images,
3 sample texts.
Klub uživatelů Desktopu 2 is the driver Ultra BT.
Klub uživatelů Desktopu 3 contains the following:
Art Studio - a utility for modifying images (it is not OCP's graphics editor Art Studio),
Block operations,
Calculator - scientific calculator, the results can be inserted into text,
Keywords - for faster insertion of repeated parts of text,
Remaker - for simple operations on whole fonts; it does not allow operations on single characters,
Telefony - a game runnable directly from program Desktop,
19 fonts,
8 big fonts,
2 sample texts,
12 image sets (cliparts),
5 image fonts (for program Fonteditor),
2 big images (for program Screen Top).
Klub uživatelů Desktopu 4 are drivers Ultra LX and Ultra LQ.
Further supporting programs were distributed in the set Public 12 - pro Desktop (in English, Public 12 - for Desktop):
XY 4150 - a driver that does not simulate a 1-pin dot matrix printer, but draws the letters,
Great Font - for using Fonteditor's big fonts directly in Desktop,
Chess 1 - for creating chess positions and inserting them into text,
Chess 2 - for creating chess positions and inserting them into text,
Tetris for Desktop - a Tetris game runnable directly from the Desktop program.
All programs are now available to use for free.
External links
Desktop at World of Spectrum
Desktop at Spectrum Computing
Manual for Desktop at the website of journal ZX Magazín (in Czech)
Manual for Desktop at softhouse.speccy.cz (in Czech)
Manual for Ultra LX/LQ at softhouse.speccy.cz (in Czech)
Desktop's and Fonteditor's TTF Fonts at softhouse.speccy.cz (in Czech)
1993 software
ZX Spectrum software
Word processors
Institute for System Programming
The Institute for System Programming (ISP) of the Russian Academy of Sciences (RAS) was founded on January 25, 1994, on the basis of the departments of System Programming and Numerical Software of the Institute for Cybernetics Problems of the RAS. ISP RAS belongs to the Division of Mathematical Sciences of the RAS.
R and D groups
Compiler Technologies Department The department specializes in applying the compiler approach to different computer science fields, as well as modern optimizing compiler development and design. The first compiler projects started in the early 1980s. The recent research activity of the team is concentrated on parallel programming and reverse engineering.
Computing Systems Architecture Department The main directions of the department research activities have been connected with effective implementation of network architectures and hardware platforms for local and global networks.
Information Systems Department The main activities of the department: multi-user fully functional relational DBMS, CORBA-based technology for distributed information systems, XML-based technology for heterogeneous data integration, native XML database Sedna, text mining and information retrieval.
Software Development Tools Department The main direction is creation of tools supporting formal specification and modeling languages and easing the development process.
Software Engineering Department The spectrum of the scientific research of the department covers a broad range of Software Engineering, including analysis of programs and their models, verification and validation, standardization issues including development of open software standards, various aspects of development, maintenance and evolution of software together with methods of education and deployment of advanced technologies.
System Programming Department Research activities of the department lie in the area of program static analysis, excavation of architecture using program code and visualization of software architecture model, modelling of architecture and code generation using software model.
Theoretical Computer Science Department The members of the department are specialists in different branches of mathematics and theoretical computer science: combinatorics, complexity of computations, probabilistic methods, mathematical logic, formal methods of program analysis, logical programming, mathematical cryptography.
Councils
Academic council The main task of the council is coordination of research and scientific programs aimed on prioritization of new important directions.
Dissertation council Being a part of the Institute, Dissertation council D.002.087.01 considers applications for the scientific degrees of candidate and doctor of physical and mathematical, and technical sciences according to qualification standard 05.13.11 “Mathematical and program support for computers, their complexes, and networks”.
Centers
Verification Center of the Operating System Linux The mission of the Center is to propagate the Linux platform by ensuring its high reliability and compatibility through the use of open standards and advanced testing and verification technologies.
Center of competence in parallel and distributed computing The goal of the center is in significant increase of the usage of parallel and distributed computations in the areas of educational, research, and production activities of Russian organizations.
External links
Institute for System Programming
Company Profile at Linux Foundation
Verification Center of the Operating System Linux
Institutes of the Russian Academy of Sciences
SHAKTI (microprocessor)
SHAKTI is an open-source initiative by the Reconfigurable Intelligent Systems Engineering (RISE) group at the Indian Institute of Technology Madras to develop the first indigenous Indian industrial-grade processor. The aims of the SHAKTI initiative include building an open-source production-grade processor, complete systems on chips (SoCs), development boards and a SHAKTI-based software platform. The primary focus of the team is architecture research to develop SoCs that are competitive with commercial offerings in the market with respect to area, power and performance. All the source code for SHAKTI is open-sourced under the Modified BSD License. The project was funded by the Ministry of Electronics and Information Technology (MeITY), Government of India.
Processors
SHAKTI processors are based on the RISC-V ISA. The processors are based on 22 nm FinFET technology. SHAKTI has envisioned a family of processors as part of its road-map, catering to different segments of the market. They have been broadly categorized into "Base Processors", "Multi-Core Processors" and "Experimental Processors". The E and C-classes are the first set of indigenous processors aimed at Internet of Things (IoT), Embedded and Desktop markets. The processor design is free of any royalty and is open-sourced under the Modified BSD License.
The SHAKTI project aims to build 6 variants of processors based on the RISC-V ISA.
Base Classes Of Processors
E-class
The E-class is a family of 32/64-bit microcontrollers capable of supporting all extensions of the RISC-V ISA, aimed at low-power and low-compute applications. The E-class has an in-order 3-stage pipeline with an operational frequency of less than 200 MHz on silicon. It is positioned against ARM's M-class (Cortex-M series) cores. It is capable of running real-time operating systems like FreeRTOS, Zephyr and eChronos. Market segments for the E-class processor include smart cards, IoT devices, motor controls and robotic platforms.
E-arty35T is a SoC built around E-class. The E-arty35T SoC is a single-chip 32-bit E-class microcontroller with 128kB RAM. It has 32 General Purpose Input Output (GPIO) pins (out of which upper 16 GPIO pins are dedicated to onboard LEDs and switches), a Platform Level Interrupt Controller (PLIC), a Counter, 2 Serial Peripheral (SPI), 2 Universal Asynchronous Receiver Transmitter (UART), 1 Inter-Integrated Circuit (I2C), 6 Pulse Width Modulator (PWM) and an inbuilt Xilinx analog-to-digital converter (X-ADC).
C-class
The C-class is a 64-bit controller class of processor, aimed at mid-range embedded application. The core is highly optimized, 6-stage in-order design with MMU support and the capability to run operating systems like Linux and Sel4. It is extremely configurable with the support of the standard RV64GC ISA extensions. It targets mid-range compute systems running over 200-800 MHz. It can also be customized up to 2 GHz. It is positioned against ARM's Cortex A35/A55. The application domain of this class ranges from embedded systems, motor-control, IoT, storage, industrial applications to low-cost high-performance Linux based applications such as networking, gateways etc.
C-arty100T is a SoC built around the C-class. The C-arty100T SoC is a single-chip 64-bit C-class microcontroller with 128MB DDR3 RAM, 16 General Purpose Input Output (GPIO) pins, a Platform Level Interrupt Controller (PLIC), a Counter, 1 Universal Asynchronous Receiver Transmitter (UART) and 1 Inter-Integrated Circuit (I2C). It is aimed at mid-range application workloads with a very low power consumption and has support for optional memory protection.
I-class
The I-class is a 64-bit processor which targets compute, mobile, storage and networking platforms. Its features include out-of-order execution, multithreading, aggressive branch prediction, non-blocking caches and deep pipeline stages. The operational clock frequency of this processor is 1.5-2.5 GHz. The team is currently working on implementing atomics, memory dependence prediction, instruction window/scheduler optimizations, implementation of some functional units, performance analysis/projections, and optimizations to meet a first-cut target frequency of 1 GHz on a 22 nm process.
Multicore Processors
M-class
A mobile class processor with a maximum of eight cores, the cores being a combination of C and I class cores. The M-class processors are aimed at general-purpose compute, low-end server and mobile applications. The operation frequency ranges up to 2.5 GHz. It supports large issue size, quad-threaded and optional NoC fabric. The M-class processors are optimized for various power and performance targets.
S-class
The S-class is a 64-bit superscalar, multi-threaded variant aimed at desktop and enterprise server applications. It supports 2-16 cores with a clock frequency of about 1.2–3 GHz.
H-class
The H-class is a 64-bit processor aimed at highly parallel enterprise, HPC and analytics applications. The cores can be a combination of C or I class, single-thread performance driving the core choice. The H-class has up to 128 cores with multiple accelerators per core.
Experimental Processors
These are experimental/research projects which focus on developing a high security and fault tolerant processor.
T-class
The T-class is aimed to provide additional hardware support for securing information from memory-based attacks. Its design focuses on a unified hardware framework for mitigating spatial and temporal memory attacks.
F-class
The F-class is a fault-tolerant version of the base class processor. Features include redundant compute blocks (like DMR and TMR), temporal redundancy modules to detect permanent faults, lock-step core configurations, fault localization circuits, ECC for critical memory blocks and redundant bus fabrics.
Tapeouts
Two tapeouts of the C-class processors have been performed. They have been codenamed as RIMO and Rise-creek.
RIMO
RIMO is the code name of the SHAKTI C-class based SoC that has been taped-out at Semi-Conductor Laboratory (SCL) at Chandigarh using 180 nm process technology. The 144 sq.mm. chip has been tested to operate at a frequency of up to 70 MHz. The chip has been packaged on a 208-pin Ceramic Quad Flat Pack (CQFP).
Risecreek
CREEK is the code name of the SHAKTI C-class based SoC that has been taped-out at Intel's Oregon fab using a 22nm FinFET process. The 16mm² chip has been tested to operate at a frequency of up to 350 MHz. The chip has been packaged on a 208-pin Ball Grid Array (BGA).
Moushik
Moushik is the code name of the SHAKTI E-class based SoC that has been taped out at the Semi-Conductor Laboratory (SCL) at Chandigarh using 180 nm process technology. It operates at a frequency of 100 MHz and was developed alongside a motherboard called Ardonyx 1.0.
Features of RIMO and Risecreek
Some of the features of RIMO and Risecreek are as follows:
In-order 5-stage 64-bit microcontroller supporting the entire stable RISC-V ISA (RV64IMAFD).
Compatible with privilege spec (v1.10) of RISC-V ISA and supports the sv39 virtualisation scheme.
Includes a branch predictor with a Return-Address-Stack.
Pipelined IEEE-754 compliant single and double-precision floating point units and Multi-channel Direct Memory Access (DMA) support.
Peripherals like 2 x I2C, 2 x UART, 2 x QSPI, a Debugger, a 256KB tightly coupled memory, 32-bit GPIOs and an expansion bus that can be connected to an FPGA.
Development boards
There are development boards for both E and C class of processors. The details on the board support for different classes of processors are given below.
E-arty35T
E-arty35T is a SoC based on the SHAKTI E class [14].
E-arty35T is supported on the Artix 7 35T board.
It has an abridged version of the 32-bit E class. It includes the I, M, A and C extensions.
C-arty100T
C-arty100T is a SoC based on the SHAKTI C class.
C-arty100T is supported on the Artix 7 100T board.
It has an abridged version of the 64-bit C class. It includes the I, M, A, F, D and C extensions.
Commercial Support
From July 2021, Altair Engineering included the E-class processor in its embedded system firmware support portfolio for its global customers.
References
Microcontrollers
Technology companies of India
32-bit microprocessors
64-bit microprocessors
Gambas
Gambas is the name of an object-oriented dialect of the BASIC programming language, as well as the integrated development environment that accompanies it. Designed to run on Linux and other Unix-like computer operating systems, its name is a recursive acronym for Gambas Almost Means Basic. Gambas is also the word for prawns in the Spanish, French, and Portuguese languages, from which the project's logos are derived.
History
Gambas was developed by the French programmer Benoît Minisini, with its first release coming in 1999. Benoît had grown up with the BASIC language, and decided to make a free software development environment that could quickly and easily make programs with user interfaces.
The Gambas 1.x versions featured an interface made up of several different separate windows for forms and IDE dialogues in a similar fashion to the interface of earlier versions of the GIMP. It could also only develop applications using Qt and was more oriented towards the development of applications for KDE. The last release of the 1.x versions was Gambas 1.0.19.
The first of the 2.x versions was released on January 2, 2008, after three to four years of development. It featured a major redesign of the interface, now with all forms and functions embedded in a single window, as well as some changes to the Gambas syntax, although for the most part code compatibility was kept. It featured major updates to existing Gambas components as well as the addition of some new ones, such as new components that could use GTK+ or SDL for drawing or utilize OpenGL acceleration. Gambas 2.x versions can load up and run Gambas 1.x projects, with occasional incompatibilities; the same is true for Gambas 2.x to 3.x, but not from Gambas 1.x to 3.x.
The next major iteration of Gambas, the 3.x versions, was released on December 31, 2011. A 2015 benchmark published on the Gambas website showed Gambas 3.8.90 scripting as being faster to varying degrees than Perl 5.20.2 and the then-latest 2.7.10 version of Python in many tests. Version 3.16.0 released on April 20, 2021, featured full support for Wayland using the graphical components, as well as parity between the Qt 5 and GTK 3 components.
Features
Gambas is designed to build graphical programs using the Qt (currently Qt 4.x or 5.x since 3.8.0) or the GTK toolkit (GTK 3.x also supported as of 3.6.0); the Gambas IDE is written in Gambas. Gambas includes a GUI designer to aid in creating user interfaces in an event-driven style, but can also make command line applications, as well as text-based user interfaces using the ncurses toolkit. The Gambas runtime environment is needed to run executables.
Functionality is provided by a variety of components, each of which can be selected to provide additional features. Drawing can be provided either through Qt and GTK toolkits, with an additional component which is designed to switch between them. Drawing can also be provided through the Simple DirectMedia Layer (originally version 1.x, with 2.x added as of 3.7.0), which can also be utilized for audio playback through a separate sound component (a component for the OpenAL specification has also been added). GPU acceleration support is available through an OpenGL component, as well as other hardware functionally provided by various other components. There are also components for handling other specialized tasks.
With Gambas, developers can also use databases such as MySQL or PostgreSQL, build KDE (Qt) and GNOME GTK applications with DCOP, translate Visual Basic programs to Gambas and run them under Linux, build network solutions, and create CGI web applications. The IDE also includes a tool for the creation of installation packages, supporting GNU Autotools, slackpkg, pacman, RPM, and debs (the latter two then tailored for specific distributions such as Fedora/RHEL/CentOS, Mageia, Mandriva, OpenSUSE and Debian, Ubuntu/Mint).
Since version 3.2, the Gambas IDE has an integrated profiler, and Gambas started to use just-in-time compilation technology.
Differences from Visual Basic
Gambas is intended to provide a similar experience as developing in Microsoft Visual Basic, but it is not a free software clone of the popular proprietary program. The author of Gambas makes it clear that there are similarities to Visual Basic, such as syntax for BASIC programs and the integrated development environment; Gambas was written from the start to be a development environment of its own and seeks to improve on the formula.
Its object model, each class being represented in a file, as well as the archiver to package the program is all inspired by the Java programming language. Gambas is intended to be an alternative for former Visual Basic developers who have decided to migrate to Linux. There are also other important distinctions between Gambas and Visual Basic. One notable example is that in Gambas array indexes always start with 0, whereas Visual Basic indexes can start with 0 or 1. Gambas also supports the += and -= shorthand not found in classic Visual Basic. Both of these are features of Visual Basic .NET however.
Adoption
Several programs and many forms of example code have been written using and for Gambas. Freecode (formerly Freshmeat) listed 23 applications that were developed using Gambas, while the Gambas wiki listed 82; several other specialized sites list Gambas applications and code. A Gambas-written application, named Gambas3 ShowCase, acted as a software center to download or install Gambas 3 applications. It has since been discontinued following the launch of the first-party Gambas Software Farm integrated into the IDE since 3.7.1, which contains nearly 500 applications and demos. Several community sites, including community forums and mailing lists, also exist for Gambas. A notable application written in Gambas is Xt7-player-mpv, a GUI frontend for the mpv player contained in a number of Linux software repositories.
Availability
Gambas is included in the repositories of a number of Linux distributions, such as Debian, Fedora, Mandriva Linux and Ubuntu. A Microsoft Windows version of Gambas was run under the Cygwin environment, although this version was significantly less tested than its Linux counterparts and was command-line only; Cooperative Linux and derivatives have also been used, as well as specialized Linux virtual machines. An independent contributor, François Gallo, also worked on porting Gambas 3.x to Mac OS X and FreeBSD, based on using local versions of the X11 system. Gambas from version 3.2 can run on Raspberry Pi, and offers just-in-time compilation there from version 3.12.
In November 2013, the future portability of Gambas was discussed, listing the main concerns being Linux kernel features utilized in the interpreter, components using Linux specific software and libraries, and primarily X11-tying in the Qt, GTK and desktop integration components. However, partly due to the need to upgrade to newer toolkits such as GTK 3 (added as of 3.6.0) and Qt 5 (as of 3.8.0), future versions would be less X11 tied, making projects like Cygwin or utterly native versions on other platforms more possible. Benoît Minisini stated that he intended to "encapsulate" X11 specific code to aid in any attempt to replace it, with the X11 support in the desktop component moved to its own component as of 3.6.0.
On October 27, 2016, a screenshot and setup guide was released from the main page for running Gambas fully through Cygwin, including most components, graphical toolkits, and the complete IDE. The relevant patches were mainlined as of version 3.9.2. This replaces the prior recommended method of using freenx forwarding from a Linux server. It has also been successfully run using the Windows Subsystem for Linux; this is usually done using an X terminal emulator such as MobaXterm on Windows, as WSL does not support X11 graphics directly.
Example code
A "Hello, World!" program with graphical user interface.
Public Sub Main()
  Message("Hello, World!")
End
Program that computes a 100-term polynomial 500000 times, and repeats it ten times (used for benchmarking).
Private Sub Test(X As Float) As Float
  Dim Mu As Float = 10.0
  Dim Pu, Su As Float
  Dim I, J, N As Integer
  Dim aPoly As New Float[100]
  N = 500000
  For I = 0 To N - 1
    For J = 0 To 99
      Mu = (Mu + 2.0) / 2.0
      aPoly[J] = Mu
    Next
    Su = 0.0
    For J = 0 To 99
      Su = X * Su + aPoly[J]
    Next
    Pu += Su
  Next
  Return Pu
End

Public Sub Main()
  Dim I As Integer
  For I = 1 To 10
    Print Test(0.2)
  Next
End
See also
List of BASIC dialects
Comparison of integrated development environments#BASIC
GNAVI
Lazarus
References
Further reading
Mark Alexander Bain (Apr 28, 2006) An Introduction to Gambas, Linux Journal, issue 146, June 2006 (in print)
Mark Alexander Bain (Dec 3, 2004) Gambas speeds database development, Linux.com
Mark Alexander Bain (Dec 12, 2007) Creating simple charts with Gambas 2.0, Linux.com
Fabián Flores Vadell (Nov, 2010) How to Program with Gambas
External links
Gambas source code
Gambas Documentation
Gambas Mailinglist
Gambas Almost Means Basic
Gambas Magazine — Linux Software Development with Gambas
BASIC compilers
BASIC interpreters
BASIC programming language family
Free integrated development environments
Free software
Free software programmed in BASIC
Linux integrated development environments
Linux programming tools
Object-oriented programming languages
Procedural programming languages
Programming languages created in 1999
Self-hosting software
Software that uses Qt
Software using the GPL license
User interface builders
OProfile
In computing, OProfile is a system-wide statistical profiling tool for Linux. John Levon wrote it in 2001 for Linux kernel version 2.4 after his M.Sc. project; it consists of a kernel module, a user-space daemon and several user-space tools.
Details
OProfile can profile an entire system or its parts, from interrupt routines or drivers, to user-space processes. It has low overhead.
The most widely supported kernel mode of operation uses a system timer (see: Gathering profiling events). However, this mode is unable to measure kernel functions where interrupts are disabled. Newer CPU models support a hardware performance counter mode which uses hardware logic to record events without any active code needed. In Linux 2.2/2.4 only 32-bit x86 and IA64 are supported; in Linux 2.6 there is wider support: x86 (32 and 64 bit), DEC Alpha, MIPS, ARM, sparc64, ppc64, AVR32.
Call graphs are supported only on x86 and ARM.
In 2012 two IBM engineers recognized OProfile as one of the two most commonly used performance counter monitor profiling tools on Linux, alongside perf tool.
In 2021, OProfile is set to be removed from version 5.12 of the Linux kernel, with the user-space tools continuing to work by using the kernel's perf system.
User-space tools
opcontrol is used to start and stop the daemon, which collects profiling data. This data is periodically saved to the samples directory (by default under /var/lib/oprofile).
opreport shows basic profiling data. opannotate can produce annotated sources or assembly.
opgprof converts oprofile data into a gprof-compatible format.
Example:
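A minimal sketch of a typical session with the legacy opcontrol-based tools (assuming root privileges; my_workload is a hypothetical program name, and the exact flags can vary between OProfile versions):

opcontrol --init                      # load the oprofile kernel module
opcontrol --start                     # start the daemon and begin collecting samples
./my_workload                         # run the program to be profiled
opcontrol --dump                      # flush the collected samples to disk
opreport --symbols ./my_workload      # per-symbol breakdown of where time was spent
opcontrol --shutdown                  # stop the daemon

Newer OProfile releases replace this flow with operf, which profiles through the kernel's perf system instead of the oprofile kernel module.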
See also
List of performance analysis tools
References
External links
W. Cohen, Tuning programs with OProfile // Wide Open Magazine, 2004, pages 53–62
Prasanna Panchamukhi, Smashing performance with OProfile. Identifying performance bottlenecks in real-world systems // IBM DeveloperWorks, Technical Library, 16 Oct 2003
Justin Thiel, An Overview of Software Performance Analysis Tools and Techniques: From GProf to DTrace, (2006) "2.2.2 Overview of Oprofile"
Linux kernel features
Profilers
AN/UYK-7
The AN/UYK-7 was the standard 32-bit computer of the United States Navy for surface ship and submarine platforms, starting in 1970. It was used in the Navy's NTDS & Aegis combat systems and U.S. Coast Guard, and the navies of U.S. allies. It was also used by the U.S. Army. Built by UNIVAC, it used integrated circuits, had 18-bit addressing and could support multiple CPUs and I/O controllers (three CPUs and two I/O controllers were a common configuration). Its multiprocessor architecture was based upon the UNIVAC 1108. An airborne version, the UNIVAC 1832, was also produced.
In the mid-1980s, the UYK-7 was replaced by the AN/UYK-43 which shared the same instruction set. Retired systems are being cannibalized for repair parts to support systems still in use by U.S. and non-U.S. forces.
See also
AN/USQ-20 30-bit computer that the AN/UYK-7 replaced
AN/UYK-20 16-bit computer developed for navy projects that did not need the full power of the AN/UYK-7
CMS-2 (programming language)
References
External links
Description
Military computers
Equipment of the United States Navy
Military electronics of the United States
32-bit computers
Windows Phone 8
Windows Phone 8 is the second generation of the Windows Phone mobile operating system from Microsoft. It was released on October 29, 2012, and, like its predecessor, it features a flat user interface based on the Metro design language. It was succeeded by Windows Phone 8.1, which was unveiled on April 2, 2014.
Windows Phone 8 replaces the Windows CE-based architecture used in Windows Phone 7 with the Windows NT kernel found in Windows 8. Windows Phone 7 devices cannot run or update to Windows Phone 8, and new applications compiled specifically for Windows Phone 8 are not made available for Windows Phone 7 devices. Developers can make their apps available on both Windows Phone 7 and Windows Phone 8 devices by targeting both platforms via the proper SDKs in Visual Studio.
Windows Phone 8 devices are manufactured by Microsoft Mobile (formerly Nokia), HTC, Samsung and Huawei.
History
On June 20, 2012, Microsoft unveiled Windows Phone 8 (codenamed Apollo), a third generation of the Windows Phone operating system for release later in 2012. Windows Phone 8 replaces its previously Windows CE-based architecture with one based on the Windows NT kernel, and shares many components with Windows 8, allowing developers to easily port applications between the two platforms.
Windows Phone 8 also allows devices with larger screens (the four confirmed sizes are WVGA 800×480 15:9, WXGA 1280×768 15:9, 720p 1280×720 16:9, and 1080p 1920×1080 16:9) and multi-core processors, NFC (primarily used to share content and perform payments), backwards compatibility with Windows Phone 7 apps, improved support for removable storage (which now functions more similarly to how such storage is handled on Windows and Android), a redesigned home screen incorporating resizable tiles across the entire screen, a new Wallet hub (to integrate NFC payments, coupon websites such as Groupon, and loyalty cards), and "first-class" integration of VoIP applications into the core functions of the OS. Additionally, Windows Phone 8 includes more features aimed at the enterprise market, such as device management, BitLocker encryption, and the ability to create a private Marketplace to distribute apps to employees; these features were expected to meet or exceed the enterprise capabilities of the previous Windows Mobile platform. Windows Phone 8 also supports over-the-air updates, and all Windows Phone 8 devices receive software support for at least 36 months after their release.
In the interest of ensuring that it was released with devices designed to take advantage of its new features, Windows Phone 8 was not made available as an update for existing Windows Phone 7 devices. Instead, Microsoft released Windows Phone 7.8 as an update for Windows Phone 7 devices, which backported several features such as the redesigned home screen.
Addressing some software bugs with Windows Phone 8 forced Microsoft to delay some enterprise improvements, like VPN support, until the 2014 release of Windows Phone 8.1.
Support
In March 2013, Microsoft announced that updates for the Windows Phone 8 operating system would be made available through July 8, 2014. Microsoft later extended support to 36 months, announcing that updates for the Windows Phone 8 operating system would be made available through January 12, 2016. Windows Phone 8 devices are upgradeable to the next edition, Windows Phone 8.1.
Features
The following features were confirmed at Microsoft's 'sneak peek' at Windows Phone on June 20, 2012 and the unveiling of Windows Phone 8 on October 29, 2012:
Core
Windows Phone 8 is the first mobile OS from Microsoft to use the Windows NT kernel, which is the same kernel that runs Windows 8. The operating system adds improved file system, drivers, network stack, security components, media and graphics support. Using the NT kernel, Windows Phone can now support multi-core CPUs of up to 64 cores, as well as 1280×720 and 1280×768 resolutions, in addition to the base 800×480 resolution already available on Windows Phone 7. Furthermore, Windows Phone 8 also adds support for MicroSD cards, which are commonly used to add extra storage to phones. Support for 1080p screens was added in October 2013 with the GDR3 update.
Due to the switch to the NT kernel, Windows Phone 8 also supports native 128-bit BitLocker encryption and Secure Boot. Windows Phone 8 also supports NTFS due to this switch.
Web
Internet Explorer 10 is the default browser in Windows Phone 8 and carries over key improvements also found in the desktop version. The navigation interface has been simplified down to a single customizable button (defaults to stop / refresh) and the address bar. While users can change the button to a 'Back' button, there is no way to add a 'Forward' button. However, as the browser supports swipe navigation for both forwards and back, this is a minor issue.
Multitasking
Unlike its predecessor, Windows Phone 8 uses true multitasking, allowing developers to create apps that can run in the background and resume instantly.
A user can switch between "active" tasks by pressing and holding the Back button, but any application listed may be suspended or terminated under certain conditions, such as a network connection being established or battery power running low. An app running in the background may also automatically suspend, if the user has not opened it for a long duration of time.
The user can close applications by opening the multitasking view and pressing the "X" button in the right-hand corner of each application window, a feature that was added in Update 3.
Kids Corner
Windows Phone 8 adds Kids Corner, which operates as a kind of "guest mode". The user chooses which applications and games appear on the Kids Corner. When Kids Corner is activated, apps and games installed on the device can be played or accessed without touching the data of the main user signed into the Windows Phone.
Rooms
Rooms is a feature added specifically for group messaging and communication. Using Rooms, users can contact and see Facebook and Twitter updates only from members of the group created. Members of the group can also share instant messages and photos from within the room. These messages will be shared only with the other room members. Microsoft will be removing this feature sometime during March 2015.
Driving Mode
With the release of Update 3 in late 2013, pairing a Windows Phone 8 device with a car via Bluetooth now automatically activates "Driving Mode", a specialized UI designed for using a mobile device while driving.
Data Sense
Data Sense allows users to set data usage limits based on their individual plan. Data Sense can restrict background data when the user is near their set limit (a heart icon is used to notify the user when background tasks are being automatically stopped). Although this feature was originally exclusive to Verizon phones in the United States, the GDR2 update released in July 2013 made Data Sense available to all Windows Phone 8 handsets.
NFC and Wallet
Select Windows Phones running Windows Phone 8 add NFC capability, which allows for data transfer between two Windows Phone devices, or between a Windows Phone device, and a Windows 8 computer or tablet, using a feature called "Tap and Send".
In certain markets, NFC support on Windows Phone 8 can also be used to conduct in-person transactions through credit and debit cards stored on the phone through the Wallet application. Carriers may activate the NFC feature through SIM or integrated phone hardware. Orange will be first carrier to support NFC on Windows Phone 8. Besides NFC support for transactions, Wallet can also be used to store credit cards in order to make Windows Phone Store and other in-app purchases (that is also a new feature), and can be used to store coupons and loyalty cards.
Syncing
The Windows Phone app succeeds the Zune Software as the sync application to transfer music, videos, other multimedia files and office documents between Windows Phone 8 and a Windows 8/Windows RT computer or tablet. Versions for OS X and Windows Desktop are also available. Windows Phone 7 devices are not compatible with the PC version of the app, but will work with the Mac version. (Zune is still used for syncing Windows Phone 7s with PCs, and thus remains downloadable from the Windows Phone website.)
Due to Windows Phone 8 identifying itself as an MTP device, Windows Media Player and Windows Explorer may be used to transfer music, videos and other multimedia files unlike in Windows Phone 7. Videos transferred to a computer are limited to a maximum size of 4 GB.
Other features
Xbox SmartGlass allows control of an Xbox 360 and Xbox One with a phone (available for Windows Phone, iOS and Android).
Xbox Music+Video services support playback of audio and video files in Windows Phone, as well as music purchases. Video purchases were made available with the release of a standalone version of Xbox Video in late 2013 that can be downloaded from the Windows Phone Store.
Native code support (C++)
Toast notifications sent by apps and app developers using the Microsoft Push Notification Service.
Simplified porting of Windows 8 apps to Windows Phone 8 (compatibility with Windows 8 "Modern UI" apps)
Remote device management of Windows Phone similar to management of Windows PCs
VoIP and video chat integration for any VoIP or video chat app (integrates into the phone dialer, people hub)
Firmware over the air for Windows Phone updates
Minimum 36 month support of Windows Phone updates to Windows Phone 8 devices.
Camera app now supports "lenses", which allow third parties to skin and add features to camera interface.
Native screen capture is added by pressing home and power buttons simultaneously.
Hebrew language support was added, allowing Microsoft to introduce Windows Phone to the Israeli market.
Hardware specifications
Version history
Reception
Reviewers generally praised the increased capabilities of Windows Phone 8, but noted the smaller app selection when compared to other phones. Brad Molen of Engadget mentioned that "Windows Phone 8 is precisely what we wanted to see come out of Redmond in the first place," and praised the more customizable Start Screen, compatibility with Windows 8, and improved NFC support. However, Molen also noted the drawback of a lack of apps in the Windows Phone Store. The Verge gave the OS a 7.9/10 rating, stating that "Redmond is presenting one of the most compelling ecosystem stories in the business right now," but criticized the lack of a unified notifications center. Alexandra Chang of Wired gave Windows Phone 8 an 8/10, noting improvement in features previously lacking in Windows Phone 7, such as multi-core processor support, faster Internet browsing, and the switch from Bing Maps to Nokia Maps, but also criticized the smaller selection of apps.
Usage
IDC reported that in Q1 2013, the first full quarter where WP8 was available to most countries, Windows Phone market share jumped to 3.2% of the worldwide smartphone market, allowing the OS to overtake BlackBerry OS as the third largest mobile operating system by usage.
Roughly a year after the release of WP8, Kantar reported in October 2013 that Windows Phone grew its market share substantially to 4.8% in the United States and 10.2% in Europe. Similar statistics from Gartner for Q3 2013 indicated that Windows Phone's global market share increased 123% from the same period in 2012 to 3.6%.
In Q1 2014 IDC reported that global market share of Windows Phone has dropped to 2.7%.
See also
List of Windows Phone 8 devices
References
External links
Official website (Archive)
Windows Phone
Phone 8
Smartphones
TSS/8
TSS/8 is a discontinued time-sharing operating system co-written by Don Witcraft and John Everett at Digital Equipment Corporation in 1967. DEC also referred to it as Timeshared-8 and EduSystem 50.
The operating system ran on the 12-bit PDP-8 computer and was released in 1968.
Authorship
Architecture
Like IBM's CALL/OS, this timesharing system implemented language variants:
FORTRAN-D could only access 2 data files at a time, and the entire program was MAIN: no subroutines.
BASIC-8 programs were limited to 350 lines, but "chaining" allowed "programs of virtually any length." BASIC-8 was based on Dartmouth BASIC but lacked matrix operations, implicit declaration of small arrays, strings, ON-GOTO/GOSUB, TAB, and multiline DEF FN statements.
PAL-D (Program Assembly Language/Disk) allowed the "full standard" but, like all TSS/8 programs, was restricted to 4K.
ALGOL was implemented as a known standard subset, "IFIP Subset ALGOL 60."
It also supported DEC's FOCAL, which was "developed specifically for the PDP 8/E" and provided "an algebraic language" as well as a "desk calculator mode."
Historical notes
TSS/8 sold more than 100 copies.
Operating costs were about 1/20 of those of TSS/360. TSS/8 was also designed to be more cost-effective than the PDP-10 "for jobs with low computational requirements (like editing)".
The RSTS-11 operating system is a descendant of TSS/8.
References
DEC operating systems
Time-sharing operating systems
1968 software
Computer
A computer is a digital electronic machine that can be programmed to carry out sequences of arithmetic or logical operations (computation) automatically. Modern computers can perform generic sets of operations known as programs. These programs enable computers to perform a wide range of tasks. A computer system is a "complete" computer that includes the hardware, operating system (main software), and peripheral equipment needed and used for "full" operation. This term may also refer to a group of computers that are linked and function together, such as a computer network or computer cluster.
A broad range of industrial and consumer products use computers as control systems. Simple special-purpose devices like microwave ovens and remote controls are included, as are factory devices like industrial robots and computer-aided design systems, as well as general-purpose devices like personal computers and mobile devices like smartphones. Computers power the Internet, which links billions of other computers and users.
Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit (IC) chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (as predicted by Moore's law), leading to the Digital Revolution during the late 20th to early 21st centuries.
Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, along with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joystick, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.
Etymology
According to the Oxford English Dictionary, the first known use of computer was in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women.
The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean 'calculating machine' (of any type) is from 1897. The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine".
History
Pre-20th century
Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example.
The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
The Antikythera mechanism is believed to be the earliest mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.
Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels.
The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.
The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.
The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.
In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.
In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.
First computer
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
Analog computers
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson.
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).
Digital computers
Electromechanical
By 1938, the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.
Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.
In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22 bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), using a binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete.
Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, which was founded in 1941 as the first company with the sole purpose of developing computers.
Vacuum tubes and digital electronic circuits
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.
During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February.
Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of Boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.
The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls".
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.
Modern computers
Concept of modern computer
The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
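To make the idea concrete, the short Python sketch below simulates a tiny Turing machine; it is purely illustrative (the rule table is an arbitrary example, not one from Turing's paper), but it shows how the machine's entire behaviour is determined by a stored table of instructions rather than by fixed wiring.
# A minimal Turing machine simulator: the "program" is just data (a table of rules).
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if head < len(tape) else blank
        next_state, write, move = rules[(state, symbol)]
        if head == len(tape):
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
        state = next_state
    return "".join(tape)

# Example rule table: invert every bit on the tape, then halt at the first blank cell.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(rules, "10110"))  # prints 01001_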
Stored programs
Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.
The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was considered "small and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1. Grace Hopper was the first person to develop a compiler for a programming language.
The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951 and ran the world's first regular routine office computer job.
Transistors
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, and so give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorised computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell.
The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics.
Integrated circuits
The next great advance in computing power came with the advent of the integrated circuit (IC).
The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.
The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce.
Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Mohamed M. Atalla's work on semiconductor surface passivation by silicon dioxide in the late 1950s.
Modern monolithic ICs are predominantly MOS (metal-oxide-semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs.
The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel. In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip.
Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC; this is all done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power.
Mobile computers
The first mobile computers were heavy and ran from mains power. The IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s.
These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by System on a Chip (SoCs), which are complete computers on a microchip the size of a coin.
Types
Computers can be classified in a number of different ways, including:
By architecture
Analog computer
Digital computer
Hybrid computer
Harvard architecture
Von Neumann architecture
Complex instruction set computer
Reduced instruction set computer
By size, form-factor and purpose
Supercomputer
Mainframe computer
Minicomputer (term no longer used)
Server
Rackmount server
Blade server
Tower server
Personal computer
Workstation
Microcomputer (term no longer used)
Home computer
Desktop computer
Tower desktop
Slimline desktop
Multimedia computer (non-linear editing system computers, video editing PCs and the like)
Gaming computer
All-in-one PC
Nettop (Small form factor PCs, Mini PCs)
Home theater PC
Keyboard computer
Portable computer
Thin client
Internet appliance
Laptop
Desktop replacement computer
Gaming laptop
Rugged laptop
2-in-1 PC
Ultrabook
Chromebook
Subnotebook
Netbook
Mobile computers:
Tablet computer
Smartphone
Ultra-mobile PC
Pocket PC
Palmtop PC
Handheld PC
Wearable computer
Smartwatch
Smartglasses
Single-board computer
Plug computer
Stick PC
Programmable logic controller
Computer-on-module
System on module
System in a package
System-on-chip (Also known as an Application Processor or AP if it lacks circuitry such as radio circuitry)
Microcontroller
Hardware
The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and mice are all hardware.
History of computing hardware
Other hardware topics
A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
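As a rough software analogy for how circuits arranged as logic gates can control one another, the toy Python functions below model two gates and combine them into a half adder, which adds two bits and produces a sum bit and a carry bit (real gates are transistor circuits, not function calls; this is only an illustration).
# Toy models of logic gates operating on single bits (0 or 1).
def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    # Two gates together add a pair of bits: the result is (sum bit, carry bit).
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry={c}, sum={s}")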
Input devices
When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are:
Computer keyboard
Digital camera
Digital video
Graphics tablet
Image scanner
Joystick
Microphone
Mouse
Overlay keyboard
Real-time clock
Trackball
Touchscreen
Light pen
Output devices
The means through which a computer gives output are known as output devices. Some examples of output devices are:
Computer monitor
Printer
PC speaker
Projector
Sound card
Video card
Control unit
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer. Control systems in advanced computers may change the order of execution of some instructions to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.
The control system's function is as follows— this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:
Read the code for the next instruction from the cell indicated by the program counter.
Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
Increment the program counter so it points to the next instruction.
Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
Provide the necessary data to an ALU or register.
If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
Write the result from the ALU back to a memory location or to a register or perhaps an output device.
Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.
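The cycle described above can be sketched in a few lines of code. The following Python fragment simulates it for a hypothetical machine whose instruction set (three numeric opcodes plus a halt code) and memory contents are invented purely for illustration; it is not the instruction set of any real CPU.
# Hypothetical machine: memory holds (opcode, operand) pairs.
# Opcode 1 = load a value into the accumulator, 2 = add a value, 3 = jump, 0 = halt.
memory = [(1, 10), (2, 5), (2, 7), (0, 0)]
accumulator = 0
program_counter = 0

while True:
    opcode, operand = memory[program_counter]   # read the next instruction
    program_counter += 1                        # increment the program counter
    if opcode == 1:                             # decode the numeric code and execute it
        accumulator = operand
    elif opcode == 2:
        accumulator += operand
    elif opcode == 3:
        program_counter = operand               # a "jump": overwrite the program counter
    elif opcode == 0:
        break

print(accumulator)                              # prints 22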
Central processing unit (CPU)
The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor.
Arithmetic logic unit (ALU)
The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic.
Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
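A highly simplified model of the two classes of ALU operation is sketched below in Python; the operation names are arbitrary labels chosen for the example, and a real ALU is a combinational logic circuit rather than a function.
# Toy ALU: arithmetic operations produce numbers; comparisons and logic produce truth values.
def alu(operation, a, b):
    if operation == "add":
        return a + b
    if operation == "subtract":
        return a - b
    if operation == "greater_than":   # comparison returning a Boolean truth value
        return a > b
    if operation == "and":            # bitwise Boolean logic
        return a & b
    raise ValueError("unsupported operation")

print(alu("add", 2, 3))             # 5
print(alu("greater_than", 64, 65))  # False ("is 64 greater than 65?")
print(alu("and", 0b1100, 0b1010))   # 8, i.e. binary 1000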
Memory
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
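The cell-and-address picture, together with the two's complement convention for negative numbers, can be illustrated with a short Python sketch; the cell numbers are simply those used in the example above, and the list of cells stands in for real memory hardware.
# Memory modelled as a list of numbered cells, each holding one byte (0 to 255).
memory = [0] * 4096

memory[1357] = 123                           # "put the number 123 into the cell numbered 1357"
memory[2468] = 45
memory[1595] = memory[1357] + memory[2468]   # add cell 1357 to cell 2468, answer into cell 1595
print(memory[1595])                          # 168

# Two's complement: the same byte value can be read as either 251 or -5.
value = 251
signed = value - 256 if value >= 128 else value
print(signed)                                # -5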
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties:
random-access memory or RAM
read-only memory or ROM
RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted to applications where high speed is unnecessary.
In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
Input/output (I/O)
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.
Multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.
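Cooperative time-slicing can be imitated in ordinary code. The sketch below interleaves two "programs" by letting each run for one step before switching to the other; it is an analogy only, since real operating systems rely on hardware interrupts rather than Python generators.
# Each "program" yields control after every step, imitating a time slice.
def count(name, steps):
    for i in range(1, steps + 1):
        print(f"{name}: step {i}")
        yield                      # give up the processor so another program can run

def scheduler(programs):
    # Round-robin: run each program for one slice, and drop it once it finishes.
    while programs:
        program = programs.pop(0)
        try:
            next(program)
            programs.append(program)
        except StopIteration:
            pass

scheduler([count("A", 3), count("B", 2)])   # output alternates: A, B, A, B, A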
Multiprocessing
Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
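The following Python sketch shows one simple way work can be distributed across several processor cores using the standard multiprocessing module; the workload and the split into four chunks are arbitrary choices made for the example.
from multiprocessing import Pool

def partial_sum(bounds):
    low, high = bounds
    return sum(range(low, high))

if __name__ == "__main__":
    # Split the range 0..999,999 into four chunks and sum each chunk in a separate process.
    chunks = [(0, 250_000), (250_000, 500_000), (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        results = pool.map(partial_sum, chunks)
    print(sum(results))   # the same answer as sum(range(1_000_000))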
Software
Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other, and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware".
Languages
There are thousands of different programming languages—some intended for general purpose, others useful for only highly specialized applications.
Programs
The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.
Stored program architecture
This section applies to most common RAM machine–based computers.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:
begin:
addi $8, $0, 0 # initialize sum to 0
addi $9, $0, 1 # set first number to add = 1
loop:
slti $10, $9, 1001 # check if the number is still less than or equal to 1000
beq $10, $0, finish # if the number is greater than 1000, exit the loop
add $8, $8, $9 # update sum
addi $9, $9, 1 # get next number
j loop # repeat the summing process
finish:
add $2, $8, $0 # put sum in output register
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.
Machine code
In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers, it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
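A toy assembler makes the relationship concrete. In the sketch below, both the mnemonics and the numeric opcodes are invented for illustration and do not belong to any real CPU; each mnemonic is simply looked up in a table and replaced by its number, so the assembled program is nothing but a list of numbers.
# Invented opcode table for a hypothetical machine (not a real instruction set).
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "JUMP": 4, "HALT": 0}

def assemble(source):
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, *operands = line.split()
        machine_code.append(OPCODES[mnemonic])         # the mnemonic becomes a number
        machine_code.extend(int(x) for x in operands)  # operands are already numbers
    return machine_code

program = """
LOAD 10
ADD 32
STORE 200
HALT
"""
print(assemble(program))   # [1, 10, 2, 32, 3, 200, 0]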
Programming language
Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.
Low-level languages
Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC. Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.
High-level languages
Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
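For comparison with the assembly-language listing shown earlier, adding the numbers from 1 to 1,000 takes a single line in a high-level language such as Python; a compiler or interpreter translates such a statement into many machine instructions resembling the earlier example.
# Sum the integers from 1 to 1,000 (the same task as the MIPS example above).
total = sum(range(1, 1001))
print(total)   # 500500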
Program design
Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies.
The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.
Bugs
Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.
Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
Networking and the Internet
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. The technologies that made the Arpanet possible spread and evolved.
In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
Unconventional computers
A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer, the modern definition of a computer is literally: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." Any device which processes information qualifies as a computer, especially if the processing is purposeful.
Future
There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.
Computer architecture paradigms
There are many types of computer architectures:
Quantum computer vs. Chemical computer
Scalar processor vs. Vector processor
Non-Uniform Memory Access (NUMA) computers
Register machine vs. Stack machine
Harvard architecture vs. von Neumann architecture
Cellular architecture
Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.
Artificial intelligence
A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial intelligence based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation and the emerging field of on-line marketing.
Professions and organizations
As the use of computers has spread throughout society, there are an increasing number of careers involving computers.
The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
See also
Glossary of computers
Computability theory
Computer security
Glossary of computer hardware terms
History of computer science
List of computer term etymologies
List of fictional computers
List of pioneers in computer science
Pulse computation
TOP500 (list of most powerful computers)
Unconventional computing
References
Notes
External links
Warhol & The Computer
Consumer electronics
Articles containing video clips
Articles with example code
Electronics industry | Operating System (OS) | 1,231 |
Navigation system
A navigation system is a computing system that aids in navigation. Navigation systems may be entirely on board the vehicle or vessel that the system is controlling (for example, on the ship's bridge) or located elsewhere, making use of radio or other signal transmission to control the vehicle or vessel. In some cases, a combination of these methods is used.
Navigation systems may be capable of one or more of:
containing maps, which may be displayed in human-readable format via text or in a graphical format
determining a vehicle or vessel's location via sensors, maps, or information from external sources
providing suggested directions to a human in charge of a vehicle or vessel via text or speech
providing directions directly to an autonomous vehicle such as a robotic probe or guided missile
providing information on nearby vehicles or vessels, or other hazards or obstacles
providing information on traffic conditions and suggesting alternative directions
simultaneous localization and mapping
acoustic positioning for underwater navigation
The first in-car navigation system available to consumers, released in 1985, was called Etak Navigation. The company, Etak, was led by engineer Stan Honey and incubated by Nolan Bushnell's Catalyst Technologies in Silicon Valley. Etak held a number of patents and produced digitized maps for the navigation system. The maps were streamed to the navigation system from special tape cassettes. The early digitized maps turned out to be more valuable than the navigation system. The car icon used in the Etak Navigation display was a vector-based graphic based on Atari, Inc.'s Asteroids spaceship.
Types of navigation systems
Automotive navigation system
Marine navigation systems using sonar
Satellite navigation system
Global Positioning System, a group of satellites and computers that can provide information on any person, vessel, or vehicle's location via a GPS receiver
GPS navigation device, a device that can receive GPS signals for the purpose of determining the device's location and possibly to suggest or give directions
GLONASS, satellite navigation system run by Russia
Galileo global navigation satellite system
IRNSS, a regional satellite navigation system run by India
Surgical navigation system, a system that determines the position of surgical instruments in relation to patient images such as CT or MRI scans.
Inertial guidance system, a system which continuously determines the position, orientation, and velocity (direction and speed of movement) of a moving object without the need for external reference
Robotic mapping, the methods and equipment by which an autonomous robot is able to construct (or use) a map or floor plan and to localize itself within it
XNAV for deep space navigation
See also
Positioning system
Guidance, navigation and control
Guidance system
References
Navigation
Navigational equipment | Operating System (OS) | 1,232 |
Fuduntu
Fuduntu Linux was a Fedora-based Linux distribution created by Andrew Wyatt. It was actively developed between 2010 and 2013. It was designed to fit in somewhere between Fedora and Ubuntu. It was notable for providing a 'classic' desktop experience. Although it was optimized for netbooks and other portable computers, it was a general-purpose OS.
History
After forking Fedora 14 in early November 2011, Fuduntu became an independent distribution and was no longer considered a "remix" of Fedora; it did not qualify as a "spin" because it contained packages not included in Fedora.
At a team meeting held on 14 April 2013, it was decided that Fuduntu would discontinue development and that no new versions would be released. Large parts of the team planned to work on a new, rebased OS. The wider ecosystem's move to GTK 3 and systemd was also a factor, as Fuduntu used GTK 2 and was not based on systemd.
Post development
On 28 April 2013, the Fuduntu website officially announced that the project had come to an end and that users might want to switch to the new project, Cloverleaf Linux, which was based on openSUSE instead.
In late August, the development team decided to discontinue Cloverleaf due to a lack of manpower. Other concerns included image leaks and other issues regarding the source code for KDE's upcoming lightweight desktop environment, KLyDE, which in turn was supposed to be used as Cloverleaf's default desktop environment.
Features
As Fuduntu was originally targeted at the Asus Eee PC and other netbooks, it contained tweaks to reduce power consumption. These included moving the /tmp and /var/log directories to a RAM disk and reducing swappiness to 10, to reduce the frequency of disk spin-up. Fuduntu also included the Jupiter power management applet (also developed by Andrew Wyatt) for easy adjustment of CPU performance settings, screen output and resolution, etc.
The default packages include Nautilus Elementary, Adobe Flash, the Fluendo MP3 Codec, VLC, Infinality Freetype, LibreOffice, and the nano editor.
Look and Feel
The icon theme used is called Faenza Cupertino. It features the distinctive square icon set of Faenza. Because it was originally based on Fedora and Ubuntu, Fuduntu used the GNOME desktop environment, giving the opportunity to change themes, including window frames.
It aimed for a 'classic' desktop experience, as opposed to a 'mobile' experience.
See also
Ubuntu GNOME
References
External links
Fuduntu listing on Distrowatch
RPM-based Linux distributions
X86-64 Linux distributions
Linux distributions | Operating System (OS) | 1,233 |
University of the East College of Computer Studies and System
The University of the East College of Computer Studies and Systems pioneered the offering of a baccalaureate degree in Computer Science in the University Belt area starting in 1988. Presently, the Commission on Higher Education (CHED) has identified the University of the East as a Center of Excellence in Information Technology Education.
History
As early as 1984, the University started offering computer courses built into the curriculum of the BS in Business Administration program, both as credit-earning subjects as well as non-degree computer courses.
In 1986, the CCSS was known as the Computer Institute for Studies and Systems (CISS). Its initial offerings up to 1987 were non-degree computer training programs conducted in consortium with the University of the Philippines. Nati C. San Gabriel was the Director of the CISS and later the Dean when the Institute became a College.
After Dean San Gabriel’s retirement in 1997, Presidio R. Calumpit Jr., who was crucial to the transformation of the CISS into the CCSS, became the second CCSS Dean in May 1997. An Economics graduate of CAS Manila and a Master of Science in Computer Science graduate of the Ateneo de Manila University, Dean Calumpit holds the CCSS deanship to this day, along with being the National President of the Philippine Society of Information Technology Educators Foundation Inc. (PSITE) and a Member of the Board of Directors of the Philippine Computer Society (PCS), both for SY 2004-2005.
The UE Management has been supportive in updating physical facilities since the first semester SY 1998-1999, with the major renovation of the CCSS Academic Building. Today the College has five computer laboratory rooms at the ground floor and four at the Panfilo O. Domingo Center for IT Building.
The university migrated from ATM to Gigabit Ethernet to support growing online requirements, expanding online learning capabilities and improving administrative functionality at its Manila campus with an Ethernet networking solution from Nortel Networks. The upgrade significantly improved the speed and performance of student and faculty access to online learning resources and academic records. It also supports the University’s plan for a unified communications network ultimately linking the Manila facility with campuses in Caloocan and Quezon City.
Department Profile
The college curriculum spans computer programming, computer organization, computer systems, data structures and algorithms, file processing, programming languages, database systems, software engineering, artificial intelligence and computer networks.
Besides the four-year Bachelor of Science in Computer Science (BSCS) program, and in response to the needs of the industry, the College has added three other courses: a two-year degree program in Associate in Computer Technology (ACT) in June 1998, a four-year degree program in Bachelor of Science in Information Technology (BSIT) in June 2000, and a four-year program in Bachelor of Science in Information Management (BSIM) in June 2004.
Curricular Offerings
Bachelor of Science in Computer Science (BSCS) Level III accreditation by PACUCOA
Bachelor of Science in Information Technology (BSIT)
Bachelor of Science in Information System (BSIS)
Bachelor Of Science in Entertainment and Multimedia Computing (BSEMC)
With Specializations in Digital Animation and Game Development
Associate in Computer Technology (ACT)
Graduate Degree
Master in Information Management (MIM)
Master in Information System (MIS)
Center of Excellence for IT Education
The Commission on Higher Education (CHED) in cooperation with the Technical panel for Information Technology Education (TPITE) identified the University of the East as a Center of Development in Information Technology Education.
The status of Center of Development (COD) in Information Technology is one of the projects of CHED, whose main objective is to give recognition to higher education institutions (HEIs), public and private alike, that offer computer courses and have demonstrated the highest standards in the areas of instruction, research and extension.
Academic resources
College Library
Internet Laboratory
Network Laboratory
Multimedia Laboratory
Electronics Laboratory
Language Laboratory
Educational Affiliations
The CCSS provides education to its students especially through the College’s tie-ups with three global education partners:
Java Education and Development Initiative (JEDI) - JEDI is a collaborative project that aims to make high-quality, industry-endorsed IT and Computer Science course material available for free. The course materials are developed with inputs from industry and conform to international education standards. JEDI materials and resources are developed, used and enhanced in a collaborative environment using Java.net. JEDI is a project of Sun Microsystems, Inc. through the University of the Philippines Java Research and Development Center and in partnership with various groups from the Education and Industry Sectors.
Microsoft Corporation - the CCSS has three partnership agreements under the
Microsoft IT Academy Program (MSITA)
Microsoft Developer Network Academic Alliance (MSDNAA)
Microsoft Learning (MSL).
Oracle Corporation - via the Oracle University and Oracle Academic Initiative (OU-OAI) programs
Pearson Education Asia
See also
University of the East
References
University of the East http://www.ue.edu.ph
Computer Studies and Systems | Operating System (OS) | 1,234 |
Barbarians Led by Bill Gates
Barbarians Led by Bill Gates: Microsoft from the Inside is a book that was jointly written by Jennifer Edstrom and Marlin Eller, an American programmer who was a manager and a software developer at Microsoft Corporation from 1982 to 1995, and development lead for the Graphics Device Interface (GDI) of Windows 1.0 and also for Pen Windows. Written as a third-person account of Eller's experiences at Microsoft, it goes into detail about the early years of Microsoft and its emergence as a massive corporation.
Two chapters of the book deal specifically with the business contacts between Microsoft and GO Corporation. In April 2008, as part of a larger federal court case, the gesture features of the Windows/Tablet PC operating system and hardware were found to infringe on a patent by GO Corp. concerning gesture interfaces in operating systems for portable computers.
References
External links
The first chapter of Barbarians Led by Bill Gates at washingtonpost.com
1998 non-fiction books
Books about computer and internet entrepreneurs
History of Microsoft | Operating System (OS) | 1,235 |
Sales force management system
Salesforce management systems (also sales force automation systems (SFA)) are information systems used in customer relationship management (CRM) marketing and management that help automate some sales and sales force management functions. They are often combined with a marketing information system, in which case they are often called CRM systems.
An SFA, typically a part of a company's CRM system, is a system that automatically records all the stages in a sales process. SFA includes a contact management system which tracks all contact that has been made with a given customer, the purpose of the contact, and any follow up that may be needed. This ensures that sales efforts are not duplicated, reducing the risk of irritating customers. SFA also includes a sales lead tracking system, which lists potential customers through paid phone lists, or customers of related products. Other elements of an SFA system can include sales forecasting, order management and product knowledge. More developed SFA systems have features where customers can actually model the product to meet their needs through online product building systems. This is becoming popular in the automobile industry, where patrons can customize various features such as color and interior features such as leather vs. upholstered seats.
An integral part of any SFA system is company-wide integration among different departments. If SFA systems are not adopted and properly integrated across all departments, there might be a lack of communication, which could result in different departments contacting the same customer for the same purpose. To mitigate this risk, SFA must be fully integrated across all departments that deal with customer service management.
Building a dynamic sales force links strategy and the operational actions that can take place within a department. The SFA relies on objectives, plans, budget, and control indicators under specific conditions. In order to achieve the objectives correctly, specific procedures must be implemented:
Identifiable sales force management processes
Setting targets and objectives based on inputs (usually via a command center)
Assigning factors responsible for achieving objectives
Control processes for ensuring objectives are being achieved within
a given time frame
a given constrained context (customers and/or markets)
System management to handle uncertain environments
The process usually starts with specific sales targets. The command center analyzes the inputs and outputs established from a modeled control process and the sales force. The control process enables the sales force to establish performance standards, measuring actual performance, comparing measured performance against established standards and taking corrective action. The sales managers adjust their actions based on the overall process.
Aside from the control process, the following metrics are implemented:
Time management – Accurately measures the tasks and the fraction of time needed for each task.
Call management – Plan for customer interaction accounts for the fraction of command center reps that comply with the process and have successful calls.
Opportunity management – If the process is followed correctly then a sales opportunity exists. The fraction of command center reps that use the tools, comply with the objective are all measured.
Account management – For multiple opportunities with a customer the account is measured by the tools, process, and objectives.
Territory management – For monitoring the account, the territory is measured by the number of account reps and prospective versus active customers.
Sales force management – Process includes training, IT systems, control, coaching, and is shared across several people and departments.
Five major activities are involved in staffing a sales force, and they can be divided into related steps. The first step is to plan the recruiting and selection process. The responsibilities associated with this step are generally assigned to top sales executives, the field sales manager or the human resources manager. The company must determine the number and type of people needed, which involves analyzing the market and the job and preparing a written job description; the qualifications required to fill the job must also be established. Second, the recruiting phase includes identifying sources of recruits that are consistent with the type of person desired, selecting the source to be used and contacting the recruits. The options have to be weighed and their potential effectiveness evaluated against their costs. Third, the most qualified applicants are selected. The selection phase has three steps: designing a system for measuring the recruits against the qualifications specified in the planning phase, putting that system into effect with the new applicants, and making the actual selection. The fourth activity is to hire the people who have been selected. Making an offer does not mean the job is done; the recruit must be convinced that the position offers everything they need and want before they will join the company, or at least consider it. The fifth activity is to assimilate the new hires into the company. This is done by placing them under the direction of an employee in the firm, possibly giving them a mentor to help them feel comfortable working in the firm, and putting them through the training programs.
Components of sales-force automation systems
Sales-force automation systems vary in their capabilities, depending on what information an organization needs. The application also has implications based on an organization's size, organizational structure, demand for the new system, sales processes, and number of users.
Depending on requirements, services can fall into one of two categories:
on-premises software
on-demand (hosted) software
With on-premises software, the customer purchases and manages the application. On-premises software has advantages and disadvantages. The disadvantages are the higher cost of the software, along with maintenance; customization is also needed for those who use additional processes outside of the normal out-of-the-box solution. Time is also a factor: many on-premises implementations take longer and require numerous testing and training sessions. The overall advantage of on-premises software relates to return on investment, as using the application for three to five years becomes more cost-effective. Another advantage may depend on the amount of data: with on-demand software, certain volume restrictions apply, but with on-premises software, data restrictions are based on the storage size of local hardware.
CRM is a mechanism that manages all the data about a company's customers, clients and other business partners in a single container. CRM combined with cloud computing allows businesses to keep track of their customers from anywhere.
Several tools can aid in automating sales activities. The largest vendors are Salesforce.com, Microsoft Dynamics CRM, SAP AG and Oracle.
Mobile sales force automation application
Many sales managers are always on the go. The growth of smartphones has reignited the creation of mobile sales force automation systems. Most companies' IT departments are aware that adopting new capabilities requires extensive testing. Despite the time needed to test such a new product, it will pay off in the future for the sales department. Smartphones appeal to salespeople because they are easy to carry and easy to use, offer an appealing interface design, touchscreens and fast wireless network capabilities. More than 55% of Global 2000 organizations were expected to deploy mobile SFA projects by 2011, and newer smartphone platforms, such as Apple's iOS and Google's Android, point to a future of increasing diversity in device selection and support for sales forces. When implementing a mobile sales force automation application, or during the first stage of the systems development life cycle, project teams will need to evaluate how prospective solutions comprising mobile devices, software, support infrastructure and carrier services are packaged to deliver optimal system usability, manageability and integration, as well as scalability, reliability and performance.
Encouraging use
Many organisations have found it difficult to persuade sales people to enter data into the system. For this reason many have questioned the value of the investment. Recent developments have embedded sales process systems that give something back to the seller within the CRM screens. Because these systems help the sales person plan and structure their selling in the most effective way, increasing productivity, they give a reason to use the CRM.
See also
:Category:Customer relationship management software
Information technology management
Predictive analytics
Sales Management Systems by Microsoft
References
Sources
Customer relationship management
Information systems
Personal selling | Operating System (OS) | 1,236 |
Macintosh SE/30
The Macintosh SE/30 is a personal computer designed, manufactured and sold by Apple Computer from January 1989 to October 1991. It is the fastest of the original black-and-white compact Macintosh series.
The SE/30 has a black-and-white monitor and a single Processor Direct Slot (rather than the NuBus slots of the IIx, with which the SE/30 shares a common architecture) which supported third-party accelerators, network cards, or a display adapter. The SE/30 could expand up to 128 MB of RAM (a significant amount of RAM at the time), and included a 40 or 80 MB hard drive. It was also the first compact Mac to include a 1.44 MB high density floppy disk drive as standard (late versions of the SE had one, but earlier versions did not). The power of the SE/30 was demonstrated by its use to produce the This Week newspaper, the first colour tabloid newspaper in the UK to use new, digital pre-press technology on a personal, desktop computer. In keeping with Apple's practice, from the Apple II+ until the Power Macintosh G3 was announced, a logic board upgrade was available for US$1,699 to convert a regular SE to an SE/30. The SE would then have exactly the same specs as an SE/30, with the difference only in the floppy drive if the SE had an 800 KB drive. The set included a new front bezel to replace the original SE bezel with that of an SE/30.
This machine was followed in 1991 by the Macintosh Classic II, which, despite the same processor and clock speed, was only 60% as fast as the SE/30 due to its 16-bit data path, supported no more than 10 MB of memory, lacked an internal expansion slot, and made the Motorola 68882 FPU an optional upgrade.
Hardware
Although it uses 32-bit instructions, the SE/30 ROM, like the IIx ROM, includes some code using 24-bit addressing, rendering the ROM "32-bit dirty". This limited the actual amount of RAM that can be accessed to 8 MB under System 6.0.8. A system extension called MODE32 enables access to installed extra memory under System 6.0.8. Under System 7.0 up to System 7.5.5 the SE/30 can use up to 128 MB of RAM. Alternatively, replacing the ROM SIMM with one from a Mac IIsi or Mac IIfx makes the SE/30 "32-bit clean" and thereby enables use of up to 128 MB RAM and System 7.5 through OS 7.6.1.
A standard SE/30 can run up to System 7.5.5, since Mac OS 7.6 requires a "32-bit clean" ROM.
Additionally, the SE/30 can run A/UX, Apple's older version of Unix that was able to run Macintosh programs.
Though there was no official upgrade path for the SE/30, several third-party processor upgrades were available. A 68040 upgrade made it possible to run Mac OS 8.1, which extended the SE/30's productive life for many more years. The Micron Technology Xceed Gray-Scale 30 video card fit into the SE/30's Processor Direct Slot, enabling it to display greyscale video on its internal display, the only non-color compact Mac able to do so.
Models
Macintosh SE/30: Available in multiple configurations.
: 1 MB RAM, No hard disk
: 1 MB RAM, 40 MB hard disk
: 4 MB RAM, 80 MB hard disk
Reception
Bruce F. Webster wrote in Macworld in March 1989 that the SE/30 did not "break new ground. It does, however, establish Apple's commitment to the classic Mac product line, and it provides users with an Apple-supported alternative to either a small, slow Mac or a large, powerful one. More important, it fills a gap in the Macintosh family ... a new level of power and portability for the Macintosh community".
In a January 2009 Macworld feature commemorating the 25th anniversary of the Macintosh, three industry commentators – Adam C. Engst of TidBITS, John Gruber of Daring Fireball, and John Siracusa of Ars Technica – chose the SE/30 as their favorite Mac model of all time. "Like any great Mac," wrote Gruber, "the SE/30 wasn't just a terrific system just when it debuted; it remained eminently usable for years to come. When I think of the original Mac era, the machine in my mind is the SE/30."
The SE/30 remains popular with hobbyists, and has been described as “the best computer Apple will ever make,” with used models selling for a significant premium relative to other machines of the era. Contemporary PDS upgrades allowed an SE/30's internal monitor to be upgraded to support 256 shades of gray (the only original-design Macintosh to support such an upgrade) or a 68040 processor, and the SE/30's standard RAM limit of 128MB greatly exceeded even that of much later models such as the Color Classic and Macintosh LC II. In 2018, add-ons and software became available to add WiFi and even make the SE/30 work as a remote control for Spotify.
In popular culture
In the NBC TV series Seinfeld, Jerry has an SE/30 sitting on his desk during the first seasons. This would be the first of many Macs to occupy the desk, including a PowerBook Duo and a Twentieth Anniversary Macintosh.
In the FX series It's Always Sunny in Philadelphia, the Waitress is seen with a Macintosh SE/30 on her bedroom desk in the episode "The Gang Gives Back".
In the film Watchmen, Ozymandias has an all-black TEMPEST-shielded SE/30 on his desk.
References
SE 30
SE 30
SE 30
Computer-related introductions in 1989
sv:Macintosh SE#SE/30 | Operating System (OS) | 1,237 |
Emmabuntüs
Emmabuntüs is a Linux distribution derived from Ubuntu/Debian and designed to facilitate the refurbishing of computers donated to humanitarian organizations like the Emmaüs Communities.
The name Emmabuntüs is a portmanteau of Emmaüs and Ubuntu.
Features
This Linux distribution can be installed, in its entirety, without an Internet connection as all of the required packages are included within the disk image. The disk image includes packages for multiple languages and also optional non-free codecs that the user can choose whether to install or not.
One gigabyte of RAM is required for the distribution.
An installation script automatically performs some installation steps (the user name and password are predefined). The script allows the user to choose whether or not to install non-free software and whether to uninstall unused languages to reduce the volume of updates.
Emmabuntüs includes browser plug-ins for data privacy.
There are three docks to choose from to simplify access to the software; they are defined by the type of user (children, beginners and "all").
Desktop environment
The desktop environment is Xfce with Cairo-Dock. LXDE is also included and can be optionally installed.
Applications
Multiple applications are installed that perform the same task in order to provide a choice for each user that uses the system. Here are some examples:
Firefox web browser with some plug-ins and extensions: Flash Player, uBlock Origin, Disconnect, HTTPS Everywhere
E-mail readers: Mozilla Thunderbird
Instant messaging: Pidgin, Skype, Jitsi
Transfer tools: FileZilla, Transmission
Office: AbiWord, Gnumeric, HomeBank, LibreOffice, LibreOffice for schools, Kiwix, Calibre, Scribus
Audio: Audacious Media Player, Audacity, Clementine, PulseAudio, Asunder
Video: Kaffeine, VLC media player, guvcview, Kdenlive, HandBrake
Photo: Nomacs, Picasa, GIMP, Inkscape
Burning: Xfburn
Games: PlayOnLinux, SuperTux, TuxGuitar
Genealogy: Ancestris
Education: GCompris, Stellarium, TuxPaint, TuxMath, Scratch
Utilities: GParted, TeamViewer, Wine, CUPS
Releases
See also
List of Linux distributions § Ubuntu-based
References
External links
Emmabuntüs – A Distro Tailor-made For Refurbished Computers
Linux Voice 2 : Linux for humanitarians
Linux Format 216 : A Distro for All Seasons
Full Circle 128 : Review Emmabuntüs DE2 - Stretch 1.00
LinuxInsider : Emmabuntüs Is a Hidden Linux Gem
Debian
Debian-based_distributions
Ubuntu derivatives
X86-64_Linux_distributions
Operating system distributions bootable from read-only media
Linux distributions | Operating System (OS) | 1,238 |
Windows for Pen Computing
Windows for Pen Computing is a software suite for Windows 3.1x, that Microsoft designed to incorporate pen computing capabilities into the Windows operating environment. Windows for Pen Computing was the second major pen computing platform for x86 tablet PCs; GO Corporation released their operating system, PenPoint OS, shortly before Microsoft published Windows for Pen Computing 1.0 in 1992.
The software features of Windows for Pen Computing 1.0 include an on-screen keyboard, a notepad program for writing with the stylus, and a program for training the system to respond accurately to the user's handwriting. Microsoft included Windows for Pen Computing 1.0 in the Windows SDK, and the operating environment was also bundled with compatible devices.
Microsoft published Windows 95 in 1995, and later released Pen Services for Windows 95, also known as Windows for Pen Computing 2.0, for this new operating system. Windows XP Tablet PC Edition superseded Windows for Pen Computing in 2002. Subsequent Windows versions, such as Windows Vista and Windows 7, supported pen computing intrinsically.
See also
Windows Ink Workspace
References
External links
The Unknown History of Pen Computing contains a history of pen computing, including touch and gesture technology, from approximately 1917 to 1992.
About Tablet Computing Old and New - an article that mentions Windows Pen in passing
Annotated bibliography of references to handwriting recognition and pen computing
Windows für Pen Computer
Windows for Pen Computer (German link above translated by Google)
Notes on the History of Pen-based Computing (YouTube)
1992 software
Handwriting recognition
Pen Computing
Microsoft Tablet PC
Tablet computers | Operating System (OS) | 1,239 |
Commodore BASIC
Commodore BASIC, also known as PET BASIC or CBM-BASIC, is the dialect of the BASIC programming language used in Commodore International's 8-bit home computer line, stretching from the PET of 1977 to the C128 of 1985.
The core is based on 6502 Microsoft BASIC, and as such it shares many characteristics with other 6502 BASICs of the time, such as Applesoft BASIC. Commodore licensed BASIC from Microsoft in 1977 on a "pay once, no royalties" basis after Jack Tramiel turned down Bill Gates' offer of a $3 per unit fee, stating, "I'm already married," and saying he would pay no more than $25,000 for a perpetual license.
The original PET version was very similar to the original Microsoft implementation with few modifications. BASIC 2.0 on the C64 was also similar, and was also seen on some C128s and other models. Later PETs featured BASIC 4.0, similar to the original but adding a number of commands for working with floppy disks.
BASIC 3.5 was the first to really deviate, adding a number of commands for graphics and sound support on the C16 and Plus/4. BASIC 7.0 was included with the Commodore 128, and included structured programming commands from the Plus/4's BASIC 3.5, as well as keywords designed specifically to take advantage of the machine's new capabilities. A sprite editor and machine language monitor were added. The last, BASIC 10.0, was part of the unreleased Commodore 65.
History
Commodore took the source code of the flat-fee BASIC and further developed it internally for all their other 8-bit home computers. It was not until the Commodore 128 (with V7.0) that a Microsoft copyright notice was displayed. However, Microsoft had built an easter egg into the version 2 or "upgrade" Commodore BASIC that proved its provenance: typing the (obscure) command WAIT 6502, 1 would result in Microsoft! appearing on the screen. (The easter egg was well obfuscated; the message did not show up in any disassembly of the interpreter.)
The popular Commodore 64 came with BASIC v2.0 in ROM despite the computer being released after the PET/CBM series that had version 4.0 because the 64 was intended as a home computer, while the PET/CBM series were targeted at business and educational use where their built-in programming language was presumed to be more heavily used. This saved manufacturing costs, as the V2 fit into smaller ROMs.
Technical details
Program editing
A convenient feature of Commodore's ROM-resident BASIC interpreter and KERNAL was the full-screen editor. Although Commodore keyboards had only two cursor keys, which alternated direction when the shift key was held, the screen editor allowed users to enter direct commands or to input and edit program lines from anywhere on the screen. If a line was prefixed with a line number, it was tokenized and stored in program memory. Lines not beginning with a number were executed by pressing the RETURN key whenever the cursor happened to be on the line. This marked a significant upgrade in program entry interfaces compared to other common home computer BASICs at the time, which typically used line editors invoked by a separate EDIT command, or a "copy cursor" that truncated the line at the cursor's position.
It also had the capability of saving named files to any device, including the cassette – a popular storage device in the days of the PET, and one that remained in use throughout the lifespan of the 8-bit Commodores as an inexpensive form of mass storage. Most systems only supported filenames on diskette, which made saving multiple files on other devices more difficult. The user of one of these other systems had to note the recorder's counter display at the location of the file, but this was inaccurate and prone to error. With the PET (and BASIC 2.0), files from cassettes could be requested by name. The device would search for the filename by reading data sequentially, ignoring any non-matching filenames. The file system was also supported by a powerful record structure that could be loaded or saved to files. Commodore cassette data was recorded digitally, rather than with the less expensive (and less reliable) analog methods used by other manufacturers. Therefore, the specialized Datasette was required rather than a standard tape recorder. Adapters were available that used an analog-to-digital converter to allow use of a standard recorder, but these cost only a little less than the Datasette.
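For illustration, writing a small named sequential file to the Datasette from BASIC 2.0 might look like the following sketch (the file name "NOTES" is hypothetical; device 1 is the built-in cassette port and a secondary address of 1 selects writing):

10 OPEN 2,1,1,"NOTES"      : REM LOGICAL FILE 2, DEVICE 1 (DATASETTE), WRITE MODE
20 PRINT#2,"FIRST RECORD"  : REM WRITE TWO TEXT RECORDS TO THE NAMED FILE
30 PRINT#2,"SECOND RECORD"
40 CLOSE 2                 : REM CLOSING WRITES THE END-OF-FILE MARKER

Reading the file back would use a secondary address of 0, which searches the tape for the named file.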
The LOAD command may be used with an optional secondary address of 1, which will load a program into the memory address contained in the first two bytes of the file (these bytes are discarded and not retained in memory). If this parameter is not used, the program will load into the start of the BASIC program area, which widely differs between machines. Some Commodore BASIC variants supplied BLOAD and BSAVE commands that worked like their counterparts in Applesoft BASIC, loading or saving bitmaps from specified memory locations.
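As an illustration (the file name "SPRITES" is hypothetical and device 8 is the first disk drive), the two forms differ only in the secondary address: the first loads the file at the address stored in its first two bytes, while the second relocates it to the start of the BASIC program area:

LOAD "SPRITES",8,1
LOAD "SPRITES",8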
The PET does not support relocatable programs, and the LOAD command will always load at the address contained in the first two bytes of the program file. This created a problem when trying to load BASIC programs saved on other Commodore machines, as they would load at a higher address than where the PET's BASIC expected the program to be; there were workarounds to "move" programs to the proper location. If a program was saved on a CBM-II machine, the only way to load it on a PET was by modifying the first two bytes with a disk sector editor, as the CBM-II series had their BASIC program area at $0, which would result in a PET attempting to load into the zero page and locking up.
Commodore BASIC keywords could be abbreviated by entering first an unshifted keypress, and then a shifted keypress of the next letter. This set the high bit, causing the interpreter to stop reading and parse the statement according to a lookup table. This meant that the statement up to where the high bit was set was accepted as a substitute for typing the entire command out. However, since all BASIC keywords were stored in memory as single byte tokens, this was a convenience for statement entry rather than an optimization.
In the default uppercase-only character set, shifted characters appear as a graphics symbol; e.g. the command, GOTO, could be abbreviated G{Shift-O} (which resembled GΓ onscreen). Most such commands were two letters long, but in some cases they were longer. In cases like this, there was an ambiguity, so more unshifted letters of the command were needed, such as GO{Shift-S} (GO♥) being required for GOSUB. Some commands had no abbreviated form, either due to brevity or ambiguity with other commands. For example, the command, INPUT had no abbreviation because its spelling collided with the separate INPUT# keyword, which was located nearer to the beginning of the keyword lookup table. The heavily used PRINT command had a single ? shortcut, as was common in most Microsoft BASIC dialects. Abbreviating commands with shifted letters is unique to Commodore BASIC.
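For example (a minimal sketch), the ? shorthand can be typed in place of PRINT; since only the token is stored, LIST displays the full keyword either way:

10 ?"HELLO"      : REM TYPED WITH THE ? SHORTHAND FOR PRINT
20 ?CHR$(65)     : REM LIST SHOWS BOTH LINES AS FULL PRINT STATEMENTS

Shifted-letter abbreviations such as G{Shift-O} for GOTO behave the same way, listing back as the complete keyword.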
This tokenizing method had a glitch such that if one included a REM (BASIC statement to add a comment to the code) followed by a {Shift-L}, when trying to view the program listing, the BASIC interpreter would immediately abort the listing, display a ?SYNTAX ERROR and return to the READY. prompt. This glitch was used to some effect by programmers who wanted to try and protect their work, although it was fairly easy to circumvent.
By abbreviating keywords, it was possible to fit more code on a single program line (which could take up two screen lines on 40-column displays - i.e., C64 or PET, or four lines on the VIC-20's 22-column display). This allowed for a slight saving on the overhead to store otherwise necessary extra program lines, but nothing more. All BASIC commands were tokenized and took up 1 byte (or two, in the case of several commands of BASIC 7 or BASIC 10) in memory no matter which way they were entered. Such long lines were a nuisance to edit. The LIST command displayed the entire command keyword - extending the program line beyond the 2 or 4 screen lines which could be entered into program memory.
Performance
Like the original Microsoft BASIC interpreter, Commodore BASIC is slower than native machine code. Test results have shown that copying 16 kilobytes from ROM to RAM takes less than a second in machine code, compared to over a minute in BASIC. To execute faster than the interpreter, programmers started using various techniques to speed up execution. One was to store often-used floating point values in variables rather than using literal values, as interpreting a variable name was faster than interpreting a literal number. Since floating point is the default type for all commands, it is faster to use floating point numbers as arguments rather than integers. When speed was important, some programmers converted sections of BASIC programs to 6502 or 6510 assembly language that was loaded separately from a file or POKEd into memory from DATA statements at the end of the BASIC program, and executed from BASIC using the SYS command, either from direct mode or from the program itself. When the execution speed of machine language was too great, such as for a game or when waiting for user input, programmers could poll selected memory locations (such as $C6 for the 64, or $D0 for the 128, denoting the size of the keyboard queue) to delay or halt execution.
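As an illustrative sketch of that technique (C64-specific: 828, or $033C, is the start of the cassette buffer, a common place to park short routines), a few bytes of machine code can be POKEd from DATA statements and then started with SYS:

10 FOR I=0 TO 5: READ B: POKE 828+I,B: NEXT I
20 SYS 828                    : REM RUN THE ROUTINE JUST PLACED AT 828 ($033C)
30 DATA 169,147,32,210,255,96 : REM LDA #$93 / JSR $FFD2 / RTS - PRINTS THE CLEAR-SCREEN CODE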
A unique feature of Commodore BASIC is the use of control codes to perform tasks such as clearing the screen or positioning the cursor within a program; these can be invoked either by issuing a PRINT CHR$(X) command, where X corresponds to the control code to be issued (for example, CHR$(147) is the control code to clear the screen), or by pressing the key in question between quote marks: pressing SHIFT+CLR/HOME following a quote mark will cause BASIC to display the visual representation of the control code (in this case, a reversed heart), which is then acted upon at program execution (directly printing out the control codes uses less memory and executes faster than invoking a function). This is in comparison to other implementations of BASIC, which typically have dedicated commands to clear the screen or move the cursor.
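A brief sketch of the CHR$ form (the PETSCII codes shown, 147 for clear-screen and 19 for cursor-home, are standard across Commodore's 8-bit machines):

10 PRINT CHR$(147)           : REM CLEAR THE SCREEN USING ITS CONTROL CODE
20 PRINT CHR$(19);"TOP LINE" : REM CHR$(19) HOMES THE CURSOR BEFORE PRINTING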
BASIC 3.5 and up have proper commands for clearing the screen and moving the cursor.
Program lines in Commodore BASIC do not require spaces anywhere (but the LIST command will always display one between the line number and the statement), and it was common to write programs with no spacing. This feature was added to conserve memory, since the tokenizer never removes any space inserted between keywords: the presence of spaces results in extra 0x20 bytes in the tokenized program which are merely skipped during execution. Spaces between the line number and program statement are removed by the tokenizer.
Program lines can be 80 characters total on most machines, but machines with 40-column text would cause the line to wrap around to the next line on the screen, and on the VIC-20, which had a 22-column display, program lines could occupy as many as four. BASIC 7.0 on the Commodore 128 increased the limit of a program line to 160 characters (four 40-column lines or two 80-column lines). By using abbreviations such as ? instead of PRINT, it is possible to fit even more on a line. BASIC 7.0 displays an error if the user enters a program line over 160 characters in length. Earlier versions do not produce an error and simply display the READY prompt two lines down if the line length is exceeded. The line number is counted in the number of characters in the program line, so a five-digit line number will result in four fewer characters allowed than a one-digit number.
The order of execution of Commodore BASIC lines was not determined by line numbering; instead, it followed the order in which the lines were linked in memory. Program lines were stored in memory as a singly linked list with a pointer (containing the address of the beginning of the next program line), a line number, and then the tokenized code for the line. While a program was being entered, BASIC would constantly reorder program lines in memory so that the line numbers and pointers were all in ascending order. However, after a program was entered, manually altering the line numbers and pointers with the POKE commands could allow for out-of-order execution or even give each line the same line number. In the early days, when BASIC was used commercially, this was a software protection technique to discourage casual modification of the program.
Line numbers can range from 0 to 65520 and take five bytes to store regardless of how many digits are in the line number, although execution is faster the fewer digits there are. Putting multiple statements on a line will use less memory and execute faster.
GOTO and GOSUB statements will search downward from the current line to find a line number if a forward jump is performed; in the case of a backwards jump, they return to the start of the program to begin searching. This will slow down larger programs, so it is preferable to put commonly used subroutines near the start of a program.
Variable names are only significant to 2 characters; thus the variable names VARIABLE1, VARIABLE2, and VA all refer to the same variable.
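A minimal illustration: all three names below share the same first two letters, so they refer to one variable and the program prints 2:

10 VARIABLE1=1
20 VARIABLE2=2
30 PRINT VA      : REM PRINTS 2; ALL THREE NAMES RESOLVE TO THE SAME VARIABLE "VA"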
Commodore BASIC also supports the bitwise operators AND, OR, and NOT; although this feature was part of the core Microsoft 6502 BASIC code, it was usually omitted in other implementations such as Applesoft BASIC.
The native number format of Commodore BASIC, like that of its parent MS BASIC, was floating point. Most contemporary BASIC implementations used one byte for the characteristic (exponent) and three bytes for the mantissa. The accuracy of a floating point number using a three-byte mantissa is only about 6.5 decimal digits, and round-off error is common. 6502 implementations of Microsoft BASIC utilized 40-bit floating point arithmetic, meaning that variables took five bytes to store (four byte mantissa and one byte for the exponent) unlike the 32-bit floating point found in BASIC-80.
While 8080/Z80 implementations of Microsoft BASIC supported integer and double precision variables, 6502 implementations were floating point only.
Although Commodore BASIC supports signed integer variables (denoted with a percent sign) in the range -32768 to 32767, in practice they are only used for array variables, where they serve the function of conserving memory by limiting array elements to two bytes each (an array of 2000 elements will occupy 10,000 bytes if declared as a floating point array, but only 4000 if declared as an integer array). Denoting any variable as integer simply causes BASIC to convert it back to floating point, slowing down program execution and wasting memory, as each percent sign takes one additional byte to store (since this also applies to integer arrays, the programmer should avoid using them unless very large arrays are used that would exceed available memory if stored as floating point). Also, it is not possible to PEEK or POKE memory locations above 32767 with the address defined as a signed integer.
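For illustration, the memory difference appears when the arrays are dimensioned (the element counts here are arbitrary):

10 DIM A%(1999)  : REM 2000 INTEGER ELEMENTS, ROUGHLY 4,000 BYTES
20 DIM B(1999)   : REM 2000 FLOATING POINT ELEMENTS, ROUGHLY 10,000 BYTES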
A period (.) can be used in place of the number 0 (for example, X=. instead of X=0); this will execute slightly faster.
The SYS statement, used to start machine language programs, was added by Commodore and was not in the original Microsoft BASIC code, which featured only the USR function for invoking machine language routines. It automatically loads the CPU's registers with the values stored at 780-783 ($030C-$030F on the C64; the locations vary on other machines); this can be used to pass data to machine language routines or as a means of calling kernal functions from BASIC (as an example, a SYS call to the kernal's screen-clear routine clears the screen).
Since Commodore 8-bit machines other than the C128 cannot automatically boot disk software, the usual technique is to include a short BASIC stub that uses SYS to jump into the machine code and begin program execution. It is possible to automatically start software after loading, without requiring the user to type a RUN statement; this is done by having a piece of code that hooks the BASIC "ready" vector in RAM.
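A typical stub is a one-line program saved in front of the machine code; the address below is purely illustrative and depends on where the code actually resides in memory:

10 SYS 2064      : REM JUMP INTO MACHINE CODE STORED JUST AFTER THIS ONE-LINE PROGRAM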
As with most other versions of Microsoft BASIC, if an array is not declared with a DIM statement, it is automatically set to ten elements (in practice 11, since array elements are counted from 0). Larger arrays must be declared or BASIC will display an error when the program is run, and an array cannot be re-dimensioned in a program unless all variables are wiped via a CLR statement. Numeric arrays are automatically filled with zeros when they are created; there may be a momentary delay in program execution if a large array is dimensioned.
String variables are represented by tagging the variable name with a dollar sign. Thus, the variables AA$, AA, and AA% would each be understood as distinct. Array variables are also considered distinct from simple variables, thus A and A() do not refer to the same variable. The size of a string array merely refers to how many strings are stored in the array, not the size of each element, which is allocated dynamically. Unlike some other implementations of Microsoft BASIC, Commodore BASIC does not require string space to be reserved at the start of a program.
Unlike other 8-bit machines such as the Apple II, Commodore's machines all have a built-in clock that is initialized to 0 at power on and updated with every tick of the PIA/VIA/TED/CIA timer, thus 60 times per second. It is assigned two system variables in BASIC, TI and TI$, which both contain the current time. TI is read-only and cannot be modified; doing so will result in a Syntax Error message. TI$ may be used to set the time via a six-digit string (an error results from using a string other than six digits). The clock is not a very reliable method of timekeeping since it stops whenever interrupts are turned off (done by some kernal routines), and accessing the IEC port (or IEEE port on the PET) will slow the clock update by a few ticks.
The RND function in Commodore BASIC can use the clock to generate random numbers; this is accomplished by RND(0), however it is of relatively limited use, as only numbers between 0 and 255 are returned. Otherwise, RND works the same as in other implementations of Microsoft BASIC in that a pseudo-random sequence is used, via a fixed 5-byte seed value stored at power on in a set of memory locations on the C64 (the location differs on other machines). RND with any number higher than 0 will generate a random number amalgamated from the value included with the function and the seed value, which is updated by 1 each time an RND function is executed. RND with a negative number goes to a point in the sequence of the current seed value specified by the number.
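A common idiom (a sketch, not tied to any particular program) reseeds the generator from the jiffy clock with a negative argument and then draws values with RND(1):

10 X=RND(-TI)              : REM RESEED THE SEQUENCE FROM THE CURRENT JIFFY CLOCK VALUE
20 FOR I=1 TO 5
30 PRINT INT(RND(1)*6)+1   : REM FIVE SIMULATED DIE ROLLS IN THE RANGE 1-6
40 NEXT I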
Since true random number generation is impossible with the RND function, it is more typical on the C64 and C128 to utilize the SID chip's white noise channel for random numbers.
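An illustrative C64 sketch of that technique: voice 3 is set to maximum frequency with the noise waveform selected, and its oscillator output register is then read for pseudo-random bytes (the SID register addresses shown are C64-specific):

10 POKE 54286,255: POKE 54287,255 : REM VOICE 3 FREQUENCY SET TO MAXIMUM
20 POKE 54290,128                 : REM SELECT THE NOISE WAVEFORM ON VOICE 3
30 FOR I=1 TO 5
40 PRINT PEEK(54299)              : REM READ A PSEUDO-RANDOM BYTE (0-255) FROM OSCILLATOR 3
50 NEXT I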
BASIC 2.0 notoriously suffered from extremely slow garbage collection of strings. Garbage collection is automatically invoked any time a FRE function is executed, and if there are many string variables and arrays that have been manipulated over the course of a program, clearing them can take more than an hour under the worst conditions. It is also not possible to abort garbage collection, as BASIC does not scan the RUN/STOP key while performing this routine. BASIC 4.0 introduced an improved garbage collection system with back pointers, and all later implementations of Commodore BASIC also have it.
The FRE function in BASIC 2.0 suffered from another technical flaw in that it cannot handle signed numbers over 32767; thus, if the function is invoked on a C64 (38k of BASIC memory), a negative amount of free BASIC memory will be displayed (adding 65536 to the reported number will obtain the correct amount of free memory). The PET and VIC-20 never had more than 32k of total memory available to BASIC, so this limitation did not become apparent until the C64 was developed. The FRE function in BASIC 3.5 and 7.0 corrected this problem, and in BASIC 7.0 it was also "split" into two functions, one to display free BASIC program text memory and the other to display free variable memory.
Alternatives
Many BASIC extensions were released for the Commodore 64, due to the relatively limited capabilities of its native BASIC 2.0. One of the most popular extensions was the DOS Wedge, which was included on the Commodore 1541 Test/Demo Disk. This 1 KB extension to BASIC added a number of disk-related commands, including the ability to read a disk directory without destroying the program in memory. Its features were subsequently incorporated in various third-party extensions, such as the popular Epyx FastLoad cartridge. Other BASIC extensions added additional keywords to make it easier to code sprites, sound, and high-resolution graphics like Simons' BASIC.
Although BASIC 2.0's lack of sound or graphics features was frustrating to many users, some critics argued that it was ultimately beneficial since it forced the user to learn machine language.
The limitations of BASIC 2.0 on the C64 led to the use of built-in ROM machine language routines from BASIC. To load a file to a designated memory location, the filename, drive, and device number would be set with one SYS call to a ROM routine; the target location would be placed in the X and Y registers; and the kernal load routine would then be called with a second SYS.
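A sketch of that technique on the C64 (the file name is hypothetical; 57812 is the ROM routine that parses the file name and device, 780-782 hold the A, X and Y register values used by SYS, and 65493 is the kernal load entry point):

10 SYS 57812"SPRITES",8     : REM SET FILE NAME AND DEVICE VIA THE ROM PARAMETER ROUTINE
20 POKE 780,0               : REM ACCUMULATOR = 0 SELECTS LOAD (1 WOULD MEAN VERIFY)
30 POKE 781,0: POKE 782,192 : REM X/Y = LOW/HIGH BYTE OF THE TARGET ADDRESS $C000
40 SYS 65493                : REM CALL THE KERNAL LOAD ROUTINE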
A disk magazine for the C64, Loadstar, was a venue for hobbyist programmers, who shared collections of proto-commands for BASIC, invoked with the SYS command.
From a modern programming point of view, the earlier versions of Commodore BASIC presented a host of bad programming traps for the programmer. As most of these issues derived from Microsoft BASIC, virtually every home computer BASIC of the era suffered from similar deficiencies. Every line of a Microsoft BASIC program was assigned a line number by the programmer. It was common practice to increment numbers by some value (5, 10 or 100) to make inserting lines during program editing or debugging easier, but bad planning meant that inserting large sections into a program often required restructuring the entire code. A common technique was to start a program at some low line number with an ON...GOSUB jump table, with the body of the program structured into sections starting at a designated line number like 1000, 2000, and so on. If a large section needed to be added, it could just be assigned the next available major line number and inserted into the jump table.
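A minimal sketch of such a jump table (the menu text and line numbers are arbitrary):

10 INPUT "CHOICE (1-3)";C
20 ON C GOSUB 1000,2000,3000 : REM DISPATCH TO THE SECTION MATCHING THE CHOICE
30 GOTO 10
1000 PRINT "FIRST SECTION": RETURN
2000 PRINT "SECOND SECTION": RETURN
3000 PRINT "THIRD SECTION": RETURN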
Later BASIC versions on Commodore and other platforms included DELETE and RENUMBER commands, as well as an AUTO line numbering command that would automatically select and insert line numbers according to a selected increment. In addition, all variables are treated as global variables. Clearly defined loops are hard to create, often causing the programmer to rely on the GOTO command (this was later rectified in BASIC 3.5 with the addition of the DO, LOOP, WHILE, UNTIL and EXIT commands). Flag variables often needed to be created to perform certain tasks. Earlier BASICs from Commodore also lack debugging commands, meaning that bugs and unused variables are hard to trap. Structured constructs such as ELSE, standard in Z80 Microsoft BASICs, were added in BASIC 3.5 after being unavailable in earlier versions of Commodore BASIC.
Use as user interface
In common with other home computers, Commodore's machines booted directly into the BASIC interpreter. BASIC's file and programming commands could be entered in direct mode to load and execute software. If program execution was halted using the RUN/STOP key, variable values would be preserved in RAM and could be PRINTed for debugging. The 128 even dedicated its second 64k bank to variable storage, allowing values to persist until a NEW or RUN command was issued. This, along with the advanced screen editor included with Commodore BASIC gave the programming environment a REPL-like feel; programmers could insert and edit program lines at any screen location, interactively building the program. This is in contrast to business-oriented operating systems of the time like CP/M or MS-DOS, which typically booted into a command line interface. If a programming language was required on these platforms, it had to be loaded separately.
While some versions of Commodore BASIC included disk-specific DLOAD and DSAVE commands, the version built into the Commodore 64 lacked these, requiring the user to specify the disk drive's device number (typically 8 or 9) to the standard LOAD command, which otherwise defaulted to tape. Another omission from the Commodore 64's BASIC 2.0 was a DIRECTORY command to display a disk's contents without clearing main memory. On the 64, viewing files on a disk was implemented as loading a "program" which, when listed, showed the directory as a pseudo BASIC program, with the file's block size as the line number. This had the effect of overwriting the currently loaded program. Add-ons like the DOS Wedge overcame this by rendering the directory listing directly to screen memory.
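For illustration, the directory of the first disk drive is viewed on the 64 by loading the "$" pseudo-file and listing it, which replaces any program currently in memory:

LOAD"$",8
LIST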
Versions and features
A list of CBM BASIC versions in chronological order, with successively added features:
Released versions
V1.0: PET 2001 with chiclet keyboard and built-in Datassette (original PET)
arrays limited to 256 elements
PEEK command explicitly disabled over BASIC ROM locations above $C000
V2.0 (first release): PET 2001 with full-travel keyboard & upgrade ROMs
add IEEE-488 support
improved the garbage collection
fix array bug
Easter egg – entering WAIT6502,[number] displays MICROSOFT! an arbitrary number of times
V4.0: PET/CBM 4000/8000 series (and late version PET 2001s)
disk operations: DLOAD,DSAVE,COPY,SCRATCH, etc. (15 in all)
disk error-channel variables: DS,DS$
greatly improved garbage-collection performance
V2.0 (second release, after 4.0): VIC-20; C64
V4+ : CBM-II series (aka B, P range)
memory management: BANK
more disk operations: BLOAD, BSAVE,DCLEAR
formatted printing: PRINT USING,PUDEF
error trapping: DISPOSE
alternative branching: ELSE
dynamic error handling: TRAP,RESUME,ERR$()
flexible DATA read: RESTORE [linenumber]
string search function: INSTR
V3.5: C16/116, Plus/4
sound and graphics commands
joystick input: JOY
decimal ↔ hexadecimal conversion: DEC(),HEX$()
structured looping: DO,LOOP,WHILE,UNTIL,EXIT
function key assignment: KEY (also direct mode)
program entry/editing: AUTO,DELETE,RENUMBER
debugging (tracing): TRON, TROFF
MLM entry command: MONITOR
C(1)16, Plus/4 Easter egg – enter SYS 52650
V7.0: C128
more sound and graphics commands, including sprite handling
built-in sprite editor: SPRDEF
multi-statement blocks for IF THEN ELSE structures: BEGIN,BEND
paddle, lightpen input: POT,PEN
exclusive or function: XOR
get variable address: POINTER
text mode windowing: WINDOW
controlled time delay: SLEEP
memory management: SWAP,FETCH,STASH,FRE(1)
used the 128's bank switching to store program code separately from variables. Variable values would be preserved across program executions if the program was started with the GOTO command.
more disk operations: BOOT,DVERIFY
CPU speed adjustment: FAST,SLOW (2 vs 1 MHz)
enter C64 mode: GO64
undocumented, working: RREG (read CPU registers after a SYS)
unimplemented commands: OFF,QUIT
C128 Easter egg – enter SYS 32800,123,45,6
Unreleased versions
V3.6 : Commodore LCD (unreleased prototype). Almost identical to V7.0, with the following differences:
VOLUME instead of VOL
EXIT instead of QUIT
FAST,SLOW commands not present
Additional command: POPUPS
V10 : Commodore 65 (unreleased prototype)
graphics/video commands: PALETTE,GENLOCK
mouse input: MOUSE,RMOUSE
text file (SEQ) utility: TYPE
program editing: FIND,CHANGE
memory management: DMA,FRE(2)
unimplemented commands: PAINT,LOCATE,SCALE,WIDTH,SET,VIEWPORT,PASTE,CUT
Notable extension packages
Super Expander (VIC-20; delivered on ROM cartridge) (Commodore)
Super Expander 64 (C64; cartridge) (Commodore)
Simons' BASIC (C64; cartridge) (Commodore)
Graphics BASIC (C64; floppy disk) (Hesware)
BASIC 8 (C128; floppy disk and optional internal ROM chip) (Walrusoft)
References
Sources
Commodore/Microsoft Basic version timeline
Bill Gates’ Personal Easter Eggs in 8 Bit BASIC, pagetable.com
BASIC 2.0
Angerhausen et al. (1983). The Anatomy of the Commodore 64 (for the full reference, see the C64 article).
BASIC 3.5
Gerrard, Peter; Bergin, Kevin (1985). The Complete COMMODORE 16 ROM Disassembly. Gerald Duckworth & Co. Ltd.
BASIC 7.0
Jarvis, Dennis; Springer, Jim D. (1987). BASIC 7.0 Internals. Grand Rapids, Michigan: Abacus Software, Inc.
BASIC 10.0
c65manual.txt Commodore 65 preliminary documentation (March 1991), with addendum for ROM version 910501.
Discontinued Microsoft BASICs
BASIC interpreters
BASIC programming language family
Microsoft programming languages
Programming languages created in 1977
BSD Daemon
The BSD Daemon, nicknamed Beastie, is the generic mascot of BSD operating systems. The BSD Daemon is named after software daemons, a class of long-running computer programs in Unix-like operating systems, which through a play on words takes the cartoon shape of a demon. The BSD Daemon's nickname Beastie is a slurred phonetic pronunciation of BSD. Beastie customarily carries a trident to symbolize a software daemon's forking of processes. The FreeBSD web site has noted Evi Nemeth's 1988 remarks about cultural-historical daemons in the Unix System Administration Handbook: "The ancient Greeks' concept of a 'personal daemon' was similar to the modern concept of a 'guardian angel' ...As a rule, UNIX systems seem to be infested with both daemons and demons."
Copyright
The copyright of the official BSD Daemon images is held by Marshall Kirk McKusick (a very early BSD developer who worked with Bill Joy). He has freely licensed the mascot for individual "personal use within the bounds of good taste (an example of bad taste was a picture of the BSD Daemon blowtorching a Solaris logo)." Any use requires both a copyright notice and attribution.
Reproduction of the daemon in quantity, such as on T-shirts and CDROMs, requires advance permission from McKusick, who restricts its use to implementations having to do with BSD and not as a company logo (although companies with BSD-based products such as Scotgold and Wind River Systems have gotten this kind of permission).
McKusick has said that during the early 1990s "I almost lost the daemon to a certain large company because I failed to show due diligence in protecting it. So, I've taken due diligence seriously since then."
In a request to use a license such as Creative Commons, McKusick replied:
History
The BSD Daemon was first drawn in 1976 by comic artist Phil Foglio. Developer Mike O'Brien, who was working as a bonded locksmith at the time, opened a wall safe in Foglio's Chicago apartment after a roommate had "split town" without leaving the combination. In return Foglio agreed to draw T-shirt artwork for O'Brien, who gave him some Polaroid snaps of a PDP-11 system running UNIX along with some notions about visual puns having to do with pipes, demons/daemons, forks, a "bit bucket" named /dev/null, etc. Foglio's drawing showed four happy little red daemon characters carrying tridents and climbing about on (or falling off of) water pipes in front of a caricature of a PDP-11 and was used for the first national UNIX meeting in the US (which was held in Urbana, Illinois). Bell Labs bought dozens of T-shirts featuring this drawing, which subsequently appeared on UNIX T-shirts for about a decade. Usenix purchased the reproduction rights to Foglio's artwork in 1986. His original drawing was then apparently lost, shortly after having been sent to Digital Equipment Corporation for use in an advertisement; all known copies are from photographs of surviving T-shirts.
The later, more popular versions of the BSD Daemon were drawn by animation director John Lasseter beginning with an early greyscale drawing on the cover of the Unix System Manager's Manual published in 1984 by USENIX for 4.2BSD. Its author/editor Sam Leffler (who had been a technical staff member at CSRG) and Lasseter were both employees of Lucasfilm at the time. About four years after this Lasseter drew his widely known take on the BSD Daemon for the cover of McKusick's co-authored 1988 book, The Design and Implementation of the 4.3BSD Operating System. Lasseter drew a somewhat lesser-known running BSD Daemon for the 4.4BSD version of the book in 1994.
Use in operating system logos
From 1994 to 2004, the NetBSD project used artwork by Shawn Mueller as a logo, featuring four BSD Daemons in a pose similar to the famous photo, Raising the Flag on Iwo Jima. However, this logo was seen as inappropriate for an international project, and it was superseded by a more abstract flag logo, chosen from over 400 entries in a competition.
Early versions of OpenBSD (2.3 and 2.4) used a BSD Daemon with a halo, and briefly used a daemon police officer for version 2.5. Then, however, OpenBSD switched to Puffy, a blowfish, as a mascot.
The FreeBSD project used the 1988 Lasseter drawing as both a logo and mascot for 12 years. However, questions arose as to the graphic's effectiveness as a logo. The daemon was not unique to FreeBSD, since it was historically used by other BSD variants, and members of the FreeBSD core team considered it inappropriate for corporate and marketing purposes. Lithographically, the scanned Lasseter drawing is not line art; it neither scaled easily across a wide range of sizes nor rendered appealingly in only two or three colours. A contest to create a new FreeBSD logo began in February 2005, and a scalable graphic which somewhat echoes the BSD Daemon's head was chosen the following October, although "the little red fellow" has been kept on as an official project mascot.
Walnut Creek CDROM also produced two variations of the daemon for its CDROM covers. The FreeBSD 1.0 and 1.1 CDROM covers used the 1988 Lasseter drawing. The FreeBSD 2.0 CDROM used a variant with different colored (specifically green) tennis shoes. Other distributions used this image with different colored tennis shoes over the years. Starting with FreeBSD 2.0.5, Walnut Creek CDROM covers used the daemon walking out of a CDROM. Starting with FreeBSD 4.5, the FreeBSD Mall used a mirrored image of the Walnut Creek 2.0 image. The Walnut Creek 2.0 image has also appeared on the cover of different FreeBSD Handbook editions.
Deprecated name
In the mid-1990s a marketer for Walnut Creek CDROM called the mascot Chuck, perhaps referring to a brand name for the kind of shoes worn by the character, but this name is strongly deprecated by the copyright holder, who has said the BSD Daemon "is very proud of the fact that he does not have a name, he is just the BSD Daemon. If you insist on a name, call him Beastie."
ASCII image
               ,        ,
              /(        )`
              \ \___   / |
              /- _  `-/  '
             (/\/ \ \   /\
             / /   | `    \
             O O   ) /    |
             `-^--'`<     '
            (_.)  _  )   /
             `.___/`    /
               `-----' /
  <----.     __ / __   \
  <----|====O)))==) \) /====|
  <----'    `--' `.__,' \
                |        |
                 \       /       /\
            ______( (_  / \______/
          ,'  ,-----'   |
          `--{__________)
This ASCII art image of the BSD Daemon by Felix Lee appeared in the startup menu of FreeBSD version 5.x and can still be set as startup image in later versions. It is also used in the daemon_saver screensaver.
See also
List of computing mascots
:Category:Computing mascots
Tux (mascot), the mascot of Linux kernel
Konqi, the mascot of KDE
Glenda, the Plan 9 Bunny, the mascot of Plan 9 from Bell Labs
Puffy (mascot), the mascot of OpenBSD
Mozilla (mascot), the mascot of Mozilla Foundation
Kiki the Cyber Squirrel, the mascot of Krita
Wilber (mascot), the mascot of GIMP
References and notes
External links
Photograph of a T-shirt bearing Foglio's original 1976 drawing
Photograph of a BSD-UNIX/VAX manual showing Lasseter's 1984 drawing
Photograph of a book cover bearing Lasseter's iconic 1988 drawing
FreeBSD's The BSD Daemon page
The red guy's name, from the FreeBSD FAQ
What's that daemon? — info on daemon shirts and a funny story
How to make a beastie flag
BSD Daemon Gallery
Berkeley Software Distribution
Fictional demons and devils
Computing mascots
Microsoft Outlook
Microsoft Outlook is a personal information manager software system from Microsoft, available as a part of the Microsoft Office suite. Though primarily an email client, Outlook also includes such functions as calendaring, task managing, contact managing, note-taking, journal logging, and web browsing.
Individuals can use Outlook as a stand-alone application; organizations can deploy it as multi-user software (through Microsoft Exchange Server or SharePoint) for such shared functions as mailboxes, calendars, folders, data aggregation (i.e., SharePoint lists), and appointment scheduling. Microsoft has released apps for most mobile platforms, including iOS and Android. In addition, Windows Phone devices can synchronize almost all Outlook data to Outlook Mobile. Using Microsoft Visual Studio, developers can also build their own custom software that works with Outlook and Office components.
In March 2020, Microsoft announced the launch of a series of new features to appeal to business customers of its Teams platform, in addition to the features introduced the previous month. The chat and collaboration module now includes more efficient and better-integrated workflows, designed to simplify group work for organizations and to encourage them to adopt Teams as their go-to company chat platform.
Web applications
Outlook.com is a free webmail version of Microsoft Outlook, using a similar user interface. Originally known as Hotmail, it was rebranded as Outlook.com in 2012.
Outlook on the web (previously called Exchange Web Connect, Outlook Web Access, and Outlook Web App) is a web business version of Microsoft Outlook, and is included in Office 365, Exchange Server, and Exchange Online.
Versions
Outlook has replaced Microsoft's previous scheduling and email clients, Schedule+ and Exchange Client.
Outlook 98 and Outlook 2000 offer two configurations:
Internet Mail Only (aka IMO mode): A lighter application mode with specific emphasis on POP3 and IMAP accounts, including a lightweight Fax application.
Corporate Work group (aka CW mode): A full MAPI client with specific emphasis on Microsoft Exchange accounts.
Perpetual versions of Microsoft Outlook include:
Microsoft Windows
Outlook 2002
Outlook 2002 introduced these new features:
Autocomplete for email addresses
Colored categories for calendar items
Group schedules
Hyperlink support in email subject lines
Native support for Outlook.com (formerly Hotmail)
Improved search functionality, including the ability to stop a search and resume it later
Lunar calendar support
MSN Messenger integration
Performance improvements
Preview pane improvements, including the ability to:
open hyperlinks;
respond to meeting requests; and
display email properties without opening a message
Reminder window that consolidates all reminders for appointments and tasks in a single view
Retention policies for documents and email
Security improvements, including the automatic blocking of potentially unsafe attachments and of programmatic access to information in Outlook:
SP1 introduced the ability to view all non-digitally signed email or unencrypted email as plain text;
SP2 allows users to—through the Registry—prevent the addition of new email accounts or the creation of new Personal Storage Tables;
SP3 updates the object model guard security for applications that access messages and other items.
Smart tags when Word is configured as the default email editor. This option was available only when the versions of Outlook and Word were the same, i.e. both were 2002.
Outlook 2003
Outlook 2003 introduced these new features:
Autocomplete suggestions for a single character
Cached Exchange mode
Colored (quick) flags
Desktop Alert
Email filtering to combat spam
Images in HTML mail are blocked by default to prevent spammers from determining whether an email address is active via web beacon;
SP1 introduced the ability to block email based on country code top-level domains;
SP2 introduced anti-phishing functionality that automatically disables hyperlinks present in spam
Expandable distribution lists
Information rights management
Intrinsic support for tablet PC functionality (e.g., handwriting recognition)
Reading pane
Search folders
Unicode support
Outlook 2007
Features that debuted in Outlook 2007 include:
Attachment preview, with which the contents of attachments can be previewed before opening
Supported file types include Excel, PowerPoint, Visio, and Word files. If Outlook 2007 is installed on Windows Vista, then audio and video files can be previewed. If a compatible PDF reader such as Adobe Acrobat 8.1 is installed, PDF files can also be previewed.
Auto Account Setup, which allows users to enter a username and password for an email account without entering a server name, port number, or other information
Calendar sharing improvements including the ability to export a calendar as an HTML file—for viewing by users without Outlook—and the ability to publish calendars to an external service (e.g., Office Web Apps) with an online provider (e.g., Microsoft account)
Colored categories with support for user roaming, which replace colored (quick) flags introduced in Outlook 2003
Improved email spam filtering and anti-phishing features
Postmark, which intends to reduce spam by making it difficult and time-consuming to send
Information rights management improvements with Windows Rights Management Services and managed policy compliance integration with Exchange Server 2007
Japanese Yomi name support for contacts
Multiple calendars can be overlaid with one another to assess details such as potential scheduling conflicts
Ribbon (Office Fluent) interface
Outlook Mobile Service support, which allowed multimedia and SMS text messages to be sent directly to mobile phones
Instant search through Windows Search, an index-based desktop search platform
Instant search functionality is also available in Outlook 2002 and Outlook 2003 if these versions are installed alongside Windows Search
Integrated RSS aggregation
Support for Windows SideShow with the introduction of a calendar gadget
To-Do Bar that consolidates calendar information, flagged email, and tasks from OneNote 2007, Outlook 2007, Project 2007, and Windows SharePoint Services 3.0 websites within a central location.
The ability to export items as PDF or XPS files
Unified messaging support with Exchange Server 2007, including features such as missed-call notifications, and voicemail with voicemail preview and Windows Media Player
Word 2007 replaces Internet Explorer as the default viewer for HTML email, and becomes the default email editor in this and all subsequent versions.
Outlook 2010
Features that debuted in Outlook 2010 include:
Additional command-line switches
An improved conversation view that groups messages based on different criteria regardless of originating folders
Deleted IMAP messages are moved to the Deleted Items folder, eliminating the need to mark messages for future deletion
Notification when an email is about to be sent without a subject
Quick Steps, individual collections of commands that allow users to perform multiple actions simultaneously
Ribbon interface in all views
Search Tools contextual tab on the ribbon that appears when performing searches and that includes basic or advanced criteria filters
Social Connector to connect to various social networks and aggregate appointments, contacts, communication history, and file attachments
Spell check in additional areas of the user interface
Support for multiple Exchange accounts in a single Outlook profile
The ability to schedule a meeting with a contact by replying to an email message
To-Do Bar enhancements including visual indicators for conflicts and unanswered meeting requests
Voicemail transcripts for Unified Messaging communications
Zooming user interface for calendar and mail views
Outlook 2013
Features that debuted in Outlook 2013, which was released on January 29, 2013, include:
Attachment reminder
Exchange ActiveSync (EAS)
Add-in resiliency
Cached Exchange mode improvements
IMAP improvements
Outlook data file (.ost) compression
People hub
Startup performance improvements
Outlook 2016
Features that debuted in Outlook 2016, include:
Attachment link to cloud resource
Groups redesign
Search cloud
Clutter folder
Email Address Internationalization
Scalable Vector Graphics
Outlook 2019
Features that debuted in Outlook 2019, include:
Focused Inbox
Add multiple time zones
Listen to your emails
Easier email sorting
Automatic download of cloud attachments
True Dark Mode (version 1907 onward)
Macintosh
Microsoft also released several versions of Outlook for classic Mac OS, though it was only for use with Exchange servers. It was not provided as a component of Microsoft Office for Mac but instead made available to users from administrators or by download. The final version was Outlook for Mac 2001, which was fairly similar to Outlook 2000 and 2002 apart from being exclusively for Exchange users.
Microsoft Entourage was introduced as an Outlook-like application for Mac OS in Office 2001, but it lacked Exchange connectivity. Partial support for Exchange server became available natively in Mac OS X with Entourage 2004 Service Pack 2. Entourage is not directly equivalent to Outlook in terms of design or operation; rather, it is a distinct application that has several overlapping features including Exchange client capabilities. Somewhat improved Exchange support was added in Entourage 2008 Web Services Edition.
Entourage was replaced by Outlook for Mac 2011, which features greater compatibility and parity with Outlook for Windows than Entourage offered. It is the first native version of Outlook for MacOS.
Outlook 2011 initially supported Mac OS X's Sync Services only for contacts, not events, tasks or notes. It also does not have a Project Manager equivalent to that in Entourage. With Service Pack 1 (v 14.1.0), published on April 12, 2011, Outlook can now sync calendar, notes and tasks with Exchange 2007 and Exchange 2010.
On October 31, 2014, Microsoft released Outlook for Mac (v15.3 build 141024) with Office 365 (a software as a service licensing program that makes Office programs available as soon as they are developed). Outlook for Mac 15.3 improves upon its predecessors with:
Better performance and reliability as a result of a new threading model and database improvements.
A new modern user interface with improved scrolling and agility when switching between Ribbon tabs.
Online archive support for searching Exchange (online or on-premises) archived mail.
Master Category List support and enhancements delivering access to category lists (name and color) and sync between Mac, Microsoft Windows and OWA clients.
Office 365 push email support for real-time email delivery.
Faster first-run and email download experience with improved Exchange Web Services syncing.
The "New Outlook for Mac" client, included with version 16.42 and above, became available for "Early Insider" testers in the fall of 2019, with a public "Insider" debut in October 2020. It requires macOS 10.14 or greater and introduces a redesigned interface with significantly changed internals, including native search within the client that no longer depends on macOS Spotlight. Some Outlook features are still missing from the New Outlook client as it continues in development.
To date, the Macintosh client has never had the capability of syncing Contact Groups/Personal Distribution Lists from Exchange, Microsoft 365 or Outlook.com accounts, something that the Windows and web clients have always supported. A UserVoice post created in December 2019 suggesting that the missing functionality be added has shown a "Planned" tag since October 2020.
Phones and tablets
The app was first released in April 2014 by the venture capital-backed startup Acompli, which was acquired by Microsoft in December 2014. On January 29, 2015, Acompli was re-branded as Outlook Mobile—sharing its name with the Microsoft Outlook desktop personal information manager and Outlook.com email service. In January 2015, Microsoft released Outlook for phones and for tablets (v1.3 build) with Office 365.
This was the first Outlook for these platforms with email, calendar, and contacts.
On February 4, 2015, Microsoft acquired Sunrise Calendar; on September 13, 2016, Sunrise ceased to operate, and an update was released to Outlook Mobile that contained enhancements to its calendar functions.
Similar to its desktop counterpart, Outlook mobile offers an aggregation of attachments and files stored on cloud storage platforms; a "focused inbox" highlights messages from frequent contacts, and calendar events, files, and locations can be embedded in messages without switching apps. The app supports a number of email platforms and services, including Outlook.com, Microsoft Exchange and Google Workspace (formerly G Suite) among others.
Outlook mobile is designed to consolidate functionality that would normally be found in separate apps on mobile devices, similarly to personal information managers on personal computers. It is designed around four "hubs" for different tasks: "Mail", "Calendar", "Files" and "People". The "People" hub lists frequently and recently used contacts and aggregates recent communications with them, and the "Files" hub aggregates recent attachments from messages and can also integrate with other online storage services such as Dropbox, Google Drive, and OneDrive. To facilitate indexing of content for search and other features, emails and other information are stored on external servers.
Outlook mobile supports a large number of different e-mail services and platforms, including Exchange, iCloud, GMail, Google Workspace (formerly G Suite), Outlook.com, and Yahoo! Mail. The app supports multiple email accounts at once.
Emails are divided into two inboxes: the "Focused" inbox displays messages of high importance, and those from frequent contacts. All other messages are displayed within an "Other" section. Files, locations, and calendar events can be embedded into email messages. Swiping gestures can be used for deleting messages.
Like the desktop Outlook, Outlook mobile allows users to see appointment details, respond to Exchange meeting invites, and schedule meetings. It also incorporates the three-day view and "Interesting Calendars" features from Sunrise.
Files in the Files tab are not stored offline; they require Internet access to view.
Outlook mobile temporarily stores and indexes user data (including email, attachments, calendar information, and contacts), along with login credentials, in a "secure" form on Microsoft Azure servers located in the United States. On Exchange accounts, these servers identify as a single Exchange ActiveSync user in order to fetch e-mail. Additionally, the app does not support mobile device management, nor does it allow administrators to control how third-party cloud storage services are used with the app to interact with their users. Concerns surrounding these security issues have prompted some firms, including the European Parliament, to block the app on their Exchange servers. Microsoft maintains a separate, pre-existing Outlook Web Access app for Android and iOS.
Outlook Groups
Outlook Groups was a mobile application for Windows Phone, Windows 10 Mobile, Android and iOS that could be used with an Office 365 domain Microsoft Account, e.g. a work or school account. It is designed to take existing email threads and turn them into a group-style conversation. The app lets users create groups, mention their contacts, share Office documents via OneDrive and work on them together, and participate in an email conversation. The app also allows the finding and joining of other Outlook Groups. It was tested internally at Microsoft and launched September 18, 2015 for Windows Phone 8.1 and Windows 10 Mobile users.
After its initial launch on Microsoft's own platforms they launched the application for Android and iOS on September 23, 2015.
An update to Outlook Groups on September 30, 2015 introduced a deep-linking feature and fixed a bug that blocked the "send" button from working. In March 2016 Microsoft added the ability to attach multiple images and the most recently used document to group messages, as well as the option to delete conversations within the application.
Outlook Groups was retired by Microsoft on May 1, 2018.
The functionality was replaced by adding the "Groups node" to the folder list within the Outlook mobile app.
Internet standards compliance
HTML rendering
Outlook 2007 was the first version of Outlook to switch from the Internet Explorer rendering engine to Microsoft Word 2007's. This meant that HTML and Cascading Style Sheets (CSS) items not handled by Word were no longer supported. On the other hand, HTML messages composed in Word look as they appeared to the author. This affects the publishing of newsletters and reports, because they frequently use intricate HTML and CSS to form their layout. For example, forms can no longer be embedded in an Outlook email.
Support of CSS properties and HTML attributes
Outlook for Windows has very limited CSS support compared to various other e-mail clients. Neither the CSS1 (1996) nor the CSS2 (1998) specification is fully implemented, and many CSS properties can only be used with certain HTML elements for the desired effect. Some HTML attributes help achieve proper rendering of e-mails in Outlook, but most of these attributes were already deprecated in the HTML 4.0 specification (1997). In order to achieve the best compatibility with Outlook, most HTML e-mails are created using multiple nested tables, as the table element and its sub-elements support the width and height properties in Outlook. No improvements have been made towards a more standards-compliant email client since the release of Outlook 2007.
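To illustrate this table-based approach, the following sketch assembles such a message with Python's standard email.mime modules; the addresses, dimensions and markup are placeholders rather than a recommended template.

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Layout is carried by a table with explicit width/bgcolor attributes, since
# Outlook's Word-based engine ignores much of CSS positioning.
html_body = """\
<table width="600" cellpadding="0" cellspacing="0" border="0">
  <tr><td width="600" bgcolor="#003366">
    <font color="#ffffff" face="Arial" size="4">Monthly newsletter</font>
  </td></tr>
  <tr><td width="600">
    <font face="Arial" size="2">Body text sits in a nested table cell.</font>
  </td></tr>
</table>
"""

msg = MIMEMultipart("alternative")
msg["Subject"] = "Newsletter"
msg["From"] = "sender@example.com"      # placeholder address
msg["To"] = "recipient@example.com"     # placeholder address

# Plain-text fallback first; clients that understand HTML use the last part.
msg.attach(MIMEText("Monthly newsletter\n\nBody text.", "plain"))
msg.attach(MIMEText(html_body, "html"))

with open("newsletter.eml", "wb") as f:
    f.write(msg.as_bytes())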
Transport Neutral Encapsulation Format
Outlook and Exchange Server internally handle messages, appointments, and items as objects in a data model which is derived from the old proprietary Microsoft Mail system, the Rich Text Format from Microsoft Word and the complex OLE general data model. When these programs interface with other protocols such as the various Internet and X.400 protocols, they try to map this internal model onto those protocols in a way that can be reversed if the ultimate recipient is also running Outlook or Exchange.
This focus on the possibility that emails and other items will ultimately be converted back to Microsoft Mail format is so extreme that if Outlook/Exchange cannot figure out a way to encode the complete data in the standard format, it simply encodes the entire message/item in a proprietary binary format called Transport Neutral Encapsulation Format (TNEF) and sends this as an attached file (usually named "winmail.dat") to an otherwise incomplete rendering of the mail/item. If the recipient is Outlook/Exchange it can simply discard the incomplete outer message and use the encapsulated data directly, but if the recipient is any other program, the message received will be incomplete because the data in the TNEF attachment will be of little use without the Microsoft software for which it was created. As a workaround, numerous tools for partially decoding TNEF files exist.
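Such tools typically recognise a TNEF attachment by the signature value at the start of the stream. The following rough sketch illustrates the check in Python; the filename is a placeholder, and a real decoder would go on to parse the attribute stream rather than stop at the signature.

import struct

TNEF_SIGNATURE = 0x223E9F78  # signature at the start of every TNEF stream

def looks_like_tnef(path):
    """Return True if the file begins with the TNEF signature (little-endian)."""
    with open(path, "rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return False
    (value,) = struct.unpack("<I", header)
    return value == TNEF_SIGNATURE

# "winmail.dat" is the usual name of the attachment saved from a message.
print(looks_like_tnef("winmail.dat"))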
Calendar compatibility
Outlook does not fully support data and syncing specifications for calendaring and contacts, such as iCalendar, CalDAV, SyncML, and vCard 3.0. Outlook 2007 claims to be fully iCalendar compliant; however, it does not support all core objects, such as VTODO or VJOURNAL. Also, Outlook supports vCard 2.1 and does not support multiple contacts in the vCard format as a single file. Outlook has also been criticized for having proprietary "Outlook extensions" to these Internet standards.
.msg format
Outlook (both the web version and recent non-web versions) promotes the usage of a proprietary .msg format to save individual emails, instead of the standard .eml format. Messages use .msg by default when saved to disk or forwarded as attachments. Compatibility with past or future Outlook versions is neither documented nor guaranteed; the format has seen over 10 versions released since version 1 in 2008.
The standard .eml format replicates the format of emails as used for transmission and is therefore compatible with any email client which uses the normal protocols. Standard-compliant email clients, like Mozilla Thunderbird, use additional headers to store software-specific information related e.g. to the local storage of the email, while keeping the file plain-text, so that it can be read in any text editor and searched or indexed like any document by any other software.
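Because an .eml file replicates the transmitted message, it can be read with ordinary MIME tooling; for example, a short sketch using Python's standard email package (the filename is a placeholder):

from email import policy
from email.parser import BytesParser

# Read a saved message exactly as it was transmitted.
with open("message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print(msg["From"], msg["Subject"])

# The MIME tree is directly addressable: text parts, attachments, and so on.
for part in msg.walk():
    if part.get_content_type() == "text/plain":
        print(part.get_content())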
Security concerns
As part of its Trustworthy Computing initiative, Microsoft took corrective steps to fix Outlook's reputation in Office Outlook 2003. Among the most publicized security features are that Office Outlook 2003 does not automatically load images in HTML emails or permit opening executable attachments by default, and includes a built-in Junk Mail filter. Service Pack 2 has augmented these features and adds an anti-phishing filter.
Outlook add-ins
Outlook add-ins are small additional programs for the Microsoft Outlook application, mainly intended to add new functional capabilities to Outlook and automate various routine operations. The term also refers to programs where the main function is to work on Outlook files, such as synchronization or backup utilities. Outlook add-ins may be developed in Microsoft Visual Studio or third-party tools such as Add-in Express. Outlook add-ins are not supported in Outlook Web App.
From Outlook 97 on, Exchange Client Extensions are supported in Outlook. Outlook 2000 and later support specific COM components called Outlook Add-Ins. The exact supported features (such as .NET components) for later generations were extended with each release.
SalesforceIQ Inbox for Outlook
In March 2016, Salesforce announced that its relationship intelligence platform, SalesforceIQ, would be able to seamlessly integrate with Outlook. SalesforceIQ works from inside the Outlook inbox providing data from CRM, email, and customer social profiles. It also provides recommendations within the inbox on various aspects like appointment scheduling, contacts, responses, etc.
Hotmail Connector
Microsoft Outlook Hotmail Connector (formerly Microsoft Office Outlook Connector), is a discontinued and defunct free add-in for Microsoft Outlook 2003, 2007 and 2010, intended to integrate Outlook.com (formerly Hotmail) into Microsoft Outlook. It uses DeltaSync, a proprietary Microsoft communications protocol that Hotmail formerly used.
In version 12, access to tasks and notes and online synchronization with MSN Calendar is only available to MSN subscribers of paid premium accounts. Version 12.1, released in December 2008 as an optional upgrade, uses Windows Live Calendar instead of the former MSN Calendar. This meant that calendar features became free for all users, except for task synchronization, which became unavailable. In April 2009, version 12.1 became a required upgrade to continue using the service as part of a migration from MSN Calendar to Windows Live Calendar.
Microsoft Outlook 2013 and later have intrinsic support for accessing Outlook.com and its calendar over the Exchange ActiveSync (EAS) protocol, while older versions of Microsoft Outlook can read and synchronize Outlook.com emails over the IMAP protocol.
Social Connector
Outlook Social Connector was a free add-in for Microsoft Outlook 2003 and 2007 by Microsoft that allowed integration of social networks such as Facebook, LinkedIn and Windows Live Messenger into Microsoft Outlook. It was first introduced on November 18, 2009. Starting with Microsoft Office 2010, Outlook Social Connector is an integral part of Outlook.
CardDAV and CalDAV Connector
Since Microsoft Outlook does not natively support the CalDAV and CardDAV protocols, various third-party software vendors have developed Outlook add-ins to enable users to synchronize with CalDAV and CardDAV servers. CalConnect maintains a list of software that enables users to synchronize their calendars with CalDAV servers and their contacts with CardDAV servers.
Importing from other email clients
Traditionally, Outlook supported importing messages from Outlook Express and Lotus Notes. In addition, Microsoft Outlook supports the POP3 and IMAP protocols, enabling users to import emails from servers that support them. The Microsoft Hotmail Connector add-in (described above) helps import emails from Hotmail accounts. Outlook 2013 later integrated the functionality of this add-in and added the ability to import email (as well as calendars) through the Exchange ActiveSync protocol.
There are several ways to get emails out of Thunderbird. The first is to use a tool that can convert a Thunderbird folder to a format that can be imported via Outlook Express; this must be done folder by folder. Another method is to use a couple of free tools that keep the original folder structure. If Exchange is available, an easier method is to connect the old mail client (Thunderbird) to Exchange using IMAP and upload the original mail from the client to the Exchange account.
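The IMAP upload approach can be sketched roughly with Python's standard imaplib module, as below; the host names, credentials and folder names are placeholders, and a real migration would also need error handling, folder mapping, and preservation of flags and dates.

import imaplib

# Placeholder hosts and credentials.
SRC_HOST, SRC_USER, SRC_PASS = "imap.example.org", "olduser", "secret"
DST_HOST, DST_USER, DST_PASS = "exchange.example.com", "newuser", "secret"

src = imaplib.IMAP4_SSL(SRC_HOST)
src.login(SRC_USER, SRC_PASS)
src.select("INBOX", readonly=True)

dst = imaplib.IMAP4_SSL(DST_HOST)
dst.login(DST_USER, DST_PASS)

# Copy every message in the source INBOX to the destination INBOX.
typ, data = src.search(None, "ALL")
for num in data[0].split():
    typ, msg_data = src.fetch(num, "(RFC822)")
    raw_message = msg_data[0][1]
    dst.append("INBOX", None, None, raw_message)  # flags and date omitted

src.logout()
dst.logout()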
See also
Address book
Calendar (Apple)—iCal
Comparison of email clients
Comparison of feed aggregators
Comparison of office suites
Evolution (software)
Kontact
List of applications with iCalendar support
List of personal information managers
Personal Storage Table (.pst file)
Windows Contacts
References
Notes
Citations
External links
Outlook Developer Portal
1997 software
Calendaring software
Computer-related introductions in 1997
Outlook
Outlook
News aggregator software
Personal information managers
Windows email clients
Android (operating system) software
IOS software
Application Portability Profile
The Application Portability Profile (APP) is a 1990s Open System Environment framework designed by the NIST for use by the U.S. Government. It contains a selected suite of specifications that defines the interfaces, services, protocols, and data formats for a particular class or domain of applications.
The Application Portability Profile offers structure to "integrate US federal, national and international, and other specifications to provide the functionality necessary to accommodate the broad range of US federal information technology requirements."
Overview
In the second half of the 20th century, information systems initially developed as isolated islands of computing. Through progressive changes, these individual systems became connected by common users and common information needs. By the late 20th century, these systems were well on the way to migrating toward computing environments that consist of distributed, heterogeneous, networked applications, databases, and hardware. The concept emerged of a federal computing environment built on an infrastructure defined by open, consensus-based standards, which serve as the de facto means of organizing these systems. The NIST developed such an infrastructure, and named it the Open System Environment (OSE).
An Open System Environment (OSE) encompasses the functionality needed to provide interoperability, portability, and scalability of computerized applications across networks of heterogeneous, multi-vendor hardware/software/communications platforms. The Open System Environment forms an extensible framework that allows services, interfaces, protocols, and supporting data formats to be defined in terms of nonproprietary specifications that evolve through open (public), consensus-based forums.
Complementary to the Open System Environment is the Application Portability Profile standard. This standard covers a broad range of application software domains of interest to many US federal agencies, but it does not include every domain within the U.S. Government's application inventory. The individual standards and specifications in the APP define data formats, interfaces, protocols, or a mix of these elements.
APP topics
APP and the NIST Enterprise Architecture Model
The "Application Portability Profile (APP) - The U.S. Government’s Open System Environment Profile Version 3.0" provides recommendations on a set of industry, Federal, national, international and other specifications that define interfaces, services, protocols, and data formats to support an Open System Environment (OSE).
The APP addresses the lowest architecture in the NIST Enterprise Architecture Model, i.e., the Delivery System Architecture. On this level the hardware of the computer architecture, the software and the communications are being specified. Based on these specification recommendations, various services and agencies have defined detailed technical reference models.
APP service areas
The services defined in the Application Portability Profile fall into the following broad spectrum of service areas:
Operating system services (OS)
Human/computer interface services (HCI)
Data management services (DM)
Data interchange services (DI)
Software engineering services (SWE)
Graphics services (GS)
Network services (NS)
Each of the Application Portability Profile service areas addresses specific components around which interface, data format, or protocol specifications have been or will be defined. Security and management services are common to all of the service areas and pervade these areas in one or more forms.
Applications
In the 1990s, the NIST's Application Portability Profile was applied in several Enterprise Information Architecture frameworks, such as:
Information Architecture framework for the U.S. Patent and Trademark Office (PTO) of the Department of Commerce (DoC), and
Department of Defense (DoD) in its Technical Architecture Framework for Information Management (TAFIM)
Further reading
Department of Defense (1996). Technical Architecture Framework for Information Management. Vol. 2, Technical Reference Model.
Gary Fisher (1993). Application Portability Profile (APP) : The U.S. Government’s Open System Environment Profile OSE/1 Version 2.0. NIST Special Publication 500-210, June 1993.
Joseph Hungate (1995) "Conference Report: Application Portability Profile and Open System Environment Users Forum Gaithersburg, MD May 9–10, 1995" in: Journal of Research of the National Institute of Standards and Technology. Volume 100, Number 6, November–December 1995
IEEE P1003.22 Draft Guide for POSIX Open Systems Environment—A Security Framework
References
Reference models
Enterprise modelling
Workshop on Information Technologies and Systems
The Workshop on Information Technologies and Systems (WITS) is an academic conference for information systems that is held annually in December in conjunction with ICIS (the International Conference on Information Systems). WITS is quantitatively/technically oriented and is primarily attended by business school professors from leading academic research institutions in North America, with growing participation from throughout the world. WITS is incorporated in the state of Georgia (US).
Origins
Richard Wang (MIT) and Sudha Ram (University of Arizona) started WITS in 1991. They co-chaired the first conference held in Boston on December 14–15, 1991. Also involved in the early conferences were Sal March (Vanderbilt University), Prabuddha De (Purdue University), Al Hevner (University of South Florida), Stuart Madnick (MIT), Veda Storey (Georgia State University), Diane Strong (Worcester Polytechnic Institute), Andrew Whinston (University of Texas at Austin), Carson Woo (University of British Columbia), and others.
WITS Presidents
2019-2021: Amit Basu, (Southern Methodist University), USA.
2016-2018: Ram Gopal, (University of Connecticut), USA.
2013-2015: Sumit Sarkar, (University of Texas at Dallas), USA.
2010–2012: Jeffrey Parsons, (Memorial University of Newfoundland), Canada.
2007–2009: Paulo Goes, (University of Arizona), USA.
2004–2006: Carson Woo, (University of British Columbia), Canada.
2001–2003: Sal March,(Vanderbilt University), USA.
1998–2000: Prabuddha De, (Purdue University), USA.
1995–1997: Richard Wang, (Massachusetts Institute of Technology), USA.
WITS Locations and Conference Chairs
WITS 2019: Munich, Germany— Martin Bichler (Technical University of Munich) & Wolfgang Ketter (University of Cologne)
WITS 2018: Santa Clara, California, USA— Kaushik Dutta (University of South Florida) & Zhengrui (Jeffrey) Jiang (Iowa State University)
WITS 2017: Seoul, South Korea— Raghu Santanam (Arizona State University) & Victoria Yoon (Virginia Commonwealth University)
WITS 2016: Dublin, Ireland— Wolfgang Ketter (Erasmus University) & Balaji Padmanabhan (University of South Florida)
WITS 2015: Dallas, USA— Varghese Jacob (University of Texas at Dallas) & Subodha Kumar (Texas A&M University)
WITS 2014: Auckland, New Zealand— Yong Tan (University of Washington) & Arvind Tripathi (University of Auckland)
WITS 2013: Milano, Italy— Raj Sharman (University at Buffalo) & Sandeep Purao (Pennsylvania State University)
WITS 2012: Orlando, Florida — Haldun Aytug (University of Florida) & Jackie Rees Ulmer (Purdue University)
WITS 2011: Shanghai, China – Roger Chiang (University of Cincinnati) & Andrew Gemino (Simon Fraser University)
WITS 2010: St. Louis, Missouri — Erik Rolland & Raymond A. Patterson (University of Calgary)
WITS 2009: Phoenix, Arizona — Huimin Zhao, (University of Wisconsin-Milwaukee) & Vijay Khatri, (Indiana University)
WITS 2008: Paris, France — Ram Gopal (University of Connecticut) & Ram Ramesh (University at Buffalo)
WITS 2007: Montreal, Canada — Kaushal Chari (University of South Florida) & Akhil Kumar (Pennsylvania State University)
WITS 2006: Milwaukee, Wisconsin — Ramesh Venkataraman, (Indiana University) & Atish Sinha
WITS 2005: Las Vegas, Nevada – Kar Yan Tam & J. Leon Zhao
WITS 2004: Washington, DC – Paulo Goes, (University of Arizona) & Amitava Dutta
WITS 2003: Seattle, Washington – Deb Dey (University of Washington) & Ramayya Krishnan (Carnegie Mellon University)
WITS 2002: Barcelona, Spain – Amit Basu (Southern Methodist University) & Soumitra Dutta
WITS 2001: New Orleans, LA — Jeffrey Parsons, (Memorial University of Newfoundland) & Olivia Sheng
WITS 2000: Brisbane, Australia – Paul Bowen and Vijay Mookerjee, (University of Texas at Dallas)
WITS 1999: Charlotte, North Carolina — Sridhar Narasimhan, (Georgia Institute of Technology) & Sumit Sarkar, (University of Texas at Dallas)
WITS 1998: Helsinki, Finland – Janis Bubenko and Sal March,(Vanderbilt University)
WITS 1997: Atlanta, Georgia – Arie Segev & Vijay K. Vaishnavi
WITS 1996: Cleveland, Ohio — Arun Sen (Texas A&M University) & George Ernst
WITS 1995: Amsterdam, The Netherlands – Sudha Ram, (University of Arizona) and Matthias Jarke
WITS 1994: Vancouver, BC, Canada — Prabuddha De, (Purdue University) & Carson Woo
WITS 1993: Orlando, Florida – Al Hevner & Nabil Kamel
WITS 1992: Dallas, Texas — Andrew Whinston (University of Texas at Austin) & Veda Storey
WITS 1991: Cambridge, MA — Richard Wang, (Massachusetts Institute of Technology) & Sudha Ram, (University of Arizona)
References
Information systems conferences
Computer science conferences
Academic conferences
Videotex
Videotex (or interactive videotex) was one of the earliest implementations of an end-user information system. From the late 1970s to early 2010s, it was used to deliver information (usually pages of text) to a user in computer-like format, typically to be displayed on a television or a dumb terminal.
In a strict definition, videotex is any system that provides interactive content and displays it on a video monitor such as a television, typically using modems to send data in both directions. A close relative is teletext, which sends data in one direction only, typically encoded in a television signal. All such systems are occasionally referred to as viewdata. Unlike the modern Internet, traditional videotex services were highly centralized.
Videotex in its broader definition can be used to refer to any such service, including teletext, the Internet, bulletin board systems, online service providers, and even the arrival/departure displays at an airport. This usage is no longer common.
With the exception of Minitel in France, videotex elsewhere never managed to attract any more than a very small percentage of the universal mass market once envisaged. By the end of the 1980s its use was essentially limited to a few niche applications.
Initial development and technologies
United Kingdom
The first attempts at a general-purpose videotex service were created in the United Kingdom in the late 1960s. In about 1970 the BBC had a brainstorming session in which it was decided to start researching ways to send closed captioning information to the audience. As the Teledata research continued, the BBC became interested in using the system for delivering any sort of information, not just closed captioning.
In 1972, the concept was first made public under the new name Ceefax. Meanwhile, the General Post Office (soon to become British Telecom) had been researching a similar concept since the late 1960s, known as Viewdata. Unlike Ceefax which was a one-way service carried in the existing TV signal, Viewdata was a two-way system using telephones. Since the Post Office owned the telephones, this was considered to be an excellent way to drive more customers to use the phones. Not to be outdone by the BBC, they also announced their service, under the name Prestel. ITV soon joined the fray with a Ceefax-clone known as ORACLE.
In 1974 all the services agreed on a standard for displaying the information. The display would be a simple 40×24 grid of text, with some "graphics characters" for constructing simple graphics, revised and finalized in 1976. The standard did not define the delivery system, so both Viewdata-like and Teledata-like services could at least share the TV-side hardware (which at that point in time was quite expensive). The standard also introduced a new term that covered all such services, teletext. Ceefax first started operation in 1977 with a limited 30 pages, followed quickly by ORACLE and then Prestel in 1979.
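As a rough illustration of that display format, a page is simply a fixed 40×24 grid of character cells, so a complete page fits in just under a kilobyte of decoder memory; the sketch below is illustrative and the header text is hypothetical.

ROWS, COLS = 24, 40

# One videotex/teletext page is a fixed grid of character cells, each holding
# either a text character or a mosaic-graphics code.
page = [[" "] * COLS for _ in range(ROWS)]
page[0][:8] = list("WEATHER ")   # hypothetical page header text

print(ROWS * COLS)   # 960 cells, so a raw page occupies just under 1 KB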
By 1981 Prestel International was available in nine countries, and a number of countries, including Sweden, The Netherlands, Finland and West Germany were developing their own national systems closely based on Prestel. General Telephone and Electronics (GTE) acquired an exclusive agency for the system for North America.
In the early 1980s videotex became the base technology for the London Stock Exchange's pricing service called TOPIC. Later versions of TOPIC, notably TOPIC2 and TOPIC3, were developed by Thanos Vassilakis and introduced trading and historic price feeds.
France
Development of a French teletext-like system began in 1973. A very simple two-way videotex system called Tictac was also demonstrated in the mid-1970s. As in the UK, this led on to work to develop a common display standard for videotex and teletext, called Antiope, which was finalised in 1977. Antiope had similar capabilities to the UK system for displaying alphanumeric text and chunky "mosaic" character-based block graphics. A difference, however, was that while in the UK standard control codes automatically also occupied one character position on screen, Antiope allowed for "non-spacing" control codes. This gave Antiope slightly more flexibility in the use of colours in mosaic block graphics, and in presenting the accents and diacritics of the French language.
Meanwhile, spurred on by the 1978 Nora/Minc report, the French government was determined to catch up on a perceived lag in its computer and communications facilities. In 1980 it began field trials, issuing Antiope-based terminals for free to over 250,000 telephone subscribers in the Ille-et-Vilaine region, where the French CCETT research centre was based, for use as telephone directories. The trial was a success, and in 1982 Minitel was rolled out nationwide.
Canada
Since 1970 researchers at the Communications Research Centre (CRC) in Ottawa had been working on a set of "picture description instructions", which encoded graphics commands as a text stream. Graphics were encoded as a series of instructions (graphics primitives) each represented by a single ASCII character. Graphic coordinates were encoded in multiple 6 bit strings of XY coordinate data, flagged to place them in the printable ASCII range so that they could be transmitted with conventional text transmission techniques. ASCII SI/SO characters were used to differentiate the text from graphic portions of a transmitted "page". In 1975, the CRC gave a contract to Norpak to develop an interactive graphics terminal that could decode the instructions and display them on a colour display, which was successfully up and running by 1977.
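The coordinate packing can be illustrated with a rough sketch in Python: each value is split into 6-bit groups that are offset into a transmittable ASCII range. The 0x40 offset, the two-chunk packing, and the "P" primitive are illustrative assumptions rather than the exact Telidon wire format.

def encode_coordinate(value, chunks=2):
    """Pack an unsigned coordinate into 6-bit groups, each offset into a
    transmittable ASCII range (the 0x40 offset here is illustrative)."""
    out = []
    for shift in range((chunks - 1) * 6, -1, -6):
        six_bits = (value >> shift) & 0x3F
        out.append(chr(0x40 + six_bits))
    return "".join(out)

# A hypothetical "draw point" primitive followed by packed X and Y data.
x, y = 123, 456
fragment = "P" + encode_coordinate(x) + encode_coordinate(y)
print(fragment)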
Against the background of the developments in Europe, CRC was able to persuade the Canadian government to develop the system into a fully-fledged service. In August 1978 the Canadian Department of Communications publicly launched it as Telidon, a "second generation" videotex/teletext service, and committed to a four-year development plan to encourage rollout. Compared to the European systems, Telidon offered real graphics, as opposed to block-mosaic character graphics. The downside was that it required much more advanced decoders, typically featuring Zilog Z80 or Motorola 6809 processors.
Japan
Research in Japan was shaped by the demands of the large number of Kanji characters used in Japanese script. With 1970s technology, the ability to generate of so many characters on demand in the end-user's terminal was seen as prohibitive. Instead, development focussed on methods to send pages to user terminals pre-rendered, using coding strategies similar to facsimile machines. This led to a videotex system called Captain ("Character and Pattern Telephone Access Information Network"), created by NTT in 1978, which went into full trials from 1979 to 1981. The system also lent itself naturally to photographic images, albeit at only moderate resolution. However, the pages typically took two or three times longer to load, compared to the European systems. NHK developed an experimental teletext system along similar lines, called CIBS ("Character Information Broadcasting Station"). Based on a 388×200 pixel resolution, it was first announced in 1976, and began trials in late 1978. (NHK's ultimate production teletext system launched in 1983).
Standards
Work to establish an international standard for videotex began in 1978 in CCITT. But the national delegations showed little interest in compromise, each hoping that their system would come to define what was expected to become an enormous new mass market. In 1980 CCITT therefore issued recommendation S.100 (later T.100), noting the points of similarity but the essential incompatibility of the systems, and declaring all four to be recognised options.
Trying to kick-start the market, AT&T Corporation entered the fray, and in May 1981 announced its own Presentation Layer Protocol (PLP). This was closely based on the Canadian Telidon system, but added to it some further graphics primitives and a syntax for defining macros, algorithms to define cleaner pixel spacing for the (arbitrarily sizeable) text, and also dynamically redefinable characters and a mosaic block graphic character set, so that it could reproduce content from the French Antiope. After some further revisions this was adopted in 1983 as ANSI standard X3.110, more commonly called NAPLPS, the North American Presentation Layer Protocol Syntax. It was also adopted in 1988 as the presentation-layer syntax for NABTS, the North American Broadcast Teletext Specification.
Meanwhile, the European national Postal Telephone and Telegraph (PTT) agencies were also increasingly interested in videotex, and had convened discussions in European Conference of Postal and Telecommunications Administrations (CEPT) to co-ordinate developments, which had been diverging along national lines. As well as the British and French standards, the Swedes had proposed extending the British Prestel standard with a new set of smoother mosaic graphics characters; while the specification for the proposed German Bildschirmtext (BTX) system, developed under contract by IBM Germany for Deutsche Bundespost, was growing increasingly baroque. Originally conceived to follow the UK Prestel system, it had accreted elements from all the other European standards and more. This became the basis for setting out the CEPT recommendation T/CD 06-01, also proposed in May 1981.
However, due to national pressure, CEPT stopped short of fixing a single standard, and instead recognised four "profiles":
CEPT1, corresponding to the German BTX;
CEPT2, the French Minitel;
CEPT3, the British Prestel;
CEPT4, the Swedish Prestel Plus.
National videotex services were encouraged to follow one of the existing four basic profiles; or if they extended them, to do so in ways compatible with a "harmonised enhanced" specification.
There was talk of upgrading Prestel to the full CEPT standard "within a couple of years". But in the event, it never happened. The German BTX eventually established CEPT1; the French Minitel continued with CEPT2, which was ready to roll out; and the British stayed with CEPT3, by now too established to break compatibility.
The other countries of Europe adopted a patchwork of the different profiles.
In later years CEPT fixed a number of standards for extension levels to the basic service: for photographic images (based on JPEG; T/TE 06-01, later revisions), for alpha-geometric graphics, similar to NAPLPS/Telidon (T/TE 06-02), for transferring larger data files and software (T/TE 06-03), for active terminal-side capabilities and scripting (T/TE 06-04), and for discovery of terminal capabilities (T/TE 06-05). But interest in them was limited.
CCITT T.101
Character set
ISO-IR registered character sets for Videotex use include variants of T.51, semigraphic mosaic sets, specialised C0 control codes, and four sets of specialised C1 control codes.
Uptake
UK
Prestel was somewhat popular for a time, but never gained anywhere near the popularity of Ceefax. This may have been due primarily to the relatively low penetration of suitable hardware in British homes: the user had to pay for the terminal (today referred to as a set-top box), a monthly charge for the service, and phone bills on top of that (unlike in the US, local calls were charged for in most of Europe at that time). In the late 1980s the system was re-focused as a provider of financial data, and eventually bought out by the Financial Times in 1994. It continues today in name only, as FT's information service. A closed access videotex system based on the Prestel model was developed by the travel industry, and continues to be almost universally used by travel agents throughout the country.
Using a prototype domestic television equipped with the Prestel chip set, Michael Aldrich of Redifon Computers Ltd demonstrated real-time transaction processing, or online shopping as it is now called, in 1979. From 1980 onwards he designed, sold and installed systems with major UK companies, including the world's first travel industry system, the world's first vehicle locator system for one of the world's largest auto manufacturers and the world's first supermarket system. He wrote a book about his ideas and systems which among other topics explored a future of teleshopping and teleworking that has proven to be prophetic. Before the IBM PC, Microsoft MS-DOS and the Internet or World Wide Web, he invented, manufactured and sold the 'Teleputer', a PC that communicated using its Prestel chip set.
The Teleputer was a range of computers that were suffixed with a number. Only the Teleputer 1 and Teleputer 3 were manufactured and sold. The Teleputer 1 was a very simple device and only worked as a teletex terminal, whereas the Teleputer 3 was a Z80-based microcomputer. It ran with a pair of single-sided 5-inch floppy disk drives; a 20 MB hard disk drive version was available towards the end of the product's life. The operating system was CP/M or a proprietary variant, CP*, and the unit was supplied with a suite of applications consisting of a word processor, spreadsheet, database and a semi-compiled BASIC programming language. The display supplied with the unit (both the Teleputer 1 and 3) was a modified Rediffusion 14-inch portable colour television, with the tuner circuitry removed, driven by an RGB input. The unit had 64 KB of onboard memory which could be expanded to 128 KB with a plug-in card. Graphics were the standard videotex (or teletext) resolution and colour, but a high-resolution graphics card was also available. A 75/1200 baud modem was fitted as standard (it could also run at 300/300 and 1200/1200) and connected to the telephone via an old-style round telephone connector. In addition an IEEE interface card could be fitted. On the back of the unit were RS-232 and Centronics connections, and on the front was the connector for the keyboard.
The proposed Teleputer 4 and 5 were planned to have a laser disc attached, which would have allowed the units to control video output on a separate screen.
Spain
In Spain the system was provided by the Telefonica company and called Ibertex. It was adopted from the French Minitel model, but used the German CEPT-1 standard of the German Bildschirmtext.
Canada
In Canada the Department of Communications started a lengthy development program in the late 1970s that led to a graphical "second generation" service known as Telidon. Telidon was able to deliver service using the vertical blanking interval of a TV signal or completely by telephone using a Bell 202 style (split baud rate 150/1200) modem. The TV signal was used in a similar fashion to Ceefax, but used more of the available signal (due to differences in the signals between North America and Europe) for a data rate about 1200-bit/s. Some TV signal systems used a low-speed modem on the phone line for menu operation. The resulting system was rolled out in several test studies, all of which were failures.
The use of the Bell 202 modem, rather than one compatible with the existing DATAPAC dial-up points such as the Bell 212, created severe limitations, as it made the nationwide X.25 packet network essentially out-of-bounds for Telidon-based services. There were also many widely held misperceptions concerning the graphics resolution and colour resolution that slowed business acceptance. Byte magazine once described it as "low resolution", when the coding system was, in fact, capable of 2^24 resolution in 8-byte mode. There was also a pronounced emphasis in government and Telco circles on "hardware decoding", even after very capable PC-based software decoders became readily available. This emphasis on special single-purpose hardware was yet another impediment to the widespread adoption of the system.
Services included:
Grassroots Canada by InfoMart, Toronto
Teleguide, A kiosk-based service emphasizing tourist information in Toronto by InfoMart, and in San Francisco by The Chronicle, in Phoenix by The Arizona Republic and in Las Vegas by The Las Vegas Sun.
NAPLPS-based systems (Teleguide) were also used for an interactive Mall directory system in various locations, including the world's largest indoor mall, West Edmonton Mall (1985) and the Toronto Eaton Center. It was also used for an interactive multipoint audio-graphic educational teleconferencing system (1987) that predated today's shared interactive whiteboard systems such as those used by Blackboard and Desire2Learn.
United States
One of the earliest experiments with marketing videotex to consumers in the U.S. was by Radio Shack, which sold a consumer videotex terminal, essentially a single-purpose predecessor to the TRS-80 Color Computer, in outlets across the country. Sales were anemic. Radio Shack later sold a videotex software and hardware package for the Color Computer.
In an attempt to capitalize on the European experience, a number of US-based media firms started their own videotex systems in the early 1980s. Among them were Knight-Ridder, the Los Angeles Times, and Field Enterprises in Chicago, which launched Keyfax. The Fort Worth Star-Telegram partnered with Radio Shack to launch StarText (Radio Shack is headquartered in Fort Worth).
Unlike the UK, however, the FCC refused to set a single technical standard, so each provider could choose what it wished. Some selected Telidon (now standardized as NAPLPS) but the majority decided to use slightly modified versions of the Prestel hardware. StarText used proprietary software developed at the Star-Telegram. Rolled out across the country from 1982 to 1984, all of the services quickly died; none except StarText remained in operation more than two years after its launch. StarText remained in operation until the late 1990s, when it was moved to the web.
The primary problem was that the systems were simply too slow, operating on 300 baud modems connected to large minicomputers. After waiting several seconds for the data to be sent, users then had to scroll up and down to view the articles. Searching and indexing were not provided, so users often had to download long lists of titles before they could download the article itself. Furthermore, most of the same information was available in easy-to-use TV format on the air, or in general reference books at the local library, and didn't tie up a landline. Unlike the Ceefax system, where the signal was available for free on every TV, many U.S. systems cost hundreds of dollars to install, plus monthly fees of $30 or more.
The most successful online services of the period were not videotex services at all. Despite the promises that videotex would appeal to the mass market, the videotex services were comfortably out-distanced by Dow Jones News/Retrieval (begun in 1973), CompuServe and (somewhat further behind) The Source, both begun in 1979. None were videotex services, nor did they use the fixed frame-by-frame videotex model for content. Instead all three used search functions and text interfaces to deliver files that were for the most part plain ASCII. Other ASCII-based services that became popular included Delphi (launched in 1983) and GEnie (launched in 1985).
Nevertheless, NAPLPS-based services were developed by several other joint partnerships between 1983 and 1987. These included:
Viewtron, a joint venture of Knight-Ridder and AT&T
Gateway, A service in Southern California by a joint venture of Times Mirror and InfoMart of Canada
Keyfax, A service in Chicago by Field Enterprises and Centel
Covidea, based in New York, set up by AT&T and Chemical Bank, with Time Inc. and Bank of America
A joint venture of AT&T-CBS completed a moderately successful trial of videotex use in the homes of Ridgewood, New Jersey, leveraging technology developed at Bell Labs. After the trial in Ridgewood AT&T and CBS parted company. Subsequently, CBS partnered with IBM and Sears, Roebuck, and Company to form Trintex. Around 1985, this entity began to offer a service called Prodigy, which used NAPLPS to send information to its users, right up until it turned into an Internet service provider in the late 1990s. Because of its relatively late debut, Prodigy was able to skip the intermediate step of persuading American consumers to attach proprietary boxes to their televisions; it was among the earliest proponents of computer-based videotex.
Videotex technology was also adopted for use internally within organizations. Digital Equipment Corp (DEC) offered a videotex product (VTX) on the VAX system. Goldman Sachs, for one, adopted and developed an internal fixed income information distribution and bond sales system based on DEC VTX. Internal systems were overtaken by external vendors, notably Bloomberg, which offered the additional benefit of providing information from different firms and allowing interactive communication between the firms.
One of the earliest corporations to participate in videotex in the United States was American Express. Its service, branded "American Express ADVANCE" included card account info, travel booking, stock prices from Shearson Lehman, and even online shopping, through its Merchandise Services division.
Australia
Australia's national public Videotex service, Viatel, was launched by Telecom Australia on 28 February 1985. It was based on the British Prestel service. The service was later renamed Discovery 40, in reference to its 40 column screen format, as well as to distinguish it from another Telecom service, Discovery 80. The Viatel system had a very rapid take up in its first year due to the efforts of GEC Manager Terry Crews and his pioneering work on home banking for the Commonwealth Bank.
New Zealand
A private service known as TAARIS (Travel Agents Association Reservation and Information Service) was launched in New Zealand in 1985 for the Travel Agents Association of New Zealand by ICL Computers. This service used ICL's proprietary "Bulletin" software, which was based on the Prestel standard but provided many additional facilities, such as the ability to run additional software for specific applications. It also supported a proprietary email service.
Netherlands
In the Netherlands the then state-owned phone company PTT (now KPN) operated two platforms: Viditel (launched in 1980) and Videotex Nederland. From the user perspective the main difference between these systems was that Viditel used standard dial-in phone numbers whereas Videotex used premium-rate telephone numbers. For Viditel you needed a (paid) subscription and on top of that you paid for each page you visited. For Videotex services you normally didn't need a subscription, nor was there any need to authenticate: you paid for the services via the premium rate of the modem connection, based on connection time, regardless of the pages or services you retrieved.
From the information-provider point of view there were huge differences between Viditel and Videotex: Via Viditel all data was normally stored on the central computer(s) owned and managed by KPN: to update the information in the system you connected to the Viditel computer and via a terminal-emulation application you could edit the information.
But when using Videotex, the information was on a computer platform owned and managed by the information provider. The Videotex system connected the end-user to the Datanet 1 line of the information provider. It was up to the information provider whether the access point (the box directly behind the telephone line) supported the videotex protocol or whether it was a transparent connection where the host handled the protocol.
As mentioned, the Videotex Nederland services offered access via several premium-rate numbers, and the information or service provider could choose the cost of accessing its service. Depending on the number used, the tariff could vary from ƒ 0,00 to ƒ 1,00 Dutch guilders (between €0.00 and €0.45) per minute.
Besides these publicly available services, generally without authentication, there were also several private services using the same infrastructure but using their own access phone numbers and dedicated access points. As these services weren't public, you had to log into the infrastructure. The largest private networks were Travelnet, an information and booking system for the travel industry, and RDWNet, which was set up for the automobile trade to report the outcome of MOT tests to the agency that officially issued the test report. Later some additional services for the branch were added, such as a service where odometer readings could be registered each time a car was brought in for service. This was part of the Nationale Autopas Service and is now available via the internet.
The Videotex Nederland network also offered direct access to most services of the French Minitel system.
Ireland
A version of the French Minitel system was introduced to Ireland by eircom (then called Telecom Éireann) in 1988. The system was based on the French model and Irish services were even accessible from France via the code "3619 Irlande."
A number of major Irish businesses came together to offer a range of online services, including directory information, shopping, banking, hotel reservations, airline reservations, news, weather and information services. It wasn't a centralised service and individual service providers could connect to it via the Eirpac packet switching network. It could also connect to databases on other networks such as French Minitel services, European databases and university systems.
The system was also the first platform in Ireland to offer users access to e-mail outside of a corporate setting. Despite being cutting edge for its time, the system failed to capture a large market and was ultimately withdrawn due to lack of commercial interest. The rise of the internet and other global online services in the early to mid-1990s was a major factor in the death of Irish Minitel. The service eventually ended by the end of the 1990s.
Minitel Ireland's terminals were technically identical to their French counterparts, except that they had a QWERTY keyboard and an RJ-11 telephone jack, the standard telephone connector in Ireland. Terminals could be rented for 5.00 Irish pounds (6.35 euro) per month or purchased for 250.00 Irish pounds (317.43 euro) in 1992.
Minitel
With the French Minitel system, unlike any other service, the users were given an entire custom designed terminal for free. This was a deliberate move on the part of France Telecom, which reasoned that it would be cheaper in the long run to give away free terminals and teach its customers how to look up telephone listings on the terminal, instead of continuing to print and ship millions of phone books each year.
Once the network was in place, commercial services started to sprout up, becoming very popular in the mid-1980s. By 1990 tens of millions of terminals were in use. Like Prestel, Minitel used an asymmetric modem (1200-bit/s for downloading information to the terminal and 75-bit/s back).
Alex
Bell Canada introduced Minitel to Quebec as Alex in 1988, and Ontario two years later. It was available both as a standalone CRT terminal (very similar in design to the ADM-3A) with 1200-bit/s modem, and as software-only for MS-DOS computers. The system was received enthusiastically thanks to a free two-month introductory period, but fizzled within two years. Online fees were very high, and the useful services such as home banking, restaurant reservations, and news feeds, that Bell Canada advertised did not materialise; within a very short time the majority of content on Alex was of poor quality or very expensive chat lines. The Alex terminals did double duty for connecting to text-only BBSes.
Minitel in Brazil
A very successful system was started in São Paulo, Brazil, by the then state-owned Telesp (Telecomunicações de São Paulo). It was called Videotexto and operated from 1982 to the mid-nineties; a few other state telephone companies followed Telesp's lead, but each state kept standalone databases and services. The key to its success was that the phone company offered only the service and phone subscriber databases, while third parties—banks, database providers, newspapers—offered additional content and services. The system peaked at about 70,000 subscribers around 1995.
South Africa
Beltel was launched by Telkom in the mid-eighties and continued until 1999.
Comparison to the Internet today
While some people consider videotex to be the precursor of the Internet, the two technologies evolved separately and reflect fundamentally different assumptions about how to computerize communications.
The Internet in its mature form (after 1990) is highly decentralized in that it is essentially a federation of thousands of service providers whose mutual cooperation makes everything run, more or less. Furthermore, the various hardware and software components of the Internet are designed, manufactured and supported by thousands of different companies. Thus, completing any given task on the Internet, such as retrieving a webpage, relies on the contributions of hundreds of people at a hundred or more distinct companies, each of which may have only very tenuous connections with each other.
In contrast, videotex was always highly centralized (except the French Minitel service, which also included thousands of information providers running their own servers connected to the packet-switched network "TRANSPAC"). Even in videotex networks where third-party companies could post their own content and operate special services like forums, a single company usually owned and operated the underlying communications network, developed and deployed the necessary hardware and software, and billed both content providers and users for access. The exception was the transaction processing videotex system developed in the UK by Michael Aldrich in 1979, which brought teleshopping (or online shopping as it was later called) into prominence, an idea later developed through the Internet. Aldrich's systems were based on minicomputers that could communicate with multiple mainframes. Many systems were installed in the UK, including the world's first supermarket teleshopping system.
Nearly all books and articles (in English) from videotex's heyday (the late 1970s and early 1980s) seem to reflect a common assumption that in any given videotex system, there would be a single company that would build and operate the network. Although this appears shortsighted in retrospect, it is important to realize that communications had been perceived as a natural monopoly for almost a century — indeed, in much of the world, telephone networks were then and still are explicitly operated as a government monopoly. The Internet as we know it today was still in its infancy in the 1970s, and was mainly operated on telephone lines owned by AT&T which were leased by ARPA. At the time, AT&T did not take seriously the threat posed by packet switching; it actually turned down the opportunity to take over ARPANET. Other computer networks at the time were not really decentralized; for example, the private network Tymnet had central control computers called supervisors which controlled each other in an automatically determined hierarchy. It would take another decade of hard work to transform the Internet from an academic toy into the basis for a modern information utility.
Definitions
Definitions of Videotex and associated terms. These definitions were written in 1980 so some names may be out of date.
Videotex: A two-way interactive service. The term was coined by CCITT and emphasizes information retrieved with the capability of displaying pages of text and pictorial material on the screens of adapted TVs.
Viewdata: An alternative term to videotex, used in particular by the British Post Office and generally in Britain and the USA. Elsewhere, the term videotex is preferred. Viewdata was coined by the BPO in the early 1970s, but found to be unacceptable as a trade name, hence its use as a generic.
Teletext: One-way broadcast information services for displaying pages of text and pictorial material on the screens of adapted TVs. A limited choice of information pages is continuously cycled at the broadcasting station. By means of a keypad, a user can select one page at a time for display from the cycle. The information is transmitted in digital form usually using spare capacity in the broadcast TV signal. Careful design can ensure that there is no interference with the normal TV picture. Alternatively, it can use the full capacity of a dedicated channel. Compared with two-way videotex, teletext is inherently more limited, though generally less costly.
Teletex: A text communication standard for communicating word processors and similar terminals combining the facilities of office typewriters and text editing.
Ceefax ("See facts"): The BBC's name for its public teletext service available on two TV channels using spare capacity.
Oracle ("Optional recognition of coded line electronics"): The name of the IBA's equivalent teletext service.
Bildschirmtext, DataVision, Bulletin, Captain, Teletel, Prestel, Viewtron, etc.: The proprietary names for specific videotex implementations.
See also
Online service provider
Nabu Network—the Nabu Network was not a videotex system, but it was an early data communications service which was centrally run by the Canadian cable industry.
References
Further reading
Michael Aldrich (1982), Videotex: Key to the Wired City, London: Quiller Press.
Claire Ancelin and Marie Marchand, eds. (1984), Le Vidéotex: Contribution aux débats sur la télématique. Paris: Masson.
Beth Krevitt-Eres et al., for UNESCO (1986), A decision-makers' guide to videotex and teletext. Includes a useful country-by-country survey of videotex systems operating at that date.
Bernard Marti, ed. (1990), Télématique, techniques, normes, services. Paris: Dunod
Thomas P. Caruso and Mark R. Harsch, with Edward B. Roberts, thesis supervisor (1984), Joint Ventures in the Cable and Videotex Industry, Masters Thesis, MIT Sloan School of Management.
Susanne K. Schmidt and Raymund Werle (1998) Interactive Videotex, in Coordinating technology: studies in the international standardization of telecommunications, MIT Press, pp. 147–184
External links
David Carlson, The Online Timeline: 1980s, University of Florida
Web 0.1 - Before the Internet, there was videotex. - MIT Technology Review
Computer graphics
History of the Internet
Legacy systems
High availability software
High availability software is software used to ensure that systems are running and available most of the time. High availability refers to a high percentage of time that the system is functioning; it can be formally defined as (1 − (downtime / total time)) × 100%. Although the minimum required availability varies by task, systems typically attempt to achieve 99.999% ("five nines") availability. This characteristic is weaker than fault tolerance, which typically seeks to provide 100% availability, albeit with significant price and performance penalties.
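To make the arithmetic concrete, the following minimal Python sketch computes availability from observed downtime and the downtime budget implied by a given number of nines; the function names are illustrative and not taken from any particular HA product.

```python
def availability_percent(downtime_hours: float, total_hours: float) -> float:
    """Availability as a percentage: (1 - downtime/total) * 100."""
    return (1.0 - downtime_hours / total_hours) * 100.0


def allowed_downtime_hours_per_year(nines: int) -> float:
    """Maximum downtime per year (in hours) for a given number of nines."""
    availability_fraction = 1.0 - 10.0 ** (-nines)   # e.g. 5 nines -> 0.99999
    hours_per_year = 365.25 * 24
    return hours_per_year * (1.0 - availability_fraction)


# A system down for 8 hours in a 30-day month reaches only ~98.9% availability:
print(round(availability_percent(8, 30 * 24), 3))

# Five nines allows only about 5.3 minutes of downtime per year:
print(round(allowed_downtime_hours_per_year(5) * 60, 2))
```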
High availability software is measured by its performance when a subsystem fails, its ability to resume service in a state close to the state of the system at the time of the original failure, and its ability to perform other service-affecting tasks (such as software upgrades or configuration changes) in a manner that eliminates or minimizes down time. All faults that affect availability – hardware, software, and configuration – need to be addressed by High Availability Software to maximize availability.
Features
Typical high availability software provides features that:
Enable hardware and software redundancy:
These features include:
The discovery of hardware and software entities,
The assignment of active/standby roles to these entities,
Detection of failed components,
Notification to redundant components that they should become active, and
The ability to scale the system.
A service is not available if it cannot service all the requests being placed on it. The "scale-out" property of a system refers to the ability to create multiple copies of a subsystem to address increasing demand, and to efficiently distribute incoming work to these copies (load balancing), preferably without shutting down the system. High availability software should enable scale-out without interrupting service, as sketched below.
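As a rough illustration of distributing incoming work across scaled-out copies, here is a toy round-robin dispatcher in Python; the replica names are hypothetical, and real HA middleware would additionally health-check replicas and support adding or removing copies in service.

```python
import itertools


class RoundRobinBalancer:
    """Toy load balancer: hands each request to the next replica in turn."""

    def __init__(self, replicas):
        self.replicas = list(replicas)
        self._cycle = itertools.cycle(self.replicas)

    def dispatch(self, request):
        replica = next(self._cycle)     # pick the next copy in round-robin order
        return replica, request


balancer = RoundRobinBalancer(["worker-a", "worker-b", "worker-c"])
for i in range(5):
    print(balancer.dispatch(f"request-{i}"))
```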
Enable active/standby communication (notably Checkpointing):
Active subsystems need to communicate with standby subsystems to ensure that the standby is ready to take over where the active left off. High availability software can provide communications abstractions like redundant message and event queues to help active subsystems in this task. Additionally, an important concept called "checkpointing" is characteristic of highly available software. In a checkpointed system, the active subsystem identifies all of its critical state and periodically updates the standby with any changes to this state. This idea is commonly abstracted as a distributed hash table – the active writes key/value records into the table and both the active and standby subsystems read from it. Unlike a "cloud" distributed hash table (Chord (peer-to-peer), Kademlia, etc.), a checkpoint is fully replicated. That is, all records in the "checkpoint" hash table are readable so long as one copy is running. Another technique, called an application checkpoint, periodically saves the entire state of a program.
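A minimal sketch of the fully replicated checkpoint idea follows, using in-process Python dictionaries as stand-ins for the active and standby copies; a real implementation would replicate over the network, batch updates, and cope with partial failures.

```python
class Checkpoint:
    """Toy fully replicated checkpoint: every write is pushed to all copies,
    so any surviving copy can be read after the active fails."""

    def __init__(self):
        self.replicas = [{}]                    # index 0 plays the "active" role

    def add_standby(self) -> int:
        replica = dict(self.replicas[0])        # new standby starts from a full copy
        self.replicas.append(replica)
        return len(self.replicas) - 1

    def write(self, key, value):
        for replica in self.replicas:           # replicate the record to every copy
            replica[key] = value

    def read(self, key, replica_index: int = 0):
        return self.replicas[replica_index][key]


ckpt = Checkpoint()
standby = ckpt.add_standby()
ckpt.write("next_sequence_number", 42)               # active records its critical state
print(ckpt.read("next_sequence_number", standby))    # standby can resume from 42
```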
Enable in-service upgrades:
In Service Software Upgrade is the ability to upgrade software without degrading service. It is typically implemented in redundant systems by executing what is called a “rolling” upgrade—upgrading the standby while the active provides service, failing over, and then upgrading the old active. Another important feature is the ability to rapidly fall back to an older version of the software and configuration if the new version fails.
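The rolling-upgrade ordering described above can be sketched as follows; the node dictionaries and helper functions are stand-ins for whatever a real HA framework provides, and the fall-back path is shown because it matters as much as the upgrade itself.

```python
def upgrade(node, version):
    node["version"] = version                   # placeholder for a real software upgrade


def fail_over(active, standby):
    active["role"], standby["role"] = "standby", "active"


def healthy(node) -> bool:
    return True                                 # placeholder health check


def rolling_upgrade(active, standby, new_version):
    """Upgrade the standby first, fail over, then upgrade the old active."""
    old_version = active["version"]
    upgrade(standby, new_version)
    if not healthy(standby):
        upgrade(standby, old_version)           # fall back before touching the active
        return False
    fail_over(active, standby)                  # the upgraded standby now serves traffic
    upgrade(active, new_version)                # finally upgrade the old active
    return healthy(active)


active = {"role": "active", "version": "1.0"}
standby = {"role": "standby", "version": "1.0"}
print(rolling_upgrade(active, standby, "2.0"), active, standby)
```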
Minimize standby latency and ensure standby correctness:
Standby latency is defined as the time between when a standby is told to become active and when it is actually providing service. "Hot" standby systems are those that actively update internal state in response to active system checkpoints, resulting in millisecond down times. "Cold" standby systems are offline until the active fails and typically restart from a "baseline" state. For example, many cloud solutions will restart a virtual machine on another physical machine if the underlying physical machine fails. "Cold" failover standby latency can range from 30+ seconds to several minutes. Finally, "warm" standby is an informal term encompassing all systems that are running yet must do some internal processing before becoming active. For example, a warm standby system might be handling low priority jobs – when the active fails, it aborts these jobs and reads the active's checkpointed state before resuming service. Warm standby latencies depend on how much data is checkpointed, but are typically a few seconds.
System architecture
High availability software can help engineers create complex system architectures that are designed to minimize the scope of failures and to handle specific failure modes. A "normal" failure is defined as one which can be handled by the software architecture, while a "catastrophic" failure is defined as one which is not handled. A catastrophic failure therefore causes a service outage. However, the software can still greatly increase availability by automatically returning to an in-service state as soon as the catastrophic failure is remedied.
The simplest configuration (or “redundancy model”) is 1 active, 1 standby, or 1+1. Another common configuration is N+1 (N active, 1 standby), which reduces total system cost by having fewer standby subsystems. Some systems use an all-active model, which has the advantage that “standby” subsystems are being constantly validated.
Configurations can also be defined with active, hot standby, and cold standby (or idle) subsystems, extending the traditional “active+standby” nomenclature to “active+standby+idle” (e.g. 5+1+1). Typically, “cold standby” or “idle” subsystems are active for lower priority work. Sometimes these systems are located far away from their redundant pair in a strategy called geographic redundancy. This architecture seeks to avoid loss of service from physically-local events (fire, flood, earthquake) by separating redundant machines.
Sophisticated policies can be specified by high availability software to differentiate software from hardware faults, and to attempt time-delayed restarts of individual software processes, entire software stacks, or entire systems.
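The escalation idea behind such policies (retry the least disruptive recovery action first and widen the scope only if the component stays unhealthy) can be sketched as follows; the action names and health check are illustrative, not taken from any particular product.

```python
import time

# Ordered from least to most disruptive; the names are illustrative only.
ESCALATION_LEVELS = ["restart_process", "restart_software_stack", "restart_system"]


def recover(perform, is_healthy, attempts_per_level=2, delay_seconds=0.0):
    """Try each recovery action in turn until the component reports healthy."""
    for action in ESCALATION_LEVELS:
        for _ in range(attempts_per_level):
            perform(action)
            if delay_seconds:
                time.sleep(delay_seconds)       # time-delayed restart
            if is_healthy():
                return action                   # report which level fixed it
    return None                                 # catastrophic: needs manual intervention


# Toy usage: the fault clears only after the whole software stack is restarted.
state = {"healthy": False}

def perform(action):
    if action == "restart_software_stack":
        state["healthy"] = True

def is_healthy():
    return state["healthy"]

print(recover(perform, is_healthy))             # -> "restart_software_stack"
```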
Use in industry
In the past 20 years telecommunication networks and other complex software systems have become essential parts of business and recreational activities.
“At the same time [as the economy is in a downturn], 60% almost -- that's six out of 10 businesses -- require 99.999. That's four nines or five nines of availability and uptime for their mission-critical line-of-business applications.
And 9% of the respondents, so that's almost one out of 10 companies, say that they need greater than five nines of uptime. So what that means is, no downtime. In other words, you have got to really have bulletproof, bombproof applications and hardware systems. So you know, what do you use? Well one thing you have high-availability clusters or you have the more expensive and more complex fault-tolerance servers.”
Telecommunications: High Availability Software is an essential component of telecommunications equipment, since a network outage can result in significant loss of revenue for telecom providers, and telephone access to emergency services is an important public safety issue.
Defense/Military: Recently, High Availability Software has found its way into defense projects as an inexpensive way to provide availability for manned and unmanned vehicles.
Space: High Availability Software has been proposed as a way to use non-radiation-hardened equipment in space environments. Radiation-hardened electronics are significantly more expensive and offer lower performance than off-the-shelf equipment, but High Availability Software running on a single rad-hardened controller, or a pair of them, can manage many redundant high-performance non-rad-hard computers, potentially failing over and resetting them in the event of a fault.
Use in the cloud
Typical cloud services provide a set of networked computers (typically virtual machines) running a standard server OS such as Linux. Computers can often communicate with other instances within the same data center for free (tenant network) and with outside computers for a fee. The cloud infrastructure may provide simple fault detection and restart at the virtual machine level. However, restarts can take several minutes, resulting in lower availability. Additionally, cloud services cannot detect software failures within the virtual machines. High Availability Software running inside the cloud virtual machines can detect software (and virtual machine) failures in seconds and can use checkpointing to ensure that standby virtual machines are ready to take over service.
Standards
The Service Availability Forum defines standards for application-aware High Availability.
See also
Computer cluster
High integrity software
References
External links
OpenClovis SAFplus High Availability Software
Linux-HA Software
Keepalived for Linux
Evidian SafeKit Software for Windows and Linux
Software by type
Optical disc
In computing and optical disc recording technologies, an optical disc (OD) is a flat, usually circular disc that encodes binary data (bits) in the form of pits and lands on a special material, often aluminum, on one of its flat surfaces. Its main uses are physical offline data distribution and long-term archival. Changes from pit to land or from land to pit correspond to a binary value of 1; while no change, regardless of whether in a land or a pit area, corresponds to a binary value of 0.
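The transition rule can be illustrated with a short sketch: a change between pit and land reads as 1, and no change reads as 0. (Real discs additionally apply run-length-limited channel coding such as EFM before this stage, which the toy example below ignores.)

```python
def transitions_to_bits(surface):
    """Decode a pit/land sequence: a change reads as 1, no change as 0."""
    bits = []
    for previous, current in zip(surface, surface[1:]):
        bits.append(1 if previous != current else 0)
    return bits


# 'L' = land, 'P' = pit; the first symbol only establishes the starting level.
print(transitions_to_bits(list("LLPPPLPL")))    # -> [0, 1, 0, 0, 1, 1, 1]
```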
Non-circular optical discs exist for fashion purposes; see Shaped compact disc.
Design and technology
The encoding material sits atop a thicker substrate (usually polycarbonate) that makes up the bulk of the disc and forms a dust defocusing layer. The encoding pattern follows a continuous, spiral path covering the entire disc surface and extending from the innermost track to the outermost track.
The data are stored on the disc with a laser or stamping machine, and can be accessed when the data path is illuminated with a laser diode in an optical disc drive that spins the disc at speeds of about 200 to 4,000 RPM or more, depending on the drive type, disc format, and the distance of the read head from the center of the disc (outer tracks are read at a higher data speed due to higher linear velocities at the same angular velocities).
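A small worked example of why outer tracks read faster at the same angular velocity: the linear velocity under the read head is 2πr × (RPM / 60), so the outer edge of a 12 cm disc passes the head roughly 2.3 times faster than the innermost data track. The radii used below are approximate assumptions for a standard 12 cm disc.

```python
import math


def linear_velocity_m_per_s(radius_mm: float, rpm: float) -> float:
    """Tangential velocity at a given radius for a disc spinning at `rpm`."""
    return 2 * math.pi * (radius_mm / 1000.0) * (rpm / 60.0)


RPM = 4000                                        # example constant angular velocity
inner = linear_velocity_m_per_s(25, RPM)          # data area starts ~25 mm from centre
outer = linear_velocity_m_per_s(58, RPM)          # outermost track, ~58 mm (assumed)
print(round(inner, 1), round(outer, 1), round(outer / inner, 2))   # ~10.5  ~24.3  2.32
```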
Most optical discs exhibit a characteristic iridescence as a result of the diffraction grating formed by its grooves. This side of the disc contains the actual data and is typically coated with a transparent material, usually lacquer.
The reverse side of an optical disc usually has a printed label, sometimes made of paper but often printed or stamped onto the disc itself. Unlike the 3½-inch floppy disk, most optical discs do not have an integrated protective casing and are therefore susceptible to data transfer problems due to scratches, fingerprints, and other environmental problems. Blu-rays have a coating called durabis that mitigates these problems.
Optical discs are usually between 7.6 and 30 cm (3 to 12 in) in diameter, with 12 cm (4.75 in) being the most common size. The so-called program area that contains the data commonly starts 25 millimetres away from the center point. A typical disc is about 1.2 mm (0.05 in) thick, while the track pitch (distance from the center of one track to the center of the next) ranges from 1.6 μm (for CDs) to 320 nm (for Blu-ray discs).
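The dimensions above give a feel for how long the data spiral is: dividing the area of the program region by the track pitch approximates the total track length. The radii below are assumptions for a 12 cm disc with a data area running from about 25 mm to about 58 mm from the centre.

```python
import math


def spiral_track_length_m(inner_radius_mm: float, outer_radius_mm: float,
                          track_pitch_um: float) -> float:
    """Approximate track length: program-area surface divided by track pitch."""
    area_mm2 = math.pi * (outer_radius_mm ** 2 - inner_radius_mm ** 2)
    length_mm = area_mm2 / (track_pitch_um / 1000.0)
    return length_mm / 1000.0                     # convert mm to metres


print(round(spiral_track_length_m(25, 58, 1.6)))   # CD pitch      -> roughly 5,400 m
print(round(spiral_track_length_m(25, 58, 0.32)))  # Blu-ray pitch -> roughly 27,000 m
```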
Recording types
An optical disc is designed to support one of three recording types: read-only (e.g. CD and CD-ROM), recordable (write-once, e.g. CD-R), or re-recordable (rewritable, e.g. CD-RW). Write-once optical discs commonly have an organic dye recording layer between the substrate and the reflective layer; the dye may be a phthalocyanine or azo dye (the latter mainly used by Verbatim) or an oxonol dye (used by Fujifilm). Rewritable discs typically contain an alloy recording layer composed of a phase change material, most often AgInSbTe, an alloy of silver, indium, antimony, and tellurium. Azo dyes were introduced in 1996 and phthalocyanine only began to see wide use in 2002. The type of dye and the material used on the reflective layer on an optical disc may be determined by shining a light through the disc, as different dye and material combinations have different colors.
Blu-ray Disc recordable discs do not usually use an organic dye recording layer, instead using an inorganic recording layer. Those that do are known as low-to-high (LTH) discs and can be made in existing CD and DVD production lines, but are of lower quality than traditional Blu-ray recordable discs.
Usage
Optical discs are often stored in special cases sometimes called jewel cases and are most commonly used for digital preservation, storing music (e.g. for use in a CD player), video (e.g. for use in a Blu-ray player), or data and programs for personal computers (PC), as well as offline hard copy data distribution due to lower per-unit prices than other types of media. The Optical Storage Technology Association (OSTA) promoted standardized optical storage formats.
Libraries and archives enact optical media preservation procedures to ensure continued usability in the computer's optical disc drive or corresponding disc player.
File operations of traditional mass storage devices such as flash drives, memory cards and hard drives can be simulated using a UDF live file system.
For computer data backup and physical data transfer, optical discs such as CDs and DVDs are gradually being replaced with faster, smaller solid-state devices, especially the USB flash drive. This trend is expected to continue as USB flash drives continue to increase in capacity and drop in price.
Additionally, music, movies, games, software and TV shows purchased, shared or streamed over the Internet have significantly reduced the number of audio CDs, video DVDs and Blu-ray discs sold annually. However, audio CDs and Blu-rays are still preferred and bought by some, as a way of supporting their favorite works while getting something tangible in return, and also because audio CDs (alongside vinyl records and cassette tapes) contain uncompressed audio without the artifacts introduced by lossy compression algorithms like MP3, and Blu-rays offer better image and sound quality than streaming media, without visible compression artifacts, due to higher bitrates and more available storage space. Blu-ray content may sometimes be torrented over the internet, but torrenting may not be an option for some, due to restrictions put in place by ISPs on legal or copyright grounds, low download speeds, or not having enough available storage space, since the content may amount to several dozen gigabytes. Blu-rays may be the only option for those looking to play large games without having to download them over an unreliable or slow internet connection, which is why they are still (as of 2020) widely used by gaming consoles, like the PlayStation 4 and Xbox One X. As of 2020, it is unusual for PC games to be available in a physical format like Blu-ray.
Discs should not have any stickers and should not be stored together with paper; papers must be removed from the jewel case before storage. Discs should be handled by the edges to prevent scratching, with the thumb on the inner edge of the disc. The ISO standard 18938:2008 covers best optical disc handling techniques. Optical disc cleaning should never be done in a circular pattern, to avoid concentric circles from forming on the disc. Improper cleaning can scratch the disc. Recordable discs should not be exposed to light for extended periods of time. Optical discs should be stored in dry and cool conditions to increase longevity, with temperatures between −10 and 23 °C, never exceeding 32 °C, and with humidity never falling below 10%, with recommended storage at 20 to 50% relative humidity without fluctuations of more than ±10%.
Durability
Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to environmental and daily-use damage, if handled improperly.
Optical discs are not prone to uncontrollable catastrophic failures such as head crashes, power surges, or exposure to water like hard disk drives and flash storage, since optical drives' storage controllers are not tied to optical discs themselves like with hard disk drives and flash memory controllers, and a disc is usually recoverable from a defective optical drive by pushing an unsharp needle into the emergency ejection pinhole, and has no point of immediate water ingress and no integrated circuitry.
Safety
As the media itself is only accessed through a laser beam and contains no internal control circuitry, it cannot contain malicious hardware such as so-called rubber duckies or USB killers.
Malware is unable to spread over factory-pressed media, finalized media, or -ROM (read only memory) drive types whose lasers lack the strength to write data. Malware is conventionally programmed to detect and spread over traditional mass storage devices such as flash drives, external solid state drives and hard disk drives.
History
The first recorded historical use of an optical disc was in 1884 when Alexander Graham Bell, Chichester Bell and Charles Sumner Tainter recorded sound on a glass disc using a beam of light.
Optophonie is a very early (1931) example of a recording device using light for both recording and playing back sound signals on a transparent photograph.
An early optical disc system existed in 1935, named Lichttonorgel.
An early analog optical disc used for video recording was invented by David Paul Gregg in 1958 and patented in the US in 1961 and 1969. This form of optical disc was a very early form of the DVD. It is of special interest that a patent filed in 1989 and issued in 1990 generated royalty income for Pioneer Corporation's DVA until 2007, by then encompassing the CD, DVD, and Blu-ray systems. In the early 1960s, the Music Corporation of America bought Gregg's patents and his company, Gauss Electrophysics.
American inventor James T. Russell has been credited with inventing the first system to record a digital signal on an optical transparent foil that is lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966 and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents (then held by a Canadian company, Optical Recording Corp.) in the 1980s.
Both Gregg's and Russell's discs are floppy media read in transparent mode, which imposes serious drawbacks. In the Netherlands in 1969, Philips Research physicist Pieter Kramer invented an optical videodisc in reflective mode with a protective layer read by a focused laser beam (patent filed 1972, issued 1991). Kramer's physical format is used in all optical discs. In 1975, Philips and MCA began to work together, and in 1978, commercially much too late, they presented their long-awaited Laserdisc in Atlanta. MCA delivered the discs and Philips the players. However, the presentation was a commercial failure, and the cooperation ended.
In Japan and the U.S., Pioneer succeeded with the Laserdisc until the advent of the DVD. In 1979, Philips and Sony, in consortium, successfully developed the audio compact disc.
In 1979, Exxon STAR Systems in Pasadena, CA built a computer controlled WORM drive that utilized thin film coatings of tellurium and selenium on a 12" diameter glass disk. The recording system utilized blue light at 457 nm to record and red light at 632.8 nm to read. STAR Systems was bought by Storage Technology Corporation (STC) in 1981 and moved to Boulder, CO. Development of the WORM technology was continued using 14" diameter aluminum substrates. Beta testing of the disk drives, originally labeled the Laser Storage Drive 2000 (LSD-2000), was only moderately successful. Many of the disks were shipped to RCA Laboratories (now David Sarnoff Research Center) to be used in the Library of Congress archiving efforts. The STC disks utilized a sealed cartridge with an optical window for protection.
The CD-ROM format was developed by Sony and Philips, introduced in 1984, as an extension of Compact Disc Digital Audio and adapted to hold any form of digital data. The same year, Sony demonstrated a LaserDisc data storage format, with a larger data capacity of 3.28 GB.
In the late 1980s and early 1990s, Optex, Inc. of Rockville, MD, built an erasable optical digital video disc system using Electron Trapping Optical Media (ETOM). Although this technology was written up in Video Pro Magazine's December 1994 issue promising "the death of the tape", it was never marketed.
In the mid-1990s, a consortium of manufacturers (Sony, Philips, Toshiba, Panasonic) developed the second generation of the optical disc, the DVD.
Magnetic disks offered only limited capacity for storing large amounts of data, creating a need for alternative storage techniques. It was found that optical means could be used to build large data storage devices, which in turn gave rise to optical discs. The very first application of this kind was the Compact Disc (CD), which was used in audio systems.
Sony and Philips developed the first generation of the CD in the mid-1980s with the complete specifications for these devices. This technology made it practical to represent an analog signal as a digital signal. For this purpose, 16-bit samples of the analog signal were taken at a rate of 44,100 samples per second. This sample rate was based on the Nyquist rate of 40,000 samples per second required to capture the audible frequency range up to 20 kHz without aliasing, with an additional tolerance to allow the use of less-than-perfect analog audio pre-filters to remove any higher frequencies. The first version of the standard allowed up to 75 minutes of music, which required 650 MB of storage.
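The sampling figures lend themselves to a quick arithmetic check. The sketch below computes the raw PCM data rate and shows, under the usual CD parameters (75 sectors per second, 2,048 of 2,352 bytes per sector usable as computer data), roughly where the often-quoted ~650 MB data capacity comes from; it is a plausibility check, not a statement of the exact Red Book figures.

```python
SAMPLE_RATE_HZ = 44_100
CHANNELS = 2
BYTES_PER_SAMPLE = 2                              # 16-bit samples

bytes_per_second = SAMPLE_RATE_HZ * CHANNELS * BYTES_PER_SAMPLE
print(bytes_per_second)                           # 176,400 bytes of audio per second

raw_audio_75_min = bytes_per_second * 75 * 60
print(round(raw_audio_75_min / 1e6))              # ~794 MB of raw PCM audio

# CD-ROM Mode 1 keeps 2,048 of every 2,352-byte sector for user data; a
# 74-minute disc at 75 sectors per second therefore holds about 650 MiB.
data_bytes_74_min = (74 * 60) * 75 * 2048
print(round(data_bytes_74_min / 2 ** 20))         # ~650 (MiB)
```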
The DVD disc appeared after the CD-ROM had become widespread in society.
The third generation optical disc was developed in 2000–2006 and was introduced as Blu-ray Disc. First movies on Blu-ray Discs were released in June 2006. Blu-ray eventually prevailed in a high definition optical disc format war over a competing format, the HD DVD. A standard Blu-ray disc can hold about 25 GB of data, a DVD about 4.7 GB, and a CD about 700 MB.
First-generation
From the start optical discs were used to store broadcast-quality analog video, and later digital media such as music or computer software. The LaserDisc format stored analog video signals for the distribution of home video, but commercially lost to the VHS videocassette format, due mainly to its high cost and non-re-recordability; other first-generation disc formats were designed only to store digital data and were not initially capable of use as a digital video medium.
Most first-generation disc devices had an infrared laser reading head. The minimum size of the laser spot is proportional to the wavelength of the laser, so wavelength is a limiting factor upon the amount of information that can be stored in a given physical area on the disc. The infrared range is beyond the long-wavelength end of the visible light spectrum, so it supports less density than shorter-wavelength visible light. One example of high-density data storage capacity, achieved with an infrared laser, is 700 MB of net user data for a 12 cm compact disc.
Other factors that affect data storage density include: the existence of multiple layers of data on the disc, the method of rotation (constant linear velocity (CLV), constant angular velocity (CAV), or zoned-CAV), the composition of lands and pits, and how much margin is left unused at the center and the edge of the disc.
Compact disc (CD) and derivatives
Audio CD
Video CD (VCD)
Super Video CD
CD Video
CD-Interactive
LaserDisc
GD-ROM
Phase-change Dual
Double Density Compact Disc (DDCD)
Magneto-optical disc
MiniDisc (MD)
MD Data
Write Once Read Many (WORM)
Second-generation
Second-generation optical discs were for storing great amounts of data, including broadcast-quality digital video. Such discs usually are read with a visible-light laser (usually red); the shorter wavelength and greater numerical aperture allow a narrower light beam, permitting smaller pits and lands in the disc. In the DVD format, this allows 4.7 GB storage on a standard 12 cm, single-sided, single-layer disc; alternatively, smaller media, such as the DataPlay format, can have capacity comparable to that of the larger, standard compact 12 cm disc.
DVD and derivatives
DVD-Audio
DualDisc
Digital Video Express (DIVX)
DVD-RAM
Nintendo GameCube Game Disc (miniDVD derivative)
Wii Optical Disc (DVD derivative)
Super Audio CD (SACD)
Enhanced Versatile Disc
DataPlay
Hi-MD
Universal Media Disc (UMD)
Ultra Density Optical
Third-generation
Third-generation optical discs are used for distributing high-definition video and videogames and support greater data storage capacities, accomplished with short-wavelength visible-light lasers and greater numerical apertures. Blu-ray Disc and HD DVD use blue-violet lasers and focusing optics of greater aperture, for use with discs with smaller pits and lands, thereby achieving greater data storage capacity per layer.
In practice, the effective multimedia presentation capacity is improved with enhanced video data compression codecs such as H.264/MPEG-4 AVC and VC-1.
Blu-ray (up to 400 GB - experimental)
Wii U Optical Disc (25 GB per layer)
HD DVD (discontinued disc format, up to 51 GB triple layer)
CBHD (a derivative of the HD DVD format)
HD VMD
Professional Disc
Announced but not released:
Digital Multilayer Disk
Fluorescent Multilayer Disc
Forward Versatile Disc
Fourth-generation
The following formats go beyond the current third-generation discs and have the potential to hold more than one terabyte (1 TB) of data and at least some are meant for cold data storage in data centers:
Archival Disc
Holographic Versatile Disc
Announced but not released:
LS-R
Stacked Volumetric Optical Disc
5D DVD
3D optical data storage (not a single technology, examples are Hyper CD-ROM and Fluorescent Multilayer Disc)
Overview of optical types
Notes
Recordable and writable optical discs
There are numerous formats of optical direct to disk recording devices on the market, all of which are based on using a laser to change the reflectivity of the digital recording medium in order to duplicate the effects of the pits and lands created when a commercial optical disc is pressed.
Formats such as CD-R and DVD-R are "Write once read many" or write-once, while CD-RW and DVD-RW are rewritable, more like a magnetic recording hard disk drive (HDD).
Media technologies vary: M-DISC uses a different recording technique and media from DVD-R and BD-R.
Surface error scanning
Optical media can predictively be scanned for errors and media deterioration well before any data becomes unreadable.
A higher rate of errors may indicate deteriorating and/or low quality media, physical damage, an unclean surface, and/or media written using a defective optical drive. Those errors can be compensated for by error correction to some extent.
Error scanning software includes Nero DiscSpeed, k-probe, Opti Drive Control (formerly "CD speed 2000") and DVD info Pro for Windows, and QPxTool for cross-platform.
Support of error scanning functionality varies per optical drive manufacturer and model.
Error types
There are different types of error measurements, including so-called "C1", "C2" and "CU" errors on CDs, and "PI/PO (parity inner/outer) errors" and the more critical "PI/PO failures" on DVDs. Finer-grained error measurements on CDs, supported by very few optical drives, are called E11, E21, E31, E12, E22 and E32.
"CU" and "POF" represent uncorrectable errors on data CDs and DVDs respectively, thus data loss, and can be a result of too many consecutive smaller errors.
Due to the weaker error correction used on Audio CDs (Red Book standard) and Video CDs (White Book standard), C2 errors already lead to data loss. However, even with C2 errors, the damage may remain inaudible to some extent.
Blu-ray discs use so-called LDC (Long Distance Codes) and BIS (Burst Indication Subcodes) error parameters. According to the developer of the Opti Drive Control software, a disc can be considered healthy at an LDC error rate below 13 and BIS error rate below 15.
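Those thresholds translate naturally into a simple go/no-go check; the limits below are the Opti Drive Control guideline quoted above, and the function name is illustrative rather than part of any scanning tool's API.

```python
def blu_ray_scan_looks_healthy(max_ldc_rate: float, max_bis_rate: float,
                               ldc_limit: float = 13, bis_limit: float = 15) -> bool:
    """Health check per the cited guideline: LDC below 13 and BIS below 15."""
    return max_ldc_rate < ldc_limit and max_bis_rate < bis_limit


print(blu_ray_scan_looks_healthy(max_ldc_rate=7, max_bis_rate=2))    # True
print(blu_ray_scan_looks_healthy(max_ldc_rate=40, max_bis_rate=5))   # False: LDC too high
```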
Optical disc manufacturing
Optical discs are made using replication. This process can be used with all disc types. Recordable discs have pre-recorded vital information, like manufacturer, disc type, maximum read and write speeds, etc. In replication, a cleanroom with yellow light is necessary to protect the light-sensitive photoresist and to prevent dust from corrupting the data on the disc.
A glass master is used in replication. The master is placed in a machine that cleans it as much as possible using a rotating brush and deionized water, preparing it for the next step. In the next step, a surface analyzer inspects the cleanliness of the master before photoresist is applied on the master.
The photoresist is then baked in an oven to solidify it. Then, in the exposure process, the master is placed in a turntable where a laser selectively exposes the resist to light. At the same time, a developer and deionized water are applied to the disc to remove the exposed resist. This process forms the pits and lands that represent the data on the disc.
A thin coating of metal is then applied to the master, making a negative of the master with the pits and lands in it. The negative is then peeled off the master and coated in a thin layer of plastic. The plastic protects the coating while a punching press punches a hole into the center of the disc, and punches excess material.
The negative is now a stamper - a part of the mold that will be used for replication. It is placed on one side of the mold with the data side containing the pits and lands facing out. This is done inside an injection molding machine. The machine then closes the mold and injects polycarbonate in the cavity formed by the walls of the mold, which forms or molds the disc with the data on it.
The molten polycarbonate fills the pits or spaces between the lands on the negative, acquiring their shape when it solidifies. This step is somewhat similar to record pressing.
The polycarbonate disc cools quickly and is promptly removed from the machine before another disc is formed. The disc is then metallized, covered with a thin reflective layer of aluminum. The aluminum fills the space once occupied by the negative.
A layer of varnish is then applied to protect the aluminum coating and provide a surface suitable for printing. The varnish is applied near the center of the disc, and the disc is spun, evenly distributing the varnish on the surface of the disc. The varnish is hardened using UV light. The discs are then silkscreened or a label is otherwise applied.
Recordable discs add a dye layer, and rewritable discs add a phase change alloy layer instead, which is protected by upper and lower dielectric (electrically insulating) layers. The layers may be sputtered. The additional layer is between the grooves and the reflective layer of the disc. Grooves are made in recordable discs in place of the traditional pits and lands found in replicated discs, and the two can be made in the same exposure process. In DVDs, the same processes as in CDs are carried out, but in a thinner disc. The thinner disc is then bonded to a second, equally thin but blank, disc using UV-curable Liquid optically clear adhesive, forming a DVD disc. This leaves the data in the middle of the disc, which is necessary for DVDs to achieve their storage capacity. In multi layer discs, semi reflective instead of reflective coatings are used for all layers except the last layer, which is the deepest one and uses a traditional reflective coating.
Dual layer DVDs are made slightly differently. After metallization (with a thinner metal layer to allow some light to pass through), base and pit transfer resins are applied and pre-cured in the center of the disc. Then the disc is pressed again using a different stamper, and the resins are completely cured using UV light before being separated from the stamper. Then the disc receives another, thicker metallization layer, and is then bonded to the blank disc using LOCA glue. DVD-R DL and DVD+R DL discs receive a dye layer after curing, but before metallization. CD-R, DVD-R, and DVD+R discs receive the dye layer after pressing but before metallization. CD-RW, DVD-RW and DVD+RW receive a metal alloy layer sandwiched between 2 dielectric layers. HD-DVD is made in the same way as DVD. In recordable and rewritable media, most of the stamper is composed of grooves, not pits and lands. The grooves contain a wobble frequency that is used to locate the position of the reading or writing laser on the disc. DVDs use pre-pits instead, with a constant frequency wobble.
Blu-ray
HTL (high-to-low type) Blu-ray discs are made differently. First, a silicon wafer is used instead of a glass master. The wafer is processed in the same way a glass master would.
The wafer is then electroplated to form a 300-micron thick nickel stamper, which is peeled off from the wafer. The stamper is mounted onto a mold inside a press or embosser.
The polycarbonate discs are molded in a similar fashion to DVD and CD discs. If the discs being produced are BD-Rs or BD-REs, the mold is fitted with a stamper that stamps a groove pattern onto the discs, in lieu of the pits and lands found on BD-ROM discs.
After cooling, a 35 nanometre-thick layer of silver alloy is applied to the disc using sputtering. The second layer is then made by applying base and pit-transfer resins to the disc, which are pre-cured in its center.
After application and pre-curing, the disc is pressed or embossed using a stamper and the resins are immediately cured using intense UV light, before the disc is separated from the stamper. The stamper contains the data that will be transferred to the disc. This process is known as embossing and is the step that engraves the data onto the disc, replacing the pressing process used in the first layer, and it is also used for multi layer DVD discs.
A 30 nanometre-thick layer of silver alloy is then sputtered onto the disc, and the process is repeated as many times as required; each repetition creates a new data layer. (The resins are applied again, pre-cured, stamped with data or grooves, and cured; silver alloy is sputtered; and so on.)
BD-R and BD-RE discs receive, through sputtering, a metal recording-layer alloy (which in BD-RE is sandwiched between two dielectric layers, also sputtered) before receiving the 30 nanometre metallization layer of silver alloy, aluminum, or gold, which is likewise sputtered. Alternatively, the silver alloy may be applied before the recording layer. Silver alloys are usually used in Blu-rays, and aluminum is usually used in CDs and DVDs. Gold is used in some "archival" CDs and DVDs because it is more chemically inert and corrosion-resistant than aluminum. Aluminum corrodes into aluminum oxide, which appears in disc rot as transparent patches or dots that prevent the disc from being read, since the laser light passes through the disc instead of being reflected back into the laser pickup assembly. Aluminum normally does not corrode, because a thin protective oxide layer forms on contact with oxygen; in a disc it can corrode because the layer is so thin.
Then, the 98 micron-thick cover layer is applied using UV-curable liquid optically clear adhesive, and a 2 micron-thick hard coat (such as Durabis) is also applied and cured using UV light. In the last step, a 10 nanometre-thick silicon nitride barrier layer is applied to the label side of the disc to protect against humidity. Blu-rays have their data very close to the read surface of the disc, which is necessary for Blu-rays to achieve their capacity.
Discs in large quantities can either be replicated or duplicated. In replication, the process explained above is used to make the discs, while in duplication, CD-R, DVD-R or BD-R discs are recorded and finalized to prevent further recording and allow for wider compatibility. (See Optical disc authoring). The equipment is also different: replication is carried out by fully automated purpose-built machinery whose cost is in the hundreds of thousands of US dollars in the used market, while duplication can be automated (using what's known as an autoloader) or be done by hand, and only requires a small tabletop duplicator.
Specifications
See also
Disc Description Protocol
List of optical disc manufacturers
Universal Disk Format (UDF)
References
External links
SCO Group
SCO, The SCO Group, and The TSG Group are the various names of an American software company in existence from 2002 to 2012 that became known for owning Unix operating system assets that had belonged to the Santa Cruz Operation (original SCO), including the UnixWare and OpenServer technologies, and then, under CEO Darl McBride, pursuing a series of high-profile legal battles known as the SCO-Linux controversies.
The SCO Group began in 2002 with a renaming of Caldera International, accompanied by McBride becoming CEO and a major change in business strategy and direction. The SCO brand was re-emphasized and new releases of UnixWare and OpenServer came out. The company also attempted some initiatives in the e-commerce space with the SCOBiz and SCOx programs. In 2003, the SCO Group claimed that the increasingly popular free Linux operating system contained substantial amounts of Unix code that IBM had improperly put there. The SCOsource division was created to monetize the company's intellectual property by selling Unix license rights to use Linux. The SCO v. IBM lawsuit was filed, asking for billion-dollar damages and setting off one of the top technology battles in the history of the industry. By a year later, four additional lawsuits had been filed involving the company.
Reaction to SCO's actions from the free and open source software community was intensely negative and the general IT industry was not enamored of the actions either. SCO soon became, as Businessweek headlined, "The Most Hated Company In Tech". SCO Group stock rose rapidly during 2003, but then SCOsource revenue became erratic and the stock began a long fall. Despite the industry's attention to the lawsuits, SCO continued to maintain a product focus as well, putting out a major new release of OpenServer that incorporated the UnixWare kernel inside it. SCO also made a major push in the burgeoning smartphones space, launching the Me Inc. platform for mobility services. But despite these actions, the company steadily lost money and shrank in size.
In 2007, SCO suffered a major adverse ruling in the SCO v. Novell case that rejected SCO's claim of ownership of Unix-related copyrights and undermined much of the rest of its legal position. The company filed for Chapter 11 bankruptcy protection soon after and attempted to continue operations. Its mobility and Unix software assets were sold off in 2011, to McBride and UnXis respectively. Renamed to The TSG Group, the company converted to Chapter 7 bankruptcy in 2012. A portion of the SCO v. IBM case continued on until 2021, when a settlement was reached for a tiny fraction of what The SCO Group had initially sued for.
Initial history
Background
The Santa Cruz Operation had been an American software company, founded in 1979 in Santa Cruz, California, that found success during the 1980s and 1990s selling Unix-based operating system products for Intel x86-based server systems. SCO built a large community of value-added resellers that eventually became 15,000 strong and many of its sales of its SCO OpenServer product to small and medium-sized businesses went through those resellers. In 1995, SCO bought the System V Release 4 and UnixWare business from Novell (which had two years earlier acquired the AT&T-offshoot Unix System Laboratories) to improve its technology base. But beginning in the late 1990s, SCO faced increasingly severe competitive pressure, on one side from Microsoft's Windows NT and its successors and on the other side from the free and open source Linux. In 2001, the Santa Cruz Operation sold its rights to Unix and its SCO OpenServer and UnixWare products to Caldera International.
Caldera, based in Lindon, Utah, had been in the business of selling its OpenLinux product but had never been profitable. It attempted to make a combined business out of Linux and Unix but failed to make headway and had suffered continuing financial difficulties. By June 2002 its stock was facing a second delisting notice from NASDAQ and the company had less than four months' cash for operations. As Wired magazine later wrote, the company "faced a nearly hopeless situation."
On June 27, 2002, Caldera International had a change in management, with Darl McBride, formerly an executive with Novell, FranklinCovey, and several start-ups, taking over as CEO from Caldera co-founder Ransom Love.
Back to a SCO name; SCOBiz and SCOx
Change under McBride happened quickly. On August 26, 2002, he announced at the company's annual Forum conference in Las Vegas that Caldera International was changing its name to The SCO Group. He did this via a multimedia display in which an image of Caldera was shattered and replaced by The SCO Group's logo, which was a slightly more stylized version of the old Santa Cruz Operation logo. The attendees at the conference, most of whom were veteran SCO partners and resellers, responded to the announcement with enthusiastic applause. McBride announced, "SCO is back from the dead," and a story in The Register began, "SCO lives again." As part of this, the company adopted SCOX as its trading symbol. (The final legal aspects of the name change did not become complete until May 2003.)
The change back to a SCO-based name reflected recognition of the reality that almost all of the company's revenue was coming from Unix, not Linux, products. For instance, McDonald's had recently expanded its usage of OpenServer from 4,000 to 10,000 stores; indeed, both OpenServer and UnixWare were strong in the replicated sites business. Furthermore the SCO brand was better known than the Caldera one, especially in Europe, and SCO's large, existing reseller and partner channel was resistant to switching to Caldera's product priorities.
McBride emphasized that the OpenServer product was still selling: "What is it with the OpenServer phenomenon? We can't kill it. One customer last month bought $4 million in OpenServer licenses. The customers want to give us money for it. Why don't we just sell it?" As a historical comparison for his strategy of building back up the brand and being more responsive to customers, McBride used a model of the revival of the Harley-Davidson brand in the 1980s. Besides McBride, other company executives, including new senior vice president of technology Opinder Bawa, were heavily involved in the change of direction.
The product name Caldera OpenLinux became "SCO Linux powered by UnitedLinux" and all other Caldera branded names were changed as well.
In particular, the longstanding UnixWare name – which Caldera had changed to Open UNIX – was restored, such that what had been called Open UNIX 8 was now named in proper sequence as UnixWare 7.1.2. Announcements were made that a new OpenServer release, 5.0.7, and a new UnixWare release, 7.1.3, would appear at the end of the year or beginning of the next. Moreover, through a new program called SCO Update, more frequent updates of capabilities were promised beyond that. Caldera's Volution Messaging Server product was retained and renamed SCOoffice Server, but the other Caldera Volution products were split off under the names Volution Technologies, Center 7, and finally Vintela.
Software releases and e-commerce initiatives
In addition to reviving SCO's longtime operating system products, The SCO Group also announced a new venture, SCOBiz. This was a collaboration with the Bellingham, Washington-based firm Vista.com, founded in 1999 by John Wall, in which SCO partners could sell Vista.com's online, web-based e-commerce development and hosting service targeted at small and medium-sized businesses. More importantly, as part of SCOBiz, the two companies would develop a SOAP- and XML-based web services interface to enable Vista.com e-commerce front-ends to communicate with existing back-end SCO-based applications. Industry analysts were somewhat skeptical of the chances for SCOBiz succeeding, as the market was already crowded with application service provider offerings and the dot-com bubble had already burst by that point.
Finally, SCO announced a new program for partners, called SCOx, the key feature of which was that it included a buy-out option that would allow SCOx solution providers to sell their businesses back to SCO. McBride said that the program gave partners a chance at "living the American dream."
The company's financial hole was emphasized when it released its results for the fiscal year ending October 31, 2002 – it lost $25 million on revenues of $64 million.
First there was a Linux release. Caldera International had been one of the founders of the United Linux initiative, along with SuSE, Conectiva, and Turbolinux, and the now-named SCO Linux 4 came out in November 2002, in conjunction with each of the other vendors releasing their versions of the United Linux 1.0 base. The SCO product was targeted towards the small-to-medium business market, whereas the SuSE product was aimed at the enterprise segment and the other two were intended mostly for South American and Asian markets. The common United Linux base (which mostly came from a SuSE code origin), and the promise of common certification across all four products, did attract some support from hardware and software vendors such as IBM, HP, Computer Associates, and SAP. An assessment of SCO Linux 4 in eWeek found that it was a capable product, although the reviewer felt that its Webmin configuration tool was inferior to SuSE's YaST. In terms of service and support, SCO pledged to field a set of escalation engineers that would only be handling SCO Linux issues.
The new Unix operating system releases then came out. UnixWare 7.1.3, released in December 2002, featured improved Java support, an included Apache Web Server, and improvements to the previously developed Linux Kernel Personality (LKP) for running Linux applications. In particular, the SCO Group stated that due to superior multiprocessor performance and reliability, Linux applications could run better on UnixWare via LKP than they could on native Linux itself, a stance that dated back to Santa Cruz Operation/Caldera International days. One review, which found UnixWare 7.1.3 lacking in a number of respects, called LKP "the most impressive of UnixWare's capabilities." And SCO OpenServer 5.0.7 was released in February 2003; the release emphasized enhanced hardware support, including new graphic, network and HBA device drivers, support for USB 2.0, improved and updated UDI support, and support for several new Intel and Intel-compatible processors.
The SCOx software framework was announced in April 2003; its aim was to enable SCO's developer and reseller community to connect web services and web-based presentation layers to the over 4,000 different applications that ran small and midsize businesses and branch offices. The web services aspect of SCOx included bundled SOAP/XML support for the Java, C, C++, PHP, and Perl languages. A primary target of the SCOx framework was SCOBiz e-commerce integration, although other uses were possible as well. The planned SCOx architecture overall was composed of layers for e-business services, web services, SSL-based security, a mySCO reseller portal, hosting services, and a software development kit.
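The general shape of such a bridge between a SOAP/XML front end and an existing back-end application can be illustrated with a brief sketch. The example below is purely hypothetical: the service name, endpoint path, and XML schema are invented for illustration and are not taken from the actual SCOx SDK, and it is written in Python for brevity even though SCOx itself bundled support for Java, C, C++, PHP, and Perl.

```python
# Hypothetical illustration only: the service name, endpoint, and payload
# are invented for this sketch and are not part of the actual SCOx SDK.
import http.client
import xml.etree.ElementTree as ET

SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderStatus xmlns="urn:example-legacy-app">
      <OrderId>{order_id}</OrderId>
    </GetOrderStatus>
  </soap:Body>
</soap:Envelope>"""

def get_order_status(host: str, order_id: str) -> str:
    """Call a (hypothetical) SOAP front end that wraps a legacy back-end routine."""
    body = SOAP_ENVELOPE.format(order_id=order_id)
    conn = http.client.HTTPConnection(host, 80, timeout=10)
    conn.request(
        "POST",
        "/soap/orders",  # invented path, not a real SCOx endpoint
        body=body.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "urn:example-legacy-app#GetOrderStatus",
        },
    )
    resp = conn.getresponse()
    tree = ET.fromstring(resp.read())
    # Pull the first <Status> element out of the response body, wherever it sits.
    status = tree.find(".//{urn:example-legacy-app}Status")
    return status.text if status is not None else "unknown"

if __name__ == "__main__":
    print(get_order_status("legacy.example.com", "A-1001"))
```

In this kind of arrangement the web-based presentation layer never talks to the legacy application directly; it posts XML to the front end, which invokes the existing back-end routine and returns the result as XML.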
But by then, these software releases and e-commerce initiatives had become overshadowed.
In the courts
A focus on intellectual property
As soon as McBride became the head of Caldera International, he became interested in what intellectual property the company possessed. He had been a manager at Novell in 1993 when Novell had bought Unix System Laboratories, and all of its Unix assets, including copyrights, trademarks, and licensing contracts, for $335 million. Novell had subsequently sold its Unix business to the Santa Cruz Operation, which had then sold it to Caldera. So in 2002, McBride said he had thought: "In theory, there should be some value to that property – somewhere between a million and a billion [dollars], right? I just wanted to know what real, tangible intellectual property value the company held." Shortly before the name change to SCO, Caldera went through its existing license agreements, found some that were not being collected upon, and came to arrangements with those licensees representing some $600,000 in annual revenue.
In particular, from the start of his time as CEO, McBride had considered the possibility of claiming ownership of some of the code within Linux. Outgoing Caldera CEO Ransom Love had told him, "Don't do it. You don't want to take on the entire Linux community." During the August 2002 name change announcement, Bawa stated, "We own the source to UNIX; it's that simple. If we own the source, we are entitled to collect the agreed license fees." But at the time, McBride said he had no intention of taking on Linux.
By October 2002, McBride had created an internal organization "to formalize the licensing of our intellectual property"; this effort was provisionally called SCO Tech. Senior vice president Chris Sontag was put in charge of it.
By the end of 2002, McBride and SCO had sought out the services of David Boies of the law firm Boies, Schiller and Flexner as part of an effort to litigate against what it saw was unrightful use of its intellectual property. Boies had gained fame in the industry for leading the U.S. federal government's successful prosecution of Microsoft in United States v. Microsoft Corp.; as McBride subsequently said, "We went for the biggest gun we could find." (Boies' record in other cases was mixed, however, including a high-visibility loss in the 2000 Bush v. Gore Florida election dispute.)
News of the SCO Group's intent to take action regarding Linux first broke on January 10, 2003, in a column by technology reporter Maureen O'Gara of Linuxgram that appeared in Client Server News and Linux Business Week. She wrote that a draft press release concerning SCO's plans had been in the works for several weeks and had been quietly circulated to other companies in the industry. The O'Gara report, unconfirmed as it was, caused some amount of consternation in the Linux community.
On January 22, 2003, creation of the SCOsource division of the company, to manage the licensing of the company's Unix-related intellectual property, was officially announced, as was the hiring of Boies to investigate and oversee legal protection of that property. As the Wall Street Journal reported, Linux users had generally assumed that Linux was created independently of Unix proprietary code, and Linux advocates were immediately concerned that SCO was going to ask large companies using Linux to pay SCO licensing fees to avoid a lawsuit. The first announced license program within SCOsource was called SCO System V for Linux, which was a set of shared libraries intended to allow SCO Unix programs to be run legally on Linux without a user needing to license all of SCO OpenServer or UnixWare as had theretofore been necessary.
The company continued to lose money, on revenues of $13.5 million in the first fiscal quarter of 2003, but McBride was enthusiastic about the prospects for the new SCOsource division, telling investors on a February 26 earnings call that he expected it to bring in $10 million alone in the second fiscal quarter.
Lawsuits begin
On March 6, 2003, SCO filed suit against IBM, claiming that the computer giant had misappropriated trade secrets by transferring portions of its Unix-based AIX operating system into Linux, and asked for at least $1 billion in damages. (The amount was subsequently raised to $3 billion, and later still to $5 billion. The suit initially coincided with SCO's existing relationship with IBM to sell UnixWare on IBM Netfinity systems.) The complaint also alleged breach of contract and tortious interference by IBM against the Santa Cruz Operation for its part in the failed Project Monterey of the late 1990s. Overall, SCO maintained that Linux could not have caught up to "Unix performance standards for complete enterprise functionality" so quickly without coordination by a large company, and that this coordination could have happened through the taking of "methods or concepts" even if not a single line of Unix code appeared within Linux. The SCO v. IBM case was underway; it would come to be considered one of the top technology battles of all time.
Many industry analysts were not impressed by the lawsuit, with one saying, "It's a fairly end-of-life move for the stockholders and managers of that company. ... This is a way of salvaging value out of the SCO franchise they can't get by winning in the marketplace." Other analysts pointed to the deep legal resources IBM had for any protracted fight in the courts, but McBride professed to be nonplussed: "If it takes a couple of years, we're geared to do that." For his part, Boies said he liked David versus Goliath struggles, and his firm would see a substantial gain out of any victory.
In mid-May 2003, SCO sent a letter to some 1,500 companies, cautioning them that using Linux could put them in legal jeopardy. As part of this, SCO proclaimed that Linux contained substantial amounts of Unix System V source code and that, as such, "We believe that Linux is, in material part, an unauthorized derivative of Unix." As CNET wrote, the move "dramatically broaden[ed]" the scope of the company's legal actions.
At the same time, SCO announced it would stop selling its own SCO Linux product. A casualty of this stance was SCO's participation in the United Linux effort, and in turn United Linux itself. While the formal announcement that United Linux had ended did not come until January 2004, in reality the project stopped doing any tangible work soon after SCO filed its lawsuit against IBM.
A few days later, Microsoft – which had long expressed disdain for Linux – said it was acquiring a Unix license from SCO, in order to ensure interoperability with its own products and to ward off any questions about rights. The action was a boon to SCO, which to this point had received little support in the industry for its licensing initiative. Another major computer company, Sun Microsystems, bought an additional level of Unix licensing from SCO to add to what it had originally obtained a decade earlier.
On May 28, 2003, Novell counterattacked, saying its sale of the Unix business to the Santa Cruz Operation back in 1995 did not include the Unix software copyrights, and thus that the SCO Group's legal position was empty. Jack Messman, the CEO of Novell, accused SCO of attempting an extortion plan against Linux users and distributors. Unix has a complex corporate history, with the SCO Group a number of steps removed from the Bell Labs origins of the operating system. Novell and the SCO Group quickly fell into a vocal dispute that revolved around the interpretation of the 1995 asset-transfer agreement between them. That agreement had been uncertain enough at the time that an amendment to it had to be signed in October 1996, and even that was insufficiently unambiguous to now preclude an extended battle between the two companies.
In July 2003, SCO began offering UnixWare licenses for commercial Linux users, stating that "SCO will hold harmless commercial Linux customers that purchase a UnixWare license against any past copyright violations, and for any future use of Linux in a run-only, binary format."
The server-based licenses were priced at $699 per machine, and if they were to become mandatory for Linux users, would represent a tremendous source of revenue for SCO. The potential for this happening was certainly beneficial to SCO's stock price, which during one three-week span in May 2003 tripled in value.
Another counterattack came in August 2003, when Red Hat, Inc. v. SCO Group, Inc. was filed by the largest of the Linux distribution companies.
The SCO Group received a major boost in October 2003 when BayStar Capital, a technology-focused venture capital firm, made a $50 million private placement investment in SCO, to be used towards the company's legal costs and general product development efforts. In December 2003, SCO sent letters to 1,000 Linux customers that in essence accused them of making illegal use of SCO's intellectual property.
Novell continued to insist that it owned the copyrights to Unix. While Novell no longer had a commercial interest in Unix technology itself, it did want to clear the way for Linux, having recently purchased SuSE Linux, the second largest of commercial Linux distributions at the time. On January 20, 2004, the SCO Group filed a slander of title suit against Novell, alleging that Novell had exhibited bad faith in denying SCO's intellectual property rights to Unix and UnixWare and that Novell had made false statements in an effort to persuade companies and organizations not to do business with SCO. The SCO v. Novell court case was underway.
Lawsuits against two Linux end users, SCO Group, Inc. v. DaimlerChrysler Corp. and SCO v. AutoZone were filed on March 3, 2004. The first alleged that Daimler Chrysler had violated the terms of the Unix software agreement it had with SCO, while the second claimed that AutoZone was running versions of Linux that contained unlicensed source code from SCO. As a strategy this move was met by criticism; as Computerworld later sarcastically wrote, "Faced with a skeptical customer base, SCO did what any good business would do to get new customers: sue them for money."
In any case, the stage was set for the next several years' worth of court filings, depositions, hearings, interim rulings, and so on.
Vultus acquisition and a change in SCOx
The SCOsource division got off to a quick start, bringing in $8.8 million during the company's second fiscal quarter, which led to the SCO Group turning a profit for the first time in its history going back to its origins as Caldera.
In July 2003, the SCO Group announced it had acquired Vultus Inc. for an unspecified price. Vultus was a start-up company, also based in Lindon, Utah, and the Lindon-based Canopy Group was a major investor in Vultus, just as it was in the SCO Group. Vultus made the WebFace Solution Suite, a web-based application development environment with a set of browser-based user interface elements that provided a richer UI functionality without the need for Java applets or other plug-ins. Indeed, in putting together WebFace, Vultus was a pioneer in AJAX techniques before that term was even coined.
The acquisition of Vultus resulted in a shift of emphasis in the company's web services initiative, with an announcement being made in August 2003 at SCO Forum that SCOx would now be a web services-based Application Substrate, featuring a combination of tools and APIs from Vultus's WebFace suite and from Ericom Software's Host Publisher development framework.
A year later, in September 2004, this idea materialized when the SCOx Web Services Substrate (WSS) was released for UnixWare 7.1.4. Its aim was to give existing SCO customers a way to "webify" their applications via Ericom's tool and then make the functionality of those applications available via web services. However, as McBride later conceded, the SCOx WSS failed to gain an audience, and it was largely gone from company mention a year later.
Views on infringement claims
In the keynote address at its SCO Forum conference in August 2003, held at the MGM Grand Las Vegas, the SCO Group made an expansive defense of its legal actions. Framed by licensed-from-MGM James Bond music and film clips, McBride portrayed SCO as a valiant warrior for the continuance of proprietary software, saying they were in "a huge raging battle around the globe", that the GNU General Public License that Linux was based on was "about destroying value", and saying that like Bond, they would be thrown into many battles but come out the victor in the end.
Linux advocates had repeatedly asked SCO to enumerate and show the specific areas of code in Linux that SCO thought were infringing on Unix. An analyst for IDC said that if SCO were more forthcoming on the details, "the whole discussion might take a different tone." However, SCO was reluctant to show any such code in public, preferring to keep it secret – a strategy that was commonly adopted in intellectual property litigation.
However, during Forum, SCO did publicly show several alleged examples of illegal copying of copyrighted code in Linux. Until that time, these examples had only been available to people who signed a non-disclosure agreement, which had prohibited them from revealing the information shown to them. SCO claimed the infringements were divided into four separate categories: literal copying, obfuscation, derivative works, and non-literal transfers. The example used by SCO to demonstrate literal copying became known as the atemalloc example. While the name of the original contributor was not revealed by SCO, quick analysis of the code in question pointed to SGI. At this time it was also revealed that the code had already been removed from the Linux kernel, because it duplicated already existing functions.
By early 2004, the small amount of evidence that had been presented publicly was viewed as inconclusive by lawyers and software professionals who were not partisan to either side. As Businessweek wrote, "While there are similarities between some code that SCO claims it owns and material in Linux, it's not clear to software experts that there's a violation." The legal considerations involved were complex and revolved around subtleties such as how the notion of derivative works should be applied. Furthermore, Novell's argument that it had never transferred copyrights to the Santa Cruz Operation placed a cloud over the SCO Group's legal campaign. Most, but not all, industry observers felt that SCO was unlikely to win. InfoWorld drily noted that Las Vegas bookmakers were not giving odds on the battle, but the three analysts it polled gave odds of 6-to-4 against SCO, 200-to-1 against SCO, and 6-to-4 for SCO.
In any case, while Linux customers may not have been happy about the concerns and threats that the SCO Group was raising, it was unclear whether that was slowing their adoption of Linux; some business media reports indicated that it was, or that it might, while others indicated that it was not.
"The Most Hated Company In Tech"
The stakes were high in the battle the SCO Group had started, involving the future of Unix, Linux, and open source software in general. If SCO were to win its legal battles, the results could be extremely disruptive to the IT industry, especially if SCO's notion of derivative works were to be construed broadly by the courts. Furthermore a SCO victory would be devastating to the open source movement, especially if the legal validity of the GPL license were to be called into question. Conversely, a clear SCO loss would clarify any intellectual property concerns related to Linux, make corporate IT managers feel more relaxed about adopting Linux as a solution, and potentially bolster corporate enthusiasm for the open source movement as a whole.
Linux advocates were incensed by SCO's actions, accusing the company of trying to reap financial gain by sowing fear, uncertainty, and doubt (FUD) about Linux within the industry. Linux creator Linus Torvalds said, "I'd dearly love to hear exactly what they think is infringing, but they haven't told anybody. Oh, well. They seem to be more interested in FUD than anything else." Open source advocate Bruce Perens said of SCO, "They don't care who or what they hurt." Industry analyst and open source advocate Gordon Haff said that SCO had thrown a dirty bomb into the Linux user community.
Many Linux enthusiasts approached the issue with a moralistic fervor.
By August 2003, McBride said that pickets had been seen at SCO offices. McBride tended to compare Linux to Napster in the music world, a comparison that could be understood by people outside the technology industry. The assault on open source produced intense feelings in people; Ralph Yarro, chairman of SCO and head of the Canopy Group, and the person characterized by some as the mastermind behind SCO v. IBM, reported that back in his home area in Utah, "I have had friends, good friends, tell me they can't believe what we're doing." Internet message boards such as Slashdot saw many outraged postings. The Yahoo! Finance discussion boards, a popular site at the time for investors, were full of messages urging others to sell SCO stock.
SCO suffered a distributed denial-of-service attack against its website in early May 2003, the first of several times the website would be shut down by hackers. One that began in late January 2004 became the most prolonged, when a denial-of-service attack coming out of the Mydoom computer worm prevented access to the sco.com domain for over a month.
The general IT industry was not pleased with what SCO was doing either. The September 22, 2003 issue of InfoWorld had a dual-orientation cover that, if read right side up, had thumbs-up picture with the text "If SCO Loses", and if read upside down, had a thumbs-down picture with the text "If SCO Wins".
By February of the following year, Businessweek was headlining that the SCO Group was "The Most Hated Company In Tech". A similar characterization was made by the Robert X. Cringely-bylined column in InfoWorld, which in March 2004 called SCO "the Most Despised Technology Company". The cover of a May 2004 issue of Fortune magazine had a photograph of McBride accompanied by the large text "Corporate Enemy No. 1". SCO's actions in suing Linux end users were especially responsible for some of the corporate distaste toward it.
The company that had previously held that title, Microsoft, had by February 2004 spent a reported $12 million on Unix licenses from SCO. The industry giant said the licenses were taken out as part of normal intellectual property compliance for their Windows Services for UNIX product, which provided a Unix compatibility environment for higher-end Windows systems. Linux advocates, however, saw the move as Microsoft looking for a way to fund SCO's lawsuits in an attempt to damage Linux, a view that was shared by some other large industry rivals such as Oracle Corporation's Larry Ellison. Indeed, Linux advocates had seen Microsoft's hand in the SCO Group's actions from almost the beginning; as Bruce Perens wrote in May 2003: "Who really benefits from this mess? Microsoft, whose involvement in getting a defeated Unix company to take on the missionary work of spreading FUD ... about Linux is finally coming to light." The open source community's antipathy towards Microsoft only increased when it became apparent that Microsoft had played at least some role in introducing the SCO Group to BayStar Capital as a potential investment vehicle (both BayStar and Microsoft said there was no stronger role by Microsoft than that).
The distaste for SCO's actions seeped into evaluations of SCO's product line and technical initiatives as well. Software Development Times acknowledged at one point that "many writers in the tech media, which has a pro-open-source, pro-Linux bias, are subtly or overtly hostile to SCO."
As an instance, in July 2003 a columnist for Computerworld examined the SCO Group acquisition of Vultus and concluded that the purpose was not to acquire its technology or staff but rather that Canopy was playing "a shell game ... to move its companies around" in order to exploit and cash in on the SCO Group's rising stock price.
As an analyst for RedMonk stated, "Regardless of the technology they have, there are a lot of enterprises that are going to be ticked off with them. Some of them are receiving these letters (demanding license fees for Linux). There's a perception among companies we've spoken to that SCO is really out to get acquired or to make their money off of licensing schemes rather than technologies. That's an obstacle to adoption of their products." This kind of attitude was exemplified by an apologetic review of UnixWare 7.1.3 in OSNews in December 2003 that acknowledged that SCO had "earned their now nefarious reputation of pure evil" but that "SCO does actually sell a product" and that the reviewer had to assess it objectively.
Another group of people who found the actions of the SCO Group distasteful were some of those familiar with the Santa Cruz Operation, including those who had worked there and those who had written about it; they became protective of that earlier company's reputation, especially given the possible name confusion regarding the two. In an eWeek column entitled "SCO: When Bad Things Happen to Good Brands", technology journalist David Coursey wrote that "SCO was a good company with a good reputation. In some ways, SCO was Linux before Linux, popularizing Unix on low-cost Intel machines. ... It's a good brand name that deserves better, or at least a decent burial and a wake. But instead, its memory is being trashed by people who don't and maybe can't appreciate the fondness many of us still have for the old Santa Cruz Operation." Science fiction author Charles Stross, who had worked as a tech writer in the original SCO's office in England in the early-mid-1990s, called the SCO Group "the brain-eating zombie of the UNIX world" that had done little more than "play merry hell with the Linux community and take a copious metaphorical shit all over my resumé." More simply, former original SCO employee turned journalist and publisher Sara Isenberg, in writing about the history of tech companies in the Santa Cruz area, wrote about The SCO Group, "I'll spare you the sordid legal details, but by then, it was no longer our SCO."
To be sure, not all former original-SCO employees necessarily felt that way. The company still had developers and other staff at the original Santa Cruz location, as well as at the Murray Hill, New Jersey office that dated back not just to the original SCO but to Novell and Unix System Laboratories and AT&T before that. There was also a development office in Delhi, India, as well as regional offices that in many cases came from original SCO. And in 2006, Santa Cruz Operation co-founder Doug Michels made a return to the SCO Forum stage, with McBride presenting him an award for lifetime achievement.
A major factor in the SCO–Linux battle was the Groklaw website and its author, paralegal Pamela Jones. The site explained in depth the legal principles and procedures that would be involved in the different court cases – giving technology-oriented readers a level of understanding of legal matters they would otherwise not have – and pulled together in an easily browsed form a massive number of official court documents and filings. Additionally, some Groklaw readers attended the court hearings in person and posted their detailed observations afterward. Accompanying these valuable data points on Groklaw was an interpretative commentary, from both Jones and her readers, that was relentlessly pro-open source and anti-SCO, to the point where journalist Andrew Orlowski of The Register pointed out that Groklaw sometimes suffered badly from an online echo chamber effect. In any case, such was Groklaw's influence that SCO made thinly veiled accusations that Jones was, in fact, working at the behest of IBM, something that she categorically denied.
The personification of the SCO–Linux battle was no doubt McBride, who was viewed by many as a villain. Columnist Maureen O'Gara, generally seen as at least somewhat sympathetic to SCO's position, characterized McBride as "the most hated man in the computer industry". McBride acknowledged, "I know people want us to go away, but we are not going to go away. We're going to see this through." The Sunday New York Times business section's "Executive Life" feature ran a self-profile of McBride in February 2004, in which he reflected upon his no-nonsense father raising him on a ranch and the difficulties of being a Mormon missionary in Japan and later a Novell executive there, and concluded, "I am absolutely driven by people saying I can't do something."
McBride received death threats serious enough to warrant extra security during his public appearances. Asked in May 2004 to reflect upon what the preceding year had been like, McBride said "This is like ... nothing ... nothing compares to what's happened in the last year."
Financial aspects
SCO's legal campaign coincided with the best financial results it would have, when in fiscal 2003 they had revenues of $79 million and a profit of $3.4 million. The campaign was also initially very beneficial to its stock price. The stock had been under $1.50 in December 2002 and reached a high of $22.29 during mid-October 2003. In some cases jumps in the price occurred when stock analysts initiated coverage of the stock and gave optimistic price targets for it.
But the stock began a downward slide soon after that, and by the end of 2003 about a quarter of all outstanding shares were controlled by short sellers. SCOsource revenue was erratic, with the first half of fiscal 2004 being especially poor.
The SCO Group had 340 employees worldwide when the lawsuits were first underway in 2003. A year later, this count had fallen somewhat, to 305 employees.
During 2004, SCO and BayStar had a falling out, in part due to the investment firm being unhappy with SCO's constant presence in the headlines and the passionate arguments it was involved in with open source advocates, and in part due to the ongoing expenses of running a struggling software products business. Both BayStar and Royal Bank of Canada, which had been part of the initial placement, bought out of the investment by mid-year. Nevertheless, by the calculation of the Deseret News, SCO had gained a net $37 million out of the arrangement.
Legal actions were a large expense, costing the SCO Group several million dollars each quarter and hurting financial results. For its third quarter of fiscal 2004, for instance, the company reported revenue of $11.2 million and a loss of $7.4 million, of which $7.2 million was legal expenses. To that point, the company had spent a total of some $15 million on such costs. Accordingly, in August 2004, SCO renegotiated its deal with its lawyers to put into place a cap on legal expenses at $31 million, in return for which Boies, Schiller & Flexner would receive a larger share of any eventual settlement.
McBride continued to come up with new ideas; at the 2004 Forum show he talked about the SCO Marketplace Initiative, which would set up an online exchange where developers could bid on work-for-hire jobs for SCO Unix enhancements that were otherwise not on the SCO product roadmap. Besides helping SCO out, this would set up an alternative to the open source model, one where programmers could "develop-for-fee" rather than "develop-for-free". McBride ultimately envisioned it becoming "an online distribution engine for business applications from a wide variety of companies and solution providers." The SCO Marketplace began operation a couple of months later, with jobs posted including the writing of device drivers.
The stock slide continued, and by September 2004 had fallen below the $4 level. The company had some 230 employees worldwide at that point.
During the latter portion of 2004, the California office of the company moved out of Santa Cruz proper, as its longtime 400 Encinal Street office building was mostly empty. The thirty employees still remaining took new space on Scotts Valley Drive in nearby Scotts Valley, California.
By early 2005, the SCO Group was in definite financial trouble. Its court case against IBM did not seem to be going well. Results for the full fiscal 2004 year were bad: revenues dropped by 46 percent compared to the year prior, falling to around $43 million, and there was a loss on that of over $28 million. The company had to restate three of its quarterly earnings statements due to accounting mistakes and was at risk of being delisted by NASDAQ. During the previous year it had laid off around 100 people, constituting a third of its workforce, and by August 2005 the headcount had fallen to under 200.
The company became independent of The Canopy Group in March 2005, after the settlement of a lawsuit between the Noorda family and Yarro. As part of the settlement, Canopy transferred all of its shares to Yarro.
Meanwhile, products
Company emphasis
While there was an industry impression that the SCO Group was far more focused on lawsuits than bringing forward new and improved products, throughout this period, the large majority of SCO employees were not involved with the legal battle but rather were working on software products. This was a point that McBride never hesitated to point out, for instance saying in August 2005 that the company was spending "98 percent of our resources" on new product development, and only two percent on the active cases in court with AutoZone, IBM, and Novell. The idea of the SCO Group becoming a lawsuits-only company had been proposed by BayStar but it was not something McBride wanted to do. Indeed, McBride expressed at least public optimism that the company could survive on its Unix and other product business even if it lost the court cases.
Nevertheless, there were significant challenges in the product space, as operating system revenue had been falling. SCO still had a market presence in some of its traditional strongholds, such as pharmacy chains and fast-food restaurants. But to some extent, the reliability and stability of products such as OpenServer (and the applications they were typically used for) worked against SCO, as customers did not feel an urgent need to upgrade.
UnixWare 7.1.4 was released in June 2004, with major new features including additional hardware support, improved security, and the abovementioned SCOx web services components. A review in Network World found that the operating system showed strength in terms of server performance and support for Apache and related open source components, but suffered in terms of hardware discovery and ease of installation. The Linux Kernel Personality (LKP), which had earlier been a major selling point of UnixWare 7, was now removed from the product due to the ongoing legal complications. But UnixWare 7.1.4 did come with the OpenServer Kernel Personality (OKP), which allowed OpenServer-built binary applications to run on the more powerful UnixWare platform without modification, and which had earlier been released as an add-on to UnixWare 7.1.3.
SCO announced a Unix roadmap along with the UnixWare release, intending to convince the market that it was making a strong push in software products. Among the items talked about was Smallfoot, a toolkit for developing customized, small-footprint versions of UnixWare for use as an embedded operating system, and an upgrade to the SCOoffice mail and messaging product. But a constant concern was that SCO had difficulty in attracting independent software vendors to support its operating system platform. Perhaps the biggest such hurdle was the lack of support for current versions of the Oracle Database product. Of the problem in general, a manager at a longtime SCO replicated-site customer, Shoppers Drug Mart in Canada, that was migrating to UnixWare 7.1.4 and was otherwise happy with the product's reliability and performance, said: "[Big ISVs] are pushing SCO down to a tier-three vendor. We need a tier-one or a tier-two vendor that will do current ports and certification. We listen to vendors and watch their roadmaps and when SCO disappears that will be a signal [to move on]."
The new SCOoffice release, SCOoffice Server 4.1 for OpenServer 5.0.7, came out in August 2004. SCOoffice consisted of a mixture of proprietary code and open source components and was marketed as a drop-in alternative to Microsoft Exchange Server for small-to-medium businesses, one that would be compatible with Microsoft Outlook (and other common mail clients) but would be less expensive in total cost, be built upon a more reliable operating system, and have a management interface that could be used by non-technical administrators. Some of the specific technology in the product for interacting with Outlook functions came from Bynari. A review of the SCOoffice technology in PCQuest in 2002 found its ease of installation and features to be good and that it was "a decent package for companies looking for a mail server solution." When originally built by Caldera International, the messaging product had been based on Linux (and UnixWare via LKP), but following the SCO Group's legal actions against Linux it was changed to be based on OpenServer instead, with some disruption to the components that could be included within it. The 4.1 release also contained office collaboration tools for meetings, contacts, and the like. SCOoffice was a consistent product for the SCO Group; at least one, and usually more than one, breakout session about it was held at every Forum conference during the SCO Group era.
"Legend"
By 2005, more than 60 percent of SCO's revenue was still coming from its OpenServer product line and associated support services. This was despite the fact that there had been no major releases to the product in the time since the Santa Cruz Operation and Caldera Systems had merged in 2000. Accordingly, the SCO Group devoted a large effort, consisting of extensive research and development as well as associated product management activities, into producing the more modern OpenServer Release 6, code-named "Legend". After a couple of slips from announced target dates, it was made generally available in June 2005.
The key idea behind Legend was to transplant the UnixWare SVR5 kernel into OpenServer while keeping everything else about the OpenServer environment intact. This gave OpenServer 6 the ability to support 1TB file sizes, the lack of which had become a major limitation of OpenServer 5. In addition, OpenServer 6 could support up to 32 processors and up to 64GB of RAM, had various new security capabilities such as SSH, an IPFilter-based firewall, and IPsec for secure VPNs, and had faster throughput for applications that could make use of real multiple threading.
The launch event was held on June 22, 2005, at Yankee Stadium in New York City. (This prompted a few industry publication headlines of the "SCO Goes To Bat With OpenServer 6" variety.) Hewlett-Packard noted its support for OpenServer 6 on its ProLiant systems. Some SCO partners were quoted as saying they intended to migrate to it.
While some analysts, such as those for IDS and Quandt Analytics, expressed the belief that the release could help SCO upgrade and hold onto its existing customer base, an analyst for Illuminata Inc. was not so optimistic, saying, "In a word, no. Looked at in isolation, there's a lot to like about the new OpenServer. It adds a lot of new capabilities and it finally largely merges the OpenServer and UnixWare trees. But OpenServer is in wild decline – the victim of Windows, Linux and years of SCO mismanagement. Today's SCO is a pariah of the IT industry ... OpenServer is a niche product; SCO needs a miracle."
In practice, despite the good reviews it got from a technical perspective, sales of OpenServer 6 were modest. The company continued to do poorly financially, with fiscal 2005 producing revenues of $36 million and a loss on that of almost $11 million, while fiscal 2006 saw revenues of $29 million together with a loss of over $16 million. Reductions in staff continued and the Scotts Valley office was shut down in late 2006.
Mobility and Me Inc.
The SCO Group's biggest initiative to find a new software business came with what it called Me Inc., first announced at a DEMO conference in California in September 2005.
Me Inc. sought to capitalize on the emergence of smartphones in that it would provide both mobile apps that would run on the phones and an architecture involving a network "edge processor" that would offload processing and storage from the phones themselves and handle authentication, session management, and aggregation of data requests. In such an approach, Me Inc. represented a hosted software as a service (SaaS) offering, with the edge processor representing what would later become referred to as both edge computing and mobile backend as a service. Some of the engineering effort behind Me Inc. came from former Vultus staff, following the failure of the prior SCOx efforts to find a market. Me Inc. initially targeted the Palm Treo line of smartphones. Subsequent support was put into place for the Windows Mobile line of smartphones and some others.
The first services from Me Inc. were Shout, in which users could broadcast text or voice messages from a phone to large groups; Vote, in which users could post surveys to large groups and quickly receive a tally back; and Action, in which users could post tasks for others to do and monitor their statuses. An early user of the Shout service was Utah State University, which used it for broadcasting messages to members of its sports booster organization. Me Inc. services were subsequently used by other Utah organizations as well, including the Utah Jazz, the BYU Cougars, and Mayor of Provo Lewis Billings.
In February 2006, SCO announced that the edge processor had the product name EdgeClick. The development environment for it was branded the EdgeBuilder SDK. In addition, a website EdgeClickPark.com was announced, that would act as an Internet ecosystem for the development and selling of mobile applications and services. As SCO marketing executive Tim Negris said, the idea of EdgeClickPark was to provide a mechanism for "individuals and organizations of all kinds to participate in developing, selling and using digital services." Many of these services would come not from SCO itself but from SCO partners, resellers, and ISVs, a channel it was familiar with from the original SCO era. This was reminiscent of McBride's goal for the pre-lawsuits SCOBiz and the post-lawsuits SCO Marketplace Initiative, and McBride had similarly large ambitions for Me Inc. and EdgeClickPark, envisioning it having the same role for mobile software that iTunes had at the time for digital music.
McBride, who had been looking at various new business opportunities for SCO to enter, saw the company's mobility initiative as something that could become a big success in both the business and consumer spaces, saying "We don't know for sure, but we have a little bit of a spark in our eyes that this will be a big deal." The SCO Group's chief technology officer, Sandy Gupta, stated that for the company, "this is clearly a big switch in paradigm." Industry analysts thought that Me Inc. was aimed at something there was clearly a large market for. As one said, "The operating system market is an increasingly difficult place to compete. SCO Group really does need more diversity [and] these recent pushes represent significant diversification of their product portfolio." Software Development Times commended SCO for coming up with the EdgeClickPark idea, saying that it showed an "interesting flair" in providing a place for partnerships and business development. The company also undertook the proposing of customized mobile applications for various businesses and organizations, using the Me Inc. platform as a starting point.
However, the SCO Group being able to succeed in these efforts faced somewhat long odds, in part due to their being up against many kinds of competition in the mobile space and in part due to the negative feelings about SCO that their campaign against Linux had engendered.
Nevertheless, it was all viewed as a positive development; as Software Development Times summarized in a subheading, "Strategy shift to mobile seen as better 'than suing people'".
SCO's mobility initiative was a main theme of the 2006 instance of its SCO Forum conference, held at The Mirage in Las Vegas. McBride said, "Today is the coming out party for Me Inc. Over the next few years, we want to be a leading provider of mobile application software to the marketplace. ... This is a seminal moment for us." The Forum 2006 schedule, subtitled "Mobility Everywhere", held some nineteen different breakout and training sessions related to Me Inc. and EdgeClick, compared to twenty-six sessions for operating system related topics. Eager to drum up interest in the EdgeClick infrastructure and to get developers to attend the 2006 instance of SCO Forum, McBride offered a prize to the developer of the best application built from the EdgeBuilder SDK: a 507-horsepower, V10-engined BMW M5 sports sedan.
One new mobility offering, HipCheck, which allowed the remote monitoring and administration of business-critical servers on smartphones, was given its debut announcement and demonstration at Forum. The HipCheck service, which gave system administrators the ability to conduct secure actions from their phone to correct some kinds of server anomalies or respond to user requests such as resetting passwords, was officially made available in October 2006, with support for monitoring agents running on various levels of Windows and Unix systems. Several upgrades to HipCheck were subsequently made available.
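The general kind of check-and-alert loop that such a monitoring agent performs can be sketched in a few lines. The example below is illustrative only and is not based on HipCheck's actual implementation; the alert endpoint, threshold, and polling interval are invented for the sketch.

```python
# Purely illustrative sketch of a check-and-alert loop of the kind a remote
# server-monitoring agent performs; not based on HipCheck's actual implementation.
import shutil
import time
import urllib.request

ALERT_URL = "https://alerts.example.com/notify"   # hypothetical alert endpoint
DISK_THRESHOLD = 0.90                             # alert when a filesystem is 90% full

def disk_usage_fraction(path: str = "/") -> float:
    """Return the fraction of the given filesystem that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def send_alert(message: str) -> None:
    """POST a plain-text alert to the (hypothetical) notification service."""
    req = urllib.request.Request(ALERT_URL, data=message.encode("utf-8"), method="POST")
    urllib.request.urlopen(req, timeout=10)

def main() -> None:
    while True:
        used = disk_usage_fraction("/")
        if used >= DISK_THRESHOLD:
            send_alert(f"Root filesystem {used:.0%} full")
        time.sleep(300)  # re-check every five minutes

if __name__ == "__main__":
    main()
```

An agent of this sort runs on the monitored server; the administrator's phone then receives the resulting notifications and, in HipCheck's case, could send back secure corrective actions such as password resets.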
Developed by SCO for FranklinCovey, a Utah-based company that had a line of paper-based planning and organizational products, FCmobilelife was an app for handling personal and organizational task and goal management. (In 2006, SCO had been building a similar app for Day-Timer named DT4, but that collaboration fell through.) In particular, the FCmobilelife app emulated FranklinCovey's methodologies for planning and productivity. Initial versions were released for the Windows Mobile and BlackBerry phones; an app for the iPhone was released in mid-2009.
In October 2008, during SCO Tec Forum 2008, the last Forum ever held, the SCO Mobile Server platform was announced, which was a bundling of the EdgeClick server-side functionality and the Me Inc. client development kit on top of a UnixWare 7 or OpenServer 6 system. By then UnixWare itself, the company's flagship product, had not seen a new release in some four years.
In the end, despite the company's efforts, the mobile services offerings did not attract that much attention or revenues in the marketplace.
Life in bankruptcy
An adverse ruling
On August 10, 2007, SCO suffered a major adverse ruling in the SCO v. Novell case that rejected SCO's claim of ownership of Unix-related copyrights and undermined much of the rest of its overall legal position. Judge Dale A. Kimball of the United States District Court for the District of Utah issued a 102-page summary judgment which found that Novell, not the SCO Group, was the owner of the Unix copyrights; that Novell could force SCO to drop its copyrights-based claims against IBM; and most immediately from a financial perspective, that SCO owed Novell 95 percent of the revenues generated by the licensing of Unix to companies such as Microsoft and Sun. The only SCO claims left intact by Kimball's judgment were ones against IBM related to contractual provisions from Project Monterey. As the Utah Valley-based Daily Herald newspaper subsequently wrote, Kimball's ruling was "a massive legal setback" for SCO.
An appeal was filed. Meanwhile, the company had few options left, as it had not been doing well anyway – by mid-2007, SCO Group stock had fallen to around $1.56 in value – and it now potentially owed Novell more money than it could pay. On September 14, 2007, the SCO Group filed a voluntary petition for reorganization under Chapter 11 of the United States Bankruptcy Code. Development work continued on both the operating system and mobility fronts, but selling a technology product while in bankruptcy was challenging. And from this point on, many of SCO's actions were dependent upon the approval of the United States Bankruptcy Court for the District of Delaware. Annual results for fiscal 2007 showed yet another decline for the company, with revenues falling to $22 million and a loss of nearly $7 million. And because of the bankruptcy filing, SCO was delisted from NASDAQ on December 27, 2007. Downsizing continued, and the New Jersey development office was moved to smaller space in Florham Park, New Jersey in late 2008.
Potential buyers
The interest of Stephen Norris Capital Partners in the SCO Group started in February 2008, when it put forward a $100 million reorganization and debt financing plan for the company, which it would then take private. Stephen L. Norris had been a co-founder of the large and well-known private equity firm The Carlyle Group. There was also an unnamed Middle East partner in the proposed deal; the Associated Press reported that Prince Al-Waleed bin Talal of Saudi Arabia was involved.
But after a couple of months of due diligence investigation of SCO's operations, finances, and legal situation, Stephen Norris Capital Partners considered a different course of action, instead proposing to purchase SCO assets outright.
Norris appeared on stage at Forum in October 2008, where possible acquisition and investment plans were shown to attendees.
The company continued to have declining financial performance; the yearly results for fiscal 2008 showed revenues falling to $16 million and a loss of $8.7 million. In January 2009, the SCO Group asked the bankruptcy court to approve a plan wherein its Unix and mobility assets would be put up for public auction.
That plan did not materialize, and instead in June 2009 a new proposal emerged from a combination of Gulf Capital Partners, of which Stephen Norris was an investor, and MerchantBridge, a London-based, Middle East-focused private equity group, to create an entity called UnXis, which would then buy SCO's software business assets for $2.4 million. At that point the SCO Group had fewer than 70 employees left. This latest plan, too, did not move forward.
Virtualization
The SCO Group's last significant engineering effort revolved around capitalizing on a resurgence of industry interest in hardware virtualization. In this case, such virtualization allowed SCO operating systems to run on newer, more powerful hardware even if SCO did not have support or certification for that hardware, and also allowed SCO customers to take advantage of server consolidation and other benefits of a virtual environment. The initial such release, SCO OpenServer 5.0.7V, came out in August 2009, with support for running on VMware ESX/ESXi hypervisors. The technical changes involved included adding enhanced virtual drivers for storage, networking, and peripherals to the operating system as well as tuning its memory management strategies for the virtual environment.
The virtualization push also included a change in SCO's licensing infrastructure, wherein now licensing would be done on an annual subscription basis.
The company said it would make similar 'V' releases for UnixWare 7.1.4 and OpenServer 6 in the future, but no such releases took place during the SCO Group's lifetime. However, support for the Microsoft Hyper-V hypervisor for OpenServer 5.0.7V was added in early 2010.
Trustee and trial
On August 25, 2009, Edward N. Cahn, a former United States District Judge of the United States District Court for the Eastern District of Pennsylvania and a counsel for the law firm of Blank Rome, was appointed Chapter 11 trustee for The SCO Group.
In October 2009, a restructuring requested by trustee Cahn led to the termination of McBride and the elimination of the CEO position; the existing COO, Jeff Hunsaker, became the top executive in the company. Perhaps the kindest industry press assessment of McBride's tenure came in a column from Steven J. Vaughan-Nichols in Computerworld, who wrote, "You have to give McBride credit. While I dislike SCO, he did an amazing job of fighting a hopeless battle. It's a pity he was working so hard and so well for such a fundamentally wrong cause."
SCO had appealed the August 2007 summary judgment against it in SCO v. Novell and eventually an appeals court had ruled that a trial had to be held on the issue. A three-week trial was held in March 2010, at the conclusion of which the jury reached a unanimous verdict that Novell did not transfer the Unix copyrights to the Santa Cruz Operation in 1995. The decision spelled the end for the large majority of the SCO Group's legal offensive, leaving only contractual claims against IBM to possibly still pursue.
Sale of assets
In April 2010, SCO's mobility software assets were sold to former CEO McBride for $100,000. In September 2010 the SCO Group finally put up the remainder of its non-lawsuit assets for public auction.
Thus in February 2011, another proposal was made, this time for $600,000, with this iteration of a purchasing company being backed by Norris, MerchantBridge, and Gerson Global Advisors.
The bankruptcy court approved this proposal, as the only other bid submitted was for $18.
The sale was closed on April 11, 2011, with Stephen Norris Capital Partners and MerchantBridge being the final buyers, and UnXis was formed. In particular, UnXis took over the product names, ownership, and maintenance of The SCO Group's flagship operating system products, OpenServer and UnixWare. It also took over some service contracts for existing SCO Group customers; these customers represented some 82 countries and business segments such as finance, retail, fast food, and governmental entities. It would be up to UnXis to hire SCO Group employees, of whom, after years of layoffs and attrition, only handfuls were still left at various locations (for instance, at the Lindon, Utah site, only 7 or 8 people still worked, compared with 115 as recently as February 2008).
The SCO Group's litigation rights against IBM and Novell did not transfer, as UnXis said it had no involvement or interest in such activities. What was left of The SCO Group renamed itself to The TSG Group.
Aftermath
The TSG Group
The TSG Group did not have employees per se; any at the Utah site not hired by UnXis were let go. The jury trial verdict was appealed, but in August 2011 the U.S. 10th Circuit Court of Appeals upheld the verdict and the judge's orders following it, thus bringing to a final end SCO v. Novell.
However, in November 2011 the bankruptcy trustee decided to go on with the surviving contractual claims against IBM, saying that "the Novell ruling does not impact the viability of the estate's claims against IBM." The SCO v. IBM case had previously been closed pending the result of the SCO v. Novell case.
Nonetheless, there was no actual business being conducted by the TSG Group, and in August 2012 they filed to convert their status from Chapter 11 reorganization to Chapter 7 liquidation, stating that "there is no reasonable chance of 'rehabilitation'".
In June 2013, a judge granted the motion of the bankruptcy trustee and reopened consideration of SCO v. IBM. The revived case moved slowly, with a 2016 ruling favoring IBM but a 2017 ruling favorable towards continuing the SCO claims. Industry publications greeted these developments with headlines of the "What is dead may never die" variety.
UnXis changed its name to Xinuos in 2013, and despite SCO v. IBM having been reopened in the courts, reiterated that it had no interest in litigation. Instead Xinuos focused on continuing support for UnixWare and OpenServer customers as well as releasing OpenServer 10, a FreeBSD-based product that legacy customers could migrate to.
McBride turned his purchase of SCO's mobility assets into a company called Shout TV Inc., which was founded in late 2011 and provided social media engagement for sports fans during live events by offering trivia games and prize contests. By 2015, Shout TV had experienced some success, especially in partnership with the Spanish football club Real Madrid. The assets of Shout TV were transferred to a company known as MMA Global Inc. in 2018.
Final conclusion of lawsuits
In August 2021, word came of a possible final settlement in the SCO v. IBM case, wherein documents filed in the case indicated that the bankruptcy trustee for TSG Group and IBM appeared to be on the verge of settling the outstanding, Project Monterey-based, claims in the matter for $14.25 million. While the amount was far less than the SCO Group had originally sought when it began the lawsuits, the trustee recommended accepting the settlement, because "ultimate success of the Trustee's claims against IBM is uncertain" and that pursuing the matter further would be expensive and that "the Settlement Agreement provides an immediate and substantial monetary recovery and creates important liquidity for the benefit of all creditors and claimants." As part of this, the trustee would give up any future related claims against IBM. The matter lay with the U.S. Bankruptcy Court for the District of Delaware, which had been handling the case all along.
On November 8, 2021, the settlement was so made under those terms, with IBM paying the TSG bankruptcy trustee $14.25 million and the trustee giving up all future claims and with each party paying their own legal costs. After 18½ years, SCO v. IBM was finally over.
As it happened, another suit against IBM was now active, from Xinuos, which earlier in 2021 had reversed direction from its past disavowals of litigation interest and had filed suit against both IBM and Red Hat, re-alleging old SCO claims about IBM and Project Monterey and alleging new claims that IBM and Red Hat had cornered the operating system market for cloud computing. Unlike the SCO–Linux battles, however, in this case few people in the industry paid the Xinuos action much attention.
In any case, the story of The SCO Group was complete.
Products
SCO UnixWare, a Unix operating system. UnixWare 2.x and below were direct descendants of Unix System V Release 4.2 and were originally developed by AT&T, Univel, Novell and later by The Santa Cruz Operation. UnixWare 7 was sold as a Unix OS combining UnixWare 2 and OpenServer 5 and was based on System V Release 5.
SCO OpenServer, another Unix operating system, which was originally developed by The Santa Cruz Operation. SCO OpenServer 5 was a descendant of SCO UNIX, which is in turn a descendant of XENIX. OpenServer 6 is, in fact, an OpenServer compatibility environment running on a modern SVR5-based Unix kernel.
Smallfoot, an operating system and GUI created specifically for point of sale applications.
SCOBiz, a web-based e-commerce development and hosting site with web services-based integration to existing legacy applications.
SCOx Web Services Substrate, a web services-based framework for modernizing legacy applications.
WebFace, a development environment for rich-UI browser-based Internet applications.
SCOoffice Server, an e-mail and collaboration solution, based on a mixture of open-source and closed-source software.
SCO Marketplace Initiative, an online exchange offering pay-per-project development opportunities.
Me, Inc., a mobile services platform with services including Shout, HipCheck, and FCmobilelife.
List of SCO lawsuits
SCO v. IBM (The SCO Group, Inc. vs. International Business Machines, Inc., case number 2:03cv0294, United States District Court for the District of Utah)
Red Hat v. SCO
SCO v. Novell
SCO v. AutoZone
SCO v. DaimlerChrysler
See also
SCO Forum
References
External links
The SCO Group, Inc. (archived web site caldera.com from 2002-09-14 to 2004-09-01 and sco.com from 2001-05-08)
Groklaw: News and Commentary about SCO lawsuits and Other Related Legal Information
SCOX Bankruptcy information and documents
Financial information for The SCO Group (SCOXQ)
Yahoo! — The SCO Group, Inc. company profile, archive reference
History of The SCO Group at Encyclopedia.com, 2006
Caldera (company)
Defunct software companies of the United States
Defunct companies based in Utah
SCO–Linux disputes
Software companies established in 2002
2002 establishments in Utah
Software companies disestablished in 2011
2012 disestablishments in Utah
Companies that have filed for Chapter 7 bankruptcy
Companies that filed for Chapter 11 bankruptcy in 2007
NetKernel
NetKernel is a British software company and software platform by the same name that is used for High Performance Computing, Enterprise Application Integration, and Energy Efficient Computation.
It allows developers to cleanly separate code from architecture. It can be used as an application server, embedded in a Java container or employed as a cloud computing platform.
As a platform, it is an implementation of the resource-oriented computing (ROC) abstraction. ROC is a logical computing model that resides on top of, but is completely isolated from, the physical realm of code and objects. In ROC, information and services are identified by logical addresses which are resolved to physical endpoints for the duration of a request and then released. Logical indirect addressing results in flexible systems that can be changed while the system is in operation. In NetKernel, the boundary between the logical and physical layers is mediated by an operating-system-caliber microkernel that can perform various transparent optimizations.
The idea of using resources to model abstract information stems from the REST architectural style and the World Wide Web. The idea of using a uniform addressing model stems from the Unix operating system. NetKernel can be considered a unification of the Web and Unix implemented as a software operating system running on a monolithic microkernel within a single computer.
NetKernel was developed by 1060 Research and is offered under a dual open-source software and commercial software license.
History
NetKernel was started at Hewlett-Packard Labs in 1999. It was conceived by Dr. Russ Perry, Dr. Royston Sellman and Dr. Peter Rodgers as a general purpose XML operating environment that could address the needs of the exploding interest in XML dialects for intra-industry XML messaging.
Rodgers saw the web as an implementation of a general abstraction which he extrapolated as ROC, but whereas the web is limited to publishing information, he set about conceiving a solution that could perform computation using similar principles. Working in close partnership with co-founder Tony Butterfield, they discovered a method for writing software that could be executed across a logical model, separated from the physical realm of code and objects. Recognising the potential for this approach, they spun out of HP Labs.
Rodgers and Butterfield began their company as "1060 Research Limited" in Chipping Sodbury, a small market town on the edge of the Cotswolds region of England, in 2002, and over a number of years developed the platform that became NetKernel.
In early 2018, 1060 Research announced that it was appointing a new CEO, Charles Radclyffe. Radclyffe announced to the NetKernel community in February 2018 that the team were working on a new platform based on NKEE 6 which would be fully hosted, programmable and accessible via the web: NetKernel Cloud. Radclyffe resigned after six months.
Concepts
Resource
A resource is identifiable information within a computer system. Resources are an abstract notion and they cannot be manipulated directly. When a resource is requested, a concrete, immutable representation is provided which captures the current state of the resource. This is directly analogous to the way the World Wide Web functions. On the Web, a URL address identifies a globally accessible resource. When a browser issues a request for the resource it is sent a representation of the resource in the response.
Addresses
A resource is identified by an address within an address space. In NetKernel, Uniform Resource Identifier (URI) addresses are used to identify all resources. Unlike the Web, which has a single global address space, NetKernel supports an unlimited number of address spaces and supports relationships between address spaces.
NetKernel supports a variety of URI schemes and introduces new ones specifically applicable to URI addressing within a software system.
Request
The fundamental operation in NetKernel is a resource request, or request. A request consists of a resource URI address and a verb.
Supported verbs include SOURCE, SINK, NEW, DELETE, EXISTS and META. Each request is dispatched to a microkernel which resolves the URI address to a physical endpoint and assigns and schedules a thread for processing. When the endpoint completes processing the microkernel returns the response to the initiating client.
Programming
The fundamental instruction in NetKernel is a resource request, specified by a URI. Mechanisms that sequence URI requests are located above the microkernel. In the current Java-based implementation, requests are dispatched using a Java API. This implies that any language that can call a Java API can be used to program NetKernel.
The set of supported languages includes:
Java
Ruby
Scala
Clojure
JavaScript
Python 2
Groovy
Beanshell
PHP
DPML
XML related languages such as XQuery
The URI specification itself has sufficient richness to express a functional programming language.
Active URI Scheme
The active URI scheme was proposed by Hewlett-Packard as a means to encode a functional program within a URI.
active: {function-name} [+ {parameter-name} @ {parameter-value-URI}]*
For example, the following URI calls a random number generator
active:random
and the following uses an XSLT service to transform an XML document with an XSLT stylesheet:
active:xslt+operator@file:/style.xsl+operand@file:/document.xml
Because the argument values may be URI addresses themselves, a tree-structured set of function calls can be encoded in a single URI.
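For instance (a purely structural illustration reusing the two services above; any escaping the full grammar may require for arguments of nested requests is not shown), the operand of the XSLT service could itself be supplied by another active request:

active:xslt+operator@file:/style.xsl+operand@active:random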
Transports
Transports are a mechanism used to introduce requests from outside of NetKernel to the NetKernel address space. Transports are available for the HTTP protocol, JMS (Java Message Service), and CRON. Other transports can be easily added as they are independent from the rest of NetKernel.
The role of the transport is to translate an external request based on one protocol into a NetKernel request with a URI and a specific verb (SOURCE, SINK, etc.) and then to send the returned representation back to the client via the supported protocol.
Two mappings are handled by a transport. The first is between the address space of the externally supported protocol to the internal NetKernel address space. And the second is between the verb or action specified externally into a NetKernel verb.
For example, in the case of the HTTP transport, the external address space is a sub-space of a URL. The following mapping illustrates this point.
http://www.mywebsite.com/publications/...
|
v
file:/src/publications/...
In addition, the HTTP protocol supports methods such as GET, PUT, HEAD, etc. which are mapped to NetKernel verbs.
Scripting languages
A mechanism is needed to issue the URI requests, capture the returned representations, and communicate with clients.
Scripting languages are executed by their runtime engine, which is itself a service. For example, the Groovy language runtime will run a program contained in the file file:/program.gy with the following:
active:groovy+operator@file:/program.gy
See also
Representational State Transfer
Web resource
Jolie
List of user interface markup languages
References
External links
1060 Research
Cross-platform software
Serverless computing
Distributed computing
Librem
Librem is a line of computers manufactured by Purism, SPC featuring free (libre) software. The laptop line is designed to protect privacy and freedom by providing no non-free (proprietary) software in the operating system or kernel, avoiding the Intel Active Management Technology, and gradually freeing and securing firmware. Librem laptops feature hardware kill switches for the microphone, webcam, Bluetooth and Wi-Fi.
Models
Laptops
Librem 13 and Librem 15
In 2014 Purism launched a crowdfunding campaign on Crowd Supply to fund the creation and production of the Librem 15 laptop, conceived as a modern alternative to existing open-source hardware laptops, all of which used older hardware. The 15 in the name refers to its 15-inch screen size. The campaign succeeded after being extended, and the laptops were shipped to backers. In a second revision of the laptop, hardware kill switches for the camera, microphone, Wi-Fi, and Bluetooth were added.
After the successful launch of the Librem 15, Purism created another campaign on Crowd Supply for a 13-inch laptop called the Librem 13, which also came with hardware kill switches similar to those on the Librem 15v2. The campaign was again successful and the laptops were shipped to customers.
Purism announced in December 2016 that it would start shipping from inventory rather than building to order with the new batches of Librem 15 and 13.
Purism had two laptop models in production, the Librem 15 (version 4, US$1,599) and Librem 13 (version 4, $1,399).
Comparison of laptops
Librem Mini
The Librem Mini is a small form factor desktop computer, which began shipping in June 2020.
Librem 5
On August 24, 2017, Purism started a crowdfunding campaign for the Librem 5, a smartphone aimed to run 100% free software, which would "[focus] on security by design and privacy protection by default". Purism claimed that the phone would become "the world's first ever IP-native mobile handset, using end-to-end encrypted decentralized communication." Purism cooperated with KDE and GNOME in its development of Librem 5.
Security features of the Librem 5 include separation of the CPU from the baseband processor, which, according to Linux Magazine, makes the Librem 5 unique in comparison to other mobile phones. The Librem 5 also features hardware kill switches for Wi-Fi and Bluetooth communication and the phone's camera, microphone, and baseband processor.
The default operating system for the Librem 5 is Purism's PureOS, a Debian GNU/Linux derivative. The operating system uses a new user interface called Phosh, based on Wayland, wlroots, GTK and GNOME middleware. It is planned that Phosh/Plasma Mobile, Ubuntu Touch, and postmarketOS can also be installed on the phone.
The release of the Librem 5 has been postponed several times. In September 2018, Purism announced that the launch date of the Librem 5 would be moved from January to April 2019, because of two hardware bugs and the holiday season in Europe and North America. The Librem 5's DevKits for software developers were shipped in December 2018. The launch date was later postponed to the third quarter because of the necessity of further CPU tests, and on September 24, 2019, Purism announced that the first batch of Librem 5 phones had started shipping. The finished version of the Librem 5, known as "Evergreen", was finally shipped on November 18, 2020.
Librem Server
The Librem Server is a rack-mounted server, released to the public in December 2019.
Librem Key
Announced on 20 September 2018, the Librem Key is a hardware USB security token with multiple features, including integration with a tamper-evident Heads BIOS, which ensures that a Librem laptop's Basic Input/Output System (BIOS) has not been maliciously altered since the laptop was last started. It also provides one-time password storage with 3 × HMAC-based One-time Password algorithm (HOTP) (RFC 4226) and 15 × Time-based One-time Password algorithm (TOTP) (RFC 6238) slots, an integrated password manager (16 entries), a 40 kbit/s true random number generator, and a tamper-resistant smart card. The key supports type A USB 2.0, has dimensions of 48 × 19 × 7 mm, and weighs 6 g.
Operating system
Initially planning to preload its Librem laptops with the Trisquel operating system, Purism eventually moved off the Trisquel platform to rebase onto Debian for the 2.0 release of its PureOS Linux operating system. As an alternative to PureOS, Librem laptops are purchasable with Qubes preloaded. In December 2017 the Free Software Foundation added PureOS to its list of endorsed GNU/Linux distributions.
BIOS
In 2015, Purism began research to port the Librem 13 to coreboot but the effort was initially stalled. By the end of the year, a coreboot developer completed an initial port of the Librem 13 and submitted it for review. In December 2016, hardware enablement developer Youness Alaoui joined Purism and was tasked to complete the coreboot port for the original Librem 13 and prepare a port for the second revision of the device. Since summer 2017, new Librem laptops are shipped with coreboot as their standard BIOS, and updates are available for all older models.
See also
Linux adoption
System76
Pine64
References
Computer hardware
Computer security companies
GNOME Mobile
Linux-based devices
Open-source hardware
Smartphones
.exe
.exe is a common filename extension denoting an executable file (the main execution point of a computer program) for Microsoft Windows.
File formats
There are numerous file formats which may be used by a file with a .exe extension:
DOS
16-bit DOS MZ executable: The original DOS executable file format. It can be identified by the letters "MZ" at the beginning of the file in ASCII. All later formats have an MZ DOS stub header.
16-bit New Executable: Introduced with the multitasking MS-DOS 4.0 and also used by 16-bit OS/2 and Windows, NE can be identified by the "NE" in ASCII.
OS/2
32-bit Linear Executable: Introduced with OS/2 2.0, these can be identified by the "LX" in ASCII. They can only be run by OS/2 2.0 and higher. They are also used by some DOS extenders.
Mixed 16/32-bit Linear Executable: Introduced with OS/2 2.0, these can be identified by the "LE" in ASCII. This format is used for VxD drivers under Windows 3.x, OS/2, and Windows 9x; it is also used by some DOS extenders.
Windows
When a 16-bit or 32-bit Windows executable is run by Windows, execution starts at either the NE or the PE header and ignores the MZ code, known as the DOS stub. When started in DOS, the stub typically displays a message such as "This program cannot be run in DOS mode" before exiting cleanly, thereby constituting a minimal form of fat binary. A few dual-mode programs (MZ-NE or MZ-PE) such as regedit and older WinZip self-extractors include a more functional DOS section.
32-bit Portable Executable: Introduced with Windows NT, these can be identified by the "PE" in ASCII (although not at the beginning; these files also begin with "MZ").
64-bit Portable Executable (PE32+): Introduced by 64-bit versions of Windows, this is a PE file with wider fields. In most cases, code can be written to simply work as either a 32- or 64-bit PE file.
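A minimal sketch in C of how the two signatures described above are located: the file must start with "MZ", and the 32-bit value stored at offset 0x3C of the DOS header (the e_lfanew field) points to the "PE\0\0" signature. Error handling is trimmed, the file name is an assumption, and a little-endian host (as on x86) is assumed when reading the offset:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("example.exe", "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned char mz[2];
    fread(mz, 1, 2, f);
    if (mz[0] != 'M' || mz[1] != 'Z') { puts("not an MZ executable"); return 1; }

    uint32_t pe_offset;               /* e_lfanew: offset of the newer header */
    fseek(f, 0x3C, SEEK_SET);
    fread(&pe_offset, sizeof pe_offset, 1, f);   /* little-endian host assumed */

    unsigned char sig[4];
    fseek(f, (long)pe_offset, SEEK_SET);
    fread(sig, 1, 4, f);
    if (sig[0] == 'P' && sig[1] == 'E' && sig[2] == 0 && sig[3] == 0)
        puts("PE (Windows NT-family) executable");
    else
        puts("plain DOS executable, or an NE/LE/LX format at this offset");

    fclose(f);
    return 0;
}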
IExpress
IExpress is a Windows program that makes self-extracting .exe files. It uses self-extraction directive (.sed) files to extract files, optionally running an installation command. It supports package titles, confirmation prompts, license agreements, and post-install commands using an .inf file.
Other
Besides these, there are also many custom EXE formats, including but not limited to W3 (a collection of LE files, only used in WIN386.EXE), W4 (a compressed collection of LE files, only used in VMM32.VXD), DL, MP, P2, P3 (the last three used by Phar Lap extenders).
See also
Comparison of executable file formats
Executable compression
CMD file (CP/M)
Windows Installer files (msi)
References
Further reading
External links
Dependency Walker
MZ EXE header format
DOS files
DOS technology
Executable file formats
Filename extensions
Windows administration
Hello World (disambiguation)
A "Hello, World!" program generally is a computer program that outputs or displays the message "Hello, World!".
Hello World may also refer to:
Music
"Hello World!" (composition), song by the Iamus computer
"Hello World" (Tremeloes song), 1969
"Hello World" (Lady Antebellum song), 2010
"Hello World", a song by Nik Kershaw from the album To Be Frank
"Hello, World!", a 2015 song by Bump of Chicken
"Hello World", a 2015 song by Ginny Blackmore
"Hello World", a song by Belle Perez
Albums
Hello World, 2011 album by Back-On
Hello World (Information Society album), 2014
Hello World (Scandal album), 2014
Hello World: The Motown Solo Collection, compilation album by Michael Jackson
Other uses
Helloworld Travel, an Australian-based travel agency
Helloworld (TV program), an Australian travel and lifestyle television program
Hello World (film), a 2019 Japanese animated film
Hello World: How to Be Human in the Age of the Machine, a book by Hannah Fry
Software studies
Software studies is an emerging interdisciplinary research field, which studies software systems and their social and cultural effects. The implementation and use of software has been studied in recent fields such as cyberculture, Internet studies, new media studies, and digital culture, yet prior to software studies, software was rarely ever addressed as a distinct object of study. To study software as an artifact, software studies draws upon methods and theory from the digital humanities and from computational perspectives on software. Methodologically, software studies usually differs from the approaches of computer science and software engineering, which concern themselves primarily with software in information theory and in practical application; however, these fields all share an emphasis on computer literacy, particularly in the areas of programming and source code. This emphasis on analysing software sources and processes (rather than interfaces) often distinguishes software studies from new media studies, which is usually restricted to discussions of interfaces and observable effects.
History
The conceptual origins of software studies include Marshall McLuhan's focus on the role of media in themselves, rather than the content of media platforms, in shaping culture. Early references to the study of software as a cultural practice appear in Friedrich Kittler's essay, "Es gibt keine Software", Lev Manovich's Language of New Media, and Matthew Fuller's Behind the Blip: Essays on the Culture of Software. Much of the impetus for the development of software studies has come from video game studies, particularly platform studies, the study of video games and other software artifacts in their hardware and software contexts. New media art, software art, motion graphics, and computer-aided design are also significant software-based cultural practices, as is the creation of new protocols and platforms.
The first conference events in the emerging field were Software Studies Workshop 2006 and SoftWhere 2008.
In 2008, MIT Press launched a Software Studies book series with an edited volume of essays (Fuller's Software Studies: A Lexicon), and the first academic program was launched (Lev Manovich, Benjamin H. Bratton, and Noah Wardrip-Fruin's "Software Studies Initiative" at U. California San Diego).
In 2011, a number of mainly British researchers established Computational Culture, an open-access peer-reviewed journal. The journal provides a platform for "inter-disciplinary enquiry into the nature of the culture of computational objects, practices, processes and structures."
Related fields
Software studies is closely related to a number of other emerging fields in the digital humanities that explore functional components of technology from a social and cultural perspective. Software studies' focus is at the level of the entire program, specifically the relationship between interface and code. Notably related are critical code studies, which is more closely attuned to the code rather than the program, and platform studies, which investigates the relationships between hardware and software.
See also
Cultural studies
Digital sociology
References
Footnotes
Bibliography
Further reading
External links
Software studies bibliography at Monoskop.org
Computing culture
Cultural studies
Digital humanities
Science and technology studies
Software
Technological change
Orion-128
The Orion-128 () is a DIY computer designed in the Soviet Union. It was featured in the Radio magazine in 1990, and other materials for the computer were published until 1996. It was the last Intel 8080-based DIY computer in Russia.
Overview
The Orion-128 used the same concepts as the Specialist and had similar specifications, with both advances and flaws. It gained more popularity because it was supported by a more popular magazine. In the early 1990s the computer was produced industrially at the Livny pilot plant of computer graphics equipment in Oryol Oblast. Much of the software for the Orion-128 was ported by hobbyists from the Specialist and the ZX Spectrum.
Technical specifications
CPU: KR580VM80A (Intel 8080A clone) clocked at 2.5 MHz.
RAM: 128 KiB in original version, expandable to 256 KiB. A bank switching scheme was used.
ROM: 2 KiB contains monitor firmware
Video: three graphics modes with the same image resolution 384 × 256 pixels. Text can be displayed using 64 columns × 25 rows of characters. Images for the upper case Cyrillic and Latin characters in KOI-7 N2 encoding are built in the Monitor ROM. List of graphics modes includes:
monochrome mode (two color palettes available: black and green, yellow and blue)
4 color mode (each pixel has its own color, two palettes available)
16 color mode (each group of 8 horizontal pixels can use one of 16 foreground colors and one of 16 background colors)
Storage media: cassette tape, ROM drive (a special board containing a set of ROM chips). In later years a floppy disk controller and an ATA hard disk controller were developed
Keyboard: 67 keys. The keyboard matrix is attached via programmable peripheral interface chip KR580VV55 (Intel 8255 clone) and scanned by CPU
Peculiarities
"Orion" is partially compatible with "Radio-86RK" in terms of keyboard, standard ROM subroutines and data storage format on the cassette, and with another amateur radio computer, "Specialist" in terms of graphic screen format. Apparently, he also used the idea of an electronic disk from RAM from another domestic computer with 128 kb RAM - Okean-240. The Orion developers, they say, set themselves the task of creating an inexpensive, simple and affordable consumer PC with good graphics capabilities, and they succeeded. In the minimum configuration (without color, with 64 kb RAM), ORION contains only 42 microcircuits, in the standard configuration (128 kb) there are only 59, and expensive or scarce components are not used, you can use obsolete series microcircuits. For the same reasons, the KR580VM80A was used as the CPU, as the cheapest and most affordable. Moreover, the Orion circuitry is such that the processor operates at its maximum frequency of 2.5 MHz without any delays. The same idea of transparent access to RAM is implemented, which was previously applied in the "Specialist" and its clones. Other domestic machines used WAIT cycles to synchronize the processor with the video part, which reduced performance by 25%. This made the Orion, along with the Corvette, the fastest domestic home computer on this processor. For example, Vector-06Ts, which has a much higher clock rate of 3 MHz, is inferior to Orion in terms of speed due to the slowdown of the processor by the video controller.
"Orion" has high graphics capabilities for this class of machines - a resolution of 384x256 allows good graphics in games, although the resolution is still insufficient for text processing; a full-fledged color mode is provided with its own color for each pixel (analogous to CGA, only with a different organization), 4 colors selected from two palettes and visually the number of colors can be increased due to a mosaic of colored dots, as is done in CGA games. This mode is typical for many Western computers of this level (alas, this mode was almost never used by programs, because it was not needed for text, and there was no graphic editor for creating games); and for games and texts there is a convenient 16-color mode (only 2 colors are possible within the screen byte).
The organization of the Orion screen is linear and very convenient for the programmer: the low byte of the address specifies the vertical position of a screen byte, and the high byte indicates its horizontal position. This simplified and accelerated graphics output (a similar screen organization is used in the Specialist, Vector and Okean). In 16-color mode the screen consists of two planes, a graphics plane and a color plane. For text in a single-color window this speeds up output and scrolling, since the window is painted with its colors once before output, which halves the number of bytes written per character (relative to CGA), and the color plane does not need to be touched again while drawing in the window. In all video modes the Orion also provides up to 4 software-switchable screen buffers, which allows output to a currently invisible screen that is then switched in instantly; this eliminates flickering sprites in dynamic games and the need to fight flicker with interrupts, as on the ZX Spectrum. On the Orion, even large sprites can be moved across the screen without flickering.
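Expressed as a short C helper (a sketch under stated assumptions: SCREEN_BASE stands for the base address of whichever of the four screen buffers is currently selected, and the 0xC000 value used here is only a placeholder):

#include <stdint.h>

#define SCREEN_BASE 0xC000u   /* assumed base of the selected screen buffer */

/* The 384-pixel-wide screen is 48 bytes across and 256 lines tall.
   Per the layout described above, the low address byte selects the line (y)
   and the high address byte selects the horizontal byte column (x / 8). */
static uint16_t orion_screen_addr(uint8_t byte_column, uint8_t line)
{
    return (uint16_t)(SCREEN_BASE + ((uint16_t)byte_column << 8) + line);
}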
For the Orion-128, its developers initially created their own ORDOS operating system, designed to work not with a floppy drive but with a ROM disk (external ROM read through a programmable parallel adapter, PPA), RAM disks (the second and subsequent 60-kilobyte pages of RAM) and a tape recorder. ORDOS made it possible to work comfortably without floppy drives, which were hard to obtain at the time (the small-scale-production Okean-240 similarly had a built-in CP/M in ROM running on an electronic disk in RAM). Among mass-produced home computers, the Junior FV-6506, which also used CP/M, had something similar.
The Orion's main relative shortcoming is its non-optimal screen resolution of 384 × 256 at a video signal frequency of 10 MHz. This forces the use of an unattractive and, more importantly, non-byte-aligned 6 × 10 font, which (because of the masking required) is displayed about 2.5 times more slowly than a byte-aligned 8 × 10 font. The Corvette, Okean and Vector use a 512 × 256 screen, so even with a slower CPU and a larger screen buffer their text output is much faster and better looking, and the raster occupies the whole screen (whereas on the Orion it occupies only part of it). The lack of a hardware sound generator is also sometimes cited as a disadvantage (sound is generated purely in software, at a heavy cost in processor time); the designers accepted this because they understood that the gaming niche in the country was already occupied by ZX Spectrum clones.
The lack of hardware scrolling, contrary to reviews on some sites, is not really a disadvantage: thanks to the vertically linear organization of the screen, vertical scrolling done through the stack is fast enough, and horizontal scrolling is simply not needed.
References
Soviet computer systems
Ioctl
In computing, ioctl (an abbreviation of input/output control) is a system call for device-specific input/output operations and other operations which cannot be expressed by regular system calls. It takes a parameter specifying a request code; the effect of a call depends completely on the request code. Request codes are often device-specific. For instance, a CD-ROM device driver which can instruct a physical device to eject a disc would provide an ioctl request code to do so. Device-independent request codes are sometimes used to give userspace access to kernel functions which are only used by core system software or still under development.
The ioctl system call first appeared in Version 7 of Unix under that name. It is supported by most Unix and Unix-like systems, including Linux and macOS, though the available request codes differ from system to system. Microsoft Windows provides a similar function, named "DeviceIoControl", in its Win32 API.
Background
Conventional operating systems can be divided into two layers, userspace and the kernel. Application code such as a text editor resides in userspace, while the underlying facilities of the operating system, such as the network stack, reside in the kernel. Kernel code handles sensitive resources and implements the security and reliability barriers between applications; for this reason, user mode applications are prevented by the operating system from directly accessing kernel resources.
Userspace applications typically make requests to the kernel by means of system calls, whose code lies in the kernel layer. A system call usually takes the form of a "system call vector", in which the desired system call is indicated with an index number. For instance, exit() might be system call number 1, and write() number 4. The system call vector is then used to find the desired kernel function for the request. In this way, conventional operating systems typically provide several hundred system calls to the userspace.
Though an expedient design for accessing standard kernel facilities, system calls are sometimes inappropriate for accessing non-standard hardware peripherals. By necessity, most hardware peripherals (aka devices) are directly addressable only within the kernel. But user code may need to communicate directly with devices; for instance, an administrator might configure the media type on an Ethernet interface. Modern operating systems support diverse devices, many of which offer a large collection of facilities. Some of these facilities may not be foreseen by the kernel designer, and as a consequence it is difficult for a kernel to provide system calls for using the devices.
To solve this problem, the kernel is designed to be extensible, and may accept an extra module called a device driver which runs in kernel space and can directly address the device. An ioctl interface is a single system call by which userspace may communicate with device drivers. Requests on a device driver are vectored with respect to this ioctl system call, typically by a handle to the device and a request number. The basic kernel can thus allow the userspace to access a device driver without knowing anything about the facilities supported by the device, and without needing an unmanageably large collection of system calls.
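As a concrete illustration of this pattern, the disc-eject example from the introduction can be written in a few lines of C on Linux (a minimal sketch; the /dev/cdrom path and the CDROMEJECT request code from <linux/cdrom.h> are Linux-specific assumptions):

#include <fcntl.h>        /* open */
#include <stdio.h>        /* perror */
#include <sys/ioctl.h>    /* ioctl */
#include <unistd.h>       /* close */
#include <linux/cdrom.h>  /* CDROMEJECT request code */

int main(void)
{
    /* Open the drive without waiting for a disc to be present. */
    int fd = open("/dev/cdrom", O_RDONLY | O_NONBLOCK);
    if (fd < 0) { perror("open"); return 1; }

    /* The request code alone selects the operation; no data is passed. */
    if (ioctl(fd, CDROMEJECT) < 0)
        perror("ioctl(CDROMEJECT)");

    close(fd);
    return 0;
}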
Uses
Hardware device configuration
A common use of ioctl is to control hardware devices.
For example, on Win32 systems, ioctl calls can communicate with USB devices, or they can discover drive-geometry information of the attached storage-devices.
On OpenBSD and NetBSD, ioctl is used by the pseudo-device driver and the bioctl utility to implement RAID volume management in a unified vendor-agnostic interface similar to ifconfig.
On NetBSD, ioctl is also used by the sysmon framework.
Terminals
One use of ioctl in code exposed to end-user applications is terminal I/O.
Unix operating systems have traditionally made heavy use of command-line interfaces. The Unix command-line interface is built on pseudo terminals (ptys), which emulate hardware text terminals such as VT100s. A pty is controlled and configured as if it were a hardware device, using ioctl calls. For instance, the window size of a pty is set using the TIOCSWINSZ call. The TIOCSTI (terminal I/O control, simulate terminal input) ioctl function can push a character into a device stream.
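For instance, a program can query the current dimensions of its pty with the companion TIOCGWINSZ request; a minimal sketch in C, assuming standard output is connected to a terminal:

#include <stdio.h>
#include <sys/ioctl.h>  /* ioctl, TIOCGWINSZ, struct winsize */
#include <unistd.h>     /* STDOUT_FILENO */

int main(void)
{
    struct winsize ws;

    /* The terminal driver fills in the current row and column counts. */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) < 0) {
        perror("ioctl(TIOCGWINSZ)");
        return 1;
    }
    printf("%d rows x %d columns\n", ws.ws_row, ws.ws_col);
    return 0;
}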
Kernel extensions
When applications need to extend the kernel, for instance to accelerate network processing, ioctl calls provide a convenient way to bridge userspace code to kernel extensions. Kernel extensions can provide a location in the filesystem that can be opened by name, through which an arbitrary number of ioctl calls can be dispatched, allowing the extension to be programmed without adding system calls to the operating system.
sysctl alternative
According to an OpenBSD developer, ioctl and sysctl are the two system calls for extending the kernel, with sysctl possibly being the simpler of the two.
In NetBSD, the sysmon_envsys framework for hardware monitoring uses ioctl through proplib; whereas OpenBSD and DragonFly BSD instead use sysctl for their corresponding hw.sensors framework. The original revision of envsys in NetBSD was implemented with ioctl before proplib was available, and had a message suggesting that the framework is experimental, and should be replaced by a sysctl(8) interface, should one be developed, which potentially explains the choice of sysctl in OpenBSD with its subsequent introduction of hw.sensors in 2003. However, when the envsys framework was redesigned in 2007 around proplib, the system call remained as ioctl, and the message was removed.
Implementations
Unix
The ioctl system call first appeared in Version 7 Unix, as a renamed stty. An ioctl call takes as parameters:
an open file descriptor
a request code number
either an integer value, possibly unsigned (going to the driver) or a pointer to data (either going to the driver, coming back from the driver, or both).
The kernel generally dispatches an ioctl call straight to the device driver, which can interpret the request number and data in whatever way required. The writers of each driver document request numbers for that particular driver and provide them as constants in a header file.
Some Unix systems, including Linux, have conventions which encode within the request number the size of the data to be transferred to/from the device driver, the direction of the data transfer and the identity of the driver implementing the request. Regardless of whether such a convention is followed, the kernel and the driver collaborate to deliver a uniform error code (denoted by the symbolic constant ENOTTY) to an application which makes a request of a driver which does not recognise it.
The mnemonic ENOTTY (traditionally associated with the textual message "Not a typewriter") derives from the earliest systems that incorporated an ioctl call, where only the teletype (tty) device raised this error. Though the symbolic mnemonic is fixed by compatibility requirements, some modern systems more helpfully render a more general message such as "Inappropriate device control operation" (or a localization thereof).
TCSETS exemplifies an ioctl call on a serial port. The normal read and write calls on a serial port receive and send data bytes. An ioctl(fd,TCSETS,data) call, separate from such normal I/O, controls various driver options like handling of special characters, or the output signals on the port (such as the DTR signal).
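The modem-control signals mentioned above are manipulated the same way; a minimal sketch that raises DTR on a serial port using the TIOCMGET and TIOCMSET requests (the /dev/ttyS0 device path is an assumption):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>  /* TIOCMGET, TIOCMSET, TIOCM_DTR */
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    int lines;
    /* Read the current state of the modem-control lines... */
    if (ioctl(fd, TIOCMGET, &lines) == 0) {
        lines |= TIOCM_DTR;            /* ...set the DTR bit... */
        ioctl(fd, TIOCMSET, &lines);   /* ...and write the lines back. */
    } else {
        perror("ioctl(TIOCMGET)");
    }

    close(fd);
    return 0;
}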
Win32
A Win32 DeviceIoControl takes as parameters:
an open object handle (the Win32 equivalent of a file descriptor)
a request code number (the "control code")
a buffer for input parameters
length of the input buffer
a buffer for output results
length of the output buffer
an OVERLAPPED structure, if overlapped I/O is being used.
The Win32 device control code takes into consideration the mode of the operation being performed.
There are 4 defined modes of operation, impacting the security of the device driver -
METHOD_IN_DIRECT: The buffer address is verified to be readable by the user mode caller.
METHOD_OUT_DIRECT: The buffer address is verified to be writable by the user mode caller.
METHOD_NEITHER: User mode virtual addresses are passed to the driver without mapping or validation.
METHOD_BUFFERED: IO Manager controlled shared buffers are used to move data to and from user mode.
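Putting these parameters together, the drive-geometry query mentioned under Uses can be sketched as follows (a buffered call with no input buffer and no overlapped I/O; the \\.\PhysicalDrive0 path is an assumption and opening it typically requires administrative rights):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* A desired-access value of 0 is enough for metadata-only queries. */
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0", 0,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("CreateFile failed\n"); return 1; }

    DISK_GEOMETRY geom;
    DWORD returned = 0;
    /* No input buffer; the driver fills the output buffer with the geometry. */
    if (DeviceIoControl(h, IOCTL_DISK_GET_DRIVE_GEOMETRY,
                        NULL, 0, &geom, sizeof geom, &returned, NULL))
        printf("%lu bytes per sector\n", (unsigned long)geom.BytesPerSector);
    else
        printf("DeviceIoControl failed\n");

    CloseHandle(h);
    return 0;
}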
Alternatives
Other vectored call interfaces
Devices and kernel extensions may be linked to userspace using additional new system calls, although this approach is rarely taken, because operating system developers try to keep the system call interface focused and efficient.
On Unix operating systems, two other vectored call interfaces are popular: the fcntl ("file control") system call configures open files, and is used in situations such as enabling non-blocking I/O; and the setsockopt ("set socket option") system call configures open network sockets, a facility used to configure the ipfw packet firewall on BSD Unix systems.
Memory mapping
Unix Device interfaces and input/output capabilities are sometimes provided using memory-mapped files. Applications that interact with devices open a location on the filesystem corresponding to the device, as they would for an ioctl call, but then use memory mapping system calls to tie a portion of their address space to that of the kernel. This interface is a far more efficient way to provide bulk data transfer between a device and a userspace application; individual ioctl or read/write system calls inflict overhead due to repeated userspace-to-kernel transitions, where access to a memory-mapped range of addresses incurs no such overhead.
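A minimal sketch of this approach on a Unix-like system, assuming a device node such as /dev/fb0 (a framebuffer) whose driver supports mapping; the fixed 4096-byte length is an illustrative assumption, since a real program would first query the device for its size:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);   /* device path is an assumption */
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 4096;
    /* Tie a page of the device's address space directly into this process. */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    p[0] = 0xFF;   /* this write reaches the device without a further system call */

    munmap(p, len);
    close(fd);
    return 0;
}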
Win32 Buffered IO methods or named file mapping objects can be used; however, for simple device drivers the standard DeviceIoControl METHOD_ accesses are sufficient.
Netlink
Netlink is a socket-like mechanism for inter-process communication (IPC), designed to be a more flexible successor to ioctl.
Implications
Complexity
ioctl calls minimize the complexity of the kernel's system call interface. However, by providing a place for developers to "stash" bits and pieces of kernel programming interfaces, ioctl calls complicate the overall user-to-kernel API. A kernel that provides several hundred system calls may provide several thousand ioctl calls.
Though the interface to ioctl calls appears somewhat different from conventional system calls, there is in practice little difference between an ioctl call and a system call; an ioctl call is simply a system call with a different dispatching mechanism. Many of the arguments against expanding the kernel system call interface could therefore be applied to ioctl interfaces.
To application developers, system calls appear no different from application subroutines; they are simply function calls that take arguments and return values. The runtime libraries of the OS mask the complexity involved in invoking system calls. Unfortunately, runtime libraries do not make ioctl calls as transparent. Simple operations like discovering the IP addresses for a machine often require tangled messes of ioctl calls, each requiring magic numbers and argument structures.
Libpcap and libdnet are two examples of third-party wrapper Unix libraries designed to mask the complexity of ioctl interfaces, for packet capture and packet I/O, respectively.
Security
The user-to-kernel interfaces of mainstream operating systems are often audited heavily for code flaws and security vulnerabilities prior to release. These audits typically focus on the well-documented system call interfaces; for instance, auditors might ensure that sensitive security calls such as changing user IDs are only available to administrative users.
ioctl interfaces are more complicated, more diverse, and thus harder to audit than system calls. Furthermore, because ioctl calls can be provided by third-party developers, often after the core operating system has been released, ioctl call implementations may receive less scrutiny and thus harbor more vulnerabilities. Finally, many ioctl calls, particularly for third-party device drivers, are undocumented.
Because the handler for an ioctl call resides directly in kernel mode, the input from userspace should be validated carefully. Vulnerabilities in device drivers can be exploited by local users by passing invalid buffers to ioctl calls.
Win32 and Unix operating systems can protect a userspace device name from access by applications with specific access controls applied to the device. Security problems can arise when device driver developers do not apply appropriate access controls to the userspace accessible object.
Some modern operating systems protect the kernel from hostile userspace code (such as applications that have been infected by buffer overflow exploits) using system call wrappers. System call wrappers implement role-based access control by specifying which system calls can be invoked by which applications; wrappers can, for instance, be used to "revoke" the right of a mail program to spawn other programs. ioctl interfaces complicate system call wrappers because there are large numbers of them, each taking different arguments, some of which may be required by normal programs.
Further reading
W. Richard Stevens, Advanced Programming in the UNIX Environment (Addison-Wesley, 1992, ), section 3.14.
Generic I/O Control operations in the online manual for the GNU C Library
"DeviceIoControl Documentation at the Microsoft Developer Network
References
Unix
System calls
University of Michigan Executive System
The University of Michigan Executive System, or UMES, a batch operating system developed at the University of Michigan in 1958, was widely used at many universities. Based on the General Motors Executive System for the IBM 701, UMES was revised to work on the mainframe computers in use at the University of Michigan during this time (IBM 704, 709, and 7090) and to work better for the small student jobs that were expected to be the primary work load at the University.
UMES was in use at the University of Michigan until 1967, when MTS was phased in to take advantage of the newer virtual memory time-sharing technology that became available on the IBM System/360 Model 67.
Programming languages available
FORTRAN
MAD (programming language)
See also
Timeline of operating systems
History of IBM mainframe operating systems
FORTRAN Monitor System
Bell Operating System (BESYS) or Bell Monitor (BELLMON)
SHARE Operating System (SOS)
IBM 7090/94 IBSYS
Compatible Time-Sharing System (CTSS)
Michigan Terminal System (MTS)
Hardware: IBM 701, IBM 704, IBM 709, IBM 7090
External links
University of Michigan Executive System for the IBM 7090 Computer, volumes 1 (General, Utilities, Internal Organization), 2 (Translators), and 3 (Subroutine Libraries), Computing Center, University of Michigan, September, 1965, 1050 pp.
The IBM 7094 and CTSS, Tom Van Vleck
University of Michigan Executive System (UMES) subseries, Computing Center publications, 1965-1999, Bentley Historical Library, University of Michigan, Ann Arbor, Michigan
"A Markovian model of the University of Michigan Executive System", James D. Foey, Communications of the ACM, 1967, No.6
Discontinued operating systems
University of Michigan
1958 software
IBM mainframe operating systems | Operating System (OS) | 1,256 |
Other World Computing
Other World Computing (OWC) is an American computer hardware company, founded in 1988, that sells Mac upgrades and accessories through its online store at MacSales.com.
History
1980s
In 1988, at age 14, Larry O'Connor began LRO Enterprises, a printer ribbon re-inking business, in his family's barn. A year later, LRO Enterprises reorganized into LRO Computer Sales and began selling computer memory chips via America Online. The company moved into its first facility in Woodstock, Illinois and hired its first employees.
1990s
In 1992, LRO Computer Sales shifted focus to computers by offering hard drives to its customers. In 1993, LRO Computer Sales incorporated in the state of Illinois under the name New Concepts Development Corporation (NCDC). The company then moved into a 2,500-square-foot office space, which expanded to about 6,500 square feet over the next eight years. In 1994, O'Connor renamed LRO Computer Sales "Other World Computing" (OWC), which operates as a d.b.a. of NCDC. OWC shipped its first OWC-branded acceleration products in 1995, followed by the introduction of the Mercury G3 ZIF upgrade line in 1999.
2000s
OWC expanded and introduced the Mercury Classic Elite line of external storage and offered an iPod case. OWC announced a portable FireWire drive and a FireWire/USB combination product in 2003.
In 2003, OWC released several products: a line of external storage; the Mercury Extreme product line, with a G4/1.33 GHz processor upgrade, the fastest Macintosh processor to that date; an extra high-capacity NuPower replacement battery compatible with PowerBook G3 FireWire (2000/Pismo) and PowerBook G3 Lombard (1999/Bronze Keyboard) models; and the OWC Neptune line of external 7200 RPM FireWire storage solutions. The same year, OWC launched FasterMac.net, an Internet access service that provided dialup access throughout the U.S. specifically for Macintosh computer users.
In 2004, the company began offering an iPod battery replacement program and introduced the miniStack line of drives to complement Apple's Mac mini. In 2006, OWC introduced the first dual-HD external FireWire RAID drive, available in capacities up to 1.5 TB, and became the first third-party company to offer memory modules and upgrade kits for the Intel-based Mac Pro that met Apple specifications. It was also the first to introduce a quad-interface external hard drive combining FireWire 800, FireWire 400, USB 2.0, and eSATA connection options in one product, the OWC Mercury Elite-AL Pro Quad Interface.
In January 2007, OWC announced it would be the US distributor of the Axiotron Modbook.
OWC also introduced the OWC Mercury Rack Pro line and the OWC Blu-ray internal and external drives. In April 2009, OWC expanded its storage line with the OWC Mercury Elite Pro Qx2, a desktop hardware RAID storage product.
In 2008, OWC moved into a new corporate headquarters designed to platinum Leadership in Energy and Environmental Design standards.
In October 2009, a Vestas V39 500 kW wind turbine began generating more electricity than OWC needed to run the facility. OWC said it was the first technology manufacturer/distributor in the U.S. to become totally on-site wind powered.
2010s
Other World Computing was on the Inc. magazine 5000 "Fastest-Growing Privately Owned Companies" and "Computer and Electronics Top 100" list from 2007 through 2013.
In 2010, OWC announced the Mercury Extreme SSD line of 2.5" SATA solid state drives. The OWC Data Doubler, for adding a second internal drive to MacBook, MacBook Pro, Mac mini, and iMac computers, was also introduced, as was the OWC Slim eSATA ExpressCard Adapter, which adds an eSATA port to Mac and PC notebooks.
In 2011, sales revenue was reported as $88.3 million with about 137 employees.
In January 2019, OWC acquired fellow external computer storage products and accessories manufacturer Akitio.
Solar Power
OWC adopted solar power as an energy source at its two largest locations, in Woodstock, IL and Austin, TX. The Woodstock solar system will generate 265,000 kWh per year; combined with power from the wind turbine, this brings the Woodstock headquarters' total alternative power generation to over one million kilowatt-hours per year. It also means that over the course of a year, OWC produces more power than it consumes.
A similar but smaller array is on the roof of OWC's Austin, TX building. Energized at the beginning of 2014, its 160 solar panels generate approximately one-third of the power consumed by the three-story building, including the majority of the power consumed by OWC.
Products
OWC markets upgrade kits for the iMac, MacBook Pro, MacBook Air, Mac mini, MacBook, and Mac Pro.
The Data-Doubler installation kit allows customers to add a second 2.5" SATA hard disk drive or solid state drive to the optical drive bay of a Mac mini, MacBook, MacBook Pro or an iMac. The optical drive can then be repurposed as an external drive.
OWC designs and manufactures solid state drives.
MaxRAM is a line of memory upgrades for Apple products.
References
External links
macsales.com
OWC.com
Computer hardware companies
Companies established in 1988
Companies based in Illinois | Operating System (OS) | 1,257 |
Apple HD SC Setup
Apple HD SC Setup is a small software utility that was bundled with various versions of the classic Mac OS and A/UX operating systems made by Apple Computer. Introduced with Apple's first SCSI hard drive, the Hard Disk 20SC in September 1986, Apple HD SC Setup can update drivers and partition and initialize hard disks. It was often used when reinstalling the operating system of an Apple Macintosh computer, or to repair corrupt partition information on a SCSI hard disk. Prior to its introduction, the formatting of disks was handled exclusively by the Mac's Finder application, or by third-party formatting utility software customized for a specific disk drive.
The version of Apple HD SC Setup that shipped with the classic Mac OS was only able to manipulate hard disks that featured Apple ROMs. Versions of the program that were bundled with A/UX, however, could be used with any SCSI disk. A third-party patch was released enabling standard editions of Apple HD SC Setup to work on any SCSI disk.
In the mid-1990s, when Apple began shipping computers using ATA hard drives, Apple HD SC Setup was joined by Internal HD Format, which could only format IDE drives. Eventually, both Internal HD Format and Apple HD SC Setup were superseded with Drive Setup in 1995, which combined SCSI and IDE formatting abilities, and ultimately by Disk Utility in macOS.
References
External links
Download Apple HD SC Setup 7.3.5 from apple.com
Classic Mac OS software
Products introduced in 1986 | Operating System (OS) | 1,258 |
Unix wars
The Unix wars were struggles between vendors to set a standard for the Unix operating system in the late 1980s and early 1990s.
Origins
Although AT&T Corporation created Unix, by the 1980s, the University of California, Berkeley Computer Systems Research Group was the leading noncommercial Unix developer. In the mid-1980s, the three common versions of Unix were AT&T's System III, the basis of Microsoft's Xenix and the IBM-endorsed PC/IX, among others; AT&T's System V, which it sought to establish as the new Unix standard; and the Berkeley Software Distribution (BSD). All were derived from AT&T's Research Unix, but had diverged considerably. Further, each vendor's version of Unix was different to some degree.
For example, at a mid-1980s Usenix conference, many AT&T staff had buttons which read "System V: Consider it Standard" and a number of major vendors were promoting products based on System V. On the other hand, System V did not yet have TCP/IP networking built in and BSD 4.2 did; vendors of engineering workstations were nearly all using BSD and posters reading "4.2 > V" were available.
A group of vendors formed the X/Open standards group in 1984, with the aim of forming compatible open systems. They chose to base their system on Unix.
X/Open caught AT&T's attention. To increase the uniformity of Unix, AT&T and leading BSD Unix vendor Sun Microsystems started work in 1987 on a unified system. (The feasibility of this had been demonstrated a few years earlier by the US Army Ballistic Research Laboratory's System V environment for BSD Unix.) This was eventually released as System V Release 4 (SVR4).
While this decision was applauded by customers and the trade press, certain other Unix licensees feared Sun would be unduly advantaged. They formed the Open Software Foundation (OSF) in 1988. The same year, AT&T and another group of licensees responded by forming UNIX International (UI). Technical issues soon took a back seat to vicious and public commercial competition between the two "open" versions of Unix, with X/Open holding the middle ground. A 1990 study of various Unix versions' reliability found that on each version, between a quarter and a third of operating system utilities could be made to crash by fuzzing; the researchers attributed this, in part, to the "race for features, power, and performance" resulting from BSD–System V rivalry, which left developers little time to worry about reliability.
Standardization
The 1988 POSIX standard initially concentrated on system C library functions beyond what was included in the forthcoming C standard; later it expanded to specify other aspects of the system environment. POSIX specified a "lowest common denominator" that could be met by both System V and BSD-based variants, as well as some non-UNIX systems, with a reasonable amount of effort.
In March 1993, the major participants in UI and OSF formed the Common Open Software Environment (COSE) alliance, effectively marking the end of the most significant era of the Unix wars. In June, AT&T sold its UNIX assets to Novell, and in October Novell transferred the Unix brand to X/Open.
In 1996, X/Open and the new OSF merged to form the Open Group. COSE work such as the Single UNIX Specification, the current standard for branded Unix, is now the responsibility of the Open Group, which also controls the current POSIX standards.
Since then, occasional bursts of Unix factionalism have broken out, such as the HP/SCO "3DA" alliance in 1995, and Project Monterey in 1998, a teaming of IBM, SCO, Sequent and Intel which was followed by litigation (SCO v. IBM) between IBM and the new SCO, formerly Caldera.
BSD and the rise of Linux
BSD purged copyrighted AT&T code from 1989 to 1994. During this time various open-source BSD x86 derivatives took shape, starting with 386BSD, which was soon succeeded by FreeBSD and NetBSD. OpenBSD emerged in 1995 as a fork of NetBSD, DragonFly BSD as a fork from FreeBSD in 2003. Mac OS X v10.5 is the first operating system with open source BSD code to be certified as fully Unix compliant. BSD systems can claim direct ancestry from Version 7 Unix. Or, according to Open Source advocate Eric Raymond, BSD systems can be considered "genetic Unix", if not "trademark Unix."
During BSD's period of legal turmoil (1992–94), the nearly-complete GNU operating system was made operational by the inclusion of the Linux kernel and lumped together under the label "Linux". GNU had been written from scratch to avoid copyright issues. Linux systems broadly aim for compatibility with POSIX.
See also
Editor war
UNIX System Laboratories, Inc. v. Berkeley Software Design, Inc.
References
Sources
Unix Wars (Living Internet)
The UNIX Wars (Bell Labs)
The UNIX System History and Timeline (The Open Group)
Unix Standards (Eric S. Raymond, The Art of Unix Programming)
Chapter 11. OSF and UNIX International (Peter H. Salus, The Daemon, the GNU and the Penguin)
Unix history
Software wars
Unix standards | Operating System (OS) | 1,259 |
MKS X/Server
MKS X/Server, a commercial X server developed by MKS Inc., allows users to access Unix/Linux systems from PCs running a Microsoft Windows operating system. The product offers both a full 32-bit X server and a native 64-bit X server (for x64-based systems) that operate on various versions of Microsoft Windows.
History
Since 1995 MKS has distributed the SCO/Tarantella XVision Eclipse PC X server product to its customer base as part of its MKS Toolkit product line. In 2006 MKS acquired the source code rights to XVision, giving the company the ability to maintain and enhance the product as needed by the market and its customers.
MKS X/Server is available as a standalone product, bundled with various versions of the MKS Toolkit product and available as a runtime option to UNIX/Linux applications ported to Windows using the MKS Toolkit for Enterprise Developers application.
Functional overview
Flexible desktop shortcuts to start UNIX programs remotely (with direct import of settings from previously installed X servers)
Secure Shell (SSH) is built into terminal emulators, Unix Neighborhood and Remote Program Starter
Cut and paste between UNIX and Windows applications
Zones Desktop manager
X keyboard mapper
Multi-monitor support
OpenGL/GLX extension
English and Japanese support
Hummingbird Exceed compatibility
Supported standards
Full X11 compliance
Native Windows X servers (32 bit and 64 bit)
Latest internet standards like IPv6 and Secure Shell
AF_UNIX support with the MKS NuTCRACKER platform and tunneled SSH connections
System OS requirements
Windows Vista, Windows Vista 64 bit
Windows 2003 Server, Windows 2003 Server x64 Edition
Windows XP, Windows XP x64 Edition
Windows 2000
External links
MKS website
MKS X/Server homepage
See also
X-Win32 - A commercial alternative
Exceed - A commercial alternative
Xming
X servers | Operating System (OS) | 1,260 |
MS Antivirus (malware)
MS Antivirus (also known as Spyware Protect 2009 and Antivirus XP 2008/Antivirus2009/SecurityTool/etc) is a scareware rogue anti-virus which purports to remove virus infections found on a computer running Microsoft Windows. It attempts to scam the user into purchasing a "full version" of the software. The company and the individuals behind Bakasoftware operated under various other 'company' names, including Innovagest2000, Innovative Marketing Ukraine, Pandora Software, LocusSoftware, etc.
Names
Many clones of MS Antivirus that include slight variations have been distributed throughout the web. They are known as XP Antivirus, Vitae Antivirus, Windows Antivirus, Win Antivirus, Antivirus Action, Antivirus Pro 2009, 2010, 2017 or simply just Antivirus Pro, Antivirus 2007, 2008, 2009, 2010, 2011, and 360, AntiMalware GO, Internet Antivirus Plus, System Antivirus, Spyware Guard 2008 and 2009, Spyware Protect 2009, Winweb Security 2008, Antivirus 10, Total Antivirus 2020, Live Protection Suite, System Security, Malware Defender 2009, Ultimate Antivirus2008, Vista Antivirus, General Antivirus, AntiSpywareMaster, Antispyware 2008, XP AntiSpyware 2008, 2009 and 2010, Antivirus Vista 2010, Real Antivirus, WinPCDefender, Antivirus XP Pro, Anti-Virus-1, Antivirus Soft, Vista Antispyware 2012, Antispyware Soft, Antivirus System PRO, Antivirus Live, Vista Anti Malware 2010, Internet Security 2010, XP Antivirus Pro, Security Tool, VSCAN7, Total Security, PC Defender Plus, Disk Antivirus Professional, AVASoft Professional Antivirus, System Care Antivirus, and System Doctor 2014. Another MS Antivirus clone is named ANG Antivirus. This name is used to confuse the user of the software into thinking that it is the legitimate AVG Antivirus before downloading it.
Symptoms of infection
Each variant has its own way of downloading and installing itself onto a computer. MS Antivirus is made to look functional to fool a computer user into thinking that it is a real anti-virus system in order to convince the user to "purchase" it. In a typical installation, MS Antivirus runs a scan on the computer and gives a false spyware report claiming that the computer is infected with spyware. Once the scan is completed, a warning message appears that lists the spyware "found", and the user has to click on either a link or a button to remove it. Regardless of which button is clicked ("Next" or "Cancel"), a download box will still pop up. This deceptive tactic is an attempt to scare the Internet user into clicking on the link or button to purchase MS Antivirus. If the user decides not to purchase the program, then they will constantly receive pop-ups stating that the program has found infections and that they should register it in order to fix them. This type of behavior can cause a computer to operate more slowly than normal.
MS Antivirus will also occasionally display fake pop-up alerts on an infected computer. These alerts pretend to be a detection of an attack on that computer and the alert prompts the user to activate or purchase the software in order to stop the attack. More seriously it can paste a fake picture of a Blue Screen of Death over the screen and then display a fake startup image telling the user to buy the software. The malware may also block certain Windows programs that allow the user to modify or remove it. Programs such as Regedit can be blocked by this malware. The registry is also modified so the software runs at system startup. The following files may be downloaded to an infected computer:
MSASetup.exe
MSA.exe
MSA.cpl
MSx.exe
Depending on the variant, the files have different names and therefore can appear or be labeled differently. For example, Antivirus 2009 has the .exe file name a2009.exe.
In addition, in an attempt to make the software seem legitimate, MS Antivirus can give the computer symptoms of the "viruses" that it claims are on the computer. For example, some shortcuts on the desktop may be changed to links of sexually explicit websites instead.
Malicious actions
Most variants of this malware will not be overtly harmful, as they usually will not steal a user's information (as spyware) nor critically harm a system. However, the software will act to inconvenience the user by frequently displaying popups that prompt the user to pay to register the software in order to remove non-existent viruses. Some variants are more harmful; they display popups whenever the user tries to start an application or even tries to navigate the hard drive, especially after the computer is restarted. It does this by modifying the Windows registry. This can clog the screen with repeated pop-ups, potentially making the computer virtually unusable. It can also disable real antivirus programs to protect itself from removal. Whichever variant infects a computer, MS Antivirus always uses system resources when running, potentially making an infected computer run more slowly than before.
The malware can also block access to known spyware removal sites and in some instances, searching for "antivirus 2009" (or similar search terms) on a search engine will result in a blank page or an error page. Some variants will also redirect the user from the actual Google search page to a false Google search page with a link to the virus' page that states that the user has a virus and should get Antivirus 2009. In some rare cases, with the newest version of the malware, it can prevent the user from performing a system restore.
Earnings
In November 2008, it was reported that a hacker known as NeoN hacked Bakasoftware's database and posted the earnings the company received from XP Antivirus. The data revealed the most successful affiliate earned US$158,000 in a week.
Court actions
On December 2, 2008, the U.S. District Court for the District of Maryland issued a temporary restraining order against Innovative Marketing, Inc. and ByteHosting Internet Services, LLC after receiving a request from the Federal Trade Commission (FTC). According to the FTC, the combined malware of WinFixer, WinAntivirus, DriveCleaner, ErrorSafe, and XP Antivirus has fooled over one million people into purchasing the software marketed as security products. The court also froze the assets of the companies in an effort to provide some monetary reimbursement to affected victims. The FTC claims the companies established an elaborate ruse that duped Internet advertising networks and popular Web sites into carrying their advertisements.
According to the FTC complaint, the companies charged in the case operated using a variety of aliases and maintained offices in the countries of Belize and Ukraine (Kiev). ByteHosting Internet Services is based in Cincinnati, Ohio. The complaint also names defendants Daniel Sundin, Sam Jain, Marc D’Souza, Kristy Ross, and James Reno in its filing, along with Maurice D’Souza, who is named relief defendant, for receiving proceeds from the scheme.
See also
Rogue software
Malware
References
External links
XP Antivirus 2009 Description and Removal instructions on About.com
Rogue software
Scareware
Windows malware | Operating System (OS) | 1,261 |
Tiger (security software)
Tiger is security software for Unix-like computer operating systems. It can be used both as a security audit tool and a host-based intrusion detection system and supports multiple UNIX platforms. Tiger is free under the GPL license and, unlike other tools, it needs only POSIX tools and is written entirely in shell language.
Tiger is based on a set of modular scripts that can be run either together or independently to check different aspects of a UNIX system including the review of:
available patches not installed
filesystem permissions
dormant users
specific configuration of system files
History
Tiger was originally developed by Douglas Lee Schales, Dave K. Hess, Khalid Warraich, and Dave R. Safford in 1992 at Texas A&M University.
The tool was originally developed to provide a check of UNIX systems on the A&M campus that had to be accessed from off campus and, consequently, required clearance through the network security measures set in place. It was developed after a coordinated attack on campus computers in August 1992. The campus system administrators needed something that any user could use to test the system's security and run if they could figure out how to get it down to their machines. The tool was presented at the Fourth USENIX Security Symposium. It was written at the same time that other auditing tools such as COPS, SATAN and Internet Security Scanner were written. Eventually, after the 2.2.4 version, which was released in 1994, development of Tiger stalled.
Three different forks evolved after Tiger: TARA (Tiger Analytical Research Assistant, developed by Advanced Research Computing), one developed internally at HP by Bryan Gartner, and the last one developed for the Debian GNU/Linux distribution by Javier Fernández-Sanguino (the current upstream maintainer). All the forks aimed at making Tiger work in newer versions of different UNIX operating systems.
These forks were merged in May 2002 and in June 2002 the new source code, now labeled as the 3.0 release, was published in the download section of the newly created Savannah site. Following this merge, the following releases were published:
The 3.1 release was published in October 2002, it was considered an unstable release and included some new checks, a new autoconf script for automatic configuration, but mostly included fixes for bugs found after testing Tiger in Debian GNU/Linux and in other operating systems. Over 2,200 lines of code and documentation were included in this release.
The 3.2 release was published in May 2003. It improved the stability of the tool and fixed some security problems including a buffer overflow in realpath.
The 3.2.1 release was published in October 2003. It introduced new checks, including: check_ndd (for HPUX and SunOS systems), check_passwspec (for Linux and HPUX), check_trusted (for HPUX), check_rootkit (which can interact with the chkrootkit tool), check_xinetd, and, finally, aide_run and integrit_run (integrity file checkers).
The 3.2.2 release was published in August 2007. It introduced support for Tru64, Solaris 8 and 9. This release also introduced the audit scripts, a collection of scripts originally written by Marc Heuse that can be used to do offline audits of systems by recovering all the needed information and putting it into an archive. These scripts are intended for use with security operating systems baselines or checklists.
The 3.2.3 release was published in September 2007. It was mainly a bug fix release which also included new features related to handling exotic filesystems in Linux.
Overview
Tiger has some interesting features, including a modular design that is easy to expand. It can be used as an audit tool and a host-based intrusion detection system tool, as described in the program's manpage and in the source code documentation (README.hostids).
Tiger complements intrusion detection systems, from the network IDS (Snort), to the kernel (Log-based Intrusion Detection System or LIDS, or SNARE for Linux and Systrace for OpenBSD, for example), to integrity checkers (many of these: AIDE, integrit, Samhain, Tripwire...) and log checkers, providing a framework in which all of them can work together while checking the system configuration and status.
References
Unix security-related software
de:Tiger (Software) | Operating System (OS) | 1,262 |
Linux Software Map
Linux Software Map (LSM) is a standard text file format for describing Linux software. It also refers to the database constructed from these files. LSM is one of the standard methods for announcing a new software release for Linux.
File format
If a Linux program is to be distributed widely, an LSM file may be created to describe the program, normally in a file called software_package_name.lsm. This file begins with the line Begin4 and ends with the line End. It has one field on each line. The field name is separated from the value by a colon (:). Mandatory fields are Title, Version, Entered-date, Description, Author and Primary-site.
Example
Here is what a blank LSM template looks like, at the time of writing:
Begin4
Title:
Version:
Entered-date:
Description:
Keywords:
Author:
Maintained-by:
Primary-site:
Alternate-site:
Original-site:
Platforms:
Copying-policy:
End
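Because the format is just "Field: value" lines between the Begin4 and End markers, it is easy to process mechanically. The following is a minimal, hypothetical C sketch that reads an LSM file and reports any missing mandatory fields; the file name is made up, values are assumed to fit on one line, and continuation lines are not handled.

/*
 * Minimal sketch of reading an LSM file and checking the mandatory fields.
 * Assumptions: the file name is hypothetical, values are single-line, and
 * field names are matched case-sensitively as they appear in the template.
 */
#include <stdio.h>
#include <string.h>

static const char *mandatory[] = {
    "Title", "Version", "Entered-date", "Description", "Author", "Primary-site"
};

int main(void)
{
    FILE *fp = fopen("example.lsm", "r");      /* hypothetical file name */
    if (!fp) { perror("example.lsm"); return 1; }

    int seen[sizeof mandatory / sizeof mandatory[0]] = { 0 };
    char line[512];
    int in_entry = 0;

    while (fgets(line, sizeof line, fp)) {
        line[strcspn(line, "\r\n")] = '\0';    /* strip the line ending */
        if (strcmp(line, "Begin4") == 0) { in_entry = 1; continue; }
        if (strcmp(line, "End") == 0)    { in_entry = 0; continue; }
        if (!in_entry)
            continue;

        char *colon = strchr(line, ':');       /* "Field: value" */
        if (!colon)
            continue;
        *colon = '\0';
        const char *field = line;
        const char *value = colon + 1;
        while (*value == ' ' || *value == '\t')
            value++;

        for (size_t i = 0; i < sizeof mandatory / sizeof mandatory[0]; i++)
            if (strcmp(field, mandatory[i]) == 0 && *value)
                seen[i] = 1;
    }
    fclose(fp);

    for (size_t i = 0; i < sizeof mandatory / sizeof mandatory[0]; i++)
        if (!seen[i])
            printf("missing mandatory field: %s\n", mandatory[i]);
    return 0;
}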
Database
The collective database of LSM entries can be searched in order to locate software of a particular type. This database has passed through various owners. It was created by Jeff Kopmanis, Lars Wirzenius maintained it for a while, and now the current maintainer is Aaron Schrab (with help from volunteers).
The database can be downloaded in its entirety, or one can perform limited queries using a web interface.
External links
LSM template version 4 on ibiblio.org
Entire LSM Database
Simple LSM Search on ibiblio.org
Advanced LSM Search on ibiblio.org
Instructions for New Entries
Linux
Computer file formats | Operating System (OS) | 1,263 |
MSWLogo
MSWLogo is an interpreted programming language based on the language Logo, with a graphical user interface (GUI) front end. It was developed by George Mills at the Massachusetts Institute of Technology (MIT). Its core is the same as UCBLogo by Brian Harvey. It is free and open-source software, with source code available, written in Borland C++.
MSWLogo supports multiple turtle graphics, 3D computer graphics, and allows input from ports COM and LPT. It also supports a windows interface, so input/output (I/O) is available through this GUI, and keyboard and mouse events can trigger interrupts. Simple GIF animations may also be produced on MSWLogo version 6.5 with the command gifsave. The program is also used as educational software. Jim Muller wrote The Great Logo Adventure, a complete Logo manual using MSWLogo as the demonstration language.
MSWLogo has evolved into FMSLogo: An Educational Programming Environment, a free, open source implementation of the language Logo for Microsoft Windows. It is released under the GNU General Public License (GPL) and is mainly developed and maintained by David Costanzo.
Features
MSWLogo, as of version 6.5b, supports many functions, including:
TCP/IP Winsock networking
Win16, Win32, Win32s
Text in all available fonts and sizes.
1024 independent turtles.
Bitmapped turtles
Bitmap cut, paste, stretch
Clipboard text and bitmaps
MIDI devices
Direct I/O to control external hardware
Serial and parallel port communications
Zooming
Tail recursion: optimizes most recursive functions
User error handling
Standard Logo parsing
Save and restore images in .BMP format files
Color bits per pixel: 1, 4, 8, 16, 24
Standard Windows hypertext help
Standard Windows printing
Separate library and work area
Construction of Windows dialog boxes
Event driven programming: mouse, keyboard, timer
Multimedia devices: WAV sound files, CD-ROM control, etc.
Event timers allowing multiprocessing
3D perspective drawing: wire-frame and solid
Animated GIF generation
References
External links
Interpreters (computing)
Educational programming languages
Logo programming language family | Operating System (OS) | 1,264 |
Commodore 64 software
The Commodore 64 amassed a large software library of nearly 10,000 commercial titles, covering most genres from games to business applications, and many others.
Applications, utility, and business software
While the 1541 disk drive's slow performance made the Commodore 64 mostly unsuitable as a business computer, it was still widely used for many important tasks, including computer graphics creation, desktop publishing, and word processing. Info 64, the first magazine produced with desktop publishing tools, was created on and dedicated to the Commodore platform.
The best known art package was perhaps KoalaPainter, primarily because of its own custom graphics tablet user interface - the KoalaPad. Another popular drawing program for the C64 was Doodle!. A Commodore 64 version of The Print Shop existed, allowing users to generate signs and banners with a printer. "The Newsroom" was a desktop publishing suite. Lightpens and CAD drawing software were also commercially produced, such as the Inkwell Lightpen and related tools.
There were many prepackaged wordprocessors available for the Commodore 64, such as PaperClip and Vizawrite, but a popular DIY program was SpeedScript, which was available as a type-in program from Compute!'s Gazette.
The MultiPlan spreadsheet application from Microsoft was ported to the Commodore 64, where it competed against established packages such as Calc Result. The first Lotus 1-2-3-like integrated software package for the 64 was Viza Software's Vizastar. A complete office suite arrived in the form of British made Mini Office II. In Germany and Scandinavia, many popular application programs were published by German company Data Becker. The typical C64 spreadsheet could store 64 columns and 255 rows, or 16,000 cells, but only 5-10% of them could be used at any one time, due to RAM limitations.
Serious Commodore 64 business users, however, were drawn to GEOS. Due to its speed, ease of use, and full suite of office applications and utility software, GEOS provided a work environment similar to that of an early Apple Macintosh. Arguably the best office applications appeared on GEOS because it was graphically advanced and not limited by the Commodore 64's screen area of 40-columns. Being a fully-fledged OS, GEOS brought the arrival of many add-on fonts, accessories, and applications. It also supported most Commodore 64 peripherals and models of third-party printers. KoalaPad and Lightpen users could use GEOS too, which greatly increased the amount of clip art available for the platform. GEOS proved very popular because of low price for the necessary hardware (and of course the capability of the OS). This was due in part to the aggressive pricing of the Commodore 64 as a games machine and home computer (With rebates, the C64 was going for as little as US$100 at the time). This was in comparison to a typical PC for US$2000 (which required MS-DOS, and another $99 for Windows 1.0) or the venerable Mac 512K Enhanced also $2000.
There were numerous sound editing tools for the Commodore 64. Commodore released music composition software which included a keyboard overlay suited for early model Commodore 64s. Software titles such as the Music Construction Set were available for users to compose music with notes, however the only tools which really pushed the C64's sonic capability to the full were mostly demoscene music tools, or pure assembly language. MIDI expansion cartridges and speech synthesizing hardware was also available for more serious musicians. The Prophet64 cartridge was recently released and features a suite of GUI-style applications for sequencing music, drum and rhythm synthesis, MIDI DIN-sync, and taking advantage of the SID chip in other ways, effectively turning the C64 into a true musical instrument that anyone can use. There was also software which could be used to make the Commodore 64 speak, the best-known being SAM.
The first screen shows the C64's BASIC with a small program. The BASIC interpreter not only allows the user to write programs, but is also used as the command prompt, so in order to load a program a BASIC command needs to be entered.
KoalaPainter is an early paint program. It uses two screens. The first displays a menu and the second is the picture that is being worked on. The program is controlled either by a joystick or with a graphics tablet that was also sold by Koala.
Magic Desk is an application by Commodore that tries to resemble a real typewriter. It does, however, contain basic editing functions.
Multiplan is a text-based spreadsheet application, written by Microsoft.
Vizawrite is another text-based word processor for the C64, but looks more like the professional word processors of the early 80s.
GEOS was a graphical user interface, first released in 1987. It was a small revolution at the time, because until then GUIs, other than Apple II Desktop/MouseDesk, were mostly available only for the much more powerful 16-bit machines.
geoPaint is a paint program for GEOS. Apart from the small resolution, it had all the capabilities of other GUI-based drawing programs of its time.
geoWrite is a word processor for GEOS. It not only had a GUI, but also supported many different styles and fonts on the WYSIWYG principle, unlike the other word processors on the C64.
UIFLI (Underlay Interlace Flexible Line Interpreter) is a graphics mode on the Commodore 64 invented by DeeKay and Crossbow of Crest in 1995.
Games
By 1985, games were estimated to make up 60 to 70% of Commodore 64 software. Due in part to its advanced sound and graphic hardware, and to the quality and quantity of games written for it, the C64 became better known as a gaming and home entertainment platform than as a serious business computer. Its large installed user base encouraged commercial companies to flood the market with game software, even up until Commodore's demise in 1994. In total over 23,000 unique game titles exist for the Commodore 64.
International Soccer was Commodore's best first-party game; otherwise "the normal standard for Commodore software is mediocrity", InfoWorld stated in 1984. The company did not publish many other games for the C64, instead releasing game cartridges primarily from the failed MAX Machine for the C64. Commodore included an "Ultimax" mode in the Commodore 64's hardware, which allowed the computer to emulate a MAX machine for this purpose.
However, aside from the initial Commodore cartridges, very few cartridge-based games were released for the Commodore. Most third-party game cartridges came from Llamasoft, Activision, and Atarisoft, however some of these games found their way into disk and tape versions too. Only later, when the failed C64GS console was produced, did cartridges make a brief comeback, including the production of a few more cartridge-only games. Crackers managed to port these games to disk later on.
While the 1541 floppy disk drive quickly became universal in the US, in Europe it was common for prepackaged commercial game software to either come on floppy disk or cassette-tape format, and sometimes both. Cassette-based games were usually cheaper than their disk-based counterparts; however, due to the Datasette's lack of speed and random access, many large games (such as role-playing video games) were never made for the cassette format. Despite this, a great deal of software was published only on the cassette format in Europe, including many "budget" games produced by companies like Mastertronic, Firebird, and Codemasters which were released on cassette only and sold for a fraction of the price of full-price commercial software.
Whilst many commercial software companies produced prepackaged game software, an abundant supply of free software was also available. What is noticeable from the Commodore 64's game catalog is that a rather large selection of all C64 games were programmed non-commercially by average Commodore 64 users, with editors included in some games, e.g. Boulder Dash Construction Kit, Pinball Construction Set, SEUCK, The Quill, GameMaker. Given the accessibility of BASIC on the Commodore 64, many BASIC games were created and also ported from other computer platforms and modified for the Commodore 64. In addition, many games exist that were released as Type-in programs from numerous magazines, especially European Commodore magazines. Many books and magazines were published containing listings for games, and public domain software was developed and released from both BBS systems and public domain libraries such as "Binary Zone" in the UK.
There were many classic must-have games produced on the Commodore 64, perhaps too many to mention, including versions of classic video games. Of particular note, the smash hit Impossible Mission produced by Epyx was originally designed for the Commodore 64. Epyx's multievent games (Summer Games, Winter Games, World Games, and California Games) were very popular, as well as perhaps the first driving game with split-screen dynamics, Pitstop II. Most of these games eventually made an appearance on the Commodore DTV joystick unit many years later.
Other hit games such as Boulder Dash, The Sentinel, Archon, and Elite were all given Commodore 64 versions. Cassette users may remember titles such as Master of Magic, Rocketball, One Man and His Droid, and Spellbound on Mastertronic's budget labels. Other notable titles on the Commodore 64 include the Ultima and Bard's Tale role-playing game series. Hewson/Graftgold were responsible for several well-received C64 titles including Paradroid and Uridium—made famous for their metallic bas-relief styled graphic effects and addictive gameplay. System 3 produced The Last Ninja action adventure series originally on the C64. Armalyte, a groundbreaking shoot 'em up title from Thalamus Ltd, and Turrican I & II are among some of the highest rated games for the Commodore 64 (according to Zzap64, which awarded "Gold Medals" to these games).
Notable game designers for the Commodore 64 include Paul Norman, Danielle Bunten Berry (who published as Dan Bunten at the time), Andrew Braybrook, Stephen Landrum, Tim and Chris Stamper, Jeff Minter, and Tony Gibson.
During the final mainstream commercial years of the Commodore 64, Issue 38 of Commodore Format magazine in November 1993 awarded the only 100% rating ever given to a Commodore 64 game in any major Commodore 64 publication. As no game had ever received such a high rating before, and as the commercial Commodore 64 scene was winding down in the mid-1990s, the awarding of 100% was seen as somewhat controversial. The game, titled Mayhem in Monsterland, was developed to exploit a multitude of programming tricks and quirks in the Commodore 64's hardware to the maximum. The impressive use of non-standard colors and scrolling resulted in perhaps the most graphically stunning game ever produced for the Commodore 64. The gameplay itself is similar to that of Nintendo's Super Mario Bros. and SEGA's Sonic the Hedgehog.
Whilst mainstream commercial activity for games no longer exists for the C64, many enthusiasts and hobbyists still write games for the platform. In addition, a few small publishers still sell game software.
Commodore 64 games continue to inspire developers and gamers on modern platforms such as iOS with many games being produced using similar styles of game-play mechanics to those from the Commodore 64 era.
Type-ins, bulletin boards, and disk magazines
Besides prepackaged commercial software, the C64, like the VIC before it, had a large library of type-in programs. Numerous computer magazines offered type-in programs, usually written in BASIC or assembly language or a combination of the two. Because of its immense popularity, many general-purpose magazines that supported other computers offered C64 type-ins (Compute! was one of these), and at its peak, there were many magazines in North America (Ahoy!, Commodore Magazine, Compute!'s Gazette, Power/Play, RUN and Transactor) dedicated to Commodore computers exclusively. These magazines sometimes had disk companion subscriptions available at extra cost, with the programs stored on disk to avoid the need to type them in. The disk magazine Loadstar offered fairly elaborate ready-to-run programs, music, and graphics. Books of type-ins were also common, especially in the machine's early days; these sometimes collected programs that had originally appeared in one of the magazines, but books containing original software were also available.
A large library of public domain and freeware programs, distributed by online services such as Q-Link and CompuServe, BBSs, and user groups also emerged. Commodore also maintained an archive of public domain software, which it offered for sale on diskette. Despite limited RAM and disk capacity, the Commodore 64 was a popular platform for BBS hosting. Some of the most popular installations included the highly optimized and fast Blue Board program, and the Color64 BBS System, which allowed the use of color PETSCII graphics. Many BBS sysops used high-capacity floppy drives like the SFD-1001 or hard drives such as the Lt. Kernal.
Software cracking
The C64 software market had widespread problems with copyright infringement. There were many kinds of copy protection systems employed on both cassette and floppy disk, to prevent the unauthorized copying of commercial Commodore 64 software. Practically all of them were worked around or defeated by crackers and warez groups. The popularity of this activity has been attributed to the large Commodore 64 user base.
Many BBSs offered cracked commercial software, sometimes requiring special access and usually requiring users to maintain an upload/download ratio. A large number of warez groups existed, including Fairlight, which continued to exist more than a decade after the C64's demise. Some members of these groups turned to telephone phreaking and credit card or calling card fraud to make long-distance calls, either to download new titles not yet available locally, or to upload newly cracked titles released by the group.
Not all Commodore 64 users had modems however. For these people, many warez group "swappers" maintained contacts throughout the world. These contacts would usually mass mail cracked floppy disks through the postal service. Also, sneakernets existed at schools and businesses all over the world, as friends and colleagues would trade (and usually later copy) their software collections. At a time before the Internet was widespread, this was the only way for many users to amass huge software libraries. Also, and particularly in Europe, groups of people would hold copy-parties explicitly to copy software, usually irrespective of software licence.
Several popular utilities were sold that contained custom routines to defeat most copy-protection schemes in commercial software. (Fast Hack'em—probably the most popular example—was itself widely redistributed.) Pirates Toolbox was another popular set of tools for copying disks and removing copy protection. Tapes could be copied with special software, but often it was simply done by dubbing the cassette in a dual deck tape recorder, or by relying on an Action Replay cartridge to freeze the program in memory and save to cassette. Cracked games could often be copied manually without any special tools. In Europe, some hardware devices, colloquially known as "black boxes" were available under the counter that connected two C1530 tape decks together at the connection point to the C64 permitting a copy to be made whilst loading a game. This overcame the difficulties in direct dubbing of later games using the high speed loaders that were developed to overcome the very long load times.
BASIC
Like most computers from the late 1970s and 1980s, the Commodore 64 came with a version of the BASIC programming language. It was used for both writing software and for performing the duties of an operating system such as loading software and formatting disks.
The onboard BASIC programming language offered no easy way to tap the machine's advanced graphics and sound capabilities. Accessing the associated memory addresses to make use of the advanced features required using the PEEK and POKE commands, third-party BASIC extensions such as Simons' BASIC, or programming in assembly language. Commodore had a better implementation of BASIC but chose to ship the C64 with the same BASIC 2.0 used in the VIC-20 to minimize cost. This, however, did not stop countless people from writing thousands of programs in the BASIC V2 language, which gave many of them their first steps in computer programming.
Music
The MOS Technology 6581 SID is the sound chip for the C64, for which many music software programs were written. One musical software tool for the C64 was the Kawasaki Synthesizer, created in 1983.
Development tools
Aside from games and office applications such as word processors, spreadsheets, and database programs, the C64 was well equipped with development tools from Commodore as well as third-party vendors. Various assembler solutions were available; the MIKRO assembler came in ROM cartridge form and integrated seamlessly with the standard BASIC screen editor. The PAL Assembler by Brad Templeton was also popular. Several companies sold BASIC compilers, C compilers and Pascal compilers, and a subset of Ada, to mention but a few popular languages available for the machine.
Probably the most popular entertainment-oriented development suite was the Shoot'Em-Up Construction Kit, affectionately known as SEUCK. SEUCK allowed those not skilled in programming to create original, professional-looking shooting games. Garry Kitchen's Gamemaker and the Arcade Game Construction Kit also allowed non-programmers to create simple games with little effort. Text adventure game tools included The Quill and Graphic Adventure Creator development suites. The Pinball Construction Set gave users a pinball machine to design.
Modern-day development tools
Software development on the Commodore 64 never really stopped. There are many tools available today, including IDEs such as CBM prg Studio, Relaunch64, and WUDSN IDE, which is a plug-in for the open source Eclipse IDE. Along with small C compilers such as cc65, there are many assemblers and cross assemblers to be used on modern day PCs:
Turbo Assembler
Kick Assembler by Mads Nielsen
dasm
acme
ca65 (which is part of cc65.)
c64asm
C64List by Jeff Hoag is both a cross assembler and cross-platform BASIC editor/tokenizer that allows developers to write mixed BASIC/assembly programs in a text file on a PC and compile it into a single .prg file that can be executed on an actual C64 or emulator.
Tools such as PuCrunch, an LZ77 data and executable self extracting compression program, are also available released under GNU LGPL. Sprite editors like Sprite Pad allow you to design C64 Sprites and animations using Windows. GoatTracker allows you to write music using modern OSs and uses the ReSID engine.
Using CodeNet it is possible to transfer and execute programs on a C64 via a TCP/IP network cable from a PC, although this does require an Ethernet adapter on the C64, such as the Individual Computers RR-Net, or an appropriate version of the 1541 Ultimate.
Retrocomputing efforts
The magnetic tapes and disks upon which home computer software was stored are decaying at an alarming rate. In order to preserve game software and information, efforts are underway to copy from these degrading media onto fresh media which will help ensure a long life for the software and make it available for emulation and archiving. In addition, there are other efforts to archive Commodore 64 documentation, software manuals, magazine articles, and other nostalgia (such as software packaging artwork, game screenshots, and Commodore 64 TV commercials). Commodore 64 game software has been remarkably well documented and preserved - a considerable feat when taking the amount of software available for the platform into consideration.
The GameBase 64 (GB64) organization has an online database of game information, which at version 7 holds information for 21,000 unique game titles. The database is still growing as new information comes to light.
Besides the online database, a downloadable offline version exists. Using one of the frontends, GameBase (Windows only) or jGameBase (platform independent), users can conveniently browse the database entries and start them directly in an emulator.
The GoodGB64 variant of Cowering's Good Tools allows users to audit their C64 game collections using the GameBase64 database.
There are tools available to transfer original 1541 floppy disks to or from the PC. The Star Commander is a DOS-based tool, cbm4linux is a Linux tool, and cbm4win is a Windows tool to transfer data from an original floppy drive to the PC, or vice versa, using a simple X-cable. There are also tools, such as 64HDD, that allow a C64 to directly load D64 software stored on a PC using the same cables. The Individual Computers Catweasel allows PC users to use their own floppy drive to read C64 disks.
In addition, there is now a growing number of emulators available, which allow the use of an emulated C64 on modern computing hardware. These include VICE, which is free and runs on most modern as well as some older platforms; CCS64, which is available for Windows and is written by Per Håkan Sundell; and Power64, which has versions for Mac OS X and OS 9.
Also the Quantum Link service has been reconstructed as Quantum Link Reloaded. It can be accessed with a real Commodore 64, or through the VICE emulator.
Special hardware has also been designed to aid the conservation of software, such as the IDE64 cartridge, which allows the user to connect a modern PC IDE ATA hard drive or a CompactFlash flashcard directly to the machine, giving the possibility to copy software onto the hard drive and use it from there, preventing wear on a decades-old floppy disk.
Nintendo's Virtual Console service offers Commodore 64 games for download on the Wii console in North America and Europe.
References | Operating System (OS) | 1,265 |
Compatible Time-Sharing System
The Compatible Time-Sharing System (CTSS) was the first general purpose time-sharing operating system. Compatible Time Sharing referred to time sharing which was compatible with batch processing; it could offer both time sharing and batch processing concurrently.
CTSS was developed at the MIT Computation Center ("Comp Center"). CTSS was first demonstrated on MIT's IBM 709 (the "blue machine") in November 1961. Routine service to MIT Comp Center users began in the summer of 1963 and was operated there until 1968.
A second deployment of CTSS on a separate IBM 7094 that was received in October 1963 (the "red machine") was used early on in Project MAC until 1969, when the red machine was moved to the Information Processing Center and operated until July 20, 1973. CTSS ran on only those two machines; however, there were remote CTSS users outside of MIT, including ones at the University of Edinburgh and the University of Oxford.
History
John Backus said in the 1954 summer session at MIT that "By time sharing, a big computer could be used as several small ones; there would need to be a reading station for each user". Computers at that time, like the IBM 704, were not powerful enough to implement such a system, but at the end of 1958, MIT's Computation Center nevertheless added a typewriter input to its 704 with the intent that a programmer or operator could "obtain additional answers from the machine on a time-sharing basis with other programs using the machine simultaneously".
In June 1959, Christopher Strachey published a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris, where he envisaged a programmer debugging a program at a console (like a teletype) connected to the computer, while another program was running in the computer at the same time. Debugging programs was an important problem at that time, because with batch processing it often took a day from submitting changed code to getting the results. John McCarthy wrote a memo about that at MIT, after which a preliminary study committee and a working committee were established at MIT to develop time sharing. The committees envisaged many users using the computer at the same time, decided the details of implementing such a system at MIT, and started the development of the system.
Experimental Time Sharing System
By July 1961 a few time sharing commands had become operational on the Computation Center's IBM 709, and in November 1961, Fernando J. Corbató demonstrated at MIT what was called the Experimental Time-Sharing System. On May 3, 1962, F. J. Corbató, M. M. Daggett and R. C. Daley published a paper about that system at the Spring Joint Computer Conference. Robert C. Daley, Peter R. Bos and at least six other programmers implemented the operating system, partly based on the Fortran Monitor System.
The system used an IBM 7090, modified by Herbert M. Teager, with three Flexowriters added as user consoles, and possibly a timer. Each of the three users had two tape units, one for the user's file directory, and one for dumping the core (the program in memory). There was also one tape unit for the system commands; there were no disk drives. The memory was 27 k words (36-bit words) for users, and 5 k words for the supervisor (operating system). The input from the consoles was written to buffers in the supervisor, by interrupts, and when a return character was received, control was given to the supervisor, which dumped the running code to tape and decided what to run next. The console commands implemented at the time were login, logout, input, edit, fap, mad, madtrn, load, use, start, skippm, listf, printf, xdump and xundump.
This became the initial version of the Compatible Time-Sharing System. This was apparently the first ever public demonstration of time-sharing; there are other claims, but they refer to special-purpose systems, or to systems for which no papers are known to have been published. The "compatibility" of CTSS was with background jobs run on the same computer, which generally used more of the compute resources than the time-sharing functions.
Features
The original ELIZA ran on CTSS.
CTSS was the first computer system to implement password login.
CTSS had one of the first computerized text formatting utilities, called RUNOFF (the successor of DITTO).
CTSS had one of the first inter-user messaging implementations, possibly inventing email.
CTSS had one of the first instant messaging systems similar to write.
MIT Computation Center staff member Louis Pouzin created for CTSS a command called RUNCOM, which executed a list of commands contained in a file. (He later created a design for the Multics shell that was implemented by Glenda Schroeder which in turn inspired Unix shell scripts.) RUNCOM also allowed parameter substitution.
CTSS had an implementation of the text editor QED, the predecessor of ed, vi, and vim, with regular expressions added by Ken Thompson.
Implementation
Kernel
CTSS used a modified IBM 7090 mainframe computer that had two 32,768 (32K) 36-bit-word banks of core memory instead of the normal one. One bank was reserved for the time-sharing supervisory program, the other for user programs. CTSS had a protected-mode kernel; the supervisor's functions in the A-core (memory bank A) could be called only by software interrupts, as in modern operating systems. Memory-protection interrupts were used to implement the software interrupts. Processor allocation scheduling, with a quantum time unit of 200 ms, was controlled by a multilevel feedback queue. The system also had some special memory-management hardware, a clock interrupt and the ability to trap certain instructions.
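The scheduling idea can be illustrated with a short, self-contained simulation. The sketch below is a generic multilevel feedback queue in C, not CTSS's actual algorithm: the job names, the number of priority levels, and the rule that the quantum doubles at each lower level are assumptions made purely for illustration.

/*
 * Minimal multilevel-feedback-queue sketch (illustrative only; not the
 * actual CTSS scheduler).  Jobs start at the highest-priority level and
 * are demoted one level each time they use up their whole quantum.
 */
#include <stdio.h>

#define LEVELS 4
#define BASE_QUANTUM_MS 200           /* CTSS's basic time unit, per the text */

struct job {
    const char *name;
    int remaining_ms;                 /* CPU time still needed */
    struct job *next;
};

static struct job *queues[LEVELS];    /* one FIFO per priority level */

static void enqueue(int level, struct job *j)
{
    struct job **p = &queues[level];  /* append at the tail of the level's queue */
    j->next = NULL;
    while (*p)
        p = &(*p)->next;
    *p = j;
}

static struct job *dequeue_highest(int *level_out)
{
    for (int lvl = 0; lvl < LEVELS; lvl++) {
        if (queues[lvl]) {
            struct job *j = queues[lvl];
            queues[lvl] = j->next;
            *level_out = lvl;
            return j;
        }
    }
    return NULL;                      /* no runnable jobs left */
}

int main(void)
{
    static struct job jobs[] = {
        { "EDIT",  150, NULL },       /* short interactive job */
        { "MAD",  1200, NULL },       /* longer compile        */
        { "LOADR", 500, NULL },
    };
    for (size_t i = 0; i < sizeof jobs / sizeof jobs[0]; i++)
        enqueue(0, &jobs[i]);

    int level, now_ms = 0;
    struct job *j;
    while ((j = dequeue_highest(&level)) != NULL) {
        int quantum = BASE_QUANTUM_MS << level;   /* quantum doubles per level */
        int run = j->remaining_ms < quantum ? j->remaining_ms : quantum;

        now_ms += run;
        j->remaining_ms -= run;
        printf("t=%5d ms  ran %-5s at level %d for %d ms\n",
               now_ms, j->name, level, run);

        if (j->remaining_ms > 0) {
            /* used its full quantum: demote (until the lowest level) */
            int next = level + 1 < LEVELS ? level + 1 : level;
            enqueue(next, j);
        }
    }
    return 0;
}

Jobs that exhaust their quantum drift to lower-priority levels with longer quanta, so short interactive requests keep getting quick service while long computations still make progress in the background, which is the behavior a time-sharing supervisor needs.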
Supervisor subroutines
RDFLXA – Read an input line from console
WRFLX – Write an output line to console
DEAD – Put the user into dead status, with no program in memory
DORMNT – Put the user into dormant status, with program in memory
GETMEM – Get the size of the memory allocation
SETMEM – Set the size of the memory allocation
TSSFIL – Get access to the CTSS system files on the disk
USRFIL – Change back to user's own directory
GETBRK – Get the instruction location counter at quit
Programming languages
At first, CTSS had only the FAP assembler and the MAD compiler. Fortran II code could also be translated into MAD code. Later, half of the system was written in MAD. Other programming languages, such as LISP and a version of ALGOL, were added later.
File system
Each user had their own directory, and there were also shared directories for groups of people with the same "problem number". Each file had two names, the second indicating its type, as the extension does in later systems. At first, each file could have one of four modes: temporary, permanent, read-only class 1, and read-only class 2. Read-only class 1 allowed the user to change the mode of the file. Files could also be symbolically linked between directories. A directory listing produced by listf:
10 FILES 20 TRACKS USED
DATE NAME MODE NO. TRACKS
5/20/63 MAIN MAD P 15
5/17/63 DPFA SYMTB P 1
5/17/63 DPFA BSS P 1
5/17/63 DPFA FAP P 2
Peripherals
Input-output hardware was mostly standard IBM peripherals. These included six data channels connecting to:
Printers, punched card readers and punches
IBM 729 tape drives, an IBM 1301 disk storage, later upgraded to an IBM 1302, with 38 million word capacity
An IBM 7320 drum memory with 186K words that could load a 32K-word memory bank in one second (later upgraded to 0.25 seconds)
Two custom high-speed vector graphics displays
An IBM 7750 transmission control unit capable of supporting up to 112 teleprinter terminals, including IBM 1050 Selectrics and Model 35s. Some of the terminals were located remotely, and the system could be accessed using the public Telex and TWX networks.
Influences
CTSS was described in a paper presented at the 1962 Spring Joint Computer Conference, and greatly influenced the design of other early time-sharing systems.
Maurice Wilkes witnessed CTSS and the design of the Titan Supervisor was inspired by that.
Dennis Ritchie wrote in 1977 that UNIX could be seen as a "modern implementation" of CTSS. Multics, which was also developed by Project MAC, was started in the 1960s as a successor to CTSS – and in turn inspired the development of Unix in 1969. One of the technical terms inherited by these systems from CTSS is daemon.
Incompatible Timesharing System (ITS), another early, revolutionary, and influential MIT time-sharing system, was produced by people who disagreed with the direction taken by CTSS, and later, Multics; the name was a parody of "CTSS", as later the name "Unix" was a parody of "Multics".
See also
Timeline of operating systems
Time-sharing system evolution
References
Further reading
External links
Oral history interview with John McCarthy, Charles Babbage Institute, University of Minnesota. Discusses computer developments at MIT including time sharing.
Oral history interview with Fernando J. Corbató, Charles Babbage Institute, University of Minnesota. Discusses many computer developments at MIT including CTSS.
Oral history interview with Robert M. Fano, Charles Babbage Institute, University of Minnesota. Discusses computer developments at MIT including CTSS.
The IBM 7094 and CTSS: personal memoir of Tom Van Vleck, a system programmer on CTSS
CTSS Source version MIT8C0 in Paul Pierce's collection.
Dave Pitts' IBM 7094 support – Includes a license-free simulator, cross assembler and linker that can be used to build and run CTSS.
CTSS sources and binaries, a more complete kit which runs on SIMH.
CIO: 40 years of Multics, 1969-2009: Interview with CTSS and Multics developer Fernando J. Corbato.
Jerome Saltzer's CTSS bookshelf via CSAIL.
1960s software
1970s software
Discontinued operating systems
Massachusetts Institute of Technology software
Time-sharing operating systems
MAŤO
The Maťo (Matthew) was an 8-bit personal computer produced in the former Czechoslovakia by Štátny majetok Závadka š.p., Závadka nad Hronom, from 1989 to 1992. The manufacturer's primary goal was to produce a personal computer as cheaply as possible, so it was also sold as a self-assembly kit. It was basically a modified PMD 85, but without backward compatibility. This, combined with its late arrival to the market, made the MAŤO a commercial failure.
Specifications
MHB 8080A 2.048 MHz CPU
48 KB RAM
16 KB ROM
System monitor and built-in BASIC-G or simple games
Tape recorder interface
Monochromatic TV output
288×256 resolution
Built-in power supply
External links
Old-computers.com - Mato
Maťo (Czech)
Emulator of PMD 85 and compatibles for Win32
Home computers
Science and technology in Czechoslovakia
List of Linux distributions that run from RAM
This is a list of Linux distributions that can be run entirely from a computer's RAM, meaning that once the OS has been loaded into RAM, the medium it was loaded from can be completely removed and the distribution will continue to run the PC from RAM alone. This ability allows them to be very fast, since reading and writing data to RAM is much faster than to a hard disk drive or solid-state drive. Many of these operating systems load from removable media such as a Live CD or a Live USB stick. A "frugal" install can also often be performed, allowing loading from a hard disk drive instead.
This feature is implemented in live-initramfs and allows the user to run a live distribution that does not run from RAM by default by adding toram to the kernel boot parameters.
Additionally some distributions can be configured to run from RAM, such as Ubuntu using the toram option included in the Casper scripts.
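As an illustrative sketch only (the paths, file names and exact options vary by distribution and image layout, and are assumptions here rather than a definitive recipe), a boot-loader entry for a Casper-based Ubuntu live image might pass the option like this:

linux  /casper/vmlinuz boot=casper toram quiet
initrd /casper/initrd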
Table
See also
tmpfs; by mounting a tmpfs and running files that are placed on this, files and programs can be run from RAM, even on Linux distros that do not run completely in RAM
Clustered file system; network file systems are another way to avoid needing a (slow) hard disk (or are at least faster than using an E-IDE hard disk)
initrd ("initial ramdisk"), a scheme for loading a temporary root file system into memory in the boot process of the Linux kernel.
Lightweight Linux distribution
List of live CDs
List of tools to create Live USB systems
SYSLINUX, a suite of lightweight IBM PC MBR bootloaders for starting up computers with the Linux kernel.
Windows PE, a non-Linux operating system that can also be run from RAM.
References
External links
Light-weight Linux distributions
GPD Win
GPD Win is a Windows-based handheld computer equipped with a keyboard and gaming controls. It is an x86-based device which runs a full version of Windows 10 Home. The device was envisioned primarily with video game console emulation and PC gaming in mind, but is capable of running any x86 Windows-based application that can run within the confines of the computer's technical specifications. First announced in October 2015, it was crowdfunded via Indiegogo and two other crowdfunding sites in Japan and China, and was released in October 2016.
History
GamePad Digital (GPD) is a technology company based in Shenzhen, China. Among other products, it has created several handheld video game consoles that run Android on the ARM architecture, such as the GPD XD. The GPD Win was meant to be a way to play PC games, PC-based video game console emulators, and hypervisors (such as VMware and VirtualBox clients) on a handheld device. The intended appeal of the Win was that an x86 Windows handheld PC offers far more PC and emulator gaming support than the architectures and operating systems widely used on other mobile devices (such as Linux or Android on ARM hardware, or proprietary systems). GPD widely touts this ability on the device's Indiegogo page, with video demonstrations.
GamePad Digital first pitched the idea of the GPD Win to the community in October 2015 as a concept for market research, with further planning in November. By December, the physical design and hardware specifications were determined. By March 2016, initial prototypes were finished, debugged, and shipped to select sources. GPD started accepting pre-orders in June 2016 through several online retailers, including the Indiegogo page. Pre-order backers were offered the device for a discounted price of $330; the final retail price was initially estimated at $499 but settled at $330 after release. The initial stated goal was $100,000. By August 2016, a small batch shipment to industry personnel was sent out, and by September the pre-order promotional pricing ceased. GPD started shipping the final product in October 2016, with pre-order backers receiving their devices first.
Software
GPD Win runs Windows 10 Home. GPD stated that per an April 2014 Microsoft decision, Windows is free on all devices with screens smaller than 9 inches. However, devices shipped to backers have a Windows 10 product key to input on initial boot and setup of the device. Unlike most Windows smartphones, GPD Win is able to run any x86 Windows application that can also be run on PC laptops and desktops.
As of April 2017, several patches are available for the Linux kernel that allow a mostly complete functionality of the Win with a full desktop Linux like Ubuntu. There are also ways to get Android to work on GPD Win.
Technical and physical specifications
GPD Win has a full QWERTY keyboard, which includes 67 standard keys and 10 expanded function keys. For gaming, the controller is stylized similar to the OpenPandora and DragonBox Pyra style keyboard and controller layout: one D-pad, two analog sticks, four face buttons, and four shoulder buttons (two on each shoulder).
GPD Win was initially intended to use the Intel Atom x5-Z8500 Cherry Trail CPU.
The graphics processor is an Intel HD Graphics integrated GPU with a base clock speed of 200 MHz and a turbo boost of up to 600 MHz.
GPD Win uses 4 GB of LPDDR3-1600 RAM, with 64 GB of eMMC 4.51 storage. It has a single microSD slot that officially supports a maximum of 128 GB of storage; however, it can unofficially support a 256 GB microSD card.
GPD Win is 15.5×9.7cm in size. It has a 5.5-inch 1280×720 (720p) H-IPS multi-directional touch screen in 16:9 ratio. It is reinforced by Gorilla Glass 3.
The audio system consists of a built-in speaker using the Realtek ALC5645 driver, and a microphone jack. It supports most popular audio, video, and image formats, including MP3, MP4, 3GP, WMV/WMA, FLAC, AVI, MOV, JPG, PNG, and BMP.
GPD Win has a 6700 mAh polymer lithium-ion battery with a USB-C charging interface (5 V/2.5 A). It has a stated ability to play 80 continuous hours of music or 6–8 hours of online video or online gaming. It is Bluetooth 4.0 and 802.11 b/g/n/ac (5 GHz and 2.4 GHz) Wi-Fi capable.
GamePad Digital has continued to support GPD Win past release, taking user feedback into account, with driver and hardware updates. As of January 10, 2017, GPD revised the Win's hardware, providing a fix for Intel graphics driver stability issues, fixing the AC charging/boot-up bug (described in the reviews section), improved cooling, as well as improving the tactile feedback of the D-Pad, buttons, and keyboard. This includes a software update that improves the buttons' responsiveness, and makes changes to the functionality of the built-in pointer.
Release and reception
GamePad Digital began shipping the GPD Win to backers in October 2016. JC Torres of Slashgear gave the Win a 7/10, stating that it has solid technical specs for its intended uses, that it is ambitious for being a Windows 10-based handheld console in an industry dominated by Linux-based handhelds, and that it is well rounded with features. However, he also noted an inconsistent build quality among models and mediocre sound quality ("loud, but low"). Ultimately, he called it an "exceptional device".
Linus Sebastian made a video review of the GPD Win on his YouTube channel LinusTechTips. He complimented its gaming and multitasking capabilities, and was impressed with the hardware specs, design and features overall (including more I/O ports and features than, for instance, a common MacBook). He did lament that the system had some flaws: the shoulder buttons felt cheaply assembled; the 5.5" 720p screen was not friendly for scaling; and the device had a bug where it would not boot if it was plugged into the AC adapter when the power button was pressed (it simply loaded the charging screen, and had to be plugged back in only after boot-up started; this issue has since been fixed in subsequent releases). His official verdict was that deciding whether it was worth the price was up to the user, and that the Win made him excited about the prospect of what UMPCs will be capable of in the near future as the hardware progresses further. He compared it to Apple's first iPhone (while stating that it was not as revolutionary), in that it is a great concept that has some flaws in its execution, but is ambitious, practical, and set to be much better in the future.
GPD Win 2
GamePad Digital announced the GPD Win 2 in early 2017. The Win 2 is a significant upgrade that is able to run AAA-spec games, as well as offering better video game console emulation. It has an Intel Core m3 CPU, Intel HD 615 graphics, 8 GB of LPDDR3 RAM, a 128 GB M.2 solid-state drive, and the same I/O ports as the GPD Win. There are a few external hardware changes: the analog sticks were moved outward, D-input was dropped, and an additional shoulder button was added on each shoulder, for six in total. The price for crowdfunding backers was $649, with a tentative retail price of $899. The Indiegogo campaign launched on January 15, 2018, with a final release date of May 2018. The campaign saw rapid success, far surpassing its stated goal within days.
See also
Comparison of handheld game consoles
Dragonbox Pyra
GPD XD
GPD Win Max
GPD Win 3
Pandora (console)
PC gaming
Handheld gaming
External links
GPD Win homepage
GPD Win Indiegogo page
References
Handheld personal computers
Indiegogo projects
Windows 10
Subnotebooks
Open Packaging Conventions
The Open Packaging Conventions (OPC) is a container-file technology initially created by Microsoft to store a combination of XML and non-XML files that together form a single entity such as an Open XML Paper Specification (OpenXPS) document. OPC-based file formats combine the advantages of leaving the independent file entities embedded in the document intact and resulting in much smaller files compared to normal use of XML.
Specifications
The OPC is specified in Part 2 of the Office Open XML standards ISO/IEC 29500:2008 and ECMA-376.
The ISO/IEC 29500-2:2008 specification and the second edition of ECMA-376 makes a normative reference to PKWARE, Inc.'s .ZIP File Format Specification version 6.2.0 (2004), and supplements it with a normative set of clarifications. Note: The older first edition of ECMA-376 makes an informative (i.e., non-normative) reference to the newer PKWARE Inc's ".ZIP File Format Specification" version 6.2.1 (2005). The ZIP format is not specified by any international standard but has widespread community and developer acceptance.
Microsoft submitted a draft in 2006 to the Internet Engineering Task Force for a "pack" URI scheme (pack://) to be used for URI references to OPC-based packages. The draft expired in 2009; the specified syntax is incompatible with the Internet Standard for URI schemes (STD 66, RFC 3986). The scheme is now listed as historical.
The ISO 19165:1-2018 recommends the use of the Open Packaging Conventions to implement the Geospatial Package defined in the Open Archival Information System.
Usage
Both the XML Paper Specification (XPS) and Office Open XML (OOXML) use Open Packaging Conventions (OPC), which provide a profile of the common ZIP format. In addition to data and document content in XML markup, files in the ZIP package can include other text and binary files in formats such as PNG, BMP, AVI, PDF, RTF, or even an already packaged ODF file. OPC also defines some naming conventions and an indirection method to allow position independence of binary and XML files in the ZIP archive.
OPC files can be opened using common ZIP utilities. OPC allows indirection, chunking and relative indirection.
File formats using the OPC
The OPC is the foundation technology for many new file formats:
Programming
OPC is natively supported in Microsoft .NET Framework 3.0 by the System.IO.Packaging namespace. Open source libraries exist for other languages.
Since Windows 7, OPC is also natively supported in the Windows API through a set of COM interfaces, collectively referred to as Packaging API.
Alternatively, ZIP libraries can be used to create and open OPC files, as long as the correct files are included in the ZIP and the conventions followed.
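As a minimal sketch of that approach, Python's standard zipfile and xml.etree modules can open a package, list its parts and read the content-type map; the file name example.docx is only a placeholder, and any OPC-based package (.docx, .xps, and so on) would work the same way:

import zipfile
from xml.etree import ElementTree as ET

# "example.docx" is a placeholder name; every part is an ordinary entry in the ZIP archive.
with zipfile.ZipFile("example.docx") as pkg:
    for name in pkg.namelist():          # part names, stored without the leading "/"
        print(name)
    # The content-type map is itself a plain XML part at a fixed, reserved name.
    content_types = ET.fromstring(pkg.read("[Content_Types].xml"))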
Package, parts, and relationships
In OPC terminology, the term package corresponds to a ZIP archive and the term part corresponds to a file stored within the ZIP. Every part in a package has a unique URI-compliant part name along with a specified content-type expressed in the form of a MIME media type. A part's content-type explicitly defines the type of data stored in the part and reduces duplication and ambiguity issues inherent with file extensions.
OPC packages can also include relationships that define associations between the package, parts, and external resources. In addition to a hierarchy of directories and parts, OPC packages commonly use relationships to access content through a directed graph of relationship associations. Relationships are composed of four elements:
an identifier (ID)
an optional source (the package or a part within the package)
a relationship type (a URI-style expression that defines the type of the relationship)
a target (a URI to another part within the package or to an external resource)
OPC packages can store parts that contain any type of data (text, images, XML, binary, whatever). The extension ".rels", however, is reserved for storing relationships metadata within "/_rels" subfolders. The subfolder name "_rels", the file extension ".rels" within such directory, and the filename "[Content_Types].xml" in any folder are the only three reserved names for files stored in an OPC package.
/[Content_Types].xml file
This file defines the MIME media types for all the parts stored in the package. The "/[Content_Types].xml" file defines default mappings based on file extensions, along with overrides for specific parts with content-types that are different from the file extension defaults. For example, one of these defined MIME types is:
<Default Extension="rels" ContentType="application/vnd.openxmlformats-package.relationships+xml"/>
/_rels
The root level "/_rels" folder stores the relationships for the package as a whole. The "/_rels" folder normally contains a file named ".rels". "/_rels/.rels" is an XML file where the starting package-level relationships are stored. Normally when opening an OPC-based file, applications start by accessing to the "/_rels/.rels" file to read the starting package-level relationships.
[partname].rels
Each part may have its own relationships. The _rels folders are where one goes to find the relationships for any given part within the package. To find the relationships for a specific part, one looks in the "_rels" folder that is a sibling of that part: If the part has relationships, the "_rels" folder will contain a file that has one's original part name with a ".rels" appended to it. For example, if the content types part file had any relationships, there would be a file called "[Content_Types].xml.rels" inside the "/_rels" folder.
All relationships (including the relations associated to the root package) are represented as XML files. If you open a ".rels" file in a text editor, you can view the actual XML markup that defines all the relationships targeted from that part. A typical relationships file contains XML code like this:
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships">
<Relationship Id="R0" Type="http://schemas.microsoft.com/xps/2005/06/fixedrepresentation" Target="/FixedDocumentSequence.fdseq"/>
<Relationship Id="R1" Type="http://schemas.openxmlformats.org/package/2006/relationships/metadata/thumbnail" Target="/Documents/1/Metadata/Page1_Thumbnail.JPG"/>
</Relationships>
which defines two relations for the root package, the first one being considered as the root package (here for an early Microsoft XPS document, before it was standardized as Open XML Paper Specification within the openxmlformats collection), and the other one being used to reference an alternate form (here a thumbnail rendered image of the first page of the document).
The main parts of the embedded documents are often stored within a folder named "/Document" (which may itself contain subdirectories if the file contains several related documents, each with various parts), and the optional metadata parts that are not needed for processing the main parts of the document are stored in a folder named "/Metadata"; however, these folder names are specified within the XML-formatted data in the "[partname].rels" relationship files, and the OPC specification allows any folder organisation that is convenient for the application; these two folder names are not required.
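To illustrate how a relationships part like the one above is consumed, the following Python sketch reads the package-level "_rels/.rels" part and prints each Relationship element; example.xps is a placeholder file name, and the namespace URI is the one shown in the XML sample above:

import zipfile
from xml.etree import ElementTree as ET

RELS_NS = "{http://schemas.openxmlformats.org/package/2006/relationships}"

with zipfile.ZipFile("example.xps") as pkg:          # placeholder file name
    root = ET.fromstring(pkg.read("_rels/.rels"))    # package-level relationships
    for rel in root.findall(RELS_NS + "Relationship"):
        print(rel.get("Id"), rel.get("Type"), rel.get("Target"))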
Chunking
OPC encourages documents to be split into small chunks. This reduces the effect of file corruption and improves data access: for example, all the style information can be kept in one XML part and each separate worksheet or table in its own part. This allows faster access and less object creation for clients, and makes it easier for multiple processes to work on the same document.
Relative indirection
In the Open Packaging Conventions, each file that makes references has its own _rels file containing the indirection lists. This makes it easier to cut and paste some information with all its associated resources in some cases, provides name scoping to remove the chance of name clashes between files, and so on.
References
External links
Download specification ISO/IEC 29500-2:2012
OPC: A New Standard for Packaging Your Data
Essentials of the Open Packaging Conventions
OPC Digital Signatures: Application Guidelines for Common Criteria Security
Packaging team blog
Open Packaging Conventions (OPC) MSDN Forum
The Addressing Model of the Open Packaging Conventions
OPC implementation test documents
OPC package explorer to edit XML parts
ISO 19165-1:2018 ISO 19165 Geographic information – Preservation of digital data and metadata – Part 1: Fundamentals
Computer file formats
Ecma standards
IEC standards
ISO standards
Microsoft initiatives
Office Open XML
Motorola X8 Mobile Computing System
The Motorola X8 Mobile Computing System is a chipset from Motorola for Android-based smartphones, based on the Qualcomm Snapdragon S4 Pro system on a chip. The CPU of the S4 Pro is a dual-core ARM-compatible Krait, and its GPU is a quad-core Adreno 320. Motorola added several low-power DSP chips to the S4 Pro in the chipset to process audio and inputs from other sensors.
Composition
Qualcomm Snapdragon S4 Pro SoC (MSM8960, DT version), built on a 28 nm process node, with:
1.7 GHz dual-core ARM-compatible Krait processor
quad-core Adreno 320 GPU
natural language processor - single-core
contextual awareness processor - single-core
Loaded smartphones
Motorola Droid Ultra
Motorola Droid Maxx
Motorola Droid Mini
Motorola Moto X
References
External links
Motorola X8 Mobile Computing System official website (Archived)
Motorola's 'X8 Computing System', Brought To You By Qualcomm And Texas Instruments? // Forbes, 2013-08-28
Computer-related introductions in 2013
ARM architecture
Embedded microprocessors
Motorola products
Qualcomm
System on a chip
Tony Tebby
Tony Tebby is a computer programmer and the designer of Qdos, the computer operating system used in the Sinclair QL personal computer, while working as an engineer at Sinclair Research in the early 1980s. He left Sinclair Research in 1984 in protest at the premature launch of the QL, and formed QJUMP Ltd., a software house specializing in system software and utilities for the QL, based in Rampton, Cambridgeshire, England.
Prior to this, he worked at the Philips Research Laboratories in Redhill, Surrey, on real-time image processing, using electronic hardware rather than software. At that time, software would have been either a batch program on the PRL mainframe computer or, within the departmental laboratory, a program on the Commodore PET.
Among the software developed by QJUMP was SuperToolkit II, a collection of extensions to Qdos and SuperBASIC; a Qdos floppy disk driver which became the de facto standard for the various third-party floppy disk interfaces sold for the QL; and the QJUMP Pointer Environment, which extended the primitive display windowing facility of Qdos into something approaching a full GUI. Tebby also received a commission to write a Qdos-like operating system for the Atari ST; this was called SMS2.
Tebby later moved to Le Grand-Pressigny, France, but continued his involvement in the QL user community. In the early 1990s, he developed SMSQ, a new Qdos-compatible OS, based on SMS2, for the Miracle Systems QXL, a QL emulator card for PCs. An enhanced version of SMSQ was ported to the Atari ST and various other QL emulators, being renamed SMSQ/E. He has also worked on Stella, an embedded operating system for 68000-series and ColdFire processors.
References
Computer programmers
Sinclair Research
Living people
Year of birth missing (living people)
P8000
The P8000 is a microcomputer system developed in 1987 by the VEB Elektro-Apparate-Werke Berlin-Treptow „Friedrich Ebert“ (EAW) in the German Democratic Republic (DDR, East Germany). It consisted of an 8-bit and a 16-bit microprocessor and a Winchester disk controller. It was intended as a universal programming and development system for multi-user/multi-task applications. The initial list price of the P8000 was 172,125 East German marks (around 860,000–1.7 million DM).
There was also a budget version with only an 8-bit microprocessor.
The 8-bit microcomputer
The 8-bit version of the P8000 was completely contained on a single 4-layer printed circuit board. The processor, with a clock frequency of 4 MHz, was based on the U880 microprocessor (a near clone of the Zilog Z80) and peripheral circuits, along with the U8272 floppy-disk controller. Direct memory access was handled by the U858 DMA controller chip.
The system featured a main memory of 64 KB dynamic RAM, an 8 KB EPROM, and a 2 KB static RAM for boot code, system test routines and the system monitor. This extra memory could be relocated or turned off in 4 KB stages within the 64 KB address space.
The 8-bit machine had four serial ports, designated tty0 to tty3. These interfaces could operate with either V.24 or IFSS (Interface sternförmig seriell, a 20 mA current loop) signals. In addition, the computer had a parallel port which allowed the connection of an EPROM burner. Another internal 32-bit parallel interface was used for coupling the 8-bit to a 16-bit microcomputer card. Data exchange was via two built-in 5.25" floppy drives. Two additional 5.25" or 8" floppy drives could be connected externally.
The 16-bit microcomputer
The 16-bit version of the P8000 was divided into two functional units: the 16-bit processor card and up to four plug-in memory cards with sizes of 256 KB or 1 MB. The 16-bit processor was contained on a 6-layer PC board. The processor operated at a clock frequency of 4 MHz and was based on the U8001 (a Zilog Z8001 clone) 16-bit microprocessor. Three U8010 memory management units (MMUs), augmented with special control logic, handled the dynamic memory segment allocation and protected against unauthorized access. In addition, the computer had 16 KB of EPROM memory and 2 KB of static RAM for the system monitor and self-test routines. The peripheral circuitry was built around the U880 family. Dedicated control logic ensured coordination with the U8001.
The 16-bit machine had four serial ports, designated tty4 to tty7. The interfaces handled both V.24 and IFSS signals. The 8-bit processor was connected to the 16-bit processor via an internal 32-bit parallel interface. Connection to the external Winchester disk was accomplished by another parallel port.
Winchester disk controller
The Winchester-Disk controller (WDC) is an intelligent disk controller and was located in a separate unit, the Winchester disk enclosure. This unit contains the Winchester disk controller and up to two hard drives.
The WDC was based on the U880 microprocessor. An 8 KB EPROM stored the firmware. A 6 KB static RAM served as a buffer between the host computer and the disk. The WDC communicated with the host over an eight-bit-wide parallel interface in conjunction with additional control logic. The interface handled transmission of data blocks plus the command and acknowledgment information.
The system could support up to two identical drives using an ST-506 interface. The initial drive parameters were stored in the firmware; however, with firmware version 4.2 it became possible to use any MFM disk. From version 4.2 onwards, the first sector of the disk stored the disk parameters in addition to bad-sector information.
The terminal
The P8000 terminal served as the input and output device of the P8000. It consisted of a green screen, a keyboard and a controller. The terminal could operate as an ADM-31 or VT100. It could switch between two character sets, which were stored in separate EPROMs. The interface operated in either V.24 or IFSS mode. The controller was based on a U884 and the graphics controller on a KR580WG75 (Intel 8275) chip. It used a piezoelectric speaker to issue audible alerts.
P8000 Compact
Developed in 1989, the P8000 Compact was an enhanced version of the P8000. In this system, the Winchester disk controller and hard drive were contained within the housing of the host computer, eliminating the extra cabinet. In contrast to the P8000, the P8000 Compact was a 16-bit system with a battery-backed real-time clock as standard. Optionally, the P8000 Compact was also available with a third CPU, the U80601 (an Intel 80286 clone). The P8000 Compact was the last computer developed by EAW.
Operating Systems
The following operating systems were available for the P8000:
UDOS – compatible with Z80-RIO
OS/M – compatible with CP/M
IS/M – compatible with ISIS-II
WEGA – compatible with UNIX
IRTS 8000 – a real-time operating system
DCP (only with the third CPU under WEGA) – compatible with MS-DOS
References
External links
Science and technology in East Germany
Microcomputers
Goods manufactured in East Germany
CMS-2
CMS-2 is an embedded systems programming language used by the United States Navy. It was an early attempt to develop a standardized high-level computer programming language intended to improve code portability and reusability. CMS-2 was developed primarily for the US Navy’s tactical data systems (NTDS).
CMS-2 was developed by RAND Corporation in the early 1970s and stands for "Compiler Monitor System". The name "CMS-2" is followed in literature by a letter designating the type of target system. For example, CMS-2M targets Navy 16-bit processors, such as the AN/AYK-14.
History
CMS-2 was developed for FCPCPAC (Fleet Computer Programming Center - Pacific) in San Diego, CA. It was implemented by Computer Sciences Corporation in 1968 with design assistance from Intermetrics. The language continued to be developed, eventually supporting a number of computers including the AN/UYK-7 and AN/UYK-43 and UYK-20 and UYK-44 computers.
Language features
CMS-2 was designed to encourage program modularization, permitting independent compilation of portions of a total system. The language is statement oriented. The source is free-form and may be arranged for programming convenience. Data types include fixed-point, floating-point, boolean, character and status. Direct reference to, and manipulation of character and bit strings is permitted. Symbolic machine code may be included, known as direct code.
Program structure
A CMS-2 program is composed of statements. Statements are made up of symbols separated by delimiters. The categories of symbols include operators, identifiers, and constants. The operators are language primitives assigned by the compiler for specific operations or definitions in a program. Identifiers are unique names assigned by the programmer to data units, program elements and statement labels. Constants are known values that may be numeric, Hollerith strings, status values or Boolean.
CMS-2 statements are free form and terminated by a dollar sign. A statement label may be placed at the beginning of a statement for reference purposes.
A CMS-2 source program is composed of two basic types of statement. Declarative statements provide basic control information to the compiler and define the structure of the data associated with a particular program. Dynamic statements cause the compiler to generate executable machine instructions (object code).
Declarative statements defining the data for a program are grouped into units called data designs. Data designs consist of precise definitions for temporary and permanent data storage areas, input areas, output areas and special data units. The dynamic statements that act on data or perform calculations are grouped into procedures. Data designs and procedures are further grouped to form system elements of a CMS-2 program. The compiler combines system elements into a compile time system. A compile time system may stand alone or be part of a larger program.
Data declarative statements
Data declarative statements provide the compiler with information about data element definitions. They define the format, structure and order of data elements in a compile-time system. The three major kinds of data are switches, variables and aggregates.
Switches
Switches provide for the transfer of program control to a specific location in a compile-time system. They contain a set of identifiers or switch points to facilitate program transfers and branches. The switch represents a program address of a statement label or procedure name.
Variables
A variable is a single piece of data. It may consist of one bit, multiple bits or words. A value may be assigned in the variable definition. Variables may hold a constant or changing value. Data types include integers, fix point, floating point, Hollerith character strings, status or Booleans.
Aggregates
Tables hold ordered sets of identically structured information. The common unit of data in a table is an item. Items may be subdivided into fields, the smallest subdivision of a table. Allowable data types contained in fields include integer, fixed point, floating point, Hollerith character string, status or Boolean. An array is an extension of the table concept. The basic structural unit of an array is an item. Array items contain fields as defined by the programmer.
Dynamic statements
Dynamic statements specify processing operations and result in executable code generation by the compiler. A dynamic statement consists of an operator followed by a list of operands and additional operators. An operand may be a single name, a constant, a data-element reference or an expression.
Statement operators
Major CMS-2 operators are summarized below.
Special operators
Special operators facilitate references to data structures and operations on them.
Program structure declarations
The dynamic statements that describe the processing operations of a program are grouped into blocks of statements called procedures.
High level input/output statements
Input/output statements provide communication with hardware devices while running in a non-realtime environment under a monitor system.
Compiler Monitor System 2 (CMS-2)
The Compiler Monitor System 2 (CMS-2) was a system that ran on the UNIVAC CP-642B (AN/USQ-20). The system software included the monitor, compiler, librarian, CP-642 Loader, tape utility and flow charter.
CMS-2 monitor
A batch processing operating system that controls execution of CMS-2 components and user jobs run on the CP-642 computer. It provides input/output, software library facilities and debugging tools. Job accounting is also provided.
CMS-2 compiler
A compiler for the CS-1 and CMS-2 languages that generates object code for the CP-642, L-304, AN/UYK-7, 1830A and 1218/1219 computers. During the 1970s there were different versions of the CMS-2 compiler, depending on which computer was used to compile the code. Some source code had to be rewritten to work around some functions, and the different versions of CMS-2 had problems with the debugging tools.
XCMS-2 compiler
An extended CMS-2 compiler, adding language features for the AN/UYK-7 computer. It only generates AN/UYK-7 object code.
CMS-2 librarian
A file management system that provides storage and access to source and object code.
CP-642 Object code loaders
Two object code loaders for loading absolute or relocatable object code.
Tape utility
A set of utilities for managing data on magnetic tape.
CMS-2 flowcharter
The flowcharter software processes flowcharter statements in CMS-2 source code and outputs a flowchart to a high-speed printer.
See also
Ada
AN/AYK-14
AN/UYK-7
AN/UYK-20
AN/UYK-43
AN/UYK-44
AN/USQ-17
AN/USQ-20
JOVIAL
Naval Tactical Data System
TACPOL
References
External links
CMS-2Y Programmers Reference Manual for the AN UYK-7 and AN UYK-43 (October 1986)
the Feasibility of Emulating the AN/UYK-7 Computer on the AADC Signal Processing Element
Dictionary of Programming Languages entry for CMS-2
Embedded systems
Systems programming languages
Micro-operation
In computer central processing units, micro-operations (also known as micro-ops or μops, historically also as micro-actions) are detailed low-level instructions used in some designs to implement complex machine instructions (sometimes termed macro-instructions in this context).
Usually, micro-operations perform basic operations on data stored in one or more registers, including transferring data between registers or between registers and external buses of the central processing unit (CPU), and performing arithmetic or logical operations on registers. In a typical fetch-decode-execute cycle, each step of a macro-instruction is decomposed during its execution so the CPU determines and steps through a series of micro-operations. The execution of micro-operations is performed under control of the CPU's control unit, which decides on their execution while performing various optimizations such as reordering, fusion and caching.
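As a rough illustration only (a toy model, not a description of any real decoder; every instruction and μop name below is invented for the example), the following Python sketch expands a hypothetical register–memory add into the kind of μop sequence described above:

def decode(macro_instruction):
    """Toy decoder: expand one macro-instruction into a list of micro-operations."""
    opcode, dst, src = macro_instruction
    if opcode == "ADD_MEM":                      # e.g. add [dst_addr], src_reg
        return [("LOAD", "tmp", dst),            # read the memory operand into a temporary register
                ("ADD", "tmp", src),             # ALU operation on registers only
                ("STORE", dst, "tmp")]           # write the result back to memory
    return [macro_instruction]                   # simple instructions map one-to-one

print(decode(("ADD_MEM", "0x1000", "r1")))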
Optimizations
Various forms of μops have long been the basis for traditional microcode routines used to simplify the implementation of a particular CPU design or perhaps just the sequencing of certain multi-step operations or addressing modes. More recently, μops have also been employed in a different way in order to let modern CISC processors more easily handle asynchronous parallel and speculative execution: As with traditional microcode, one or more table lookups (or equivalent) is done to locate the appropriate μop-sequence based on the encoding and semantics of the machine instruction (the decoding or translation step), however, instead of having rigid μop-sequences controlling the CPU directly from a microcode-ROM, μops are here dynamically buffered for rescheduling before being executed.
This buffering means that the fetch and decode stages can be more detached from the execution units than is feasible in a more traditional microcoded (or hard-wired) design. As this allows a degree of freedom regarding execution order, it makes some extraction of instruction-level parallelism out of a normal single-threaded program possible (provided that dependencies are checked, etc.). It also opens the way for more analysis, and therefore for reordering of code sequences in order to dynamically optimize the mapping and scheduling of μops onto machine resources (such as ALUs, load/store units, etc.). As this happens at the μop level, sub-operations of different machine (macro) instructions may often intermix in a particular μop sequence, forming partially reordered machine instructions as a direct consequence of the out-of-order dispatching of microinstructions from several macro-instructions. However, this is not the same as micro-op fusion, which refers to the fact that a more complex microinstruction may replace a few simpler microinstructions in certain cases, typically in order to minimize state changes and usage of the queue and re-order buffer space, thereby reducing power consumption. Micro-op fusion is used in some modern CPU designs.
Execution optimization has gone even further; processors not only translate many machine instructions into a series of μops, but also do the opposite when appropriate; they combine certain machine instruction sequences (such as a compare followed by a conditional jump) into a more complex μop which fits the execution model better and thus can be executed faster or with less machine resources involved. This is also known as macro-op fusion.
Another way to try to improve performance is to cache the decoded micro-operations, so that if the same macro-instruction is executed again, the processor can access the decoded micro-operations directly from a special cache instead of decoding them again. The Execution Trace Cache found in Intel's NetBurst microarchitecture (Pentium 4) is a widespread example of this technique. The size of this cache may be stated in terms of how many thousands (or, strictly, multiples of 1024) of micro-operations it can store: Kμops.
See also
Micro-operation cache
References
Instruction processing
Central processing unit
Pentium Dual-Core
The Pentium Dual-Core brand was used for mainstream x86-architecture microprocessors from Intel from 2006 to 2009 when it was renamed to Pentium. The processors are based on either the 32-bit Yonah or (with quite different microarchitectures) 64-bit Merom-2M, Allendale, and Wolfdale-3M core, targeted at mobile or desktop computers.
In terms of features, price and performance at a given clock frequency, Pentium Dual-Core processors were positioned above Celeron but below Core and Core 2 microprocessors in Intel's product range. The Pentium Dual-Core was also a very popular choice for overclocking, as it can deliver high performance (when overclocked) at a low price.
Processor cores
In 2006, Intel announced a plan to return the Pentium trademark from retirement to the market, as a moniker for low-cost Core microarchitecture processors based on the single-core Conroe-L but with 1 MB of cache. The identification numbers for those planned Pentiums were similar to the numbers of the later Pentium Dual-Core microprocessors, but with the first digit "1" instead of "2", suggesting their single-core function. A single-core Conroe-L with 1 MB cache was deemed not strong enough to distinguish the planned Pentiums from the Celerons, so it was replaced by dual-core central processing units (CPUs), adding "Dual-Core" to the line's name. Throughout 2009, Intel changed the name back from Pentium Dual-Core to Pentium in its publications. Some processors were sold under both names, but the newer E5400 through E6800 desktop and SU4100/T4x00 mobile processors were not officially part of the Pentium Dual-Core line.
Yonah
The first processors using the brand appeared in notebook computers in early 2007. Those processors, named Pentium T2060, T2080, and T2130, had the 32-bit Pentium M-derived Yonah core, and closely resembled the Core Duo T2050 processor with the exception of having 1 MB of L2 cache instead of 2 MB. All three of them had a 533 MHz front-side bus (FSB) connecting the CPU with the synchronous dynamic random-access memory (SDRAM). Intel developed the Pentium Dual-Core at the request of laptop manufacturers.
Allendale
Subsequently, on June 3, 2007, Intel released the desktop Pentium Dual-Core branded processors known as the Pentium E2140 and E2160. An E2180 model was released later in September 2007. These processors support the Intel 64 extensions, being based on the newer, 64-bit Allendale core with Core microarchitecture. These closely resembled the Core 2 Duo E4300 processor with the exception of having 1 MB of L2 cache instead of 2 MB. Both of them had an 800 MHz front-side bus (FSB). They targeted the budget market above the Intel Celeron (Conroe-L single-core series) processors featuring only 512 KB of L2 cache. Such a step marked a change in the Pentium brand, relegating it to the budget segment rather than its former position as a mainstream or premium brand.
These CPUs are highly overclockable.
Merom-2M
The mobile version of the Allendale processor, the Merom-2M, was also introduced in 2007, featuring 1MB of L2 cache but only 533 MT/s FSB with the T23xx processors. The bus clock was subsequently raised to 667 MT/s with the T3xxx Pentium processors that are made from the same dies.
Wolfdale-3M
The 45 nm E5200 model was released by Intel on August 31, 2008, with a larger 2MB L2 cache over the 65 nm E21xx series and the 2.5 GHz clock speed. The E5200 model is also a highly overclockable processor, with many reaching over 3.75 GHz clock speed using just the stock Intel cooler. Intel released the E6500K model using this core. The model features an unlocked multiplier, but was only sold in China.
Penryn-3M
The Penryn core is the successor to the Merom core and Intel's 45 nm version of their mobile series of Pentium Dual-Core microprocessors. The FSB is increased from 667 MHz to 800 MHz and the voltage is lowered. Intel released the first Penryn based Pentium Dual-Core, the T4200, in December 2008. Later, mobile Pentium T4000, SU2000 and SU4000 processors based on Penryn were marketed as Pentium.
Rebranding
The Pentium Dual-Core brand was discontinued in early 2010 and replaced by the Pentium name. The desktop E6000 series and the OEM-only mobile Pentium SU2000 and all later models were always called Pentium, but the desktop Pentium Dual-Core E2000 and E5000 series processors had to be rebranded.
Comparison to the Pentium D
Although using the Pentium name, the desktop Pentium Dual-Core is based on the Core microarchitecture, which can clearly be seen when comparing its specification to that of the Pentium D, which is based on the NetBurst microarchitecture first introduced in the Pentium 4. Where the Core 2 Duo has 2 or 4 MB of shared L2 cache, the desktop Pentium Dual-Core has 1 or 2 MB of shared L2 cache. In contrast, the Pentium D processors have either 2 or 4 MB of non-shared L2 cache. Additionally, the fastest-clocked Pentium D has a factory boundary of 3.73 GHz, while the fastest-clocked desktop Pentium Dual-Core reaches 3.2 GHz. A major difference among these processors is that the desktop Pentium Dual-Core processors have a TDP of only 65 W, while the Pentium D ranges between 95 and 130 W. Despite the reduced clock speed and lower amounts of cache, the Pentium Dual-Core outperformed the Pentium D by a fairly large margin.
See also
Pentium
List of Intel Pentium Dual-Core microprocessors
List of Intel Pentium microprocessors
References
External links
Computer-related introductions in 2006
Intel x86 microprocessors
AUTOEXEC.BAT
AUTOEXEC.BAT is a system file that was originally on DOS-type operating systems. It is a plain-text batch file in the root directory of the boot device. The name of the file is an abbreviation of "automatic execution", which describes its function in automatically executing commands on system startup; the filename was coined in response to the 8.3 filename limitations of the FAT file system family.
Usage
AUTOEXEC.BAT is read upon startup by all versions of DOS, including MS-DOS version 7.x as used in Windows 95 and Windows 98. Windows ME only parses environment variables as part of its attempts to reduce legacy dependencies, but this can be worked around.
The filename was also used by DCP, an MS-DOS derivative by the former East German VEB Robotron.
In Korean versions of MS-DOS/PC DOS 4.01 and higher (except for PC DOS 7 and 2000), if the current country code is set to 82 (for Korea) and no /P:filename is given and no default AUTOEXEC.BAT is found, COMMAND.COM will look for a file named KAUTOEXE.BAT instead in order to ensure that the DBCS frontend drivers will be loaded even without properly set up CONFIG.SYS and AUTOEXEC.BAT files.
Under DOS, the file is executed by the primary copy of the command-line processor (typically COMMAND.COM) once the operating system has booted and the CONFIG.SYS file processing has finished. While DOS by itself provides no means to pass batch file parameters to COMMAND.COM for AUTOEXEC.BAT processing, the alternative command-line processor 4DOS supports a 4DOS.INI AutoExecParams directive and //AutoExecParams= startup option to define such parameters. Under Concurrent DOS, Multiuser DOS and REAL/32, three initial parameters will be passed to either the corresponding STARTxxy.BAT (if it exists) or the generic AUTOEXEC.BAT startup file, %1 holds the virtual console number, %2 the 2-digit terminal number (xx) (with 00 being the main console) and %3 the 1-digit session number (y).
Windows NT and its descendants Windows XP and Windows Vista parse AUTOEXEC.BAT when a user logs on. As with Windows ME, anything other than setting environment variables is ignored. Unlike CONFIG.SYS, the commands in AUTOEXEC.BAT can be entered at the interactive command line interpreter. They are just standard commands that the computer operator wants to be executed automatically whenever the computer is started, and can include other batch files.
AUTOEXEC.BAT is most often used to set environment variables such as keyboard, soundcard, printer, and temporary file locations. It is also used to initiate low level system utilities, such as the following:
Virus scanners
Disk caching software
Mouse drivers
Keyboard drivers
CD drivers
Miscellaneous other drivers
Example
In early versions of DOS, AUTOEXEC.BAT was by default very simple. The DATE and TIME commands were necessary as early PC and XT class machines did not have a battery backed-up real-time clock as default.
@ECHO OFF
CLS
DATE
TIME
VER
In non-US environments, the keyboard driver (like KEYB FR for the French keyboard) was also included. Later versions were often much expanded with numerous third-party device drivers. The following is a basic DOS 5 type AUTOEXEC.BAT configuration, consisting only of essential commands:
@ECHO OFF
PROMPT $P$G
PATH C:\DOS;C:\WINDOWS
SET TEMP=C:\TEMP
SET BLASTER=A220 I7 D1 T2
LH SMARTDRV.EXE
LH DOSKEY
LH MOUSE.COM /Y
This configuration sets common environment variables, loads a disk cache, places common directories into the default PATH, and initializes the DOS mouse / keyboard drivers. The PROMPT command sets the prompt to "C:\>" (when the working directory is the root of the C drive) instead of simply "C>" (the default prompt, indicating only the working drive and not the directory therein).
In general, device drivers were loaded in CONFIG.SYS, and programs were loaded in the AUTOEXEC.BAT file. Some devices, such as mice, could be loaded either as a device driver in CONFIG.SYS, or as a TSR in AUTOEXEC.BAT, depending upon the manufacturer.
In MS-DOS 6.0 and higher, a DOS boot menu is configurable. This can be of great help to users who wish to have optimized boot configurations for various programs, such as DOS games and Windows.
@ECHO OFF
PROMPT $P$G
PATH C:\DOS;C:\WINDOWS
SET TEMP=C:\TEMP
SET BLASTER=A220 I7 D1 T2
GOTO %CONFIG%
:WIN
LH SMARTDRV.EXE
LH MOUSE.COM /Y
WIN
GOTO END
:XMS
LH SMARTDRV.EXE
LH DOSKEY
GOTO END
:END
The GOTO %CONFIG% line informs DOS to look up menu entries that were defined within CONFIG.SYS. Then, these profiles are named here and configured with the desired specific drivers and utilities. At the desired end of each specific configuration, a GOTO command redirects DOS to the :END section. Lines after :END will be used by all profiles.
Dual-booting DOS and Windows 9x
When installing Windows 95 over a preexisting DOS/Windows install, CONFIG.SYS and AUTOEXEC.BAT are renamed to CONFIG.DOS and AUTOEXEC.DOS. This is intended to ease dual booting between Windows 9x and DOS. When booting into DOS, they are temporarily renamed CONFIG.SYS and AUTOEXEC.BAT. Backups of the Windows 9x versions are made as .W40 files.
Windows 9x also installs MSDOS.SYS, a text configuration file containing switches that designate how the system will boot. One of these, BootGUI, controls whether or not the system automatically goes into Windows: if BootGUI=0 is set, Windows 95/98 is not started automatically and a DOS prompt appears on the screen instead (Windows can still be loaded by calling the WIN command, i.e. the file WIN.COM). By doing this, the system's operation essentially becomes that of a DOS/Windows pairing as with earlier Windows versions, and Windows can be started as desired by typing WIN at the DOS prompt.
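As a minimal sketch of the relevant MSDOS.SYS entries (the directory paths shown are illustrative and depend on the installation), the file might contain:

[Paths]
WinDir=C:\WINDOWS
WinBootDir=C:\WINDOWS
HostWinBootDrv=C

[Options]
BootGUI=0
BootMulti=1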
When installing Caldera DR-DOS 7.02 and higher, the Windows version retains the name AUTOEXEC.BAT, while the file used by the DR-DOS COMMAND.COM is named AUTODOS7.BAT, referred to by the startup parameter /P:filename.ext in the SHELL directive. It also differentiates the CONFIG.SYS file by using the name DCONFIG.SYS.
OS/2
The equivalent to AUTOEXEC.BAT under OS/2 is the OS/2 STARTUP.CMD file, however, genuine DOS sessions booted under OS/2 continue to use AUTOEXEC.BAT.
Windows NT
On Windows NT and its derivatives, Windows 2000, Windows Server 2003 and Windows XP, the equivalent file is called AUTOEXEC.NT and is located in the %SystemRoot%\system32 directory. The file is not used during the operating system boot process; it is executed when the MS-DOS environment is started, which occurs when a DOS application is loaded.
The AUTOEXEC.BAT file may often be found on Windows NT in the root directory of the boot drive. Windows only considers the SET and PATH statements which it contains, in order to define environment variables global to all users. Setting environment variables through this file may be useful if, for example, MS-DOS is also booted from this drive (this requires that the drive be FAT-formatted) or to keep the variables across a reinstall. This is an exotic usage today, so the file usually remains empty. The Tweak UI applet from the Microsoft PowerToys collection allows control of this feature (Parse AUTOEXEC.BAT at logon).
See also
COMMAND.COM
IBMBIO.COM / IO.SYS
IBMDOS.COM / MSDOS.SYS
SHELL (CONFIG.SYS directive)
CONFIG.SYS
AUTORUN.INF
References
DOS files
Configuration files
MSX-DOS
Omega2 (computer)
The Onion Omega is a series of personal single-board computers created by a startup company called Onion, based in Boston, Toronto and Shenzhen. It is advertised as "the world's smallest Linux Server". The system combines a tiny form factor and power efficiency with the power of a general-purpose operating system. The boards ship with OpenWrt, a lightweight Linux-based operating system for embedded systems, but are capable of running other lightweight Unix-based operating systems.
The first shipments of the Onion Omega went out in October, 2015.
History
The Omega2 is the successor to Onion's earlier product, the Omega. The original Omega was based on the Qualcomm Atheros AR9331 (MIPS architecture) SoC, ran a full Linux operating system designed for embedded systems, and sold for $19.99. The company discontinued development of the Omega and replaced it with the Omega2, which uses a different SoC, the MediaTek MT7688, with a metal cover over the chip. Onion also drastically cut the price to $5 (but later increased it to $7.50).
By the beginning of 2017, Onion had attracted crowdfunding of more than $850,000 for the Omega2, greatly exceeding its initial goal of $440,000.
Hardware Features
The Omega2 comes in two versions, the basic Omega2 and the Omega2 Plus. The Omega2 CPU is based on the MIPS architecture running at a 580 MHz clock speed, and the board is equipped with 64 MB of RAM and 16 MB of flash memory. The Omega2 Plus is similar, except that it has 128 MB of RAM, 32 MB of flash memory and a microSD slot, and sells for $9. The system comes in a small PCB footprint with dual in-line 16×2 mm pins. The board runs at 3.3 volts with an average power consumption of 0.6 W. The devices are intended for use as headless computers with no graphical interface in embedded systems.
References
External links
Official website
MIPS32 Architecture
2016 establishments in the United States
Educational hardware
Linux-based devices
MIPS architecture
Products introduced in 2016
Single-board computers
Raspberry Pi
Raspberry Pi is a series of small single-board computers (SBCs) developed in the United Kingdom by the Raspberry Pi Foundation in association with Broadcom. The Raspberry Pi project originally leaned towards the promotion of teaching basic computer science in schools and in developing countries. The original model became more popular than anticipated, selling outside its target market for uses such as robotics. It is widely used in many areas, such as for weather monitoring, because of its low cost, modularity, and open design. It is typically used by computer and electronic hobbyists, due to its use of standard HDMI and USB interfaces.
After the release of the second board type, the Raspberry Pi Foundation set up a new entity, named Raspberry Pi Trading, and installed Eben Upton as CEO, with the responsibility of developing technology. The Foundation was rededicated as an educational charity for promoting the teaching of basic computer science in schools and developing countries. Most Pis are made in a Sony factory in Pencoed, Wales, while others are made in China and Japan.
Series and generations
There are three series of Raspberry Pi, and several generations of each have been released. Raspberry Pi SBCs feature a Broadcom system on a chip (SoC) with an integrated ARM-compatible central processing unit (CPU) and on-chip graphics processing unit (GPU), while the Raspberry Pi Pico has an RP2040 system on chip with an integrated ARM-compatible central processing unit (CPU).
Raspberry Pi
The first generation (Raspberry Pi Model B) was released in February 2012, followed by the simpler and cheaper Model A.
In 2014, the Foundation released a board with an improved design, Raspberry Pi Model B+. These first generation boards feature ARM11 processors, are approximately credit-card sized and represent the standard mainline form-factor. Improved A+ and B+ models were released a year later. A "Compute Module" was released in April 2014 for embedded applications.
The Raspberry Pi 2 was released in February 2015 and initially featured a 900 MHz 32-bit quad-core ARM Cortex-A7 processor with 1 GB RAM. Revision 1.2 featured a 900 MHz 64-bit quad-core ARM Cortex-A53 processor (the same as that in the Raspberry Pi 3 Model B, but underclocked to 900 MHz).
Raspberry Pi 3 Model B was released in February 2016 with a 1.2 GHz 64-bit quad core ARM Cortex-A53 processor, on-board 802.11n Wi-Fi, Bluetooth and USB boot capabilities.
On Pi Day 2018, the Raspberry Pi 3 Model B+ was launched with a faster 1.4 GHz processor, a three-times faster gigabit Ethernet (throughput limited to ca. 300 Mbit/s by the internal USB 2.0 connection), and 2.4 / 5 GHz dual-band 802.11ac Wi-Fi (100 Mbit/s). Other features are Power over Ethernet (PoE) (with the add-on PoE HAT), USB boot and network boot (an SD card is no longer required).
Raspberry Pi 4 Model B was released in June 2019 with a 1.5 GHz 64-bit quad core ARM Cortex-A72 processor, on-board 802.11ac Wi-Fi, Bluetooth 5, full gigabit Ethernet (throughput not limited), two USB 2.0 ports, two USB 3.0 ports, 2-8 GB of RAM, and dual-monitor support via a pair of micro HDMI (HDMI Type D) ports for up to 4K resolution. The version with 1 GB RAM has been abandoned and the prices of the 2 GB version have been reduced. The 8 GB version has a revised circuit board. The Pi 4 is also powered via a USB-C port, enabling additional power to be provided to downstream peripherals, when used with an appropriate PSU. However, the Pi 4 can only be operated from 5 volts, not 9 or 12 volts like other mini computers of this class. The initial Raspberry Pi 4 board has a design flaw where third-party e-marked USB cables, such as those used on Apple MacBooks, incorrectly identify it and refuse to provide power. Tom's Hardware tested 14 different cables and found that 11 of them turned on and powered the Pi without issue. The design flaw was fixed in revision 1.2 of the board, released in late 2019. In mid-2021, Pi 4 B models appeared with the improved Broadcom BCM2711C0. The manufacturer is now using this chip for the Pi 4 B and Pi 400. However, the clock frequency of the Pi 4 B was not increased at the factory.
Raspberry Pi 400 was released in November 2020. It features a custom board that is derived from the existing Raspberry Pi 4, specifically remodelled with a keyboard attached. The case was derived from that of the Raspberry Pi Keyboard. A robust cooling solution (i.e. a broad metal plate) and an upgraded switched-mode power supply allow the Raspberry Pi 400's Broadcom BCM2711C0 processor to be clocked at 1.8 GHz, which is slightly higher than the Raspberry Pi 4 it's based on. The keyboard-computer features 4 GB of LPDDR4 RAM.
Raspberry Pi Zero
A Raspberry Pi Zero with smaller size and reduced input/output (I/O) and general-purpose input/output (GPIO) capabilities was released in November 2015 for US$5.
On 28 February 2017, the Raspberry Pi Zero W was launched, a version of the Zero with Wi-Fi and Bluetooth capabilities, for US$10.
On 12 January 2018, the Raspberry Pi Zero WH was launched, a version of the Zero W with pre-soldered GPIO headers.
On 28 October 2021, the Raspberry Pi Zero 2 W was launched, a version of the Zero W with a system in a package (SiP) designed by Raspberry Pi and based on the Raspberry Pi 3. In contrast to the older Zero models, the Zero 2 W is 64-bit capable. The price is around US$15.
Raspberry Pi Pico
Raspberry Pi Pico was released in January 2021 with a retail price of $4. It was Raspberry Pi's first board based upon a single microcontroller chip: the RP2040, which was designed by Raspberry Pi in the UK. The Pico has 264 KB of RAM and 2 MB of flash memory. It is programmable in MicroPython, CircuitPython, C and Rust. Raspberry Pi has partnered with Vilros, Adafruit, Pimoroni, Arduino and SparkFun to build accessories for the Raspberry Pi Pico and a variety of other boards using the RP2040 silicon platform. Rather than performing the role of a general purpose computer (like the others in the range), it is designed for physical computing, similar in concept to an Arduino.
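As a minimal MicroPython sketch for the Pico (assuming the onboard LED of the original Pico on GPIO 25, which is not stated above), a program that blinks the LED looks roughly like this:

    from machine import Pin   # MicroPython hardware-access module on the RP2040 port
    import time

    led = Pin(25, Pin.OUT)    # GPIO 25 drives the onboard LED on the original Pico (assumption)

    while True:
        led.toggle()          # invert the current LED state
        time.sleep(0.5)       # wait half a second between toggles

Saved as main.py on the board, such a script runs automatically when the Pico is powered up.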
Model comparison
As of 4 May 2021, the Foundation is committed to manufacturing most Pi models until at least January 2026. Even the 1 GB Pi 4B can still be specially ordered.
Hardware
The Raspberry Pi hardware has evolved through several versions that feature variations in the type of the central processing unit, amount of memory capacity, networking support, and peripheral-device support.
At the block-diagram level, models B, B+, A and A+ share a similar design. The Pi Zero models are similar, but lack the Ethernet and USB hub components. The Ethernet adapter is internally connected to an additional USB port. In Model A, A+, and the Pi Zero, the USB port is connected directly to the system on a chip (SoC). On the Pi 1 Model B+ and later models the USB/Ethernet chip contains a five-port USB hub, of which four ports are available, while the Pi 1 Model B only provides two. On the Pi Zero, the USB port is also connected directly to the SoC, but it uses a micro USB (OTG) port. Unlike all other Pi models, the 40 pin GPIO connector is omitted on the Pi Zero, with solderable through-holes only in the pin locations. The Pi Zero WH remedies this.
Processor speed ranges from 700 MHz to 1.4 GHz for the Pi 3 Model B+ or 1.5 GHz for the Pi 4; on-board memory ranges from 256 MB to 8 GB random-access memory (RAM), with only the Raspberry Pi 4 having more than 1 GB. Secure Digital (SD) cards in MicroSDHC form factor (SDHC on early models) are used to store the operating system and program memory; however, some models also come with onboard eMMC storage, and the Raspberry Pi 4 can also make use of USB-attached SSD storage for its operating system. The boards have one to five USB ports. For video output, HDMI and composite video are supported, with a standard 3.5 mm tip-ring-sleeve jack for audio output. Lower-level output is provided by a number of GPIO pins, which support common protocols like I²C. The B-models have an 8P8C Ethernet port and the Pi 3, Pi 4 and Pi Zero W have on-board Wi-Fi 802.11n and Bluetooth.
Processor
The Broadcom BCM2835 SoC used in the first generation Raspberry Pi includes a 700 MHz ARM1176JZF-S processor, VideoCore IV graphics processing unit (GPU), and RAM. It has a level 1 (L1) cache of 16 KB and a level 2 (L2) cache of 128 KB. The level 2 cache is used primarily by the GPU. The SoC is stacked underneath the RAM chip, so only its edge is visible. The ARM1176JZ(F)-S is the same CPU used in the original iPhone, although at a higher clock rate, and mated with a much faster GPU.
The earlier V1.1 model of the Raspberry Pi 2 used a Broadcom BCM2836 SoC with a 900 MHz 32-bit, quad-core ARM Cortex-A7 processor, with 256 KB shared L2 cache. The Raspberry Pi 2 V1.2 was upgraded to a Broadcom BCM2837 SoC with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 processor, the same one which is used on the Raspberry Pi 3, but underclocked (by default) to the same 900 MHz CPU clock speed as the V1.1. The BCM2836 SoC is no longer in production as of late 2016.
The Raspberry Pi 3 Model B uses a Broadcom BCM2837 SoC with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 processor, with 512 KB shared L2 cache. The Model A+ and B+ run at 1.4 GHz.
The Raspberry Pi 4 uses a Broadcom BCM2711 SoC with a 1.5 GHz (later models: 1.8 GHz) 64-bit quad-core ARM Cortex-A72 processor, with 1 MB shared L2 cache. Unlike previous models, which all used a custom interrupt controller poorly suited for virtualisation, the interrupt controller on this SoC is compatible with the ARM Generic Interrupt Controller (GIC) architecture 2.0, providing hardware support for interrupt distribution when using ARM virtualisation capabilities.
The Raspberry Pi Zero and Zero W use the same Broadcom BCM2835 SoC as the first generation Raspberry Pi, although now running at 1 GHz CPU clock speed.
The Raspberry Pi Zero 2 W uses the RP3A0-AU, a system in package combining a 1 GHz 64-bit ARM Cortex-A53 CPU with 512 MB of SDRAM. Documentation states this "system-in-package" is a Broadcom BCM2710A1 package using a Broadcom BCM2837 chip as its core, an ARMv8 quad-core. The Raspberry Pi 3 also uses the BCM2837, but clocked at 1.2 GHz, whereas the Zero 2 W runs at 1 GHz.
The Raspberry Pi Pico uses the RP2040 running at 133 MHz.
Performance
While operating at 700 MHz by default, the first generation Raspberry Pi provided a real-world performance roughly equivalent to 0.041 GFLOPS. On the CPU level the performance is similar to a 300 MHz Pentium II of 1997–99. The GPU provides 1 Gpixel/s or 1.5 Gtexel/s of graphics processing or 24 GFLOPS of general purpose computing performance. The graphical capabilities of the Raspberry Pi are roughly equivalent to the performance of the Xbox of 2001.
Raspberry Pi 2 V1.1 included a quad-core Cortex-A7 CPU running at 900 MHz and 1 GB RAM. It was described as 4–6 times more powerful than its predecessor. The GPU was identical to the original. In parallelised benchmarks, the Raspberry Pi 2 V1.1 could be up to 14 times faster than a Raspberry Pi 1 Model B+.
The Raspberry Pi 3, with a quad-core Cortex-A53 processor, is described as having ten times the performance of a Raspberry Pi 1. Benchmarks showed the Raspberry Pi 3 to be approximately 80% faster than the Raspberry Pi 2 in parallelised tasks.
The Raspberry Pi 4, with a quad-core Cortex-A72 processor, is described as having three times the performance of a Raspberry Pi 3.
Overclocking
Most Raspberry Pi systems-on-chip could be overclocked to 800 MHz, and some to 1000 MHz. There are reports that the Raspberry Pi 2 can be similarly overclocked, in extreme cases even to 1500 MHz (discarding all safety features and over-voltage limitations). In Raspberry Pi OS, overclocking options applied at boot can be configured by running "sudo raspi-config" without voiding the warranty. In those cases the Pi automatically shuts the overclocking down if the chip temperature reaches a set limit, but it is possible to override automatic over-voltage and overclocking settings (voiding the warranty); an appropriately sized heat sink is needed to protect the chip from serious overheating.
Newer versions of the firmware contain the option to choose between five overclock ("turbo") presets that, when used, attempt to maximise the performance of the SoC without impairing the lifetime of the board. This is done by monitoring the core temperature of the chip and the CPU load, and dynamically adjusting clock speeds and the core voltage. When the demand is low on the CPU or it is running too hot, the performance is throttled, but if the CPU has much to do and the chip's temperature is acceptable, performance is temporarily increased with clock speeds of up to 1 GHz, depending on the board version and on which of the turbo settings is used.
The overclocking modes are:
none; 700 MHz ARM, 250 MHz core, 400 MHz SDRAM, 0 overvolting,
modest; 800 MHz ARM, 250 MHz core, 400 MHz SDRAM, 0 overvolting,
medium; 900 MHz ARM, 250 MHz core, 450 MHz SDRAM, 2 overvolting,
high; 950 MHz ARM, 250 MHz core, 450 MHz SDRAM, 6 overvolting,
turbo; 1000 MHz ARM, 500 MHz core, 600 MHz SDRAM, 6 overvolting,
Pi 2; 1000 MHz ARM, 500 MHz core, 500 MHz SDRAM, 2 overvolting,
Pi 3; 1100 MHz ARM, 550 MHz core, 500 MHz SDRAM, 6 overvolting. In system information the CPU speed appears as 1200 MHz. When idling, speed lowers to 600 MHz.
In the highest (turbo) mode the SDRAM clock speed was originally 500 MHz, but this was later changed to 600 MHz because of occasional SD card corruption. Simultaneously, in high mode the core clock speed was lowered from 450 to 250 MHz, and in medium mode from 333 to 250 MHz.
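These presets correspond to entries that raspi-config writes into the config.txt boot file; for example, the "medium" preset listed above roughly translates to the following illustrative lines (the file path and option names follow the standard Raspberry Pi firmware conventions, not the text above):

    # /boot/config.txt - "medium" overclock preset (values from the list above)
    arm_freq=900       # ARM CPU clock in MHz
    core_freq=250      # GPU core clock in MHz
    sdram_freq=450     # SDRAM clock in MHz
    over_voltage=2     # over-voltage step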
The CPU of the first and second generation Raspberry Pi board did not require cooling with a heat sink or fan, even when overclocked, but the Raspberry Pi 3 may generate more heat when overclocked.
RAM
The early designs of the Raspberry Pi Model A and B boards included only 256 MB of random access memory (RAM). Of this, the early beta Model B boards allocated 128 MB to the GPU by default, leaving only 128 MB for the CPU. On the early 256 MB releases of models A and B, three different splits were possible. The default split was 192 MB for the CPU, which should be sufficient for standalone 1080p video decoding, or for simple 3D processing. 224 MB was for Linux processing only, with only a 1080p framebuffer, and was likely to fail for any video or 3D. 128 MB was for heavy 3D processing, possibly also with video decoding. In comparison, the Nokia 701 uses 128 MB for the Broadcom VideoCore IV.
The later Model B with 512 MB RAM, was released on 15 October 2012 and was initially released with new standard memory split files (arm256_start.elf, arm384_start.elf, arm496_start.elf) with 256 MB, 384 MB, and 496 MB CPU RAM, and with 256 MB, 128 MB, and 16 MB video RAM, respectively. But about one week later, the foundation released a new version of start.elf that could read a new entry in config.txt (gpu_mem=xx) and could dynamically assign an amount of RAM (from 16 to 256 MB in 8 MB steps) to the GPU, obsoleting the older method of splitting memory, and a single start.elf worked the same for 256 MB and 512 MB Raspberry Pis.
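With the newer mechanism described above, assigning memory to the GPU is a single config.txt entry; a hedged example giving the GPU 128 MB:

    # /boot/config.txt
    gpu_mem=128        # RAM reserved for the GPU in MB (16-256 in 8 MB steps on these models)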
The Raspberry Pi 2 has 1 GB of RAM.
The Raspberry Pi 3 has 1 GB of RAM in the B and B+ models, and 512 MB of RAM in the A+ model. The Raspberry Pi Zero and Zero W have 512 MB of RAM.
The Raspberry Pi 4 is available with 2, 4 or 8 GB of RAM. A 1 GB model was originally available at launch in June 2019 but was discontinued in March 2020, and the 8 GB model was introduced in May 2020.
Networking
The Model A, A+ and Pi Zero have no Ethernet circuitry and are commonly connected to a network using an external user-supplied USB Ethernet or Wi-Fi adapter. On the Model B and B+ the Ethernet port is provided by a built-in USB Ethernet adapter using the SMSC LAN9514 chip. The Raspberry Pi 3 and Pi Zero W (wireless) are equipped with 2.4 GHz WiFi 802.11n and Bluetooth 4.1 based on the Broadcom BCM43438 FullMAC chip with no official support for monitor mode (though it was implemented through unofficial firmware patching) and the Pi 3 also has a 10/100 Mbit/s Ethernet port. The Raspberry Pi 3B+ features dual-band IEEE 802.11b/g/n/ac WiFi, Bluetooth 4.2, and Gigabit Ethernet (limited to approximately 300 Mbit/s by the USB 2.0 bus between it and the SoC). The Raspberry Pi 4 has full gigabit Ethernet (throughput is not limited as it is not funnelled via the USB chip).
Special-purpose features
The RPi Zero, RPi1A, RPi3A+ and RPi4 can be used as a USB device or "USB gadget", plugged into a USB port on another machine. It can be configured in multiple ways, for example to show up as a serial device or an Ethernet device. Although this originally required software patches, support was added to the mainline Raspbian distribution in May 2016.
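As a sketch of how this gadget mode is commonly enabled on a Pi Zero running Raspbian, the usual approach is a device-tree overlay plus a kernel module list; the overlay and module names below are the commonly documented ones and are assumptions, not taken from the text above:

    # /boot/config.txt - enable the USB device-mode (gadget) controller
    dtoverlay=dwc2

    # /boot/cmdline.txt - appended to the existing single line, after rootwait
    modules-load=dwc2,g_ether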
Raspberry Pi models with a newer chipset can boot from USB mass storage, such as from a flash drive. Booting from USB mass storage is not available in the original Raspberry Pi models, the Raspberry Pi Zero, the Raspberry Pi Pico, the Raspberry Pi 2 A models and in Raspberry Pi 2 B models with a lower version than 1.2.
Peripherals
Although often pre-configured to operate as a headless computer, the Raspberry Pi may also optionally be operated with any generic USB computer keyboard and mouse. It may also be used with USB storage, USB to MIDI converters, and virtually any other device/component with USB capabilities, depending on the installed device drivers in the underlying operating system (many of which are included by default).
Other peripherals can be attached through the various pins and connectors on the surface of the Raspberry Pi.
Video
The video controller can generate standard modern TV resolutions, such as HD and Full HD, and higher or lower monitor resolutions as well as older NTSC or PAL standard CRT TV resolutions. As shipped (i.e., without custom overclocking) it can support the following resolutions: 640×350 EGA; 640×480 VGA; 800×600 SVGA; 1024×768 XGA; 1280×720 720p HDTV; 1280×768 WXGA variant; 1280×800 WXGA variant; 1280×1024 SXGA; 1366×768 WXGA variant; 1400×1050 SXGA+; 1600×1200 UXGA; 1680×1050 WXGA+; 1920×1080 1080p HDTV; 1920×1200 WUXGA.
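Selecting one of these resolutions explicitly is normally done through the HDMI options in config.txt; for instance, forcing 1080p at 60 Hz might look like the following sketch (hdmi_group and hdmi_mode are the standard firmware options, assumed here rather than described above):

    # /boot/config.txt - force a specific HDMI mode
    hdmi_group=1       # 1 = CEA (TV) timings, 2 = DMT (monitor) timings
    hdmi_mode=16       # CEA mode 16 = 1920x1080 at 60 Hz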
Higher resolutions, up to 2048×1152, may work or even 3840×2160 at 15 Hz (too low a frame rate for convincing video). Allowing the highest resolutions does not imply that the GPU can decode video formats at these resolutions; in fact, the Raspberry Pis are known to not work reliably for H.265 (at those high resolutions), commonly used for very high resolutions (however, most common formats up to Full HD do work).
Although the Raspberry Pi 3 does not have H.265 decoding hardware, the CPU is more powerful than its predecessors, potentially fast enough to allow the decoding of H.265-encoded videos in software. The GPU in the Raspberry Pi 3 runs at higher clock frequencies of 300 MHz or 400 MHz, compared to previous versions which ran at 250 MHz.
The Raspberry Pis can also generate 576i and 480i composite video signals, as used on old-style (CRT) TV screens and less-expensive monitors through standard connectors – either RCA or 3.5 mm phono connector depending on model. The television signal standards supported are PAL-B/G/H/I/D, PAL-M, PAL-N, NTSC and NTSC-J.
Real-time clock
When booting, the time defaults to being set over the network using the Network Time Protocol (NTP). The source of time information can be another computer on the local network that does have a real-time clock, or an NTP server on the internet. If no network connection is available, the time may be set manually or configured to assume that no time passed during the shutdown. In the latter case, the time is monotonic (files saved later in time always have later timestamps) but may be considerably earlier than the actual time. For systems that require a built-in real-time clock, a number of small, low-cost add-on boards with real-time clocks are available.
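On Raspberry Pi OS such an add-on real-time clock is typically enabled with a device-tree overlay in config.txt; a sketch for a DS3231-based board follows (the overlay and parameter names are assumptions based on common documentation, not the text above):

    # /boot/config.txt - enable an I2C real-time clock module
    dtparam=i2c_arm=on        # make sure the I2C bus is enabled
    dtoverlay=i2c-rtc,ds3231  # load the driver for a DS3231 RTC board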
The RP2040 microcontroller has a built-in real-time clock but this can not be set automatically without some form of user entry or network facility being added.
Connectors
Pi Pico
Pi Compute Module
Pi Zero
Model A
Model B
General purpose input-output (GPIO) connector
Raspberry Pi 1 Models A+ and B+, Pi 2 Model B, Pi 3 Models A+, B and B+, Pi 4, and Pi Zero, Zero W, and Zero WH GPIO J8 have a 40-pin pinout. Raspberry Pi 1 Models A and B have only the first 26 pins.
In the Pi Zero and Zero W, the 40 GPIO pins are unpopulated, having the through-holes exposed for soldering instead. The Zero WH (Wireless + Header) has the header pins preinstalled.
Model B rev. 2 also has a pad (called P5 on the board and P6 on the schematics) of 8 pins offering access to an additional 4 GPIO connections. These GPIO pins were freed when the four board version identification links present in revision 1.0 were removed.
Models A and B provide GPIO access to the ACT status LED using GPIO 16. Models A+ and B+ provide GPIO access to the ACT status LED using GPIO 47, and the power status LED using GPIO 35.
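A minimal Python example of driving one of these GPIO pins with the widely used RPi.GPIO library follows; GPIO 17 is an arbitrary free pin chosen for illustration and is not mentioned above:

    import time
    import RPi.GPIO as GPIO      # standard Raspberry Pi GPIO library

    GPIO.setmode(GPIO.BCM)       # use Broadcom (BCM) pin numbering
    GPIO.setup(17, GPIO.OUT)     # configure GPIO 17 as an output

    try:
        for _ in range(10):      # blink an LED wired to GPIO 17 ten times
            GPIO.output(17, GPIO.HIGH)
            time.sleep(0.5)
            GPIO.output(17, GPIO.LOW)
            time.sleep(0.5)
    finally:
        GPIO.cleanup()           # release the pins on exit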
Specifications
Simplified Model B Changelog
Software
Operating systems
The Raspberry Pi Foundation provides Raspberry Pi OS (formerly called Raspbian), a Debian-based Linux distribution for download, as well as third-party Ubuntu, Windows 10 IoT Core, RISC OS, LibreELEC (specialised media centre distribution) and specialised distributions for the Kodi media centre and classroom management. It promotes Python and Scratch as the main programming languages, with support for many other languages. The default firmware is closed source, while unofficial open source is available. Many other operating systems can also run on the Raspberry Pi. The formally verified microkernel seL4 is also supported. There are several ways of installing multiple operating systems on one SD card.
Other operating systems (not Linux- nor BSD-based)
Broadcom VCOS – Proprietary operating system which includes an abstraction layer designed to integrate with existing kernels, such as ThreadX (which is used on the VideoCore4 processor), providing drivers and middleware for application development. In the case of the Raspberry Pi, this includes an application to start the ARM processor(s) and provide the publicly documented API over a mailbox interface, serving as its firmware. An incomplete source of a Linux port of VCOS is available as part of the reference graphics driver published by Broadcom.
Haiku – an open source BeOS clone that has been compiled for the Raspberry Pi and several other ARM boards. Work on Pi 1 began in 2011, but only the Pi 2 will be supported.
HelenOS – a portable microkernel-based multiserver operating system; has basic Raspberry Pi support since version 0.6.0
Plan 9 from Bell Labs and Inferno (in beta)
RISC OS Pi (a special cut down version RISC OS Pico, for 16 MB cards and larger for all models of Pi 1 & 2, has also been made available.)
Ultibo Core – an OS-less unikernel run-time library based on Free Pascal, developed using the Lazarus IDE (Windows, with third-party ports to Linux and macOS). Most Pi models are supported.
Windows 10 IoT Core – a zero-price edition of Windows 10 offered by Microsoft that runs natively on the Raspberry Pi 2.
Other operating systems (Linux-based)
Alpine Linux – a Linux distribution based on musl and BusyBox, "designed for power users who appreciate security, simplicity and resource efficiency".
Android Things – an embedded version of the Android operating system designed for IoT device development.
Arch Linux ARM, a port of Arch Linux for ARM processors, and Arch-based Manjaro Linux ARM
Ark OS – designed for website and email self-hosting.
Batocera - a buildroot based Linux OS that uses Emulation Station as its frontend for RetroArch and other emulators plus auxiliary scripts. Instead of a classic Linux distribution with package managers handling individual software updates, Batocera is crafted to behave more like a video game console firmware with all tools and emulators included and updated as a single package during software updates.
CentOS for Raspberry Pi 2 and later
Devuan
emteria.OS – an embedded, managed version of the Android operating system for professional fleet management
Fedora (supports Pi 2 and later since Fedora 25, Pi 1 is supported by some unofficial derivatives) and RedSleeve (a RHEL port) for Raspberry Pi 1
Gentoo Linux
Kali Linux – a Debian-derived distro designed for digital forensics and penetration testing.
openEuler – an open source Linux OS.
openSUSE, SUSE Linux Enterprise Server 12 SP2 and Server 12 SP3 (Commercial support)
OpenWrt – a highly extensible Linux distribution for embedded devices (typically wireless routers). It supports Pi 1, 2, 3, 4 and Zero W.
postmarketOS - distribution based on Alpine Linux, primarily developed for smartphones.
RetroPie - an offshoot of Raspbian OS that uses Emulation Station as its frontend for RetroArch and other emulators like Mupen64 for retro gaming. Hardware like Freeplay tech can help replace Game boy internals with RetroPie emulation.
Sailfish OS with Raspberry Pi 2 (because it uses an ARM Cortex-A7 CPU; the Raspberry Pi 1 uses the different ARMv6 architecture and Sailfish requires ARMv7).
Slackware ARM – version 13.37 and later runs on the Raspberry Pi without modification. The 128–496 MB of available memory on the Raspberry Pi is at least twice the minimum requirement of 64 MB needed to run Slackware Linux on an ARM or i386 system. (Whereas the majority of Linux systems boot into a graphical user interface, Slackware's default user environment is the textual shell / command line interface.) The Fluxbox window manager running under the X Window System requires an additional 48 MB of RAM.
SolydXK – a light Debian-derived distro with Xfce.
Tiny Core Linux – a minimal Linux operating system focused on providing a base system using BusyBox and FLTK. Designed to run primarily in RAM.
Ubuntu-based: Lubuntu and Xubuntu
Void Linux – a rolling release Linux distribution which was designed and implemented from scratch, provides images based on musl or glibc.
Other operating systems (BSD-based)
FreeBSD
NetBSD
OpenBSD (only on 64-bit platforms, such as Raspberry Pi 3)
Driver APIs
The Raspberry Pi can use the VideoCore IV GPU via a binary blob, which is loaded into the GPU at boot time from the SD card, plus additional software that was initially closed source. This part of the driver code was later released. However, much of the actual driver work is done using the closed source GPU code. Application software makes calls to closed source run-time libraries (OpenMAX, OpenGL ES or OpenVG), which in turn call an open source driver inside the Linux kernel, which then calls the closed source VideoCore IV GPU driver code. The API of the kernel driver is specific to these closed libraries. Video applications use OpenMAX, 3D applications use OpenGL ES and 2D applications use OpenVG, both of which in turn use EGL. OpenMAX and EGL use the open source kernel driver in turn.
Vulkan driver
The Raspberry Pi Foundation first announced it was working on a Vulkan driver in February 2020. A working Vulkan driver running Quake 3 at 100 frames per second on a 3B+ was revealed by a graphics engineer who had been working on it as a hobby project on 20 June. On 24 November 2020 Raspberry Pi Foundation announced that their driver for the Raspberry Pi 4 is Vulkan 1.0 conformant. On 26 October 2021 Raspberry Pi Trading announced that their driver for the Raspberry Pi 4 is Vulkan 1.1 conformant.
Firmware
The official firmware is a freely redistributable binary blob, that is proprietary software. A minimal proof-of-concept open source firmware is also available, mainly aimed at initialising and starting the ARM cores as well as performing minimal startup that is required on the ARM side. It is also capable of booting a very minimal Linux kernel, with patches to remove the dependency on the mailbox interface being responsive. It is known to work on Raspberry Pi 1, 2 and 3, as well as some variants of Raspberry Pi Zero.
Third-party application software
AstroPrint – AstroPrint's wireless 3D printing software can be run on the Pi 2.
C/C++ Interpreter Ch – Released 3 January 2017, C/C++ interpreter Ch and Embedded Ch are released free for non-commercial use for Raspberry Pi, ChIDE is also included for the beginners to learn C/C++.
Minecraft – Released 11 February 2013, a modified version that allows players to directly alter the world with computer code.
RealVNC – Since 28 September 2016, Raspbian includes RealVNC's remote access server and viewer software. This includes a new capture technology which allows directly rendered content (e.g. Minecraft, camera preview and omxplayer) as well as non-X11 applications to be viewed and controlled remotely.
UserGate Web Filter – On 20 September 2013, Florida-based security vendor Entensys announced porting UserGate Web Filter to Raspberry Pi platform.
Steam Link – On 13 December 2018, Valve released official Steam Link game streaming client for the Raspberry Pi 3 and 3 B+.
Software development tools
Arduino IDE – for programming an Arduino.
Algoid – for teaching programming to children and beginners.
BlueJ – for teaching Java to beginners.
Greenfoot – Greenfoot teaches object orientation with Java. Create 'actors' which live in 'worlds' to build games, simulations, and other graphical programs.
Julia – an interactive and cross-platform programming language/environment, that runs on the Pi 1 and later. IDEs for Julia, such as Visual Studio Code, are available. See also Pi-specific GitHub repository JuliaBerry.
Lazarus – a Free Pascal RAD IDE
LiveCode – an educational RAD IDE descended from HyperCard using English-like language to write event-handlers for WYSIWYG widgets runnable on desktop, mobile and Raspberry Pi platforms.
Ninja-IDE – a cross-platform integrated development environment (IDE) for Python.
Processing – an IDE built for the electronic arts, new media art, and visual design communities with the purpose of teaching the fundamentals of computer programming in a visual context.
Scratch – a cross-platform teaching IDE using visual blocks that stack like Lego, originally developed by MIT's Lifelong Kindergarten group. The Pi version is very heavily optimised for the limited computer resources available and is implemented in the Squeak Smalltalk system. The latest version compatible with the Pi 2 Model B is 1.6.
Squeak Smalltalk – a full-scale open Smalltalk.
TensorFlow – an artificial intelligence framework developed by Google. The Raspberry Pi Foundation worked with Google to simplify the installation process through pre-built binaries.
Thonny – a Python IDE for beginners.
V-Play Game Engine – a cross-platform development framework that supports mobile game and app development with the V-Play Game Engine, V-Play apps, and V-Play plugins.
Xojo – a cross-platform RAD tool that can create desktop, web and console apps for Pi 2 and Pi 3.
C-STEM Studio – a platform for hands-on integrated learning of computing, science, technology, engineering, and mathematics (C-STEM) with robotics.
Erlang – a functional language for building concurrent systems with light-weight processes and message passing.
LabVIEW Community Edition – a system-design platform and development environment for a visual programming language from National Instruments.
Accessories
Gertboard – A Raspberry Pi Foundation sanctioned device, designed for educational purposes, that expands the Raspberry Pi's GPIO pins to allow interface with and control of LEDs, switches, analogue signals, sensors and other devices. It also includes an optional Arduino compatible controller to interface with the Pi.
Camera – On 14 May 2013, the foundation and the distributors RS Components & Premier Farnell/Element 14 launched the Raspberry Pi camera board alongside a firmware update to accommodate it. The camera board is shipped with a flexible flat cable that plugs into the CSI connector which is located between the Ethernet and HDMI ports. In Raspbian, the user must enable the use of the camera board by running Raspi-config and selecting the camera option. The camera module costs €20 in Europe (9 September 2013). It uses the OmniVision OV5647 image sensor and can produce 1080p, 720p and 640x480p video. The dimensions are . In May 2016, v2 of the camera came out, and is an 8 megapixel camera using a Sony IMX219. A command-line example of capturing images with the camera is shown after this list.
Infrared Camera – In October 2013, the foundation announced that they would begin producing a camera module without an infrared filter, called the Pi NoIR.
Official Display – On 8 September 2015, The foundation and the distributors RS Components & Premier Farnell/Element 14 launched the Raspberry Pi Touch Display
HAT (Hardware Attached on Top) expansion boards – Together with the Model B+, inspired by the Arduino shield boards, the interface for HAT boards was devised by the Raspberry Pi Foundation. Each HAT board carries a small EEPROM (typically a CAT24C32WI-GT3) containing the relevant details of the board, so that the Raspberry Pi's OS is informed of the HAT, and the technical details of it, relevant to the OS using the HAT. Mechanical details of a HAT board, which uses the four mounting holes in their rectangular formation, are available online.
High Quality Camera – In May 2020, the 12.3 megapixel Sony IMX477 sensor camera module was released with support for C- and CS-mount lenses. The unit initially retailed for US$50 with interchangeable lenses starting at US$25.
e-CAM130_CURB – In Nov 2020, the 13 megapixel ON Semiconductor AR1335 sensor camera module was released with support for S-mount lenses. The unit initially retailed for US$99.
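For the camera module described in the list above, once it has been enabled through raspi-config the command-line tools bundled with Raspbian can capture images and video; a typical invocation (the file names are illustrative) is:

    raspistill -o test.jpg           # capture a still image to test.jpg
    raspivid -o test.h264 -t 10000   # record 10 seconds (10000 ms) of H.264 video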
Vulnerability to flashes of light
In February 2015, a switched-mode power supply chip, designated U16, of the Raspberry Pi 2 Model B version 1.1 (the initially released version) was found to be vulnerable to flashes of light, particularly the light from xenon camera flashes and green and red laser pointers. The U16 chip has WL-CSP packaging, which exposes the bare silicon die. The Raspberry Pi Foundation blog recommended covering U16 with opaque material (such as Sugru or Blu-Tak) or putting the Raspberry Pi 2 in a case. This issue was not discovered before the release of the Raspberry Pi 2 because it is not standard or common practice to test susceptibility to optical interference, while commercial electronic devices are routinely subjected to tests of susceptibility to radio interference.
Reception and use
Technology writer Glyn Moody described the project in May 2011 as a "potential ", not by replacing machines but by supplementing them. In March 2012 Stephen Pritchard echoed the BBC Micro successor sentiment in ITPRO. Alex Hope, co-author of the Next Gen report, is hopeful that the computer will engage children with the excitement of programming. Co-author Ian Livingstone suggested that the BBC could be involved in building support for the device, possibly branding it as the BBC Nano. The Centre for Computing History strongly supports the Raspberry Pi project, feeling that it could "usher in a new era". Before release, the board was showcased by ARM's CEO Warren East at an event in Cambridge outlining Google's ideas to improve UK science and technology education.
Harry Fairhead, however, suggests that more emphasis should be put on improving the educational software available on existing hardware, using tools such as Google App Inventor to return programming to schools, rather than adding new hardware choices. Simon Rockman, writing in a ZDNet blog, was of the opinion that teens will have "better things to do", despite what happened in the 1980s.
In October 2012, the Raspberry Pi won T3's Innovation of the Year award, and futurist Mark Pesce cited a (borrowed) Raspberry Pi as the inspiration for his ambient device project MooresCloud. In October 2012, the British Computer Society reacted to the announcement of enhanced specifications by stating, "it's definitely something we'll want to sink our teeth into."
In June 2017, Raspberry Pi won the Royal Academy of Engineering MacRobert Award. The citation for the award to the Raspberry Pi said it was "for its inexpensive credit card-sized microcomputers, which are redefining how people engage with computing, inspiring students to learn coding and computer science and providing innovative control solutions for industry."
Clusters of hundreds of Raspberry Pis have been used for testing programs destined for supercomputers.
Community
The Raspberry Pi community was described by Jamie Ayre of FLOSS software company AdaCore as one of the most exciting parts of the project. Community blogger Russell Davis said that the community strength allows the Foundation to concentrate on documentation and teaching. The community developed a fanzine around the platform called The MagPi which in 2015, was handed over to the Raspberry Pi Foundation by its volunteers to be continued in-house. A series of community Raspberry Jam events have been held across the UK and around the world.
Education
, enquiries about the board in the United Kingdom have been received from schools in both the state and private sectors, with around five times as much interest from the latter. It is hoped that businesses will sponsor purchases for less advantaged schools. The CEO of Premier Farnell said that the government of a country in the Middle East has expressed interest in providing a board to every schoolgirl, to enhance her employment prospects.
In 2014, the Raspberry Pi Foundation hired a number of its community members including ex-teachers and software developers to launch a set of free learning resources for its website. The Foundation also started a teacher training course called Picademy with the aim of helping teachers prepare for teaching the new computing curriculum using the Raspberry Pi in the classroom.
In 2018, NASA launched the JPL Open Source Rover Project, which is a scaled down version of Curiosity rover and uses a Raspberry Pi as the control module, to encourage students and hobbyists to get involved in mechanical, software, electronics, and robotics engineering.
Home automation
There are a number of developers and applications that are using the Raspberry Pi for home automation. These programmers are making an effort to modify the Raspberry Pi into a cost-affordable solution in energy monitoring and power consumption. Because of the relatively low cost of the Raspberry Pi, this has become a popular and economical alternative to the more expensive commercial solutions.
Industrial automation
In June 2014, Polish industrial automation manufacturer TECHBASE released ModBerry, an industrial computer based on the Raspberry Pi Compute Module. The device has a number of interfaces, most notably RS-485/232 serial ports, digital and analogue inputs/outputs, CAN and economical 1-Wire buses, all of which are widely used in the automation industry. The design allows the use of the Compute Module in harsh industrial environments, leading to the conclusion that the Raspberry Pi is no longer limited to home and science projects, but can be widely used as an Industrial IoT solution and achieve goals of Industry 4.0.
In March 2018, SUSE announced commercial support for SUSE Linux Enterprise on the Raspberry Pi 3 Model B to support a number of undisclosed customers implementing industrial monitoring with the Raspberry Pi.
In January 2021, TECHBASE announced a Raspberry Pi Compute Module 4 cluster for AI accelerator, routing and file server use. The device contains one or more standard Raspberry Pi Compute Module 4s in an industrial DIN rail housing, with some versions containing one or more Coral Edge tensor processing units.
Commercial products
The Organelle is a portable synthesizer, a sampler, a sequencer, and an effects processor designed and assembled by Critter & Guitari. It incorporates a Raspberry Pi computer module running Linux.
OTTO is a digital camera created by Next Thing Co. It incorporates a Raspberry Pi Compute Module. It was successfully crowd-funded in a May 2014 Kickstarter campaign.
Slice is a digital media player which also uses a Compute Module as its heart. It was crowd-funded in an August 2014 Kickstarter campaign. The software running on Slice is based on Kodi.
Numerous commercial thin client computer terminals use the Raspberry Pi.
COVID-19 pandemic
In Q1 of 2020, during the coronavirus pandemic, Raspberry Pi computers saw a large increase in demand primarily due to the increase in working from home, but also because of the use of many Raspberry Pi Zeros in ventilators for COVID-19 patients in countries such as Colombia, which were used to combat strain on the healthcare system. In March 2020, Raspberry Pi sales reached 640,000 units, the second largest month of sales in the company's history.
Astro Pi and Proxima
A project was launched in December 2014 at an event held by the UK Space Agency. The Astro Pi was an augmented Raspberry Pi that included a sensor hat with a visible light or infrared camera. The Astro Pi competition, called Principia, officially opened in January and was open to all primary and secondary school aged children who were residents of the United Kingdom. During his mission, British ESA astronaut Tim Peake deployed the computers on board the International Space Station. He loaded the winning code while in orbit, collected the data generated and then sent this to Earth where it was distributed to the winning teams. Covered themes during the competition included spacecraft sensors, satellite imaging, space measurements, data fusion and space radiation.
The organisations involved in the Astro Pi competition include the UK Space Agency, UKspace, Raspberry Pi, ESERO-UK and ESA.
In 2017, the European Space Agency ran another competition open to all students in the European Union called Proxima. The winning programs were run on the ISS by Thomas Pesquet, a French astronaut. In December 2021, a Dragon 2 spacecraft launched by NASA carried a pair of Astro Pi units to the International Space Station.
History
In 2006, early concepts of the Raspberry Pi were based on the Atmel ATmega644 microcontroller. Its schematics and PCB layout are publicly available. Foundation trustee Eben Upton assembled a group of teachers, academics and computer enthusiasts to devise a computer to inspire children. The computer is inspired by Acorn's BBC Micro of 1981. The Model A, Model B and Model B+ names are references to the original models of the British educational BBC Micro computer, developed by Acorn Computers. The first ARM prototype version of the computer was mounted in a package the same size as a USB memory stick. It had a USB port on one end and an HDMI port on the other.
The Foundation's goal was to offer two versions, priced at US$25 and $35. They started accepting orders for the higher priced Model B on 29 February 2012, the lower cost Model A on 4 February 2013, and the even lower cost (US$20) A+ on 10 November 2014. On 26 November 2015, the cheapest Raspberry Pi yet, the Raspberry Pi Zero, was launched at US$5 or £4. According to Upton, the name "Raspberry Pi" was chosen with "Raspberry" as an ode to a tradition of naming early computer companies after fruit, and "Pi" as a reference to the Python programming language.
Pre-launch
August 2011 – 50 alpha boards are manufactured. These boards were functionally identical to the planned Model B, but they were physically larger to accommodate debug headers. Demonstrations of the board showed it running the LXDE desktop on Debian, Quake 3 at 1080p, and Full HD MPEG-4 video over HDMI.
October 2011 – A version of RISC OS was demonstrated in public, and following a year of development the port was released for general consumption in November 2012.
December 2011 – Twenty-five Model B Beta boards were assembled and tested from one hundred unpopulated PCBs. The component layout of the Beta boards was the same as on production boards. A single error was discovered in the board design where some pins on the CPU were not held high; it was fixed for the first production run. The Beta boards were demonstrated booting Linux, playing a 1080p movie trailer and the Rightware Samurai OpenGL ES benchmark.
Early 2012 – During the first week of the year, the first 10 boards were put up for auction on eBay. One was bought anonymously and donated to the museum at The Centre for Computing History in Cambridge, England. The ten boards (with a total retail price of £220) together raised over £16,000, with the last to be auctioned, serial number No. 01, raising £3,500. In advance of the anticipated launch at the end of February 2012, the Foundation's servers struggled to cope with the load placed by watchers repeatedly refreshing their browsers.
Launch
19 February 2012 – The first proof of concept SD card image that could be loaded onto an SD card to produce a preliminary operating system is released. The image was based on Debian 6.0 (Squeeze), with the LXDE desktop and the Midori browser, plus various programming tools. The image also runs on QEMU allowing the Raspberry Pi to be emulated on various other platforms.
29 February 2012 – Initial sales commence at 06:00 UTC. At the same time, it was announced that the model A, originally to have had 128 MB of RAM, was to be upgraded to 256 MB before release. The Foundation's website also announced: "Six years after the project's inception, we're nearly at the end of our first run of development – although it's just the beginning of the Raspberry Pi story." The web-shops of the two licensed manufacturers selling Raspberry Pis within the United Kingdom, Premier Farnell and RS Components, had their websites stalled by heavy web traffic immediately after the launch (RS Components briefly going down completely). Unconfirmed reports suggested that there were over two million expressions of interest or pre-orders. The official Raspberry Pi Twitter account reported that Premier Farnell sold out within a few minutes of the initial launch, while RS Components took over 100,000 pre-orders on day one. Manufacturers were reported in March 2012 to be taking a "healthy number" of pre-orders.
March 2012 – Shipping delays for the first batch were announced in March 2012, as the result of installation of an incorrect Ethernet port, but the Foundation expected that manufacturing quantities of future batches could be increased with little difficulty if required. "We have ensured we can get them [the Ethernet connectors with magnetics] in large numbers and Premier Farnell and RS Components [the two distributors] have been fantastic at helping to source components," Upton said. The first batch of 10,000 boards was manufactured in Taiwan and China.
8 March 2012 – Release Raspberry Pi Fedora Remix, the recommended Linux distribution, developed at Seneca College in Canada.
March 2012 – The Debian port is initiated by Mike Thompson, former CTO of Atomz. The effort was largely carried out by Thompson and Peter Green, a volunteer Debian developer, with some support from the Foundation, who tested the resulting binaries that the two produced during the early stages (neither Thompson nor Green had physical access to the hardware, as boards were not widely accessible at the time due to demand). While the preliminary proof of concept image distributed by the Foundation before launch was also Debian-based, it differed from Thompson and Green's Raspbian effort in a couple of ways. The POC image was based on then-stable Debian Squeeze, while Raspbian aimed to track then-upcoming Debian Wheezy packages. Aside from the updated packages that would come with the new release, Wheezy was also set to introduce the armhf architecture, which became the raison d'être for the Raspbian effort. The Squeeze-based POC image was limited to the armel architecture, which was, at the time of Squeeze's release, the latest attempt by the Debian project to have Debian run on the newest ARM embedded-application binary interface (EABI). The armhf architecture in Wheezy intended to make Debian run on the ARM VFP hardware floating-point unit, while armel was limited to emulating floating point operations in software. Since the Raspberry Pi included a VFP, being able to make use of the hardware unit would result in performance gains and reduced power use for floating point operations. The armhf effort in mainline Debian, however, was orthogonal to the work surrounding the Pi and only intended to allow Debian to run on ARMv7 at a minimum, which would mean the Pi, an ARMv6 device, would not benefit. As a result, Thompson and Green set out to build the 19,000 Debian packages for the device using a custom build cluster.
Post-launch
16 April 2012 – Reports appear from the first buyers who had received their Raspberry Pi.
20 April 2012 – The schematics for the Model A and Model B are released.
18 May 2012 – The Foundation reported on its blog about a prototype camera module they had tested. The prototype used a module.
22 May 2012 – Over 20,000 units had been shipped.
July 2012 – Release of Raspbian.
16 July 2012 – It was announced that 4,000 units were being manufactured per day, allowing Raspberry Pis to be bought in bulk.
24 August 2012 – Hardware accelerated video (H.264) encoding becomes available after it became known that the existing licence also covered encoding. Formerly it was thought that encoding would be added with the release of the announced camera module. However, no stable software exists for hardware H.264 encoding. At the same time the Foundation released two additional codecs that can be bought separately, MPEG-2 and Microsoft's VC-1. Also it was announced that the Pi will implement CEC, enabling it to be controlled with the television's remote control.
5 September 2012 – The Foundation announced a second revision of the Raspberry Pi Model B: a revision 2.0 board with a number of minor corrections and improvements.
6 September 2012 – Announcement that in future the bulk of Raspberry Pi units would be manufactured in the UK, at Sony's manufacturing facility in Pencoed, Wales. The Foundation estimated that the plant would produce 30,000 units per month, and would create about 30 new jobs.
15 October 2012 – It is announced that new Raspberry Pi Model Bs are to be fitted with 512 MB instead of 256 MB RAM.
24 October 2012 – The Foundation announces that "all of the VideoCore driver code which runs on the ARM" had been released as free software under a BSD-style licence, making it "the first ARM-based multimedia SoC with functional, vendor-provided (as opposed to partial, reverse engineered) fully open-source drivers", although this claim has not been universally accepted.
October 2012 – It was reported that some customers of one of the two main distributors had been waiting more than six months for their orders. This was reported to be due to difficulties in sourcing the CPU and conservative sales forecasting by this distributor.
17 December 2012 – The Foundation, in collaboration with IndieCity and Velocix, opens the Pi Store, as a "one-stop shop for all your Raspberry Pi (software) needs". Using an application included in Raspbian, users can browse through several categories and download what they want. Software can also be uploaded for moderation and release.
3 June 2013 – "New Out of Box Software" or NOOBS is introduced. This makes the Raspberry Pi easier to use by simplifying the installation of an operating system. Instead of using specific software to prepare an SD card, a file is unzipped and the contents copied over to a FAT formatted (4 GB or bigger) SD card. That card can then be booted on the Raspberry Pi and a choice of six operating systems is presented for installation on the card. The system also contains a recovery partition that allows for the quick restoration of the installed OS, tools to modify the config.txt and an online help button and web browser which directs to the Raspberry Pi Forums.
October 2013 – The Foundation announces that the one millionth Pi had been manufactured in the United Kingdom.
November 2013: they announce that the two millionth Pi shipped between 24 and 31 October.
28 February 2014 – On the day of the second anniversary of the Raspberry Pi, Broadcom, together with the Raspberry Pi foundation, announced the release of full documentation for the VideoCore IV graphics core, and a complete source release of the graphics stack under a 3-clause BSD licence.
7 April 2014 – The official Raspberry Pi blog announced the Raspberry Pi Compute Module, a device in a 200-pin DDR2 SO-DIMM-configured memory module (though not in any way compatible with such RAM), intended for consumer electronics designers to use as the core of their own products.
June 2014 – The official Raspberry Pi blog mentioned that the three millionth Pi shipped in early May 2014.
14 July 2014 – The official Raspberry Pi blog announced the Raspberry Pi Model B+, "the final evolution of the original Raspberry Pi. For the same price as the original Raspberry Pi model B, but incorporating numerous small improvements people have been asking for".
10 November 2014 – The official Raspberry Pi blog announced the Raspberry Pi Model A+. It is the smallest and cheapest Raspberry Pi so far and has the same processor and RAM as the Model A. Like the A, it has no Ethernet port, and only one USB port, but does have the other innovations of the B+, like lower power, micro-SD-card slot, and 40-pin HAT compatible GPIO.
2 February 2015 – The official Raspberry Pi blog announced the Raspberry Pi 2. Looking like a Model B+, it has a 900 MHz quad-core ARMv7 Cortex-A7 CPU, twice the memory (for a total of 1 GB) and complete compatibility with the original generation of Raspberry Pis.
14 May 2015 – The price of Model B+ was decreased from US$35 to $25, purportedly as a "side effect of the production optimizations" from the Pi 2 development. Industry observers have sceptically noted, however, that the price drop appeared to be a direct response to the CHIP, a lower-priced competitor discontinued in April 2017.
29 September 2015 – A new version of the Raspbian operating system, based on Debian Jessie, is released.
26 November 2015 – The Raspberry Pi Foundation launched the Raspberry Pi Zero, the smallest and cheapest member of the Raspberry Pi family yet, at 65 mm × 30 mm, and US$5. The Zero is similar to the Model A+ without the camera and LCD connectors, while being smaller and using less power. It was given away with issue No. 40 of the Raspberry Pi magazine The MagPi, which was distributed in the UK and US that day; the magazine sold out at almost every retailer internationally due to the freebie.
29 February 2016 – Raspberry Pi 3 with a BCM2837 1.2 GHz 64-bit quad processor based on the ARMv8 Cortex-A53, with built-in Wi-Fi BCM43438 802.11n 2.4 GHz and Bluetooth 4.1 Low Energy (BLE). Starting with a 32-bit Raspbian version, with a 64-bit version later to come if "there is value in moving to 64-bit mode". In the same announcement it was said that a new BCM2837 based Compute Module was expected to be introduced a few months later.
February 2016 – The Raspberry Pi Foundation announces that they had sold eight million devices (for all models combined), making it the best-selling UK personal computer, ahead of the Amstrad PCW. Sales reached ten million in September 2016.
25 April 2016 – Raspberry Pi Camera v2.1 announced with 8 Mpixels, in normal and NoIR (can receive IR) versions. The camera uses the Sony IMX219 chip with a resolution of . To make use of the new resolution the software has to be updated.
10 October 2016 – NEC Display Solutions announces that select models of commercial displays to be released in early 2017 will incorporate a Raspberry Pi 3 Compute Module.
14 October 2016 – Raspberry Pi Foundation announces their co-operation with NEC Display Solutions. They expect that the Raspberry Pi 3 Compute Module will be available to the general public by the end of 2016.
25 November 2016 – 11 million units sold.
16 January 2017 – Compute Module 3 and Compute Module 3 Lite are launched.
28 February 2017 – Raspberry Pi Zero W with WiFi and Bluetooth via chip scale antennas launched.
17 August 2017 – The Raspbian operating system is upgraded to a new version, based on Debian Stretch.
14 March 2018 – On Pi Day, the Raspberry Pi Foundation introduced the Raspberry Pi 3 Model B+ with improvements to the Raspberry Pi 3B's performance, an updated version of the Broadcom application processor, better wireless Wi-Fi and Bluetooth performance, and the addition of the 5 GHz band.
15 November 2018 – Raspberry Pi 3 Model A+ launched.
28 January 2019 – Compute Module 3+ (CM3+/Lite, CM3+/8 GB, CM3+/16 GB and CM3+/32 GB) launched.
24 June 2019 – Raspberry Pi 4 Model B launched, along with a new version of the Raspbian operating system based on Debian Buster.
10 December 2019 – 30 million units sold; sales are about 6 million per year.
28 May 2020 – An 8 GB version of the Raspberry Pi 4 is announced for $75. Raspberry Pi OS is split off from Raspbian, and now includes a beta of a 64-bit version that allows programs to use more than 4 GB of RAM.
19 October 2020 – Compute Module 4 launched.
2 November 2020 – Raspberry Pi 400 launched. It is a keyboard that incorporates a Raspberry Pi 4; the GPIO pins of the Raspberry Pi 4 remain accessible.
21 January 2021 – Raspberry Pi Pico launched. It is the first microcontroller-class product from Raspberry Pi, based on the RP2040 microcontroller developed by Raspberry Pi.
11 May 2021 – 40 million units sold.
30 October 2021 – Raspberry Pi OS (formerly Raspbian) is updated to version 11, based on Debian Bullseye. With this release, the default clock speed for revision 1.4 of the Raspberry Pi 4 is increased to 1.8 GHz.
Sales
According to the Raspberry Pi Foundation, more than 5 million Raspberry Pis were sold by February 2015, making it the best-selling British computer. By November 2016 they had sold 11 million units, and 12.5 million by March 2017, making it the third best-selling "general purpose computer". In July 2017, sales reached nearly 15 million, climbing to 19 million in March 2018. By December 2019, a total of 30 million devices had been sold.
See also
Single-board computer
Plug computer
References
Further reading
Raspberry Pi For Dummies; Sean McManus and Mike Cook; 2013; .
Getting Started with Raspberry Pi; Matt Richardson and Shawn Wallace; 2013; .
Raspberry Pi User Guide; Eben Upton and Gareth Halfacree; 2014; .
Hello Raspberry Pi!; Ryan Heitz; 2016; .
External links
Raspberry Pi, Department of Computer Science and Technology, University of Cambridge
Raspberry Pi Wiki, supported by the RPF
The MagPi Magazine
"Raspberry Pi pinout" board GPIO pinout
"Raspberry Pi component map"
"RaspberryPi Boards: Hardware versions/revisions"
ARM1176JZF-S (ARM11 CPU Core) Technical Reference Manual, ARM Ltd.
2012 establishments in the United Kingdom
ARM architecture
British brands
Computers designed in the United Kingdom
British inventions
Computer science education in the United Kingdom
Educational hardware
Linux-based devices
Products introduced in 2012
Single-board computers | Operating System (OS) | 1,279 |
User operation prohibition
The user operation prohibition (abbreviated UOP) is a form of use restriction used on video DVD discs and Blu-ray discs. Most DVD players and Blu-ray players prohibit the viewer from performing a large majority of actions during sections of a DVD that are protected or restricted by this feature, and will display the no symbol or a message to that effect if any of these actions are attempted. It is used mainly for copyright notices or warnings, such as an FBI warning in the United States, and "protected" (i.e., unskippable) commercials.
Countermeasures
Some DVD players ignore the UOP flag, allowing the user full control over DVD playback. Virtually all players that are not purpose-built DVD player hardware (for example, a player program running on a general purpose computer) ignore the flag. There are also modchips available for some standard DVD players for the same purpose. The UOP flag can be removed in DVD ripper software such as: DVD Decrypter, DVD Shrink, AnyDVD, AVS Video Converter, Digiarty WinX DVD Ripper Platinum, MacTheRipper, HandBrake and K9Copy. On many DVD players, pressing stop-stop-play will cause the DVD player to play the movie immediately, ignoring any UOP flags that would otherwise make advertisements, piracy warnings or trailers unskippable.
Nevertheless, removing UOP does not always provide navigation function in the restricted parts of the DVD. This is because those parts are sometimes lacking the navigation commands which allow skipping to the menu or other parts of the DVD. This has become more common in recent titles, in order to circumvent the UOP disabling that many applications or DVD players offer.
Newer DVD players (i.e., those produced after about late 2010) have, however, been designed to override the aforementioned counter-countermeasures. The DVD reader software inside the DVD player automatically generates chapters for parts of the DVD lacking navigation commands, allowing them to be fast-forwarded or skipped; pressing the menu button, even in these previously restricted sections, will cause a jump to the main menu.
See also
Comparison of DVD ripper software
DVD Copy Control Association
Hacking of consumer electronics
References
External links
User Prohibited Operations flag documentation, Unofficial DVD Specifications
Microsoft Windows DVD Info, Get Current UOPS, List of User Operations that can be controlled by this flag.
Digital rights management
DVD
Hardware restrictions | Operating System (OS) | 1,280 |
Hybris (software)
Hybris or libhybris is a compatibility layer for computers running Linux distributions based on the GNU C library or Musl, intended to allow the use of software written for Bionic-based Linux systems, which mainly means Android libraries and device drivers.
History
Hybris was initially written by Carsten Munk, a Mer developer, who released it on GitHub on 5 August 2012 and publicly announced the project later that month. Munk has since been hired by Jolla as their Chief Research Engineer.
Hybris has also been picked up by the Open webOS community for WebOS Ports, by Canonical for Ubuntu Touch and by the AsteroidOS project.
In April 2013, Munk announced that Hybris has been extended to allow Wayland compositors to use graphic device drivers written for Android. Weston has had support for libhybris since version 1.3, which was released on 11 October 2013.
Features
Hybris loads Android libraries and overrides some symbols from Bionic with glibc calls, making it possible to use Bionic-based software, such as binary-only Android drivers, on glibc-based Linux distributions.
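Conceptually this works like an interposing loader: when an Android library asks for a Bionic symbol, selected names are redirected to glibc-backed replacements. The C sketch below is only a simplified illustration of that idea, not libhybris's actual API; the hook table and resolver function are hypothetical.

/* Conceptual sketch of symbol interposition, assuming a hypothetical
 * hook table; the real libhybris implementation is more involved. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct hook { const char *name; void *func; };

/* Selected Bionic symbol names redirected to glibc-backed functions. */
static const struct hook hooks[] = {
    { "malloc", (void *)malloc },
    { "free",   (void *)free   },
    { "printf", (void *)printf },
    { NULL, NULL }
};

/* A loader would call something like this for each requested symbol. */
static void *resolve_symbol(const char *name)
{
    for (const struct hook *h = hooks; h->name; h++)
        if (strcmp(h->name, name) == 0)
            return h->func;   /* overridden: the glibc version is used */
    return NULL;              /* not overridden: fall back to Bionic   */
}

int main(void)
{
    printf("malloc resolves to %p\n", resolve_symbol("malloc"));
    return 0;
}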
Hybris can also translate Android's EGL calls into Wayland EGL calls, allowing Android graphic drivers to be used on Wayland-based systems. This feature was initially developed by Collabora's Pekka Paalanen for his Android port of Wayland.
See also
C standard library
Free and open-source graphics device driver
References
External links
C (programming language) libraries
C standard library
Compatibility layers
Embedded Linux
Free computer libraries
Free software programmed in C
Software using the Apache license | Operating System (OS) | 1,281 |
IBM System/360 Model 65
The IBM System/360 Model 65 is a member of the IBM System/360 family of computers. It was announced April 1965, and replaced two models, the Model 60 and Model 62, announced one year prior but never shipped. It was finally discontinued in March 1974.
Models
There are six models of the 360/65. They vary by the amount of core memory with which the system is offered. The G65, H65, I65, IH65 and J65 submodels are configured with 128K, 256K, 512K, 768K or 1M of core memory, respectively. By 1974 the smallest G model had been discontinued. The MP (multiprocessor) model was added, supporting from 512K to 2 MB of system memory. The system can also attach IBM 2361 Large Capacity Storage (LCS) modules which provide up to 8 MB of additional storage, however with a considerably slower memory cycle time of 8 microseconds compared to the 750 nanoseconds of processor storage.
Relative performance
The performance of the Model 65 is more than triple that of a 360/50, whereas the Model 75, the next step up, was less than double that of a 360/65.
Features
The Model 65 implements the complete System/360 "universal instruction set" architecture, including floating-point, decimal, and character operations as standard features. It offers optional compatibility features to permit emulation of the IBM 7040 and 7044 and the IBM 7090 and 7094, the IBM 7070 and 7074, and the IBM 7080.
Main memory in the Model 65 can be interleaved for faster access.
The Model 65, like the Model 67, was available in a dual-CPU offering. Multi-processing (Dual-CPU) systems have two I65s, IH65s or J65s, and "the main storage of each processing unit is accessible to the other."
Systems software
The Model 65 supports BPS, DOS/360, TOS/360, and OS/360 - PCP, MFT and MVT. MVT was a fairly typical choice of operating system for the Model 65.
A special version of MVT, called MP65, is needed for the multiprocessor (dual-CPU) model. MP65 uses special CPU-to-CPU/Multisystem mode instructions, such as Write Direct. The Model 65MP implements asymmetric multiprocessing: "certain input output equipment is accessible from only one processor."
Time-sharing can be provided on a Model 65 using IBM's Time Sharing Option (TSO).
See also
Smithsonian Institution – has a System/360 Model 65, though it is no longer on public display
References
System 360 Model 65 | Operating System (OS) | 1,282 |
Windows Messenger service
Messenger service is a network-based system notification Windows service by Microsoft that was included in some earlier versions of Microsoft Windows.
This retired technology, although it has a similar name, is not related in any way to the later, Internet-based Microsoft Messenger service for instant messaging or to Windows Messenger and Windows Live Messenger (formerly named MSN Messenger) client software.
Utilities
WinPopup sends messages from one Windows computer to another on the same LAN. It is available in all Windows versions from Windows for Workgroups 3.1 to Windows Me, but has never been included with Windows NT-based operating systems. WinPopup works by means of the NetBEUI protocol.
There is also a port to Linux with extended features, called LinPopUp, which allows adding Linux computers to the set. LinPopUp is an X Window graphical port of WinPopup, and a package for Debian Linux. It runs over Samba. LinPopUp does not have to run all the time, can run minimized, and its messages are encrypted with a strong cypher. The traditional Unix functional equivalent of WinPopup would be the wall and write commands.
Uses
The Messenger Service was originally designed for use by system administrators to notify Windows users about their networks. It was used maliciously to present pop-up advertisements to users over the Internet (by using mass-messaging systems which sent a desired message to a specified range of IP addresses). Even though Windows XP included a firewall, it was not enabled by default. Because of this, many users received such messages. As a result of this abuse, the Messenger Service was disabled by default in Windows XP Service Pack 2.
The Messenger Service was discontinued in Windows Vista and Windows Server 2008 and replaced by the older MSG.exe.
Architecture
The Messenger service in Windows 2000 and Windows XP uses the NetBIOS over TCP/IP (NetBT) protocol. The service waits for a message, then it displays it onscreen. The alternative way to send a message is to write it to a MailSlot named messngr. It requires UDP ports 135, 137, and 138 and TCP ports 135, 139, and 445 to work. If access to the ports from outside a network is not blocked, it can lead to the aforementioned spam issue. In Windows NT 3.5, NT 3.51 and NT 4.0, Messenger used the older NetBIOS protocol. (NetBIOS is not installed with Windows 2000.)
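As a rough illustration of the mailslot path, a message can be delivered with ordinary Win32 file APIs. The sketch below is a hedged, minimal example: the target host name is a placeholder, and the assumed body layout of three NUL-terminated strings (sender, recipient, text) is an assumption about the service's expected format rather than a documented guarantee here.

/* Minimal sketch: send a message by writing to the messngr mailslot.
 * TARGETHOST is a placeholder; the three-string body layout is an
 * assumption, not a documented guarantee. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *slot = "\\\\TARGETHOST\\mailslot\\messngr";
    char buf[512];
    int len = 0;

    len += sprintf(buf + len, "ADMIN") + 1;             /* sender    */
    len += sprintf(buf + len, "WORKSTATION1") + 1;      /* recipient */
    len += sprintf(buf + len, "Backup finished.") + 1;  /* text      */

    HANDLE h = CreateFileA(slot, GENERIC_WRITE, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "cannot open mailslot (error %lu)\n", GetLastError());
        return 1;
    }

    DWORD written = 0;
    WriteFile(h, buf, (DWORD)len, &written, NULL);      /* one datagram */
    CloseHandle(h);
    return 0;
}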
The Messenger service can be used via the net send command from a command-line interface. In addition, the Alerter service uses Messenger to send administrative alerts to network subscribers.
See also
Comparison of LAN messengers
LAN messenger
Alerter service
List of Microsoft Windows components
Messaging spam
References
External links
Official Microsoft sources
Knowledgebase entry for Messenger Service of Windows
Command documentation for Net send
Disabling The Messenger Service in Windows XP
Windows services
LAN messengers | Operating System (OS) | 1,283 |
Disk image
A disk image, in computing, is a computer file containing the contents and structure of a disk volume or of an entire data storage device, such as a hard disk drive, tape drive, floppy disk, optical disc, or USB flash drive. A disk image is usually made by creating a sector-by-sector copy of the source medium, thereby perfectly replicating the structure and contents of a storage device independent of the file system. Depending on the disk image format, a disk image may span one or more computer files.
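Conceptually, creating such an image amounts to reading the source device from its first sector to its last and writing the bytes unchanged to a file. The C sketch below illustrates the idea for a POSIX system; the device path is a placeholder, and real imaging tools add error recovery, progress reporting and handling of unreadable sectors.

/* Minimal sketch of a raw, sector-by-sector imaging loop (POSIX).
 * "/dev/sdX" is a placeholder device; real tools handle bad sectors. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int src = open("/dev/sdX", O_RDONLY);                  /* source device   */
    int dst = open("disk.img", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (src < 0 || dst < 0) { perror("open"); return 1; }

    char buf[4096];                                        /* multiple of 512 */
    ssize_t n;
    while ((n = read(src, buf, sizeof buf)) > 0)
        if (write(dst, buf, (size_t)n) != n) { perror("write"); return 1; }

    close(src);
    close(dst);
    return n < 0 ? 1 : 0;
}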
The file format may be an open standard, such as the ISO image format for optical disc images, or a disk image may be unique to a particular software application.
The size of a disk image can be large because it contains the contents of an entire disk. To reduce storage requirements, if an imaging utility is filesystem-aware it can omit copying unused space, and it can compress the used space.
History
Disk images were originally (in the late 1960s) used for backup and disk cloning of mainframe disk media. The early ones were as small as 5 megabytes and as large as 330 megabytes, and the copy medium was magnetic tape, which ran as large as 200 megabytes per reel. Disk images became much more popular when floppy disk media became popular, where replication or storage of an exact structure was necessary and efficient, especially in the case of copy protected floppy disks.
Uses
Disk images are used for duplication of optical media including DVDs, Blu-ray discs, etc. It is also used to make perfect clones of hard disks.
A virtual disk may emulate any type of physical drive, such as a hard disk drive, tape drive, key drive, floppy drive, CD/DVD/BD/HD DVD, or a network share among others; and of course, since it is not physical, requires a virtual reader device matched to it (see below). An emulated drive is typically created either in RAM for fast read/write access (known as a RAM disk), or on a hard drive. Typical uses of virtual drives include the mounting of disk images of CDs and DVDs, and the mounting of virtual hard disks for the purpose of on-the-fly disk encryption ("OTFE").
Some operating systems such as Linux and macOS have virtual drive functionality built-in (such as the loop device), while others such as older versions of Microsoft Windows require additional software. Starting from Windows 8, Windows includes native virtual drive functionality.
Virtual drives are typically read-only, being used to mount existing disk images which are not modifiable by the drive. However some software provides virtual CD/DVD drives which can produce new disk images; this type of virtual drive goes by a variety of names, including "virtual burner".
Enhancement
Using disk images in a virtual drive allows users to shift data between technologies, for example from CD optical drive to hard disk drive. This may provide advantages such as speed and noise (hard disk drives are typically four or five times faster than optical drives, are quieter, suffer from less wear and tear, and in the case of solid-state drives, are immune to some physical trauma). In addition it may reduce power consumption, since it may allow just one device (a hard disk) to be used instead of two (hard disk plus optical drive).
Virtual drives may also be used as part of emulation of an entire machine (a virtual machine).
Software distribution
Since the spread of broadband, CD and DVD images have become a common medium for Linux distributions. Applications for macOS are often delivered online as an Apple Disk Image containing a file system that includes the application, documentation for the application, and so on. Online data and bootable recovery CD images are provided for customers of certain commercial software companies.
Disk images may also be used to distribute software across a company network, or for portability (many CD/DVD images can be stored on a hard disk drive). There are several types of software that allow software to be distributed to large numbers of networked machines with little or no disruption to the user. Some can even be scheduled to update only at night so that machines are not disturbed during business hours. These technologies reduce end-user impact and greatly reduce the time and man-power needed to ensure a secure corporate environment. Efficiency is also increased because there is much less opportunity for human error. Disk images may also be needed to transfer software to machines without a compatible physical disk drive.
For computers running macOS, disk images are the most common file type used for software downloads, typically downloaded with a web browser. The images are typically compressed Apple Disk Image (.dmg suffix) files. They are usually opened by directly mounting them without using a real disk. The advantage compared with some other technologies, such as Zip and RAR archives, is they do not need redundant drive space for the unarchived data.
Software packages for Windows are also sometimes distributed as disk images including ISO images. While Windows versions prior to Windows 7 do not natively support mounting disk images to the files system, several software options are available to do this; see Comparison of disc image software.
Security
Virtual hard disks are often used in on-the-fly disk encryption ("OTFE") software such as FreeOTFE and TrueCrypt, where an encrypted "image" of a disk is stored on the computer. When the disk's password is entered, the disk image is "mounted", and made available as a new volume on the computer. Files written to this virtual drive are written to the encrypted image, and never stored in cleartext.
The process of making a computer disk available for use is called "mounting", the process of removing it is called "dismounting" or "unmounting"; the same terms are used for making an encrypted disk available or unavailable.
Virtualization
A hard disk image is interpreted by a virtual machine monitor as a system hard disk drive. In terms of naming, a hard disk image for a certain virtual machine monitor has a specific file format (see the file formats listed below).
Hard drive imaging is used in several major application areas:
Forensic imaging is the process of copying the contents of an entire drive and recording them as an image in a single file (or a very small number of files). A component of forensic imaging is verification of the imaged values, to ensure the integrity of the resulting file(s). Forensic images are created using dedicated software tools, some of which add the forensic functionality mentioned previously. Imaging is also typically used to replicate the contents of a hard drive for use in another system; this can usually be done by software programs alone, as it only requires copying the file structure and the files themselves.
Data recovery imaging is the process of systematically imaging each sector of the source drive to another destination storage medium, from which required files can then be retrieved. In data recovery situations, one cannot always rely on the integrity of the file structure, and therefore a complete sector copy is mandatory; the similarities to forensic imaging end there, though. Such images are typically acquired using software tools compatible with the system in question. Note that some forensic imaging software tools may have limitations in their ability to communicate with, diagnose, or repair storage media that are experiencing errors or even a failure of some internal component.
System backup
Some backup programs only back up user files; boot information and files locked by the operating system, such as those in use at the time of the backup, may not be saved on some operating systems. A disk image contains all files, faithfully replicating all data, including file attributes and the file fragmentation state. For this reason, it is also used for backing up optical media (CDs and DVDs, etc.), and allows the exact and efficient recovery after experimenting with modifications to a system or virtual machine, in one go.
There are benefits and drawbacks to both "file-based" and "bit-identical" image backup methods. Files that don't belong to installed programs can usually be backed up with file-based backup software, and this is preferred because file-based backup usually saves more time and space: it never copies unused space (as a bit-identical image does), it is usually capable of incremental backups, and it generally offers more flexibility. But for files of installed programs, file-based backup solutions may fail to reproduce all necessary characteristics, particularly with Windows systems. For example, in Windows certain registry keys use short filenames, which are sometimes not reproduced by file-based backup, some commercial software uses copy protection that will cause problems if a file is moved to a different disk sector, and file-based backups do not always reproduce metadata such as security attributes. Creating a bit-identical disk image is one way to ensure the system backup will be exactly as the original. Bit-identical images can be made in Linux with dd, which is available on nearly all live CDs.
Most commercial imaging software is "user-friendly" and "automatic" but may not create bit-identical images. These programs have most of the same advantages, except that they may allow restoring to partitions of a different size or file-allocation size, and thus may not put files on the same exact sector. Additionally, if they do not support Windows Vista, they may slightly move or realign partitions and thus make Vista unbootable (see Windows Vista startup process).
Rapid deployment of clone systems
Large enterprises often need to buy or replace new computer systems in large numbers. Installing an operating system and programs on each of them one by one requires a lot of time and effort and has a significant possibility of human error. Therefore, system administrators use disk imaging to quickly clone the fully prepared software environment of a reference system. This method saves time and effort and allows administrators to focus on the unique idiosyncrasies of each system.
There are several types of disk imaging software available that use single instancing technology to reduce the time, bandwidth, and storage required to capture and archive disk images. This makes it possible to rebuild and transfer information-rich disk images at lightning speeds, which is a significant improvement over the days when programmers spent hours configuring each machine within an organization.
Legacy hardware emulation
Emulators frequently use disk images to simulate the floppy drive of the computer being emulated. This is usually simpler to program than accessing a real floppy drive (particularly if the disks are in a format not supported by the host operating system), and allows a large library of software to be managed.
Copy protection circumvention
A mini image is an optical disc image file in a format that fakes the disk's content to bypass CD/DVD copy protection.
Because full disc images would be the same size as the original disc, mini images are stored instead. Mini images are small, on the order of kilobytes, and contain just the information necessary to bypass CD-checks. The mini image is therefore a form of No-CD crack, used for unlicensed games and for legally backed-up games. Mini images do not contain the real data from an image file, just the code that is needed to satisfy the CD-check. They cannot provide CD- or DVD-backed data to the computer program, such as on-disc image or video files.
Creation
Creating a disk image is achieved with a suitable program. Different disk imaging programs have varying capabilities, and may focus on hard drive imaging (including hard drive backup, restore and rollout), or optical media imaging (CD/DVD images).
A virtual disk writer or virtual burner is a computer program that emulates an actual disc authoring device such as a CD writer or DVD writer. Instead of writing data to an actual disc, it creates a virtual disk image. A virtual burner, by definition, appears as a disc drive in the system with writing capabilities (as opposed to conventional disc authoring programs that can create virtual disk images), thus allowing software that can burn discs to create virtual discs.
File formats
Apple Disk Image
IMG (file format)
VHD (file format)
VDI (file format)
VMDK
QCOW
Utilities
RawWrite and WinImage are examples of floppy disk image file writer/creator for MS-DOS and Microsoft Windows. They can be used to create raw image files from a floppy disk, and write such image files to a floppy.
In Unix or similar systems the dd program can be used to create disk images, or to write them to a particular disk. It is also possible to mount and access them at block level using a loop device.
Apple Disk Copy can be used on Classic Mac OS and macOS systems to create and write disk image files.
Authoring software for CDs/DVDs such as Nero Burning ROM can generate and load disk images for optical media.
See also
Boot image
Card image
Comparison of disc image software
Disk cloning
El Torito (CD-ROM standard)
ISO image, an archive file of an optical media volume
Loop device
Mtools
no-CD crack
Protected Area Run Time Interface Extension Services (PARTIES)
ROM image
Software cracking
References
External links
Software repository including RAWRITE2
Archive formats
Compact Disc and DVD copy protection
Computer file formats
Disk image emulators
Hacker culture
Hardware virtualization
Optical disc authoring
Warez | Operating System (OS) | 1,284 |
Kudzu (computer daemon)
Kudzu is a computer hardware probing program (written by Red Hat) which relies on a library of hardware device information. It is not to be confused with kudzu, a vine-like plant.
Description
When the computer boots, kudzu detects changes in the running system's hardware configuration, if any, and activates newly detected hardware (or deactivates removed hardware). kudzu only runs at boot time, and then exits, so there is no performance penalty during normal operation. (Since Fedora release 9, kudzu has been superseded by HAL.) kudzu detects and configures new and/or changed hardware on a system.
When started, kudzu detects the current hardware, and checks it against a database stored in /etc/sysconfig/hwconf, if one exists. It then determines if any hardware has been added or removed from the system. If so, it gives the users the opportunity to configure any added hardware, and unconfigure any removed hardware. It then updates the database in /etc/sysconfig/hwconf. If no previous database exists, kudzu attempts to determine what devices have already been configured, by looking at /etc/modprobe.conf, /etc/sysconfig/network-scripts/, and /etc/X11/xorg.conf.
Options usage
--help, -?
Print help information.
-q, --quiet
Run 'quietly'; do only configuration that doesn't require user input.
-s, --safe
Do only 'safe' probes that won't disturb hardware. Currently, this disables the serial probe, the DDC monitor probe, and the PS/2 probe.
-t, --timeout [seconds]
This sets the timeout for the initial dialog. If no key is pressed before the timeout elapses, kudzu exits, and /etc/sysconfig/hwconf is not updated.
-k, --kernel [version]
When determining whether a module exists, use the specified kernel version. (If this is not set, it defaults to the current kernel version.) Do not specify suffixes such as 'smp' or 'summit'; these are automatically searched.
-b, --bus [bus]
Only probe on the specified bus.
-c, --class [class]
Only probe for the specified class.
-f, --file [file]
Read hardware probe info from file and do not do an actual probe.
-p, --probe
Print probe information to the screen, and do not actually configure or unconfigure any devices.
Files
/etc/sysconfig/hwconf
Listing of current installed hardware.
/etc/sysconfig/kudzu
Configuration for the boot-time hardware probe. Set 'SAFE' to something other than 'no' to force only safe probes.
/etc/modprobe.conf
Module configuration file.
/etc/sysconfig/network-scripts/ifcfg-*
Network interface configuration files.
Bugs
The serial probe will disturb any currently in-use devices, and returns odd results if used on machines acting as serial consoles. On some older graphics cards, the DDC probe can do strange things.
Running kudzu to configure network adapters post-boot after the network has started may have unintended results.
References
External links
https://fedoraproject.org/wiki/Anaconda/Features/NoMoreKudzu
Free utility software
Red Hat software
Linux process- and task-management-related software | Operating System (OS) | 1,285 |
Cyberware
Cyberware is a relatively new and unknown field (a proto-science, or more adequately a "proto-technology"). In science fiction circles, however, it is commonly known to mean the hardware or machine parts implanted in the human body and acting as an interface between the central nervous system and the computers or machinery connected to it.
More formally:
Cyberware is technology that attempts to create a working interface between machines/computers and the human nervous system, including the brain.
Examples of potential cyberware cover a wide range, but current research tends to approach the field from one of two different angles: interfaces or prosthetics.
Interfaces ("Headware")
The first variety attempts to connect directly with the brain. The data-jack is probably the best-known, having featured heavily in works of fiction (even in mainstream productions such as Johnny Mnemonic, the cartoon Exosquad, and The Matrix). It is the most difficult object to implement, but it is also the most important in terms of interfacing directly with the mind. In science fiction the data-jack is the envisioned I/O port for the brain. Its job is to translate thoughts into something meaningful to a computer, and to translate something from a computer into meaningful thoughts for humans. Once perfected, it would allow direct communication between computers and the human mind.
Large university laboratories conduct most of the experiments done in the area of direct neural interfaces. For ethical reasons, the tests are usually performed on animals or slices of brain tissue from donor brains. The mainstream research focuses on electrical impulse monitoring, recording and translating the many different electrical signals that the brain transmits. A number of companies are working on what is essentially a "hands-free" mouse or keyboard. This technology uses these brain signals to control computer functions. These interfaces are sometimes called brain-machine interfaces (BMI).
The more intense research, concerning full in-brain interfaces, is being pursued, but is in its infancy. Few can afford the huge cost of such enterprises, and those who can find the work slow-going and very far from the ultimate goals. Research has reached the level where limited control over a computer is possible using thought commands alone. After being implanted with BrainGate, a chip made by the Massachusetts-based firm Cyberkinetics, a quadriplegic man was able to compose and check email.
Prosthetics ("Bodyware")
The second variety of cyberware consists of a more modern form of the rather old field of prosthetics. Modern prostheses attempt to deliver a natural functionality and appearance. In the sub-field where prosthetics and cyberware cross over, experiments have been done where microprocessors, capable of controlling the movements of an artificial limb, are attached to the severed nerve-endings of the patient. The patient is then taught how to operate the prosthetic, trying to learn how to move it as though it were a natural limb.
Crossing over between prostheses and interfaces are those pieces of equipment attempting to replace lost senses. An early success in this field is the cochlear implant. A tiny device inserted into the inner ear, it replaces the functionality of damaged, or missing, hair cells (the cells that, when stimulated, create the sensation of sound). This device comes firmly under the field of prosthetics, but experiments are also being performed to tap into the brain. Coupled with a speech-processor, this could be a direct link to the speech centres of the brain.
See also
Biomechatronics
Biorobotics
Brain–computer interface
Brain-reading
Central nervous system
Cybernetics
Cyborg
Cyborgs in fiction
Neural engineering
Neuroprosthetics
Neurosecurity
Posthumanization
Simulated reality
Transhuman
Wetware (brain)
References
External links
The open-source programmable chip Electroencephalography project
The programmable chip Electroencephalography project BLog
The open-source Electroencephalography project
Cyberware Technology by Taryn East - Source containing the rest of the work found on this article
Department of Membrane and Neurophysics at the Max-Planck-Institute in Martinsried, Germany - an institute working on nerve cell/chip interconnection
Wetware Technology
Brain–computer interfacing
Neuroprosthetics | Operating System (OS) | 1,286 |
Scientific Research Institute of System Development
Scientific Research Institute of System Analysis (abbreviated SRISA/NIISI RAS) is a Russian state research and development institution in the field of complex applications, an initiative of the Russian Academy of Sciences. The mission of the institute is to resolve complex applied problems on the basis of fundamental and applied mathematics in combination with the methods of practical computing. It was founded by Decree No. 1174 of the Presidium of the USSR Academy of Sciences on October 1, 1986.
Research fields
Main lines of activities:
research in the field of theoretical and applied problems on information security,
research in the field of automation of programming,
research in the field of creating computer models of the objects with complex geometry and topology for the open scalable system of parallel information processing,
research in the field of applied informatics.
The institute's practical results are embedded in the architectures and very-large-scale integration devices it develops, as well as in operating systems, real-time operating systems and microelectronics components.
Development
Microprocessors
The SRISA has designed several MIPS compatible CPUs for general purpose calculations. These include:
KOMDIV-32 () is a family of 32-bit microprocessors, MIPS-I ISA
KOMDIV-64 () is a family of 64-bit microprocessors, MIPS-IV ISA
Operating systems
Since 1998 the SRISA department of System Programming has developed several successive UNIX-like real-time operating systems (RTOS), including:
A POSIX 1003.1-compatible RTOS developed since January 1998; the network sockets, however, were borrowed from FreeBSD. It supported the TCP/IP protocol and the X Window suite, and runs on the MIPS-based CPUs mentioned above.
A POSIX 1003.1- and ARINC 653-compatible RTOS, first exhibited at SofTool-2008, -2009, and -2010 in Moscow. It was a joint project between the ALT Linux and SRISA teams.
Notable people
Vladimir Betelin, academician, Scientific Supervisor
Israel Gelfand, academician, Chief Science Officer of SRISA
Vladimir Platonov, academician, Chief Science Officer of SRISA
Maksim Moshkow, employee, creator of the largest and the oldest Russian electronic library "Lib.ru"
External links
Official site of NIISI RAS
References
Institutes of the Russian Academy of Sciences
Research institutes in the Soviet Union
Computing in the Soviet Union
Computer science institutes | Operating System (OS) | 1,287 |
LiteStep
LiteStep is a Windows Shell replacement for Windows 9x and up, licensed under the terms of the GNU General Public License (GPL).
LiteStep replaces the Windows Shell, which provides access to the graphical user interface on Windows-based computers. Depending on the theme used, it can replace or remove shell elements, such as the start menu and taskbar. It can also be used to create informational-type displays. Aside from the core executable, LiteStep is made up of modules, some of which are included with the initial installation. Other modules, which a theme may require to function properly, are automatically downloaded. The modules and core provide users with the ability to create anything from minimal environments to elaborate and heavily scripted desktops. Customizations are provided in the form of themes, which may be created or modified with a text editor. A theme for LiteStep is a collection of configurations, scripts, and/or images which are distributed in a file with the zip or lsz extension. An lsz file is simply a renamed zip file, and is associated with the LiteStep Theme Installer.
History
LiteStep was inspired by AfterStep, which in turn was inspired by NeXTSTEP. LiteStep was initially developed by Francis Gastellu as a closed-source project until April 1998 (version b23), and was then entirely rewritten (versions 24 and up). LiteStep later inspired DarkStep, which supports scripting, and PureLS. LiteStep also inspired Phil Stopford in 1999 to start LDE(X), which was a complete and production-stable LiteStep-based Windows interface replacement. LiteStep is one of the oldest remaining Windows shell replacements.
Over time, and due to the rise of popularity in freeform skinning, LiteStep desktop designs have tended to drift away from the AfterStep layouts seen under pre-0.24 versions, and LiteStep theming has become an art form in itself, being referred to as an "OS equivalent of an expandable Leatherman multi-tool".
Example
Theme.rc
The following is an example of an OTS2 theme.rc configuration file to be loaded at LiteStep's execution. OTS2 is the second generation of the Open Theme Standard, which is to be followed for themes to be compatible with the LiteStep structure. The theme.rc file is the entry point for all LiteStep themes.
;Lines preceded by a semicolon are not parsed by the LiteStep core.
;This indicates to the LiteStep core that the theme is OTS2 compliant.
OTSMajorVersion 2
OTSMinorVersion 0
ThemeName "Name of Theme Here"
ThemeAuthor "Name of Author Here"
; This defines a variable named "ConfigDir" so that the locations of the configuration files in the next section can be written more briefly.
ConfigDir "$ThemeDir$Config\"
;The "Include" command tells the LiteStep core to parse the defined file. Configuration files are defined at the user's discretion for organization purposes.
Include "$ConfigDir$themevars.rc"
Include "$ConfigDir$xlabel.rc"
Include "$ConfigDir$lsxcommand.rc"
Include "$ConfigDir$xpopup.rc"
Include "$ConfigDir$xtaskbar.rc"
Include "$ConfigDir$xtray.rc"
Include "$ConfigDir$vwm.rc"
;*NetLoadModule module-ver# tells the NetLoadModule2.dll to load the following modules for use with the loaded theme.
*NetLoadModule jdesk-0.75
*NetLoadModule xpopup-2.1
*NetLoadModule lsxcommand-2.0.2
*NetLoadModule rabidvwm-1.2.2
*NetLoadModule xtray-2.2.2
*NetLoadModule xtaskbar-2.3.4
*NetLoadModule xlabel-4.3
Explanation
The LiteStep interface is composed of modules, most having the extension .dll. They are loaded by themes through a text configuration file named theme.rc. To load different modules you would write a line like this, to invoke LiteStep's NetLoadModule.dll:
*NetLoadModule ModuleName-version#
NetLoadModule.dll is itself a module that is loaded in a default LiteStep setup. The command *NetLoadModule tells NetLoadModule.dll to load a module for use in the current theme.
LiteStep and its themes rely on variables, with many already hardcoded into the core. Variables are surrounded with $...$. $LiteStepDir$, for example, is the directory that litestep.exe resides in.
Other variables can be manually set by writing a line in any configuration file like this:
Firefox "C:\progra~1\Mozill~1\firefox.exe"
You could then use the variable $Firefox$ instead of the full path to the executable.
Module configurations can span over different files for the sake of organization. The command "include," seen below, tells LiteStep to load the specified file. The variable $ThemeDir$ is the directory of the theme being currently used. Put together with "Config\someconfig.rc" will result in the settings of file someconfig.rc being loaded from \Theme\Config\someconfig.rc.
include "$ThemeDir$Config\someconfig.rc"
Modules are what make LiteStep look and behave the way you want it to. There are graphical modules that are used to build GUI elements and non-graphical modules used to create hotkeys, watch window classes for scripted events, and create LiteStep-specific commands called !bang commands. !bang commands are a way to execute event-driven functions within a given theme. These commands can reference files, folders, namespaces, executables, or elements of the theme itself. Many !bang commands are hardcoded into the LiteStep core, and others may be provided through user scripts or through the currently loaded modules. Bangs are the primary way you control the modules. !bangs can be triggered through a hotkey, popup menu, shortcut, or through module-specific events.
Some of the most popular modules include:
lsxcommand.dll: This module creates a command line where you can enter bang commands, file path commands (e.g. notepad.exe or C:\), and URLs.
v_bang-lite.dll: This module creates bangs to control Winamp. The bangs can then be used in User Interface elements such as shortcuts, or hotkeys, etc.
hotkey.dll: This type of module lets you create custom hotkeys, which can be used to execute a hardcoded bang command, or a module specific bang command.
xlabel.dll: Theme developers use xlabel for creating box-like windows on the desktop. These "boxes" can be used for informational texts (ex: cpu usage, memory usage, uptime, song playing, etc.) or images, or a combination of both. xlabel can also be used to create buttons for running !bang commands, scripts, or opening system programs. Basically, anything you would like to display and interact with can be made using xlabel.
See also
List of alternative shells for Windows
Notes
External links
Project website
LOSI (LiteStep Installer)
LiteStep Community (Themes, Apps and User Community)
Desktop shell replacement
Windows-only free software
Application launchers
Graphical user interface elements | Operating System (OS) | 1,288 |
Bus (computing)
In computer architecture, a bus (shortened form of the Latin omnibus, and historically also called data highway) is a communication system that transfers data between components inside a computer, or between computers. This expression covers all related hardware components (wire, optical fiber, etc.) and software, including communication protocols.
Early computer buses were parallel electrical wires with multiple hardware connections, but the term is now used for any physical arrangement that provides the same logical function as a parallel electrical bus. Modern computer buses can use both parallel and bit serial connections, and can be wired in either a multidrop (electrical parallel) or daisy chain topology, or connected by switched hubs, as in the case of USB.
Background and nomenclature
Computer systems generally consist of three main parts:
The central processing unit (CPU) that processes data,
The memory that holds the programs and data to be processed, and
I/O (input/output) devices as peripherals that communicate with the outside world.
An early computer might contain a hand-wired CPU of vacuum tubes, a magnetic drum for main memory, and a punch tape and printer for reading and writing data respectively. A modern system might have a multi-core CPU, DDR4 SDRAM for memory, a solid-state drive for secondary storage, a graphics card and LCD as a display system, a mouse and keyboard for interaction, and a Wi-Fi connection for networking. In both examples, computer buses of one form or another move data between all of these devices.
In most traditional computer architectures, the CPU and main memory tend to be tightly coupled. A microprocessor conventionally is a single chip which has a number of electrical connections on its pins that can be used to select an "address" in the main memory and another set of pins to read and write the data stored at that location. In most cases, the CPU and memory share signalling characteristics and operate in synchrony. The bus connecting the CPU and memory is one of the defining characteristics of the system, and often referred to simply as the system bus.
It is possible to allow peripherals to communicate with memory in the same fashion, attaching adaptors in the form of expansion cards directly to the system bus. This is commonly accomplished through some sort of standardized electrical connector, several of these forming the expansion bus or local bus. However, as the performance differences between the CPU and peripherals varies widely, some solution is generally needed to ensure that peripherals do not slow overall system performance. Many CPUs feature a second set of pins similar to those for communicating with memory, but able to operate at very different speeds and using different protocols. Others use smart controllers to place the data directly in memory, a concept known as direct memory access. Most modern systems combine both solutions, where appropriate.
As the number of potential peripherals grew, using an expansion card for every peripheral became increasingly untenable. This has led to the introduction of bus systems designed specifically to support multiple peripherals. Common examples are the SATA ports in modern computers, which allow a number of hard drives to be connected without the need for a card. However, these high-performance systems are generally too expensive to implement in low-end devices, like a mouse. This has led to the parallel development of a number of low-performance bus systems for these solutions, the most common example being the standardized Universal Serial Bus (USB). All such examples may be referred to as peripheral buses, although this terminology is not universal.
In modern systems the performance difference between the CPU and main memory has grown so great that increasing amounts of high-speed memory is built directly into the CPU, known as a cache. In such systems, CPUs communicate using high-performance buses that operate at speeds much greater than memory, and communicate with memory using protocols similar to those used solely for peripherals in the past. These system buses are also used to communicate with most (or all) other peripherals, through adaptors, which in turn talk to other peripherals and controllers. Such systems are architecturally more similar to multicomputers, communicating over a bus rather than a network. In these cases, expansion buses are entirely separate and no longer share any architecture with their host CPU (and may in fact support many different CPUs, as is the case with PCI). What would have formerly been a system bus is now often known as a front-side bus.
Given these changes, the classical terms "system", "expansion" and "peripheral" no longer have the same connotations. Other common categorization systems are based on the bus's primary role, connecting devices internally or externally, PCI vs. SCSI for instance. However, many common modern bus systems can be used for both; SATA and the associated eSATA are one example of a system that would formerly be described as internal, while certain automotive applications use the primarily external IEEE 1394 in a fashion more similar to a system bus. Other examples, like InfiniBand and I²C were designed from the start to be used both internally and externally.
Internal buses
The internal bus, also known as internal data bus, memory bus, system bus or front-side bus, connects all the internal components of a computer, such as CPU and memory, to the motherboard. Internal data buses are also referred to as local buses, because they are intended to connect to local devices. This bus is typically rather quick and is independent of the rest of the computer operations.
External buses
The external bus, or expansion bus, is made up of the electronic pathways that connect the different external devices, such as printer etc., to the computer.
Address bus
An address bus is a bus that is used to specify a physical address. When a processor or DMA-enabled device needs to read or write to a memory location, it specifies that memory location on the address bus (the value to be read or written is sent on the data bus). The width of the address bus determines the amount of memory a system can address. For example, a system with a 32-bit address bus can address 2^32 (4,294,967,296) memory locations. If each memory location holds one byte, the addressable memory space is 4 GiB.
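The relationship is simply two raised to the bus width, so each extra address line doubles the addressable space. The short C program below, a minimal worked example, prints the figure for a few common widths.

/* Addressable locations for a given address-bus width: 2 to the power of
 * the width.  With one byte per location this is also the size in bytes. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const int widths[] = { 16, 20, 24, 32 };
    for (int i = 0; i < 4; i++) {
        uint64_t locations = (uint64_t)1 << widths[i];
        printf("%2d-bit address bus: %llu locations\n",
               widths[i], (unsigned long long)locations);
    }
    return 0;   /* a 32-bit bus gives 4,294,967,296 locations, i.e. 4 GiB */
}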
Address multiplexing
Early processors used a wire for each bit of the address width. For example, a 16-bit address bus had 16 physical wires making up the bus. As the buses became wider and lengthier, this approach became expensive in terms of the number of chip pins and board traces. Beginning with the Mostek 4096 DRAM, address multiplexing implemented with multiplexers became common. In a multiplexed address scheme, the address is sent in two equal parts on alternate bus cycles. This halves the number of address bus signals required to connect to the memory. For example, a 32-bit address bus can be implemented by using 16 lines and sending the first half of the memory address, immediately followed by the second half memory address.
Typically two additional pins in the control bus -- a row-address strobe (RAS) and the column-address strobe (CAS) -- are used to tell the DRAM whether the address bus is currently sending the first half of the memory address or the second half.
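In other words, the controller splits the full address into two halves and drives them onto the same pins in consecutive cycles, strobing RAS for the row half and CAS for the column half. The C sketch below illustrates that split for a 32-bit address over 16 multiplexed lines; the pin-driving and strobe helpers are hypothetical stand-ins for real controller hardware.

/* Sketch of DRAM address multiplexing: one 32-bit address presented as
 * two 16-bit halves on shared pins.  drive_address_pins(), pulse_ras()
 * and pulse_cas() are hypothetical placeholders for controller logic. */
#include <stdint.h>
#include <stdio.h>

static void drive_address_pins(uint16_t half) { printf("pins <= 0x%04x\n", half); }
static void pulse_ras(void) { puts("RAS asserted (row half valid)"); }
static void pulse_cas(void) { puts("CAS asserted (column half valid)"); }

static void present_address(uint32_t addr)
{
    uint16_t row = (uint16_t)(addr >> 16);     /* upper half: row address    */
    uint16_t col = (uint16_t)(addr & 0xFFFF);  /* lower half: column address */

    drive_address_pins(row);
    pulse_ras();                               /* bus cycle 1 */
    drive_address_pins(col);
    pulse_cas();                               /* bus cycle 2 */
}

int main(void)
{
    present_address(0x12345678u);
    return 0;
}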
Implementation
Accessing an individual byte frequently requires reading or writing the full bus width (a word) at once. In these instances the least significant bits of the address bus may not even be implemented - it is instead the responsibility of the controlling device to isolate the individual byte required from the complete word transmitted. This is the case, for instance, with the VESA Local Bus which lacks the two least significant bits, limiting this bus to aligned 32-bit transfers.
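When those low bits are missing, the controlling device reads the whole aligned word and isolates the wanted byte itself, using the dropped bits as a byte index. Below is a minimal C sketch of that isolation step, assuming a little-endian 32-bit bus.

/* Isolating one byte from an aligned 32-bit word, as a device must do
 * when the two least significant address bits are not on the bus.
 * A little-endian byte order is assumed for illustration. */
#include <stdint.h>
#include <stdio.h>

static uint8_t read_byte(const uint32_t *memory, uint32_t byte_addr)
{
    uint32_t word   = memory[byte_addr >> 2];   /* aligned 32-bit transfer */
    uint32_t offset = byte_addr & 0x3;          /* the two dropped bits    */
    return (uint8_t)(word >> (offset * 8));     /* shift the byte out      */
}

int main(void)
{
    uint32_t ram[2] = { 0x44332211u, 0x88776655u };
    for (uint32_t a = 0; a < 8; a++)
        printf("byte %u = 0x%02x\n", (unsigned)a, (unsigned)read_byte(ram, a));
    return 0;
}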
Historically, there were also some examples of computers which were only able to address words -- word machines.
Memory bus
The memory bus is the bus which connects the main memory to the memory controller in computer systems. Originally, general-purpose buses like VMEbus and the S-100 bus were used, but to reduce latency, modern memory buses are designed to connect directly to DRAM chips, and thus are designed by chip standards bodies such as JEDEC. Examples are the various generations of SDRAM, and serial point-to-point buses like SLDRAM and RDRAM. An exception is the Fully Buffered DIMM which, despite being carefully designed to minimize the effect, has been criticized for its higher latency.
Implementation details
Buses can be parallel buses, which carry data words in parallel on multiple wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential drivers, and data connections in each direction usually means that most serial buses have more conductors than the minimum of one used in 1-Wire and UNI/O. As data rates increase, the problems of timing skew, power consumption, electromagnetic interference and crosstalk across parallel buses become more and more difficult to circumvent. One partial solution to this problem has been to double pump the bus. Often, a serial bus can be operated at higher overall data rates than a parallel bus, despite having fewer electrical connections, because a serial bus inherently has no timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this. Multidrop connections do not work well for fast serial buses, so most modern serial buses use daisy-chain or hub designs.
Network connections such as Ethernet are not generally regarded as buses, although the difference is largely conceptual rather than practical. An attribute generally used to characterize a bus is that power is provided by the bus for the connected hardware. This emphasizes the busbar origins of bus architecture as supplying switched or distributed power. This excludes, as buses, schemes such as serial RS-232, parallel Centronics, IEEE 1284 interfaces and Ethernet, since these devices also needed separate power supplies. Universal Serial Bus devices may use the bus supplied power, but often use a separate power source. This distinction is exemplified by a telephone system with a connected modem, where the RJ11 connection and associated modulated signalling scheme is not considered a bus, and is analogous to an Ethernet connection. A phone line connection scheme is not considered to be a bus with respect to signals, but the Central Office uses buses with cross-bar switches for connections between phones.
However, this distinction (that power is provided by the bus) is not the case in many avionic systems, where data connections such as ARINC 429, ARINC 629, MIL-STD-1553B (STANAG 3838), and EFABus (STANAG 3910) are commonly referred to as "data buses" or, sometimes, "databuses". Such avionic data buses are usually characterized by having several equipments or Line Replaceable Items/Units (LRI/LRUs) connected to a common, shared media. They may, as with ARINC 429, be simplex, i.e. have a single source LRI/LRU, or, as with ARINC 629, MIL-STD-1553B, and STANAG 3910, be duplex, allowing all the connected LRI/LRUs to act, at different times (half duplex), as transmitters and receivers of data.
Bus multiplexing
The simplest system bus has completely separate input data lines, output data lines, and address lines.
To reduce cost, most microcomputers have a bidirectional data bus, re-using the same wires for input and output at different times.
Some processors use a dedicated wire for each bit of the address bus, data bus, and the control bus.
For example, the 64-pin STEbus is composed of 8 physical wires dedicated to the 8-bit data bus, 20 physical wires dedicated to the 20-bit address bus, 21 physical wires dedicated to the control bus, and 15 physical wires dedicated to various power buses.
Bus multiplexing requires fewer wires, which reduces costs in many early microprocessors and DRAM chips.
One common multiplexing scheme, address multiplexing, has already been mentioned.
Another multiplexing scheme re-uses the address bus pins as the data bus pins, an approach used by conventional PCI and the 8086.
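The same idea applies between address and data: the shared lines carry the address during one phase of the cycle and the data during the next, with a control signal telling attached devices which phase is current (on the 8086 this signal is ALE, address latch enable). The C sketch below mimics such a two-phase write transaction; the bus variable and the latch helper are hypothetical.

/* Sketch of address/data multiplexing: the same lines carry the address
 * in phase 1 and the data in phase 2, qualified by an address-latch
 * signal.  bus_lines and set_address_latch() are hypothetical. */
#include <stdint.h>
#include <stdio.h>

static uint32_t bus_lines;                       /* shared AD[31:0] lines */

static void set_address_latch(int on)
{
    puts(on ? "latch high: lines carry the address"
            : "latch low:  lines carry the data");
}

static void write_cycle(uint32_t addr, uint32_t data)
{
    set_address_latch(1);
    bus_lines = addr;                            /* phase 1: address */
    printf("  AD = 0x%08x\n", (unsigned)bus_lines);

    set_address_latch(0);
    bus_lines = data;                            /* phase 2: data    */
    printf("  AD = 0x%08x\n", (unsigned)bus_lines);
}

int main(void)
{
    write_cycle(0x000B8000u, 0x41u);             /* one example transaction */
    return 0;
}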
The various "serial buses" can be seen as the ultimate limit of multiplexing, sending each of the address bits and each of the data bits, one at a time, through a single pin (or a single differential pair).
History
Over time, several groups of people worked on various computer bus standards, including the IEEE Bus Architecture Standards Committee (BASC), the IEEE "Superbus" study group, the open microprocessor initiative (OMI), the open microsystems initiative (OMI), the "Gang of Nine" that developed EISA, etc.
First generation
Early computer buses were bundles of wire that attached computer memory and peripherals. Anecdotally termed the "digit trunk", they were named after electrical power buses, or busbars. Almost always, there was one bus for memory, and one or more separate buses for peripherals. These were accessed by separate instructions, with completely different timings and protocols.
One of the first complications was the use of interrupts. Early computer programs performed I/O by waiting in a loop for the peripheral to become ready. This was a waste of time for programs that had other tasks to do. Also, if the program attempted to perform those other tasks, it might take too long for the program to check again, resulting in loss of data. Engineers thus arranged for the peripherals to interrupt the CPU. The interrupts had to be prioritized, because the CPU can only execute code for one peripheral at a time, and some devices are more time-critical than others.
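The contrast can be seen in a few lines of code: a polled transfer spins on a status flag and the CPU can do nothing else, while an interrupt-driven transfer lets the CPU run other work until the device signals. The C sketch below illustrates both; the device registers and the interrupt hookup are hypothetical.

/* Polling versus interrupt-driven I/O, in miniature.  The "registers"
 * here are hypothetical memory-mapped locations, for illustration only. */
#include <stdint.h>

#define DEV_STATUS_READY 0x01u

static volatile uint32_t dev_status;   /* stand-in for a device status register */
static volatile uint32_t dev_data;     /* stand-in for a device data register   */
static volatile int data_arrived;      /* flag set by the interrupt handler     */

/* Busy-wait (polling): the CPU does nothing useful while it loops. */
static uint32_t read_polled(void)
{
    while ((dev_status & DEV_STATUS_READY) == 0)
        ;                               /* spin until the device is ready */
    return dev_data;
}

/* Interrupt-driven: the handler runs only when the device signals. */
static void device_interrupt_handler(void)   /* wired up by hypothetical setup */
{
    data_arrived = 1;                   /* note the event and return quickly */
}

int main(void)
{
    (void)read_polled;                  /* referenced only to keep the sketch whole */
    (void)device_interrupt_handler;
    for (int i = 0; i < 1000; i++) {    /* bounded foreground loop for the sketch   */
        /* ... useful foreground work happens here ... */
        if (data_arrived) {
            data_arrived = 0;
            (void)dev_data;             /* consume the newly arrived data */
        }
    }
    return 0;
}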
High-end systems introduced the idea of channel controllers, which were essentially small computers dedicated to handling the input and output of a given bus. IBM introduced these on the IBM 709 in 1958, and they became a common feature of their platforms. Other high-performance vendors like Control Data Corporation implemented similar designs. Generally, the channel controllers would do their best to run all of the bus operations internally, moving data when the CPU was known to be busy elsewhere if possible, and only using interrupts when necessary. This greatly reduced CPU load, and provided better overall system performance.
To provide modularity, memory and I/O buses can be combined into a unified system bus. In this case, a single mechanical and electrical system can be used to connect together many of the system components, or in some cases, all of them.
Later computer programs began to share memory common to several CPUs. Access to this memory bus had to be prioritized, as well. The simple way to prioritize interrupts or bus access was with a daisy chain. In this case signals will naturally flow through the bus in physical or logical order, eliminating the need for complex scheduling.
Minis and micros
Digital Equipment Corporation (DEC) further reduced cost for mass-produced minicomputers, and mapped peripherals into the memory bus, so that the input and output devices appeared to be memory locations. This was implemented in the Unibus of the PDP-11 around 1969.
Early microcomputer bus systems were essentially a passive backplane connected directly or through buffer amplifiers to the pins of the CPU. Memory and other devices would be added to the bus using the same address and data pins as the CPU itself used, connected in parallel. Communication was controlled by the CPU, which read and wrote data from the devices as if they were blocks of memory, using the same instructions, all timed by a central clock controlling the speed of the CPU. Still, devices interrupted the CPU by signaling on separate CPU pins.
For instance, a disk drive controller would signal the CPU that new data was ready to be read, at which point the CPU would move the data by reading the "memory location" that corresponded to the disk drive. Almost all early microcomputers were built in this fashion, starting with the S-100 bus in the Altair 8800 computer system.
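The following C sketch illustrates that memory-mapped style of access. It is freestanding, embedded-style code: the base address 0xFFFF8000 and the two-register layout are invented for this example and do not describe any particular controller, and the routine would only work on hardware that actually decodes those addresses.

#include <stdint.h>

/* Hypothetical disk-controller registers mapped into the address space.
 * The base address and register layout are invented for illustration;
 * real hardware defines its own map. */
struct disk_ctrl {
    volatile uint8_t status;   /* bit 0: data ready             */
    volatile uint8_t data;     /* next byte read from the drive */
};

#define DISK_CTRL ((struct disk_ctrl *)0xFFFF8000u)

/* Ordinary load instructions reach the device because it responds to
 * "memory" addresses on the shared bus. */
uint8_t disk_read_byte(void)
{
    while ((DISK_CTRL->status & 0x01u) == 0)
        ;                       /* spin until the controller signals ready  */
    return DISK_CTRL->data;     /* reading the "memory location" moves data */
}

In a machine with a separate I/O address space, as described next, the same polling loop would instead use dedicated port-input instructions rather than plain loads.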
In some instances, most notably in the IBM PC, although a similar physical architecture is employed, the instructions for accessing peripherals (in and out) and memory (mov and others) were not made uniform; they generate distinct CPU signals that can be used to implement a separate I/O bus.
These simple bus systems had a serious drawback when used for general-purpose computers. All the equipment on the bus had to talk at the same speed, as it shared a single clock.
Increasing the speed of the CPU becomes harder, because the speed of all the devices must increase as well. When it is not practical or economical to have all devices as fast as the CPU, the CPU must either enter a wait state, or work at a slower clock frequency temporarily, to talk to other devices in the computer. While acceptable in embedded systems, this problem was not tolerated for long in general-purpose, user-expandable computers.
Such bus systems are also difficult to configure when constructed from common off-the-shelf equipment. Typically each added expansion card requires many jumpers in order to set memory addresses, I/O addresses, interrupt priorities, and interrupt numbers.
Second generation
"Second generation" bus systems like NuBus addressed some of these problems. They typically separated the computer into two "worlds", the CPU and memory on one side, and the various devices on the other. A bus controller accepted data from the CPU side to be moved to the peripherals side, thus shifting the communications protocol burden from the CPU itself. This allowed the CPU and memory side to evolve separately from the device bus, or just "bus". Devices on the bus could talk to each other with no CPU intervention. This led to much better "real world" performance, but also required the cards to be much more complex. These buses also often addressed speed issues by being "bigger" in terms of the size of the data path, moving from 8-bit parallel buses in the first generation, to 16 or 32-bit in the second, as well as adding software setup (now standardised as Plug-n-play) to supplant or replace the jumpers.
However, these newer systems shared one quality with their earlier cousins, in that everyone on the bus had to talk at the same speed. While the CPU was now isolated and could increase speed, CPUs and memory continued to increase in speed much faster than the buses they talked to. The result was that the bus speeds were now very much slower than what a modern system needed, and the machines were left starved for data. A particularly common example of this problem was that video cards quickly outran even the newer bus systems like PCI, and computers began to include AGP just to drive the video card. By 2004, AGP had been outgrown again by high-end video cards and other peripherals and was replaced by the newer PCI Express bus.
An increasing number of external devices started employing their own bus systems as well. When disk drives were first introduced, they would be added to the machine with a card plugged into the bus, which is why computers have so many slots on the bus. But through the 1980s and 1990s, new systems like SCSI and IDE were introduced to serve this need, leaving most slots in modern systems empty. Today there are likely to be about five different buses in the typical machine, supporting various devices.
Third generation
"Third generation" buses have been emerging into the market since about 2001, including HyperTransport and InfiniBand. They also tend to be very flexible in terms of their physical connections, allowing them to be used both as internal buses, as well as connecting different machines together. This can lead to complex problems when trying to service different requests, so much of the work on these systems concerns software design, as opposed to the hardware itself. In general, these third generation buses tend to look more like a network than the original concept of a bus, with a higher protocol overhead needed than early systems, while also allowing multiple devices to use the bus at once.
Buses such as Wishbone have been developed by the open source hardware movement in an attempt to further remove legal and patent constraints from computer design.
The Compute Express Link (CXL) is an open standard interconnect for high-speed CPU-to-device and CPU-to-memory, designed to accelerate next-generation data center performance.
Examples of internal computer buses
Parallel
ASUS Media Bus proprietary, used on some ASUS Socket 7 motherboards
Computer Automated Measurement and Control (CAMAC) for instrumentation systems
Extended ISA or EISA
Industry Standard Architecture or ISA
Low Pin Count or LPC
MBus
MicroChannel or MCA
Multibus for industrial systems
NuBus or IEEE 1196
OPTi local bus used on early Intel 80486 motherboards.
Conventional PCI
Parallel ATA (also known as Advanced Technology Attachment, ATA, PATA, IDE, EIDE, ATAPI, etc.), Hard disk drive, optical disk drive, tape drive peripheral attachment bus
S-100 bus or IEEE 696, used in the Altair 8800 and similar microcomputers
SBus or IEEE 1496
SS-50 Bus
Runway bus, a proprietary front side CPU bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family
GSC/HSC, a proprietary peripheral bus developed by Hewlett-Packard for use by its PA-RISC microprocessor family
Precision Bus, a proprietary bus developed by Hewlett-Packard for use by its HP3000 computer family
STEbus
STD Bus (for STD-80 [8-bit] and STD32 [16-/32-bit]), FAQ
Unibus, a proprietary bus developed by Digital Equipment Corporation for their PDP-11 and early VAX computers.
Q-Bus, a proprietary bus developed by Digital Equipment Corporation for their PDP and later VAX computers.
VESA Local Bus or VLB or VL-bus
VMEbus, the VERSAmodule Eurocard bus
PC/104
PC/104-Plus
PCI-104
PCI/104-Express
PCI/104
Zorro II and Zorro III, used in Amiga computer systems
Serial
1-Wire
HyperTransport
I²C
I3C (bus)
SLIMbus
PCI Express or PCIe
Serial ATA (SATA), Hard disk drive, solid state drive, optical disc drive, tape drive peripheral attachment bus
Serial Peripheral Interface (SPI) bus
UNI/O
SMBus
Examples of external computer buses
Parallel
HIPPI High Performance Parallel Interface
IEEE-488 (also known as GPIB, General-Purpose Interface Bus, and HPIB, Hewlett-Packard Instrumentation Bus)
PC Card, previously known as PCMCIA, much used in laptop computers and other portables, but fading with the introduction of USB and built-in network and modem connections
Serial
Camera Link
CAN bus ("Controller Area Network")
eSATA
ExpressCard
Fieldbus
IEEE 1394 interface (FireWire)
RS-232
RS-485
Thunderbolt
USB
Examples of internal/external computer buses
Futurebus
InfiniBand
PCI Express External Cabling
QuickRing
Scalable Coherent Interface (SCI)
Small Computer System Interface (SCSI), Hard disk drive and tape drive peripheral attachment bus
Serial Attached SCSI (SAS) and other serial SCSI buses
Thunderbolt
Yapbus, a proprietary bus developed for the Pixar Image Computer
See also
Address decoder
Bus contention
Bus error
Bus mastering
Communication endpoint
Control bus
Crossbar switch
Memory address
Front-side bus (FSB)
External Bus Interface (EBI)
Harvard architecture
Master/slave (technology)
Network On Chip
List of device bandwidths
List of network buses
Software bus
References
External links
Computer hardware buses and slots pinouts with brief descriptions
Digital electronics
Motherboard
Communication interfaces
Vanilla software
In computer science, vanilla is the term used when computer software, and sometimes also other computing-related systems such as computer hardware or algorithms, are not customized from their original form, i.e., they are used without any customizations or updates applied to them. Vanilla software is widely used by both businesses and individuals. The term comes from the traditional standard flavor of ice cream, vanilla. According to Eric S. Raymond's The New Hacker's Dictionary, "vanilla" means more "default" than "ordinary".
Examples of the term's use include:
In one of the earliest examples, IBM's mainframe text publishing system BookMaster provides a default way to specify which parts of a book to publish, called "vanilla", and a fancier way, called "mocha".
The term "vanilla" is sometimes also used for hardware components. For instance, in the 1990s non-upgraded Amiga home computers were called "(plain) vanilla"; similarly, it was later also applied to PC parts.
For Unix-based kernels, a "vanilla kernel" refers to a kernel that has not been modified by any third party. For instance, Linux distributions often heavily modify the vanilla Linux kernel to give it a distribution-specific "flavour".
In his book End of Ignorance, Charles Winborne refers to a static page that is "only a text file, but one that links to accompanying files" as a plain-vanilla web page.
Fans of the video game Minecraft usually refer to the game without mods as "vanilla".
JavaScript, when used without any libraries or third-party plugins, is referred to as "vanilla JavaScript".
See also
Commercial off-the-shelf
Mod (video games)
Out of the box (feature)
Plain vanilla
Turnkey
References
Computing terminology
Single-level store
Single-level storage (SLS) or single-level memory is a computer storage term that has had two meanings. The two meanings are related: in both, pages of memory may reside in primary storage (RAM) or in secondary storage (disk), and the physical location of a page is unimportant to a process.
The term originally referred to what is now usually called virtual memory, which was introduced in 1962 by the Atlas system at Manchester.
In modern usage, the term usually refers to the organization of a computing system in which there are no files, only persistent objects (sometimes called segments), which are mapped into processes' address spaces (which consist entirely of a collection of mapped objects). The entire storage of the computer is thought of as a single two-dimensional plane of addresses (segment, and address within segment).
The persistent object concept was first introduced by Multics in the mid-1960s, in a project shared by MIT, General Electric and Bell Labs. It also was implemented as virtual memory, with the actual physical implementation including a number of levels of storage types. (Multics, for instance, had three levels: originally, main memory, a high-speed drum, and disks.)
IBM holds patents to single-level storage as implemented in the IBM i operating system on IBM Power Systems and its predecessors as far back as the System/38 that was released in 1978.
Design
With a single-level storage the entire storage of a computer is thought of as a single two-dimensional plane of addresses, pointing to pages. Pages may be in primary storage (RAM) or in secondary storage (disk); however, the current location of an address is unimportant to a process. The operating system takes on the responsibility of locating pages and making them available for processing. If a page is in primary storage, it is immediately available. If a page is on disk, a page fault occurs and the operating system brings the page into primary storage. No explicit I/O to secondary storage is done by processes: instead, reads from secondary storage are done as the result of page faults; writes to secondary storage are done when pages that have been modified since being read from secondary storage into primary storage are written back to their location in secondary storage.
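For readers more familiar with conventional operating systems, a loose analogy (not IBM i's actual mechanism) is a memory-mapped file on a Unix-like system: the C sketch below maps a file with mmap, after which ordinary loads and stores reach the file's bytes and the kernel services any page faults, with no explicit read or write calls in the program.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Rough analogy only: map a file so its bytes are reached through ordinary
 * pointers; the kernel pages data in from disk on first touch and writes
 * dirty pages back, with no explicit read()/write() in the program. */
int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = '#';                 /* a plain store; any page fault needed here
                                   is handled by the operating system       */
    printf("first byte now: %c\n", p[0]);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}

The analogy is limited: in a true single-level store this behaviour applies to all storage and all objects, not just to files a program chooses to map.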
System/38 and IBM i design
IBM's design of the single-level storage was originally conceived and pioneered by Frank Soltis and Glenn Henry in the late 1970s as a way to build a transitional implementation to computers with 100% solid state memory. The thinking at the time was that disk drives would become obsolete, and would be replaced entirely with some form of solid state memory. System/38 was designed to be independent of the form of hardware memory used for secondary storage. This has not come to pass, however: while solid state memory has become exponentially cheaper, disk drives have become cheaper at a similar rate, so the price ratio in favour of disk drives persists; they offer much higher capacities than solid state memory and are much less expensive, though much slower to access.
In IBM i, the operating system believes it has access to an almost unlimited storage array of 'real memory' (i.e., primary storage). An address translator maps the available real memory to physical memory, residing on disk drives (either 'spinning' or solid-state), or on a SAN server (such as the V7000). The operating system simply places an object at an address in its memory space. The OS "doesn't know" (or care) if the object is physically in memory or on a slower disk-storage device. The Licensed Internal Code, atop which the OS runs, handles page faults on object pages not in physical memory, reading the page into an available page frame in primary storage.
With the IBM i implementation of single-level storage, page faults are divided into two categories. These are database faults and non-database faults. Database faults occur when a page associated with a relational database object like a table, view or index is not currently in primary storage. Non-database faults occur when any other type of object is not currently in primary storage.
IBM i treats all secondary storage as a single pool of data, rather than as a collection of multiple pools (file systems), as is usually done on other operating systems such as Unix-like systems and Microsoft Windows. It intentionally scatters the pages of all objects across all disks so that the objects can be stored and retrieved much more rapidly. As a result, an IBM i server rarely becomes disk bound. Single-level storage operating systems also allow CPU, memory and disk resources to be freely substituted for each other at run time to smooth out performance bottlenecks.
See also
System/38
IBM i
Extremely Reliable Operating System
Memory-mapped file
References
IBM i
AS/400
Ultra 1
The Ultra 1 is a family of Sun Microsystems workstations based on the 64-bit UltraSPARC microprocessor. It was the first model in the Ultra series of Sun computers, which succeeded the SPARCstation series. It launched in November 1995 alongside the MP-capable Ultra 2 and shipped with Solaris 2.5. It is capable of running other operating systems such as Linux and BSD.
Specifications
The Ultra 1 was available in a variety of specifications. The Ultra 1 Creator3D 170E launched with a list price of US$27,995, alongside the Ultra 1 Model 140 and the Ultra 1 Creator 170E.
CPU
Three different CPU speeds were available: 143 MHz (Model 140), 167 MHz (Model 170) and 200 MHz (Model 200).
Models
Model numbers with an E suffix (Sun service code A12, code-named Electron) had two instead of three SBus slots, and added a UPA slot to allow the use of an optional Creator framebuffer. In addition, the E models had Wide SCSI and Fast Ethernet interfaces, in place of the narrow SCSI and 10Base-T Ethernet of the standard Ultra 1 (service code A11, code-named Neutron).
Memory
The Ultra 1 uses 200-pin 5V ECC 60ns SIMMs in pairs, the same memory used in the SPARCstation 20.
Similar machines
Similar Sun machines were the Netra i 1 servers, which used the same chassis, and the UltraServer 1/Ultra Enterprise 1 servers.
See also
Ultra series
References
External links
Ultra 1 Series Reference Manual
Ultra 1 Series Service Manual
Ultra 1 Creator Series Reference Manual
Ultra 1 Creator Series Service Manual
Workstations Product Library Documentation
Sun workstations
SPARC microprocessor products
IBM 700/7000 series
The IBM 700/7000 series is a series of large-scale (mainframe) computer systems that were made by IBM through the 1950s and early 1960s. The series includes several different, incompatible processor architectures. The 700s use vacuum-tube logic and were made obsolete by the introduction of the transistorized 7000s. The 7000s, in turn, were eventually replaced with System/360, which was announced in 1964. However the 360/65, the first 360 powerful enough to replace 7000s, did not become available until November 1965. Early problems with OS/360 and the high cost of converting software kept many 7000s in service for years afterward.
Architectures
The IBM 700/7000 series has six completely different ways of storing data and instructions:
First scientific (36/18-bit words): 701 (Defense Calculator)
Later scientific (36-bit words, hardware floating-point): 704, 709, 7040, 7044, 7090, 7094
Commercial (variable-length character strings): 702, 705, 7080
1400 series (variable-length character strings): 7010
Decimal (10-digit words): 7070, 7072, 7074
Supercomputer (64-bit words): 7030 "Stretch"
The 700 class uses vacuum tubes; the 7000 class is transistorized. All machines (like most other computers of the time) use magnetic core memory, except for early 701 and 702 models, which initially used Williams tube CRT memory and were later converted to magnetic core memory.
Software compatibility issues
Early computers were sold without software. As operating systems began to emerge, having four different mainframe architectures plus the 1400 midline architectures became a major problem for IBM since it meant at least four different programming efforts were required.
The System/360 combines the best features of the 7000 and 1400 series architectures into a single design both for commercial computing and for scientific and engineering computing. However, its architecture is not compatible with those of the 7000 and 1400 series, so some 360 models have optional features that allow them to emulate the 1400 and 7000 instruction sets in microcode. One of the selling points of the System/370, the successor of the 360 introduced in mid-1970, was improved 1400/7000 series emulation, which could be done under operating system control rather than shutting down and restarting in emulation mode as was required for emulation of 7040/44, 7070/72/74, 7080 and 7090/94 on most of the 360s.
Peripherals
While the architectures differ, the machines in the same class use the same electronics technologies and generally use the same peripherals. Tape drives generally use 7-track format, with the IBM 727 for vacuum tube machines and the 729 for transistor machines. Both the vacuum tube and most transistor models use the same card readers, card punches, and line printers that were introduced with the 701. These units, the IBM 711, 721, and 716, are based on IBM accounting machine technology and even include plugboard control panels. They are relatively slow and it was common for 7000 series installations to include an IBM 1401, with its much faster peripherals, to do card-to-tape and tape-to-line-printer operations off-line. Three later machines, the 7010, the 7040 and the 7044, adopted peripherals from the midline IBM 1400 series. Some of the technology for the 7030 was used in data channels and peripheral devices on other 7000 series computers, e.g., 7340 Hypertape.
First scientific architecture (701)
Known as the Defense Calculator while in development in the IBM Poughkeepsie Laboratory, this machine was formally unveiled April 7, 1953 as the IBM 701 Electronic Data Processing Machine.
Data formats
Numbers are either 36 bits or 18 bits long, only fixed point.
Fixed-point numbers are stored in binary sign/magnitude format.
Instruction format
Instructions are 18 bits long, single address.
Sign (1 bit) – Whole-word (-) or Half-word (+) operand address
Opcode (5 bits) – 32 instructions
Address (12 bits) – 4096 Half-word addresses
To expand the memory from 2048 to 4096 words, a 33rd instruction was added that uses the most-significant bit of its address field to select the bank. (This instruction was probably created using the "No OP" instruction, which appears to have been the only instruction with unused bits, as it originally ignored its address field. However, documentation on this new instruction is not currently available.)
Registers
Processor registers consisted of:
AC – 38-bit Accumulator
MQ – 36-bit Multiplier-Quotient
Memory
2,048 or 4,096 – 36-bit binary words with six-bit characters
Later scientific architecture (704/709/7090/7094)
IBM's 36-bit scientific architecture was used for a variety of computation-intensive applications. The first machines were the vacuum-tube 704 and 709, followed by the transistorized 7090, 7094, 7094-II, and the lower-cost 7040 and 7044. The ultimate model was the Direct Coupled System (DCS) consisting of a 7094 linked to a 7044 that handled input and output operations.
Data formats
Numbers are 36 bits long, both fixed point and floating point.
Fixed-point numbers are stored in binary sign/magnitude format.
Single-precision floating-point numbers have a magnitude sign, an 8-bit excess-128 exponent and a 27-bit magnitude
Double-precision floating-point numbers, introduced on the 7094, have a magnitude sign, a 17-bit excess-65536 exponent, and a 54-bit magnitude
Alphameric characters are 6-bit BCD, packed six to a word.
Instruction format
The basic instruction format is a three-bit prefix, fifteen-bit decrement, three-bit tag, and fifteen-bit address. The prefix field specifies the class of instruction. The decrement field often contains an immediate operand to modify the results of the operation, or is used to further define the instruction type. The three bits of the tag specify three (seven in the 7094) index registers, the contents of which are subtracted from the address to produce an effective address. The address field either contains an address or an immediate operand.
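A minimal C sketch of this decoding follows, assuming the field layout just described (3-bit prefix, 15-bit decrement, 3-bit tag, 15-bit address) held in the low 36 bits of a 64-bit integer. It treats the tag as a simple register number, as on the seven-index-register 7094, and ignores the OR-ing of multiple selected registers on three-register machines; the example word and register values are arbitrary.

#include <stdint.h>
#include <stdio.h>

/* Decode the basic 36-bit format: prefix, decrement, tag, address. */
struct insn {
    unsigned prefix;     /* 3 bits  : class of instruction        */
    unsigned decrement;  /* 15 bits : immediate modifier / subtype */
    unsigned tag;        /* 3 bits  : index-register selector      */
    unsigned address;    /* 15 bits : address or immediate operand */
};

static struct insn decode(uint64_t word)
{
    struct insn d;
    d.prefix    = (word >> 33) & 07;
    d.decrement = (word >> 18) & 077777;
    d.tag       = (word >> 15) & 07;
    d.address   =  word        & 077777;
    return d;
}

/* Effective address: the selected index register's contents are SUBTRACTED
 * from the address field (modulo the 15-bit address space). */
static unsigned effective_address(const struct insn *d, const unsigned xr[8])
{
    unsigned ea = d->address;
    if (d->tag)
        ea = (ea - xr[d->tag]) & 077777;
    return ea;
}

int main(void)
{
    unsigned xr[8] = {0};
    xr[1] = 5;                              /* pretend index register 1 holds 5 */
    struct insn d = decode(0500001100100);  /* arbitrary 36-bit word, in octal  */
    printf("prefix=%o tag=%o addr=%05o ea=%05o\n",
           d.prefix, d.tag, d.address, effective_address(&d, xr));
    return 0;
}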
Registers
Processor registers consisted of:
AC – 38-bit Accumulator
MQ – 36-bit Multiplier-Quotient
XR – 15-bit Index Registers (three or seven)
SI – 36-bit Sense Indicator
The accumulator (and multiplier-quotient) registers operate in sign/magnitude format.
The index registers use two's complement format; when used to modify an instruction address, their contents are subtracted from the address in the instruction. On machines with three index registers, if the tag has two or three bits set (i.e., selecting multiple registers), their values are ORed together before being subtracted. The IBM 7094, with seven index registers, has a "compatibility" mode that permits programs from earlier machines that relied on this behaviour to continue to be used.
The Sense Indicators permit interaction with the operator via panel switches and lights.
Memory
704: 4,096 or 8,192 or 32,768 – 36-bit binary words with six-bit characters
709, 7090, 7094, 7094 II, 7040, 7044: 32,768 – 36-bit binary words with six-bit characters
Input/output
The 709/7090 series use Data Synchronizer Channels for high-speed input/output, such as tape and disk. The basic 7-bit DSCs, e.g., the 7607, execute their own simple programs from the computer memory that control the transfer of data between memory and the I/O devices; the more advanced 9-bit 7909 supports more sophisticated channel programs. Because the unit record equipment on the 709x was so slow, punched card I/O and high-speed printing were often performed by transferring magnetic tapes to and from an off-line IBM 1401. Later, the data channels were used to connect a 7090 to a 7040 or a 7094 to a 7044 to form the IBM 7094/7044 Direct Coupled System (DCS). In that configuration, the 7044, which could use faster 1400 series peripherals, primarily handled I/O.
FORTRAN assembly program
The FORTRAN Assembly Program (FAP) is an assembler for the 709, 7090, and 7094 under IBM's makeshift FMS (Fortran Monitor System) and IBSYS operating systems. An earlier assembler was SCAT (SHARE Compiler-Assembler-Translator). Macros were added to FAP by Bell Laboratories (BE-FAP), and the final 7090/7094 assembler was IBMAP, under IBSYS/IBJOB.
Its pseudo-operation BSS, used to reserve memory, is the origin of the common name of the "BSS section", still used in many assembly languages today for designating reserved memory address ranges of the type not having to be saved in the executable image.
Commercial architecture (702/705/7080)
The IBM 702 and IBM 705 are similar, and the 705 can run many 702 programs without modification, but they are not completely compatible.
The IBM 7080 is a transistorized version of the 705, with various improvements. For backward compatibility it can be run in 705 I mode, 705 II mode, 705 III mode, or full 7080 mode.
Data format
Data is represented by a variable-length string of characters terminated by a Record mark.
Instruction format
Five characters: one character opcode and four character address – OAAAA
Registers
702
two Accumulators (A & B) – 512 characters
705
one Accumulator – 256 characters
14 auxiliary storage units – 16 characters
one auxiliary storage unit – 32 characters
7080
one Accumulator – 256 characters
30 auxiliary storage units – 512 characters
32 communication storage units – 8 characters
Memory
702
2,000 to 10,000 characters in Williams tubes (in increments of 2,000 characters)
Character cycle rate – 23 microseconds
705 (models I, II, or III)
20,000 or 40,000 or 80,000 characters of core memory
Character cycle rate – 17 microseconds or 9.8 microseconds
7080
80,000 or 160,000 characters of Core memory
Character cycle rate – 2.18 microseconds
Input/output
The 705 and the basic 7080 use channels with a 7-bit interface. The 7080 can be equipped with 7908 data channels to attach faster devices using a 9-bit interface.
1400 series architecture (7010)
The 700/7000 commercial architecture inspired the very successful IBM 1400 series of mid-sized business computers. In turn, IBM later introduced a mainframe version of the IBM 1410 called the IBM 7010.
Data format
Data is represented by a variable length string of characters terminated by a word mark.
Instruction format
Variable length: 1, 2, 6, 7, 11, or 12 characters.
Registers
None, all instructions operated on memory. However, fifteen five-character fields in fixed locations in low memory can be treated as index registers, whose values can be added to the address specified in an instruction. Also, certain internal registers that would today be invisible, such as the addresses of the characters being currently processed, are exposed to the programmer.
Memory
100,000 characters
Decimal architecture (7070/7072/7074)
The IBM 7070, IBM 7072, and IBM 7074 are decimal, fixed-word-length machines. They use a ten-digit word like the smaller and older IBM 650, but are not instruction set compatible with the 650.
Data format
Word length – 10 decimal digits plus sign
Digit encoding – two-out-of-five code
Floating point – optional, with a two-digit exponent
Three signs for each word – Plus, Minus, and Alpha
Plus and Minus indicate 10-digit numeric values
Alpha indicates five characters of text coded by pairs of digits. 61 = A, 91 = 1.
Instruction format
All instructions use one word
Two-digit opcode (including sign, Plus or Minus only)
Two-digit index register
Two-digit field control – allows selecting sets of digits, shifting left or right
Four-digit address
Registers
All registers use one word and can also be addressed as memory.
Accumulators – three (addresses 9991, 9992, and 9993 – standard; 99991, 99992, and 99993 – extended 7074)
Program register – one (address 9995 – standard; 99995 – extended 7074)
Addressable from console only. Stores current instruction.
Instruction counter – one (address 9999 – standard; 99999 – extended 7074)
Addressable from console only
Index registers – 99 (addresses 0001-0099)
Memory
5000 to 9990 words (standard)
15000 to 30000 words (extended 7074)
Access time – 6 microseconds (7070/7072), 4 microseconds (7074)
Add time – 72 microseconds (7070), 12 microseconds (7072), 10 microseconds (7074)
Input/output
The 707x uses channels with a 7-bit interface. The 7070 and 7074 can be equipped with 7907 data channels to attach faster devices using a 9-bit interface.
Timeline
An IBM 7074 was used by the U.S. Internal Revenue Service in 1962.
The IBM 7700 Data Acquisition System is not a member of the IBM 7000 series, despite its number and its announcement date of December 2, 1963.
Performance
All of the 700 and 7000 series machines predate standard performance measurement tools such as the Whetstone (1972), Dhrystone (1984), LINPACK (1979), or Livermore loops (1986) benchmarks.
The Gibson and Knight measurements report speed, where higher numbers are better; the TRIDIA measurement reports time, where lower numbers are better.
See also
IBM 650
Notes
References
External links
IBM Mainframe family tree
The Architecture of IBM's Early Computers (PDF)
C Gordon Bell, Computer Structures: Readings and Examples, McGraw-Hill, 1971; part 6, section 1, "The IBM 701-7094 II Sequence, a Family by Evolution",
IBM 705
IBM 7030 Stretch
IBM 7070
IBM 7094
IBM 7090/94 Architecture
Jack Harper's FAP page
Birth of an Unwanted IBM Computer, by Bob Bemer
Series
Instruction window
An instruction window in computer architecture refers to the set of instructions which can execute out-of-order in a speculative processor.
In particular, in a conventional design, the instruction window consists of all instructions which are in the re-order buffer (ROB). In such a processor, any instruction within the instruction window can be executed when its operands are ready. Out-of-order processors derive their name because this may occur out-of-order (if operands to a younger instruction are ready before those of an older instruction).
The instruction window has a finite size, and new instructions can enter the window (usually called dispatch or allocate) only when other instructions leave the window (usually called retire or commit). Instructions enter and leave the instruction window in program order, and an instruction can only leave the window when it is the oldest instruction in the window and it has been completed. Hence, the instruction window can be seen as a sliding window in which the instructions can become out-of-order. All execution within the window is speculative (i.e., side-effects are not applied outside the CPU) until it is committed in order to support asynchronous exception handling like interrupts.
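The following C sketch models that sliding window as a small circular re-order buffer: entries are dispatched at the tail in program order, may be marked complete in any order, and retire only from the head. It is purely illustrative and omits operands, register renaming and speculation recovery; the names and sizes are invented.

#include <stdbool.h>
#include <stdio.h>

/* Minimal model of an instruction window held in a re-order buffer (ROB):
 * dispatch fills entries in program order at the tail, execution may mark
 * entries complete in any order, and retire drains only from the head, so
 * results become architecturally visible in program order. */
#define ROB_SIZE 8

struct rob_entry { int insn_id; bool completed; };

static struct rob_entry rob[ROB_SIZE];
static int head, tail, count;          /* head = oldest, tail = next free */

static bool dispatch(int insn_id)      /* in-order entry into the window  */
{
    if (count == ROB_SIZE) return false;        /* window full: stall     */
    rob[tail] = (struct rob_entry){ insn_id, false };
    tail = (tail + 1) % ROB_SIZE;
    count++;
    return true;
}

static void complete(int insn_id)      /* may happen out of program order */
{
    for (int i = 0, idx = head; i < count; i++, idx = (idx + 1) % ROB_SIZE)
        if (rob[idx].insn_id == insn_id) { rob[idx].completed = true; return; }
}

static void retire(void)               /* in-order exit from the window   */
{
    while (count > 0 && rob[head].completed) {
        printf("retired instruction %d\n", rob[head].insn_id);
        head = (head + 1) % ROB_SIZE;
        count--;
    }
}

int main(void)
{
    for (int i = 1; i <= 4; i++) dispatch(i);
    complete(3);             /* completes early ...                       */
    retire();                /* ... but nothing retires until 1 completes */
    complete(1); complete(2);
    retire();                /* now 1, 2 and 3 retire, in program order   */
    return 0;
}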
This paradigm is also known as restricted dataflow because instructions within the window execute in dataflow order (not necessarily in program order) but the window in which this occurs is restricted (of finite size).
The instruction window is distinct from pipelining: instructions in an in-order pipeline are not in an instruction window in the conventionally understood sense, because they cannot execute out of order with respect to one another. Out-of-order processors are usually built around pipelines, but many of the pipeline stages (e.g., front-end instruction fetch and decode stages) are not considered to be part of the instruction window.
See also
Superscalar processor
References
Computer architecture
Instruction processing
Z/Architecture
z/Architecture, initially and briefly called ESA Modal Extensions (ESAME), is IBM's 64-bit complex instruction set computer (CISC) instruction set architecture, implemented by its mainframe computers. IBM introduced its first z/Architecture-based system, the z900, in late 2000. Later z/Architecture systems include the IBM z800, z990, z890, System z9, System z10, zEnterprise 196, zEnterprise 114, zEC12, zBC12, z13, z14 and z15.
z/Architecture retains backward compatibility with previous 32-bit-data/31-bit-addressing architecture ESA/390 and its predecessors all the way back to the 32-bit-data/24-bit-addressing System/360. The IBM z13 is the last z Systems server to support running an operating system in ESA/390 architecture mode. However, all 24-bit and 31-bit problem-state application programs originally written to run on the ESA/390 architecture will be unaffected by this change.
Each z/OS address space, called a 64-bit address space, is 16 exabytes in size.
Code (or mixed) spaces
Most operating systems for the z/Architecture, including z/OS, generally restrict code execution to the first 2 GB (31 address bits, or 231 addressable bytes) of each virtual address space for reasons of efficiency and compatibility rather than because of architectural limits. The z/OS implementation of the Java programming language is an exception. The z/OS virtual memory implementation supports multiple 2 GB address spaces, permitting more than 2 GB of concurrently resident program code. The 64-bit version of Linux on IBM Z allows code to execute within 64-bit address ranges.
Data-only spaces
For programmers who need to store large amounts of data, the 64-bit address space usually suffices.
Dataspaces and hiperspaces
Applications that need more than a 16 exabyte data address space can employ extended addressability techniques, using additional address spaces or data-only spaces. The data-only spaces that are available for user programs are called:
dataspaces (sometimes referred to as "data spaces") and
hiperspaces (High performance space).
These spaces are similar in that both are areas of virtual storage that a program can create, and can be up to 2 gigabytes. Unlike an address space, a dataspace or hiperspace contains only user data; it does not contain system control blocks or common areas. Program code cannot run in a dataspace or a hiperspace.
A dataspace differs from a hiperspace in that dataspaces are byte-addressable, whereas hiperspaces are page-addressable.
IBM mainframe expanded storage
Traditionally IBM Mainframe memory has been byte-addressable. This kind of memory is termed "Central Storage". IBM Mainframe processors through much of the 1980s and 1990s supported another kind of memory: Expanded Storage.
Expanded Storage is addressable in 4 KB pages. When an application wants to access data in Expanded Storage, the data must first be moved into Central Storage. Similarly, data movement from Central Storage to Expanded Storage is done in multiples of 4 KB pages. Initially, page movement was performed by paging-subsystem code using relatively expensive instructions.
The overhead of moving single pages and groups of pages between Central and Expanded Storage was reduced with the introduction of the MVPG (Move Page) instruction and the ADMF (Asynchronous Data Mover Facility) capability.
The MVPG instruction and ADMF are invoked explicitly, generally by middleware in z/OS or z/VM, to access data in Expanded Storage. Notable uses include:
MVPG is used by VSAM Local Shared Resources (LSR) buffer pool management to access buffers in a hiperspace in Expanded Storage.
Both MVPG and ADMF are used by DB2 to access hiperpools. Hiperpools are portions of a buffer pool located in a hiperspace.
VM Minidisk Caching.
Until the mid-1990s, Central and Expanded Storage were physically different areas of memory on the processor. Since the mid-1990s, Central and Expanded Storage have simply been assignment choices for the underlying processor memory.
These choices were made based on specific expected uses:
For example, Expanded Storage is required for the Hiperbatch function (which uses the MVPG instruction to access its hiperspaces).
In addition to the hiperspace and paging cases mentioned above there are other uses of expanded storage, including:
Virtual I/O (VIO) to Expanded Storage which stored temporary data sets in simulated devices in Expanded Storage. (This function has been replaced by VIO in Central Storage.)
VM Minidisk Caching.
z/OS removed support for Expanded Storage; all memory in z/OS is now Central Storage. z/VM 6.4 fulfilled a Statement of Direction to drop support for all use of Expanded Storage.
MVPG and ADMF
MVPG
IBM's description of MVPG notes that it "moves a single page and the central processor cannot execute any other instructions until the page move is completed."
The MVPG mainframe instruction (MoVe PaGe, opcode X'B254') has been compared to the MVCL (MoVe Character Long) instruction, both of which can move more than 256 bytes within main memory using a single instruction. These instructions do not comply with definitions for atomicity, although they can be used as a single instruction within documented timing and non-overlap restrictions.
The need to move more than 256 bytes within main memory had historically been addressed with software (MVC loops). MVCL, which was introduced with the 1970 announcement of the System/370, and MVPG, patented and announced by IBM in 1989, each have advantages.
ADMF
ADMF (Asynchronous Data Mover Facility), which was introduced in 1992, goes beyond the capabilities of the MVPG (Move Page) instruction, which is limited to a single page, and can move groups of pages between Central and Expanded Storage.
A macro instruction named IOSADMF, which has been described as an API that avoids "direct, low-level use of ADMF," can be used to read or write data to or from a hiperspace. Hiperspaces are created using DSPSERV CREATE.
To provide reentrancy, IOSADMF is used together with a "List form" and "Execute form."
z/Architecture operating systems
The z/VSE Version 4, z/TPF Version 1 and z/VM Version 5 operating systems, and presumably their successors, require z/Architecture.
z/Architecture supports running multiple concurrent operating systems and applications even if they use different address sizes. This allows software developers to choose the address size that is most advantageous for their applications and data structures.
Platform Solutions Inc. (PSI) previously marketed Itanium-based servers which were compatible with z/Architecture. IBM bought PSI in July 2008, and the PSI systems are no longer available. FLEX-ES, zPDT and the Hercules emulator also implement z/Architecture. Hitachi mainframes running newer releases of the VOS3 operating system implement ESA/390 plus Hitachi-unique CPU instructions, including a few 64-bit instructions. While Hitachi was likely inspired by z/Architecture, and formally collaborated with IBM on the z900-G2/z800 CPUs introduced in 2002, Hitachi's machines are not z/Architecture-compatible.
On July 7, 2009, while announcing a new version of one of its operating systems, IBM implicitly stated that Architecture Level Set 4 (ALS 4) exists and is implemented on the System z10 and subsequent machines. ALS 4 is also specified in LOADxx as ARCHLVL 3, whereas the earlier z900, z800, z990, z890 and System z9 specified ARCHLVL 2. Earlier announcements of the System z10 simply specified that it implements z/Architecture with some additions: 50+ new machine instructions, 1 MB page frames, and a hardware decimal floating point unit (HDFU).
Notes
References
Further reading
Preshing on Programming - Atomic vs. Non-Atomic Operations
Principles of Computer Design - Atomicity
IBM mainframe technology
Instruction set architectures
Computer-related introductions in 2000
mainframe expanded storage
64-bit computers
.NET
.NET (pronounced as "dot net"; previously named .NET Core) is a free and open-source, managed computer software framework for Windows, Linux, and macOS operating systems. It is a cross-platform successor to .NET Framework. The project is primarily developed by Microsoft employees by way of the .NET Foundation, and released under the MIT License.
History
On November 12, 2014, Microsoft announced .NET Core, in an effort to include cross-platform support for .NET, including Linux and macOS, source for the .NET Core CoreCLR implementation, source for the "entire [...] library stack" for .NET Core, and the adoption of a conventional ("bazaar"-like) open-source development model under the stewardship of the .NET Foundation. Miguel de Icaza describes .NET Core as a "redesigned version of .NET that is based on the simplified version of the class libraries", and Microsoft's Immo Landwerth explained that .NET Core would be "the foundation of all future .NET platforms". At the time of the announcement, the initial release of the .NET Core project had been seeded with a subset of the libraries' source code and coincided with the relicensing of Microsoft's existing .NET reference source away from the restrictions of the Ms-RSL. Landwerth acknowledged the disadvantages of the formerly selected shared license, explaining that it made codename Rotor "a non-starter" as a community-developed open source project because it did not meet the criteria of an Open Source Initiative (OSI) approved license.
.NET Core 1.0 was released on June 27, 2016, along with Microsoft Visual Studio 2015 Update 3, which enables .NET Core development. .NET Core 1.0.4 and .NET Core 1.1.1 were released along with .NET Core Tools 1.0 and Visual Studio 2017 on March 7, 2017.
.NET Core 2.0 was released on August 14, 2017, along with Visual Studio 2017 15.3, ASP.NET Core 2.0, and Entity Framework Core 2.0. .NET Core 2.1 was released on May 30, 2018. .NET Core 2.2 was released on December 4, 2018.
.NET Core 3 was released on September 23, 2019. .NET Core 3 adds support for Windows desktop application development and significant performance improvements throughout the base library.
In November 2020, Microsoft released .NET 5.0, which replaced .NET Framework. The "Core" branding was removed and version 4.0 was skipped to avoid conflation with .NET Framework. It addresses the patent concerns related to the .NET Framework.
In November 2021, Microsoft released .NET 6.0.
.NET Core 2.1 and later, i.e. including .NET 5, support Alpine Linux (Alpine primarily supports and uses musl libc).
As of .NET 5, Windows Arm64 is natively supported. Previously, .NET applications were compiled for the x86 architecture and therefore ran through the emulation layer on ARM devices.
Language support
.NET fully supports C# and F# (and C++/CLI as of 3.1; only enabled on Windows) and supports Visual Basic .NET (for version 15.5 in .NET Core 5.0.100-preview.4, and some old versions supported in old .NET Core).
VB.NET compiles and runs on .NET, but as of .NET Core 3.1, the separate Visual Basic Runtime is not implemented. Microsoft initially announced that .NET Core 3 would include the Visual Basic Runtime, but after two years the timeline for such support was updated to .NET 5.
Architecture
.NET supports four cross-platform scenarios: ASP.NET Core web apps; command-line/console apps; libraries; and Universal Windows Platform apps. Prior to .NET Core 3.0, it did not implement Windows Forms or Windows Presentation Foundation (WPF), which render the standard GUI for desktop software on Windows. Now, however, .NET Core 3 supports desktop technologies Windows Forms, WPF, and Universal Windows Platform (UWP). It is also possible to write cross-platform graphical applications using .NET with the GTK# language-binding for the GTK widget toolkit.
.NET supports use of NuGet packages. Unlike .NET Framework, which is serviced using Windows Update, .NET relies on its package manager to receive updates. Starting in December 2020, however, .NET updates started being delivered via Windows Update as well.
The two main components of .NET are CoreCLR and CoreFX, which are comparable to the Common Language Runtime (CLR) and the Framework Class Library (FCL) of the .NET Framework's Common Language Infrastructure (CLI) implementation.
As a CLI implementation of the Virtual Execution System (VES), CoreCLR is a complete runtime and virtual machine for managed execution of CLI programs and includes a just-in-time compiler called RyuJIT. .NET Core also contains CoreRT, a runtime optimized to be integrated into AOT-compiled native binaries.
As a CLI implementation of the foundational Standard Libraries, CoreFX shares a subset of APIs; however, it also comes with its own APIs that are not part of the .NET Framework. A variant of the .NET library is used for UWP.
The .NET command-line interface offers an execution entry point for operating systems and provides developer services like compilation and package management.
Mascot
The official community mascot of .NET is the .NET Bot (stylized as "dotnet bot" or "dotnet-bot"). The dotnet bot served as the placeholder developer for the initial check-in of the .NET source code when it was open-sourced. It has since been used as the official mascot.
Notes
References
Further reading
External links
Overview of .NET Framework (MSDN)
.NET GitHub repository
.NET implementations
Cross-platform software
Microsoft application programming interfaces
Microsoft development tools
Microsoft free software
Software using the MIT license
2016 software
Wait (system call)
In computer operating systems, a process (or task) may wait on another process to complete its execution. In most systems, a parent process can create an independently executing child process. The parent process may then issue a wait system call, which suspends the execution of the parent process while the child executes. When the child process terminates, it returns an exit status to the operating system, which is then returned to the waiting parent process. The parent process then resumes execution.
Modern operating systems also provide system calls that allow a process's thread to create other threads and wait for them to terminate ("join" them) in a similar fashion.
An operating system may provide variations of the wait call that allow a process to wait for any of its child processes to exit, or to wait for a single specific child process (identified by its process ID) to exit.
Some operating systems issue a signal (SIGCHLD) to the parent process when a child process terminates, notifying the parent process and allowing it to retrieve the child process's exit status.
The exit status returned by a child process typically indicates whether the process terminated normally or abnormally. For normal termination, this status also includes the exit code (usually an integer value) that the process returned to the system. During the first 20 years of UNIX, only the low 8 bits of the exit code were available to the waiting parent. In 1989, SVR4 introduced a new call, waitid, which returns all bits from the exit call in the si_status member of a structure called siginfo_t. waitid has been a mandatory part of the POSIX standard since 2001.
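A minimal POSIX C example of this pattern: the parent forks a child, suspends in waitpid until the child exits, and then inspects the returned status with the standard macros. The exit code 42 is arbitrary.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create an independently executing child */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                     /* child: do some work, then exit          */
        printf("child %d running\n", (int)getpid());
        exit(42);                       /* exit code returned to the waiting parent */
    }

    int status;
    pid_t reaped = waitpid(pid, &status, 0);   /* suspend until this child exits   */
    if (reaped < 0) { perror("waitpid"); return 1; }

    if (WIFEXITED(status))              /* normal termination                      */
        printf("child %d exited with code %d\n", (int)reaped, WEXITSTATUS(status));
    else if (WIFSIGNALED(status))       /* abnormal termination by a signal        */
        printf("child %d killed by signal %d\n", (int)reaped, WTERMSIG(status));

    return 0;
}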
Zombies and orphans
When a child process terminates, it becomes a zombie process, and continues to exist as an entry in the system process table even though it is no longer an actively executing program. Under normal operation it will typically be immediately waited on by its parent, and then reaped by the system, reclaiming the resource (the process table entry). If a child is not waited on by its parent, it continues to consume this resource indefinitely, and thus is a resource leak. Such situations are typically handled with a special "reaper" process that locates zombies and retrieves their exit status, allowing the operating system to then deallocate their resources.
Conversely, a child process whose parent process terminates before it does becomes an orphan process. Such situations are typically handled with a special "root" (or "init") process, which is assigned as the new parent of a process when its parent process exits. This special process detects when an orphan process terminates and then retrieves its exit status, allowing the system to deallocate the terminated child process.
A waiting parent may also resume before a child has terminated, for example if its wait call is interrupted by a signal or if it asked to be notified about stopped children. In such cases the child has not been reaped, so the parent must check the status set by wait, waitpid or waitid (for example, confirming that WIFEXITED or WIFSIGNALED is true) and, if the child has not actually terminated, wait for it again so that its resources are eventually deallocated.
See also
exit (system call)
fork (system call)
Spawn (computing)
Wait (command)
References
Process (computing)
C POSIX library
System calls
IBM DS8000 series
The IBM DS8000 series (early IBM System Storage DS8000 series) is an IBM storage media platform with hybrid flash and hard disk storage for IBM mainframes and other enterprise grade computing environments.
Description
The series was originally designed as a line of cabinet-sized systems, in contrast to the more compact and affordable rack-mount DS6000 series. In 2015 the DS6000 line was discontinued, and the all-flash entry-level DS8882F model was released as a rack-mount successor to the DS6000 line.
All IBM DS storage lines are based on IBM Power CPUs and use IBM Power Systems servers as controllers.
Models
TotalStorage models:
DS8100 - released in 2004
Dual 2-core POWER5+-based controllers
Can contain up to 384 drives (Fibre Channel or SATA)
DS8300 - released in 2004
Dual 4-core POWER5+-based controllers (based on p570 servers)
Can contain up to 1024 drives (Fibre Channel or SATA)
System Storage models:
DS8100 Turbo - released in 2006
DS8300 Turbo - released in 2006
DS8700 - released in 2009
Dual 2- or 4-core POWER6-based controllers
Can contain up to 1024 drives (3.5” 15K RPM Fibre Channel HDD or enterprise flash drives)
DS8800 - released in 2010
Dual 2- or 4-core POWER6+-based controllers
Can contain up to 1536 drives (2.5": 10K or 15K RPM HDD or SSD enterprise flash SAS-2 drives, or 3.5": Nearline-SAS drives)
DS8870 - released in 2012
Dual 2-, 4-, 8- or 16-core POWER7-based controllers (Since December 2013 based on POWER7+)
Running SMT-4 for 64 threads
1 TiB Cache
Can contain up to 1536 drives (2.5": 10K or 15K RPM HDD or enterprise flash SAS-2, or 3.5": Nearline-SAS drives) + 120 1.8" flash cards in the High-Performance Flash Enclosure (HPFE)
High Performance Flash Enclosure: integrates and optimizes flash technology in the DS8870 (High-performance flash enclosure fits into existing DS8870 bay)
Up to 8 Flash Enclosures per System : 96 TB raw per system
DS8880 Family - released at the end of 2015; base models with mixed storage (DS8884 and DS8886) and all-flash solutions (DS888#F).
DS8882F - all-flash version for rack-mounting (17U, 16U without KVM)
DS8884
Dual 6-core POWER8-based controllers
Running SMT-4 for 24 threads
Up to 256 GiB Cache
Can contain up to 783 HDD or SSD drives + 120 1.8" flash cards in the High-Performance Flash Enclosure (HPFE)
2.5" 10K or 15K RPM drives and enterprise flash SAS-2 drives
3.5" Nearline-SAS drives
High Performance Flash Enclosure: integrates and optimizes flash technology in the DS8884F (Flash enclosure can fit into existing DS8870 bay)
Up to 4 Flash Enclosures per System : 48 TB raw per system
DS8886
Dual 8- to 24-core POWER8-based controllers
Running SMT-4 for 96 threads
Up to 2 TiB Cache
Can contain up to 1536 HDD or SSD drives + 240 1.8" flash cards in the High-Performance Flash Enclosure (HPFE)
2.5" 10K or 15K RPM drives and enterprise flash SAS-2 drives
3.5" Nearline-SAS drives
High Performance Flash Enclosure: integrates and optimizes flash technology in the DS8886F
Up to 8 Flash Enclosures per System : 96 TB raw per system
DS8888F
Dual 48-core POWER8-based controllers
Running SMT-4 for 192 threads
Up to 2 TiB Cache
High Performance Flash Enclosure: integrates and optimizes flash technology in the DS8888F
Can contain up to 480 1.8" flash cards in the High-Performance Flash Enclosure (HPFE)
Up to 16 Flash Enclosures per System : 192 TB raw per system
DS89#0F - released in 2020
IBM DS8910F - Rack-mounting (20U, 19U without KVM)
based on dual IBM Power Systems S922, S914, or S924 controllers
IBM DS8950F
42U assembled rack cabinet
See also
IBM storage
IBM Storwize - x86-based rackable analog, HDD-oriented (discontinued)
IBM XIV - x86-based cabinet-size analog, HDD-oriented (discontinued)
IBM FlashSystem - x86-based Flash analog
Unit Control Block
References
IBM storage servers
Active object (Symbian OS)
An active object framework is a callback-based form of multitasking for computer systems. Specifically, it is a form of cooperative multitasking and is an important feature of the Symbian operating system.
Within the framework, active objects may make requests of asynchronous services (e.g. sending an SMS message). When an asynchronous request is made, control is returned to the calling object immediately (i.e. without waiting for the call to complete). The caller may choose to do other things before it returns control back to the operating system, which typically schedules other tasks or puts the machine to sleep. When it makes the request, the calling object includes a reference to itself.
When the asynchronous task completes, the operating system identifies the thread containing the requesting active object, and wakes it up. An "active scheduler" in the thread identifies the object that made the request, and passes control back to that object.
The implementation of active objects in Symbian is based around each thread having a "request semaphore". This is incremented when a thread makes an asynchronous request, and decremented when the request is completed. When there are no outstanding requests, the thread is put to sleep.
In practice there may be many active objects in a thread, each doing its own task. They can interact by requesting things of each other, and of active objects in other threads. They may even request things of themselves.
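The sketch below is a much-simplified C model of the pattern, not Symbian's actual C++ framework (whose active objects and scheduler are C++ classes). The structure and function names are invented, and request completion is simulated synchronously rather than being signalled by the operating system through a request semaphore.

#include <stdbool.h>
#include <stdio.h>

/* Each active object issues asynchronous requests and supplies a callback;
 * the scheduler loop runs the callback of any object whose request has
 * completed.  All names here are illustrative only. */
struct active_object {
    bool request_pending;
    bool request_complete;
    void (*run)(struct active_object *self);   /* completion handler */
};

#define MAX_OBJECTS 4
static struct active_object *registry[MAX_OBJECTS];
static int num_objects;

static void scheduler_add(struct active_object *obj)
{
    if (num_objects < MAX_OBJECTS) registry[num_objects++] = obj;
}

/* In a real system, completion would be signalled by the OS (e.g. via the
 * thread's request semaphore); here the "service" completes immediately. */
static void issue_request(struct active_object *obj)
{
    obj->request_pending = true;
    obj->request_complete = true;              /* pretend the service finished */
}

static void scheduler_loop(void)
{
    for (int pass = 0; pass < 2; pass++)       /* bounded loop for the demo */
        for (int i = 0; i < num_objects; i++) {
            struct active_object *obj = registry[i];
            if (obj->request_pending && obj->request_complete) {
                obj->request_pending = obj->request_complete = false;
                obj->run(obj);                 /* hand control back to the object */
            }
        }
}

static void on_sms_sent(struct active_object *self)
{
    (void)self;
    printf("SMS send completed; active object resumes its work\n");
}

int main(void)
{
    struct active_object sms = { false, false, on_sms_sent };
    scheduler_add(&sms);
    issue_request(&sms);        /* asynchronous request; control returns at once */
    scheduler_loop();           /* scheduler dispatches the completion callback  */
    return 0;
}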
This is an implementation of a much older idea, developed in the 1970s to handle software interrupts, with the operating system acting as the first object and the peripheral as the second.
External links
developer.symbian.org
Symbian OS