Open source
Open source is source code that is made freely available for possible modification and redistribution. Products include permission to use the source code, design documents, or content of the product. The open-source model is a decentralized software development model that encourages open collaboration.
A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open-source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as in open-source appropriate technology, and open-source drug discovery.
Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint. Before the phrase open source became widely adopted, developers and producers used a variety of other terms. Open source gained hold with the rise of the Internet. The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues.
Generally, open source refers to a computer program in which the source code is available to the general public for use or modification from its original design. Code is released under the terms of a software license. Depending on the license terms, others may then download, modify, and publish their version (fork) back to the community.
Many large formal institutions have sprung up to support the development of the open-source movement, including the Apache Software Foundation, which supports community projects such as the open-source framework Apache Hadoop and the open-source Apache HTTP Server.
History
The sharing of technical information predates the Internet and the personal computer considerably. For instance, in the early years of automobile development a group of capital monopolists owned the rights to a 2-cycle gasoline-engine patent originally filed by George B. Selden. By controlling this patent, they were able to monopolize the industry and force car manufacturers to adhere to their demands, or risk a lawsuit.
In 1911, independent automaker Henry Ford won a challenge to the Selden patent. The result was that the Selden patent became virtually worthless and a new association (which would eventually become the Motor Vehicle Manufacturers Association) was formed. The new association instituted a cross-licensing agreement among all US automotive manufacturers: although each company would develop technology and file patents, these patents were shared openly and without the exchange of money among all the manufacturers. By the time the US entered World War II, 92 Ford patents and 515 patents from other companies were being shared among these manufacturers, without any exchange of money (or lawsuits).
Early instances of the free sharing of source code include IBM's source releases of its operating systems and other programs in the 1950s and 1960s, and the SHARE user group that formed to facilitate the exchange of software. Beginning in the 1960s, ARPANET researchers used an open "Request for Comments" (RFC) process to encourage feedback on early telecommunication network protocols; ARPANET itself, the precursor of the modern Internet, went online in 1969.
The sharing of source code on the Internet began when the Internet was relatively primitive, with software distributed via UUCP, Usenet, IRC, and Gopher. Linux, for example, was first announced on the comp.os.minix newsgroup in 1991, and much of its early development was discussed on Usenet; BSD was similarly distributed and discussed over the early network.
Open source as a term
The term "open source" was first proposed by a group of people in the free software movement who were critical of the political agenda and moral philosophy implied in the term "free software" and sought to reframe the discourse to reflect a more commercially minded position. In addition, the ambiguity of the term "free software" was seen as discouraging business adoption. However, the ambiguity of the word "free" exists primarily in English, as it can refer to cost. The group included Christine Peterson, Todd Anderson, Larry Augustin, Jon Hall, Sam Ockman, Michael Tiemann, and Eric S. Raymond. Peterson suggested "open source" at a meeting held in Palo Alto, California, in reaction to Netscape's announcement in January 1998 of a source code release for Navigator. Linus Torvalds gave his support the following day, and Phil Hughes backed the term in Linux Journal. Richard Stallman, the founder of the free software movement, initially seemed to adopt the term, but later changed his mind. Netscape released its source code under the Netscape Public License and later under the Mozilla Public License.
Raymond was especially active in the effort to popularize the new term. He made the first public call to the free software community to adopt it in February 1998. Shortly after, he founded the Open Source Initiative in collaboration with Bruce Perens.
The term gained further visibility through an event organized in April 1998 by technology publisher Tim O'Reilly. Originally titled the "Freeware Summit" and later known as the "Open Source Summit", the event was attended by the leaders of many of the most important free and open-source projects, including Linus Torvalds, Larry Wall, Brian Behlendorf, Eric Allman, Guido van Rossum, Michael Tiemann, Paul Vixie, Jamie Zawinski, and Eric Raymond. At that meeting, alternatives to the term "free software" were discussed. Tiemann argued for "sourceware" as a new term, while Raymond argued for "open source". The assembled developers took a vote, and the winner was announced at a press conference the same evening.
"Open source" has never managed to entirely supersede the older term "free software", giving rise to the combined term free and open-source software (FOSS).
Economics
Some economists agree that open source is an information good or "knowledge good": creating the original work involves a significant amount of time, money, and effort, while the cost of reproducing the work is low enough that additional users may be added at zero or near-zero cost; this is referred to as the marginal cost of a product. Copyright creates a monopoly, so the price charged to consumers can be significantly higher than the marginal cost of production. This allows the author to recoup the cost of making the original work. Copyright thus creates access costs for consumers who value the work more than the marginal cost but less than the initial production cost. Access costs also pose problems for authors who wish to create a derivative work (such as a copy of a software program modified to fix a bug or add a feature, or a remix of a song) but are unable or unwilling to pay the copyright holder for the right to do so.
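The cost structure described above can be made concrete with a toy calculation. All figures below are illustrative assumptions, not data from any study; the point is only that, for an information good, the average cost per copy approaches the (tiny) marginal cost as the number of copies grows, while a monopoly price set to recoup the fixed creation cost can stay far above it.

```python
# Toy model of an information good: a high one-time creation cost
# plus a near-zero marginal cost per additional copy.

FIXED_COST = 100_000.0   # one-time cost of creating the work (assumed)
MARGINAL_COST = 0.01     # cost of distributing one more copy (assumed)

def average_cost(copies: int) -> float:
    """Total cost per copy: fixed cost amortized over all copies made."""
    return (FIXED_COST + MARGINAL_COST * copies) / copies

# As distribution scales up, the average cost per copy falls toward
# the marginal cost of $0.01.
for copies in (1_000, 100_000, 10_000_000):
    print(f"{copies:>10,} copies -> average cost per copy: ${average_cost(copies):.4f}")
```

At 1,000 copies the average cost is still dominated by the fixed creation cost; at ten million copies it is within a couple of cents of the marginal cost, which is why pricing at marginal cost cannot recoup the initial investment without some other funding mechanism.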
Being organized, in effect, as a "consumers' cooperative", open source eliminates some of the access costs of consumers and creators of derivative works by reducing the restrictions of copyright. Basic economic theory predicts that lower costs would lead to higher consumption and also more frequent creation of derivative works. Organizations such as Creative Commons host websites where individuals can apply alternative "licenses", or levels of restriction, to their works.
These self-made protections free the general society of the costs of policing copyright infringement.
Others argue that since consumers do not pay for their copies, creators are unable to recoup the initial cost of production and thus have little economic incentive to create in the first place. By this argument, consumers would lose out because some of the goods they would otherwise purchase would not be available. In practice, content producers can choose whether to adopt a proprietary license and charge for copies, or an open license. Some goods that require large amounts of professional research and development, such as pharmaceuticals (an industry that depends largely on patents rather than copyright for intellectual-property protection), are almost exclusively proprietary, although increasingly sophisticated technologies are being developed on open-source principles.
There is evidence that open-source development creates enormous value. For example, in the context of open-source hardware design, digital designs are shared for free and anyone with access to digital manufacturing technologies (e.g. RepRap 3D printers) can replicate the product for the cost of materials. The original sharer may receive feedback and potentially improvements on the original design from the peer production community.
Many open-source projects have high economic value. The Battery Open Source Software Index (BOSS) ranks the economically most important open-source projects based on project activity in online discussions and on GitHub, on search-engine activity, and on influence on the labour market.
Licensing alternatives
Alternative arrangements have also been shown to result in good creation outside of the proprietary license model. Examples include:
Creation for its own sake – For example, Wikipedia editors add content for recreation. Artists have a drive to create. Both communities benefit from free starting material.
Voluntary after-the-fact donations – used by shareware, street performers, and public broadcasting in the United States.
Patron – For example, open access publishing relies on institutional and government funding of research faculty, who also have a professional incentive to publish for reputation and career advancement. Works of the U.S. federal government are automatically released into the public domain.
Freemium – Give away a limited version for free and charge for a premium version (potentially using a dual license).
Give away the product and charge something related – Charge for support of open-source enterprise software, give away music but charge for concert admission.
Give away work in order to gain market share – Used by artists, in corporate software to spoil a dominant competitor (for example in the browser wars and the Android operating system).
For own use – Businesses or individual software developers often create software to solve a problem, bearing the full cost of initial creation. They will then open source the solution, and benefit from the improvements others make for their own needs. Communalizing the maintenance burden distributes the cost across more users; free riders can also benefit without undermining the creation process.
Open collaboration
The open-source model is a decentralized software development model that encourages open collaboration, meaning "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike." A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open-source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as in open-source appropriate technology, and open-source drug discovery.
The open-source model for software development inspired the use of the term to refer to other forms of open collaboration, such as in Internet forums, mailing lists and online communities. Open collaboration is also thought to be the operating principle underlining a gamut of diverse ventures, including TEDx and Wikipedia.
Open collaboration is the principle underlying peer production, mass collaboration, and wikinomics. It was observed initially in open source software, but can also be found in many other instances, such as in Internet forums, mailing lists, Internet communities, and many instances of open content, such as Creative Commons. It also explains some instances of crowdsourcing, collaborative consumption, and open innovation.
Riehle et al. define open collaboration as collaboration based on three principles of egalitarianism, meritocracy, and self-organization. Levine and Prietula define open collaboration as "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike." This definition captures multiple instances, all joined by similar principles. For example, all of the elements — goods of economic value, open access to contribute and consume, interaction and exchange, purposeful yet loosely coordinated work — are present in an open source software project, in Wikipedia, or in a user forum or community. They can also be present in a commercial website that is based on user-generated content. In all of these instances of open collaboration, anyone can contribute and anyone can freely partake in the fruits of sharing, which are produced by interacting participants who are loosely coordinated.
An annual conference dedicated to the research and practice of open collaboration is the International Symposium on Wikis and Open Collaboration (OpenSym, formerly WikiSym). As per its website, the group defines open collaboration as "collaboration that is egalitarian (everyone can join, no principled or artificial barriers to participation exist), meritocratic (decisions and status are merit-based rather than imposed) and self-organizing (processes adapt to people rather than people adapt to pre-defined processes)."
Open-source license
Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint. Before the phrase open source became widely adopted, developers and producers used a variety of other terms. Open source gained hold in part due to the rise of the Internet. The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues.
An open-source license is a type of license for computer software and other products that allows the source code, blueprint or design to be used, modified or shared (with or without modification) under defined terms and conditions. This allows end users and commercial companies to review and modify the source code, blueprint or design for their own customization, curiosity or troubleshooting needs. Open-source licensed software is mostly available free of charge, though this does not necessarily have to be the case. Licenses that only permit non-commercial redistribution, or modification of the source code for personal use only, are generally not considered open-source licenses. However, open-source licenses may have some restrictions, particularly regarding the expression of respect for the origin of the software, such as a requirement to preserve the names of the authors and a copyright statement within the code, or a requirement to redistribute the licensed software only under the same license (as in a copyleft license). One popular set of open-source software licenses are those approved by the Open Source Initiative (OSI) based on their Open Source Definition (OSD).
Applications
Social and political views have been affected by the growth of the concept of open source. Advocates in one field often support the expansion of open source in other fields. But Eric Raymond and other founders of the open-source movement have sometimes publicly argued against speculation about applications outside software, saying that strong arguments for software openness should not be weakened by overreaching into areas where the story may be less compelling. The broader impact of the open-source movement, and the extent of its role in the development of new information sharing procedures, remain to be seen.
The open-source movement has inspired increased transparency and liberty in biotechnology research, for example through CAMBIA. Even the research methodologies themselves can benefit from the application of open-source principles. It has also given rise to the rapidly expanding open-source hardware movement.
Computer software
Open-source software is software whose source code is published and made available to the public, enabling anyone to copy, modify, and redistribute the source code without paying royalties or fees.
Open-source code can evolve through community cooperation. These communities are composed of individual programmers as well as large companies. Some of the individual programmers who start an open-source project may end up establishing companies offering products or services incorporating open-source programs. Examples of open-source software products are:
Linux (which runs much of the world's server infrastructure)
MediaWiki (the software on which Wikipedia is based)
Many more:
List of free and open-source software packages
List of formerly proprietary software
Electronics
Open-source hardware is hardware whose initial specification, usually in a software format, is published and made available to the public, enabling anyone to copy, modify, and redistribute the hardware and source code without paying royalties or fees. Open-source hardware evolves through community cooperation. These communities are composed of individual hardware and software developers, hobbyists, as well as very large companies. Examples of open-source hardware initiatives are:
Openmoko: a family of open-source mobile phones, including the hardware specification and the operating system.
OpenRISC: an open-source microprocessor family, with architecture specification licensed under GNU GPL and implementation under LGPL.
Sun Microsystems's OpenSPARC T1 Multicore processor. Sun has released it under GPL.
Arduino, a microcontroller platform for hobbyists, artists and designers.
Simputer, an open hardware handheld computer, designed in India for use in environments where computing devices such as personal computers are deemed inappropriate.
LEON: A family of open-source microprocessors distributed in a library with peripheral IP cores, open SPARC V8 specification, implementation available under GNU GPL.
Tinkerforge: A system of open-source stackable microcontroller building blocks. It allows controlling motors and reading out sensors with the programming languages C, C++, C#, Object Pascal, Java, PHP, Python, and Ruby over a USB or Wi-Fi connection on Windows, Linux, and Mac OS X. All of the hardware is licensed under the CERN Open Hardware Licence (CERN OHL).
Open Compute Project: designs for computer data center including power supply, Intel motherboard, AMD motherboard, chassis, racks, battery cabinet, and aspects of electrical and mechanical design.
Food and beverages
Some publishers of open-access journals have argued that data from food science and gastronomy studies should be freely available to aid reproducibility. A number of people have published Creative Commons-licensed recipe books.
Open-source colas – cola soft drinks, similar to Coca-Cola and Pepsi, whose recipes are open source and developed by volunteers. The taste is said to be comparable to that of the standard beverages. Most corporations producing beverages hold their formulas as closely guarded secrets.
Free Beer (originally Vores Øl) – is an open-source beer created by students at the IT-University in Copenhagen together with Superflex, an artist collective, to illustrate how open-source concepts might be applied outside the digital world.
Digital content
Open-content projects organized by the Wikimedia Foundation – Sites such as Wikipedia and Wiktionary have embraced the open-content Creative Commons content licenses. These licenses were designed to adhere to principles similar to various open-source software development licenses. Many of these licenses ensure that content remains free for re-use, that source documents are made readily available to interested parties, and that changes to content are accepted easily back into the system. Important sites embracing open-source-like ideals are Project Gutenberg and Wikisource, both of which post many books on which the copyright has expired and are thus in the public domain, ensuring that anyone has free, unlimited access to that content.
Open ICEcat is an open catalog for the IT, CE, and lighting sectors with product data-sheets based on the Open Content License agreement. The digital content is distributed in XML and URL formats.
Google SketchUp's 3D Warehouse is an open-source design community centered on the use of proprietary software that is free of charge.
The University of Waterloo Stratford Campus invites students every year to use its three-storey Christie MicroTiles wall as a digital canvas for their creative work.
Medicine
Pharmaceuticals – There have been several proposals for open-source pharmaceutical development, which led to the establishment of the Tropical Disease Initiative and the Open Source Drug Discovery for Malaria Consortium.
Genomics – The term "open-source genomics" refers to the combination of rapid release of sequence data (especially raw reads) and crowdsourced analyses from bioinformaticians around the world that characterised the analysis of the 2011 E. coli O104:H4 outbreak.
OpenEMR – OpenEMR is an ONC-ATCB Ambulatory EHR 2011-2012 certified electronic health records and medical practice management application. It features fully integrated electronic health records, practice management, scheduling, and electronic billing, and is the basis for many EHR programs.
Science and engineering
Research – The Science Commons was created as an alternative to the expensive legal costs of sharing and reusing scientific works in journals etc.
Research – The Open Solar Outdoors Test Field (OSOTF) is a grid-connected photovoltaic test system, which continuously monitors the output of a number of photovoltaic modules and correlates their performance to a long list of highly accurate meteorological readings. The OSOTF is organized under open-source principles – All data and analysis is to be made freely available to the entire photovoltaic community and the general public.
Engineering – Hyperloop, a form of high-speed transport proposed by entrepreneur Elon Musk, which he describes as "an elevated, reduced-pressure tube that contains pressurized capsules driven within the tube by a number of linear electric motors".
Construction – WikiHouse is an open-source project for designing and building houses.
Energy research - The Open Energy Modelling Initiative promotes open-source models and open data in energy research and policy advice.
Robotics
An open-source robot is a robot whose blueprints, schematics, or source code are released under an open-source model.
Other
Open-source principles can be applied to technical areas such as digital communication protocols and data storage formats.
Open design – applying open-source methodologies to the design of artifacts and systems in the physical world; the field is nascent but is considered to have large potential.
Open-source appropriate technology (OSAT) refers to technologies that are designed in the same fashion as free and open-source software. These technologies must be "appropriate technology" (AT) – meaning technology that is designed with special consideration to the environmental, ethical, cultural, social, political, and economic aspects of the community it is intended for. An example of this application is the use of open-source 3D printers like the RepRap to manufacture appropriate technology.
Teaching – applying the concepts of open source to instruction, using a shared web space as a platform to improve upon learning, organizational, and management challenges. An example of open-source courseware is the Java Education & Development Initiative (JEDI). Other examples include Khan Academy and Wikiversity. At the university level, the use of open-source appropriate technology classroom projects has been shown to be successful in forging the connection between science/engineering and social benefit: this approach has the potential to use university students' access to resources and testing equipment in furthering the development of appropriate technology. Similarly, OSAT has been used as a tool for improving service learning.
There are few examples of business information (methodologies, advice, guidance, practices) using the open-source model, although the potential here is also large. ITIL comes close to open source: it uses a cathedral development model (no mechanism exists for user contribution), and its content must be bought for a fee that is small by business-consulting standards (hundreds of British pounds). Various checklists are published by governments, banks, or accounting firms.
An open-source group emerged in 2012 that is attempting to design a firearm that may be downloaded from the Internet and "printed" on a 3D printer. Calling itself Defense Distributed, the group wants to facilitate "a working plastic gun that could be downloaded and reproduced by anybody with a 3D printer".
Agrecol, a German NGO, has developed an open-source licence for seeds that operates with copyleft, and has created OpenSourceSeeds as a corresponding service provider. Breeders who apply the licence to newly bred material protect it from the threat of privatisation and help to establish a commons-based breeding sector as an alternative to the commercial sector.
Open Source Ecology, farm equipment and global village construction kit.
"Open" versus "free" versus "free and open"
Free and open-source software (FOSS) or Free/Libre and open-source software (FLOSS) is openly shared source code that is licensed without any restrictions on usage, modification, or distribution. Confusion persists about this definition because the "free", also known as "libre", refers to the freedom of the product, not the price, expense, cost, or charge: the intended sense is "free as in free speech", not "free as in free beer".
Conversely, Richard Stallman argues that the obvious meaning of the term "open source" is that the source code is public and accessible for inspection, without necessarily any other rights granted, although proponents of the term say the conditions in the Open Source Definition must be fulfilled.
"Free and open" should not be confused with public ownership (state ownership), deprivatization (nationalization), anti-privatization (anti-corporate activism), or transparent behavior.
GNU
GNU Manifesto
Richard Stallman
Gratis versus libre (no cost vs no restriction)
Software
Generally, open source refers to a computer program in which the source code is available to the general public for use for any (including commercial) purpose, or modification from its original design. Open-source code is meant to be a collaborative effort, where programmers improve upon the source code and share the changes within the community. Code is released under the terms of a software license. Depending on the license terms, others may then download, modify, and publish their version (fork) back to the community.
List of free and open-source software packages
Open-source license, a copyright license that makes the source code available with a product
The Open Source Definition, as used by the Open Source Initiative for open source software
Open-source model, a decentralized software development model that encourages open collaboration
Open-source software, software which permits the use and modification of its source code
History of free and open-source software
Open-source software advocacy
Open-source software development
Open-source-software movement
Open-source video games
List of open-source video games
Business models for open-source software
Comparison of open-source and closed-source software
Diversity in open-source software
MapGuide Open Source, a web-based map-making platform to develop and deploy web mapping applications and geospatial web services (not to be confused with OpenStreetMap (OSM), a collaborative project to create a free editable map of the world).
Agriculture, economy, manufacturing and production
Open-source appropriate technology (OSAT), technology designed with consideration for environmental, ethical, cultural, social, political, economic, and community aspects
Open-design movement, development of physical products, machines and systems via publicly shared design information, including free and open-source software and open-source hardware, among many others:
Open Architecture Network, improving global living conditions through innovative sustainable design
OpenCores, a community developing digital electronic open-source hardware
Open Design Alliance, develops Teigha, a software development platform to create engineering applications including CAD software
Open Hardware and Design Alliance (OHANDA), sharing open hardware and designs via free online services
Open Source Ecology (OSE), a network of farmers, engineers, architects and supporters striving to manufacture the Global Village Construction Set (GVCS)
OpenStructures (OSP), a modular construction model where everyone designs on the basis of one shared geometrical OS grid
Open manufacturing or "Open Production" or "Design Global, Manufacture Local", a new socioeconomic production model to openly and collaboratively produce and distribute physical objects
Open-source architecture (OSArc), emerging procedures in imagination and formation of virtual and real spaces within an inclusive universal infrastructure
Open-source cola, cola soft drinks made to open-sourced recipes
Open-source hardware, or open hardware, computer hardware, such as microprocessors, that is designed in the same fashion as open source software
List of open-source hardware projects
Open-source product development (OSPD), collaborative product and process openness of open-source hardware for any interested participants
Open-source robotics, in which the physical artifacts of robots are shared by the open-design movement
Open Source Seed Initiative, open source varieties of crop seeds, as an alternative to patent-protected seeds sold by large agriculture companies.
Science and medicine
Open science, the movement to make scientific research, data and dissemination accessible to all levels of an inquiring society, amateur or professional
Open science data, a type of open data focused on publishing observations and results of scientific activities available for anyone to analyze and reuse
Open Science Framework and the Center for Open Science
Open Source Lab (disambiguation), several laboratories
Open-Source Lab (book), a 2014 book by Joshua M. Pearce
See also: proprietary (closed) science, the antithesis of open science, in which access to research is restricted through mechanisms such as proprietary software, proprietary protocols, private biomedical engineering, biological patents, chemical patents (drugs), and minimal sufficiency of disclosure.
Open-notebook science, the practice of making the entire primary record of a research project publicly available online as it is recorded
Open Source Physics (OSP), a National Science Foundation and Davidson College project to spread the use of open source code libraries that take care of much of the heavy lifting for physics
Open Source Geospatial Foundation
NASA Open Source Agreement (NOSA), an OSI-approved software license
List of open-source software for mathematics
List of open-source bioinformatics software
List of open-source health software
List of open-source health hardware
Media
Open-source film, open source movies
List of open-source films
Open Source Cinema, a collaborative website to produce a documentary film
Open-source journalism, which commonly describes a spectrum of online publications, innovative forms of online publishing, and content voting, rather than the sourcing of news stories by professional journalists
Open-source investigation
See also: Crowdsourcing, crowdsourced journalism, crowdsourced investigation, trutherism, and historical revisionism considered "fringe" by corporate media.
Open-source record label, open source music
"Open Source", a 1960s rock song performed by The Magic Mushrooms
Open Source (radio show), a radio show using open content information gathering methods hosted by Christopher Lydon
Open textbook, an open copyright licensed textbook made freely available online for students, teachers, and the public
Organizations
Open Source Initiative (OSI), an organization dedicated to promote open source
Open Source Software Institute
Journal of Open Source Software
Open Source Day, an international conference for fans of open solutions from Central and Eastern Europe; its date varies from year to year
Open Source Developers' Conference
Open Source Development Labs (OSDL), a non-profit corporation that provides space for open-source projects
Open Source Drug Discovery, a collaborative drug discovery platform for neglected tropical diseases
Open Source Technology Group (OSTG), news, forums, and other SourceForge resources for IT
Open source in Kosovo
Open Source University Meetup
New Zealand Open Source Awards
Procedures
Open security, application of open source philosophies to computer security
Open Source Information System, the former name of an American unclassified network serving the U.S. intelligence community with open source intelligence; since mid-2006, the content of OSIS has been known as Intelink-U, while the network portion is known as DNI-U
Open-source intelligence, an intelligence gathering discipline based on information collected from open sources (not to be confused with open-source artificial intelligence such as Mycroft (software)).
Society
The rise of open-source culture in the 20th century resulted from a growing tension between creative practices that require access to content that is often copyrighted, and restrictive intellectual property laws and policies governing access to copyrighted content. The two main ways in which intellectual property laws became more restrictive in the 20th century were extensions to the term of copyright (particularly in the United States) and penalties, such as those articulated in the Digital Millennium Copyright Act (DMCA), placed on attempts to circumvent anti-piracy technologies.
Although artistic appropriation is often permitted under fair-use doctrines, the complexity and ambiguity of these doctrines creates an atmosphere of uncertainty among cultural practitioners. Also, the protective actions of copyright owners create what some call a "chilling effect" among cultural practitioners.
The idea of an "open-source" culture runs parallel to "Free Culture," but is substantively different. Free culture is a term derived from the free software movement, and in contrast to that vision of culture, proponents of open-source culture (OSC) maintain that some intellectual property law needs to exist to protect cultural producers. Yet they propose a more nuanced position than corporations have traditionally sought. Instead of seeing intellectual property law as an expression of instrumental rules intended to uphold either natural rights or desirable outcomes, an argument for OSC takes into account diverse goods (as in "the Good life") and ends.
Sites such as ccMixter offer up free web space for anyone willing to license their work under a Creative Commons license. The resulting cultural product is then available to download free (generally accessible) to anyone with an Internet connection. Older analog technologies such as the telephone or television have limitations on the kind of interaction users can have.
Through various technologies such as peer-to-peer networks and blogs, cultural producers can take advantage of vast social networks to distribute their products. As opposed to traditional media distribution, redistributing digital media on the Internet can be virtually costless. Technologies such as BitTorrent and Gnutella take advantage of various characteristics of the Internet protocol (TCP/IP) in an attempt to totally decentralize file distribution.
Government
Open politics (sometimes known as Open-source politics) is a political process that uses Internet technologies such as blogs, email and polling to provide for a rapid feedback mechanism between political organizations and their supporters. There is also an alternative conception of the term Open-source politics which relates to the development of public policy under a set of rules and processes similar to the open-source software movement.
Open-source governance is similar to open-source politics, but it applies more to the democratic process and promotes the freedom of information.
Open-source political campaigns refer specifically to the application of these open-source approaches to political campaigns.
The South Korean government wants to increase its use of free and open-source software, in order to decrease its dependence on proprietary software solutions. It plans to make open standards a requirement, to allow the government to choose between multiple operating systems and web browsers. Korea's Ministry of Science, ICT & Future Planning is also preparing ten pilots on using open-source software distributions.
Ethics
Open-source ethics is split into two strands:
Open-source ethics as an ethical school – Charles Ess and David Berry are researching whether ethics can learn anything from an open-source approach. Ess, for example, has defined the AoIR Research Guidelines as an example of open-source ethics.
Open-source ethics as a professional body of rules – This is based principally on the computer ethics school, studying the questions of ethics and professionalism in the computer industry in general and software development in particular.
Religion
Irish philosopher Richard Kearney has used the term "open-source Hinduism" to refer to the way historical figures such as Mohandas Gandhi and Swami Vivekananda worked upon this ancient tradition.
Media
Open-source journalism formerly referred to the standard journalistic techniques of news gathering and fact checking, reflecting open-source intelligence, a similar term used in military intelligence circles. Now, open-source journalism commonly refers to forms of innovative publishing of online journalism, rather than the sourcing of news stories by a professional journalist. In the 25 December 2006 issue of TIME magazine this is referred to as user-created content and listed alongside more traditional open-source projects such as OpenSolaris and Linux.
Weblogs, or blogs, are another significant platform for open-source culture. Blogs consist of periodic, reverse chronologically ordered posts, using a technology that makes webpages easily updatable with no understanding of design, code, or file transfer required. While corporations, political campaigns and other formal institutions have begun using these tools to distribute information, many blogs are used by individuals for personal expression, political organizing, and socializing. Some, such as LiveJournal or WordPress, utilize open-source software that is open to the public and can be modified by users to fit their own tastes. Whether the code is open or not, this format represents a nimble tool for people to borrow and re-present culture; whereas traditional websites made the illegal reproduction of culture difficult to regulate, the mutability of blogs makes "open sourcing" even more uncontrollable since it allows a larger portion of the population to replicate material more quickly in the public sphere.
Messageboards are another platform for open-source culture. Messageboards (also known as discussion boards or forums), are places online where people with similar interests can congregate and post messages for the community to read and respond to. Messageboards sometimes have moderators who enforce community standards of etiquette such as banning spammers. Other common board features are private messages (where users can send messages to one another) as well as chat (a way to have a real time conversation online) and image uploading. Some messageboards use phpBB, which is a free open-source package. Where blogs are more about individual expression and tend to revolve around their authors, messageboards are about creating a conversation among their users, where information can be shared freely and quickly. Messageboards are a way to remove intermediaries from everyday life—for instance, instead of relying on commercials and other forms of advertising, one can ask other users for frank reviews of a product, movie or CD. By removing the cultural middlemen, messageboards help speed the flow of information and exchange of ideas.
OpenDocument is an open document file format for saving and exchanging editable office documents such as text documents (including memos, reports, and books), spreadsheets, charts, and presentations. Organizations and individuals that store their data in an open format such as OpenDocument avoid being locked into a single software vendor, leaving them free to switch software if their current vendor goes out of business, raises their prices, changes their software, or changes their licensing terms to something less favorable.
Open-source movie production is either an open call system in which a changing crew and cast collaborate in movie production, a system in which the result is made available for re-use by others or in which exclusively open-source products are used in the production. The 2006 movie Elephants Dream is said to be the "world's first open movie", created entirely using open-source technology.
An open-source documentary film has a production process allowing the open contributions of archival material, footage, and other filmic elements, both in unedited and edited form, similar to crowdsourcing. By doing so, online contributors become part of the process of creating the film, helping to influence the editorial and visual material to be used in the documentary, as well as its thematic development. The first open-source documentary film is the non-profit WBCN and the American Revolution, which went into development in 2006 and examines the role media played in the cultural, social and political changes from 1968 to 1974 through the story of radio station WBCN-FM in Boston. The film is being produced by Lichtenstein Creative Media and the non-profit Center for Independent Documentary. Open Source Cinema is a website to create Basement Tapes, a feature documentary about copyright in the digital age, co-produced by the National Film Board of Canada.
Open-source film-making refers to a form of film-making that takes a method of idea formation from open-source software, but in this case the 'source' for a filmmaker is raw unedited footage rather than programming code. It can also refer to a method of film-making where the process of creation is 'open' i.e. a disparate group of contributors, at different times contribute to the final piece.
Open-IPTV is IPTV that is not limited to one recording studio, production studio, or cast. Open-IPTV uses the Internet or other means to pool efforts and resources together to create an online community that all contributes to a show.
Education
Within the academic community, there is discussion about expanding what could be called the "intellectual commons" (analogous to the Creative Commons). Proponents of this view have hailed the Connexions Project at Rice University, OpenCourseWare project at MIT, Eugene Thacker's article on "open-source DNA", the "Open Source Cultural Database", Salman Khan's Khan Academy and Wikipedia as examples of applying open source outside the realm of computer software.
Open-source curricula are instructional resources whose digital source can be freely used, distributed and modified.
Another strand to the academic community is in the area of research. Many funded research projects produce software as part of their work. There is an increasing interest in making the outputs of such projects available under an open-source license. In the UK the Joint Information Systems Committee (JISC) has developed a policy on open-source software. JISC also funds a development service called OSS Watch which acts as an advisory service for higher and further education institutions wishing to use, contribute to and develop open-source software.
On 30 March 2010, President Barack Obama signed the Health Care and Education Reconciliation Act, which included $2 billion over four years to fund the TAACCCT program, which is described as "the largest OER (open education resources) initiative in the world and uniquely focused on creating curricula in partnership with industry for credentials in vocational industry sectors like manufacturing, health, energy, transportation, and IT".
Innovation communities
The principle of sharing pre-dates the open-source movement; for example, the free sharing of information has been institutionalized in the scientific enterprise since at least the 19th century. Open-source principles have always been part of the scientific community. The sociologist Robert K. Merton described the four basic elements of the community—universalism (an international perspective), communalism (sharing information), objectivity (removing one's personal views from the scientific inquiry) and organized skepticism (requirements of proof and review) that describe the (idealised) scientific community.
These principles are, in part, complemented by US law's focus on protecting expression and method but not the ideas themselves. There is also a tradition of publishing research results to the scientific community instead of keeping all such knowledge proprietary. One of the recent initiatives in scientific publishing has been open access—the idea that research should be published in such a way that it is free and available to the public. There are currently many open-access journals where the information is available free online; however, most journals charge a fee (either to users or libraries) for access. The Budapest Open Access Initiative is an international effort with the goal of making all research articles available free on the Internet.
The National Institutes of Health has recently proposed a policy on "Enhanced Public Access to NIH Research Information". This policy would provide a free, searchable resource of NIH-funded results, available to the public and shared with other international repositories six months after initial publication. The NIH's move is an important one because there is a significant amount of public funding in scientific research. Many questions have yet to be answered, including the balancing of profit vs. public access, and ensuring that desirable standards and incentives do not diminish with a shift to open access.
Benjamin Franklin was an early contributor, eventually donating all of his inventions, including the Franklin stove, bifocals, and the lightning rod, to the public domain.
New NGO communities are starting to use open-source technology as a tool. One example is the Open Source Youth Network started in 2007 in Lisbon by ISCA members.
Open innovation is also an emerging concept that advocates putting R&D in a common pool. The Eclipse platform openly presents itself as an open innovation network.
Arts and recreation
Copyright protection is used in the performing arts and even in athletic activities. Some groups have attempted to remove copyright from such practices.
In 2012, Russian music composer, scientist and Russian Pirate Party member Victor Argonov presented detailed raw files of his electronic opera "2032" under free license CC-BY-NC 3.0 (later relicensed under CC-BY-SA 4.0). This opera was originally composed and published in 2007 by Russian label MC Entertainment as a commercial product, but then the author changed its status to free. In his blog he said that he decided to open raw files (including wav, midi and other used formats) to the public in order to support worldwide pirate actions against SOPA and PIPA. Several Internet resources called "2032" the first open-source musical opera in history.
Other related movements
The following are events and applications that have been developed via the open source community, and echo the ideologies of the open source movement.
Open Education Consortium — an organization composed of various colleges that support open source and share some of their material online. This organization, headed by Massachusetts Institute of Technology, was established to aid in the exchange of open source educational materials.
Wikipedia — user-generated online encyclopedia with sister projects in academic areas, such as Wikiversity — a community dedicated to the creation and exchange of learning materials
Project Gutenberg — prior to the existence of Google Scholar Beta, this was the first supplier of electronic books and the very first free library project
Synthetic biology — This new technology is potentially important because it promises to enable cheap, lifesaving new drugs as well as helping to yield biofuels that may help to solve our energy problem. Although synthetic biology has not yet come out of its "lab" stage, it has the potential to become industrialized in the near future. In order to industrialize open-source science, some scientists are trying to build their own brand of it.
Ideologically-related movements
The open-access movement is a movement that is similar in ideology to the open-source movement. Members of this movement maintain that academic material should be readily available to provide help with "future research, assist in teaching and aid in academic purposes." The open-access movement aims to eliminate subscription fees and licensing restrictions on academic materials.
The free-culture movement is a movement that seeks to achieve a culture that engages in collective freedom via freedom of expression, free public access to knowledge and information, full demonstration of creativity and innovation in various arenas and promotion of citizen liberties.
Creative Commons is an organization that "develops, supports, and stewards legal and technical infrastructure that maximizes digital creativity, sharing, and innovation." It encourages the use of protected properties online for research, education, and creative purposes in pursuit of a universal access. Creative Commons provides an infrastructure through a set of copyright licenses and tools that creates a better balance within the realm of "all rights reserved" properties. The Creative Commons license offers a slightly more lenient alternative to "all rights reserved" copyrights for those who do not wish to exclude the use of their material.
The Zeitgeist Movement is an international social movement that advocates a transition into a sustainable "resource-based economy" based on collaboration in which monetary incentives are replaced by commons-based ones with everyone having access to everything (from code to products) as in "open source everything". While its activism and events are typically focused on media and education, TZM is a major supporter of open source projects worldwide since they allow for uninhibited advancement of science and technology, independent of constraints posed by institutions of patenting and capitalist investment.
P2P Foundation is an "international organization focused on studying, researching, documenting and promoting peer to peer practices in a very broad sense". Its objectives incorporate those of the open source movement, whose principles are integrated in a larger socio-economic model.
See also
Access to Knowledge movement (A2K)
Cooperative
Decentralization
Decentralized computing
Distributed data storage
Distributed file systems
Internet privacy
Privacy software
Free Beer
Free-culture movement
Free Knowledge Foundation
Freedom of contract
OpenBTS
Open catalogue
Open Compute Project
Open Data Institute
Open education
Open educational resources
Open format
Open Knowledge International
Open copyright license
Open publishing
Open research
Open-source curriculum (OSC)
Open-source governance
Open politics
Open-source religion
Open-source unionism
Open standard
Paywall
Peer-to-peer
Radical transparency
Sharing economy
Social collaboration
Solidarity economy
Tactical Technology Collective
Voluntary association
Voluntaryism or Agorism
Terms based on open source
Open implementation
Open security
Open-source record label
Open standard
Other
Open Sources: Voices from the Open Source Revolution (book)
Commons-based peer production
Digital rights
Diseconomies of scale
Free content
Gift economy
Glossary of legal terms in technology
Mass collaboration
Network effect
Open Source Initiative
Openness
Proprietary software
References
Further reading
Karl Fogel. Producing Open Source Software (How to run a successful free-software project). Free PDF version available.
Nettingsmeier, Jörn. "So What? I Don't Hack!" eContact! 11.3, Logiciels audio "open source" / Open Source for Audio Application (September 2009). Montréal: CEC.
Various authors. eContact! 11.3, Logiciels audio "open source" / Open Source for Audio Application (September 2009). Montréal: CEC.
Various authors. "Open Source Travel Guide [wiki]". eContact! 11.3, Logiciels audio "open source" / Open Source for Audio Application (September 2009). Montréal: CEC.
Literature on legal and economic aspects
v. Engelhardt, S. (2008): "Intellectual Property Rights and Ex-Post Transaction Costs: the Case of Open and Closed Source Software", Jena Economic Research Papers 2008-047. (PDF)
European Commission. (2006). Economic impact of open source software on innovation and the competitiveness of the Information and Communication Technologies sector in the EU. Brussels.
External links
Academic publishing
Business models
Collaborative software
Computer law
Data publishing
Free culture movement
Intellectual property law
Open access (publishing)
Open content projects
Open educational resources
Open formats
Open hardware electronic devices
Open hardware organizations and companies
Open science
Open-source movement
Scholarly communication
VAXELN
VAXELN (typically pronounced "VAX-elan") is a discontinued real-time operating system for the VAX family of computers produced by the Digital Equipment Corporation (DEC) of Maynard, Massachusetts.
As with RSX-11 and VMS, Dave Cutler was the principal force behind the development of this operating system. Cutler's team developed the product after moving to the Seattle, Washington area to form the DECwest Engineering Group; DEC's first engineering group outside New England. Initial target platforms for VAXELN were the backplane interconnect computers such as the model code-named Scorpio. When VAXELN was well under way, Cutler spearheaded the next project, the MicroVAX I, the first VAX microcomputer. Although it was a low-volume product compared with the New England-developed MicroVAX II, the MicroVAX I demonstrated the set of architectural decisions needed to support a single-board implementation of the VAX computer family, and it also provided a platform for embedded system applications written for VAXELN.
The VAXELN team made the decision, for the first release, to use the programming language Pascal as its system programming language. The development team built the first product in approximately 18 months. Other languages, including C, Ada, and Fortran, were supported in later releases of the system as optional extras. A relational database, named VAX Rdb/ELN, was another optional component of the system. Later versions of VAXELN supported an X11 server named EWS (VAXELN Window Server). VAXELN with EWS was used as the operating system for the VT1300 X terminal, and was sometimes used to convert old VAXstation hardware into X terminals. Beginning with version 4.3, VAXELN gained support for TCP/IP networking and a subset of POSIX APIs.
VAXELN allowed the creation of a self-contained embedded system application that would run on VAX (and later MicroVAX) hardware with no other operating system present. The system was debuted in Las Vegas in the early 1980s, with a variety of amusing application software written by the development team, ranging from a system that composed and played minuets to a robotic system that played and solved the Tower of Hanoi puzzle.
VAXELN was not ported to the DEC Alpha architecture, and instead was replaced with a Digital-supported port of VxWorks to Alpha, and a VAXELN application programming interface (API) compatibility layer for that platform. In 1999, SMART Modular Technologies acquired Compaq's (formerly Digital's) embedded systems division, which included VAXELN.
Origin of name
The system was originally supposed to be named Executive for Local Area Network (ELAN), but DEC discovered at the last minute that the word Elan was trademarked in a European country where DEC wished to conduct business. The company holding the trademark was the Slovenian sports equipment manufacturer Elan. To avoid litigation, DEC quickly renamed it to VAXELN by dropping the A, much to the disgruntlement of the developers. Some documentation and marketing material had already been printed referring to the product as ELAN, and samples of these posters were prized for many years by members of the original team.
References
External links
Introduction to VAXELN
DEC operating systems
Discontinued operating systems
Real-time operating systems
Device file
In Unix-like operating systems, a device file or special file is an interface to a device driver that appears in a file system as if it were an ordinary file. There are also special files in DOS, OS/2, and Windows. These special files allow an application program to interact with a device by using its device driver via standard input/output system calls. Using standard system calls simplifies many programming tasks, and leads to consistent user-space I/O mechanisms regardless of device features and functions.
Device files usually provide simple interfaces to standard devices (such as printers and serial ports), but can also be used to access specific unique resources on those devices, such as disk partitions. Additionally, device files are useful for accessing system resources that have no connection with any actual device, such as data sinks and random number generators.
There are two general kinds of device files in Unix-like operating systems, known as character special files and block special files. The difference between them lies in how much data is read and written by the operating system and hardware. These together can be called device special files in contrast to named pipes, which are not connected to a device but are not ordinary files either.
MS-DOS borrowed the concept of special files from Unix but renamed them devices. Because early versions of MS-DOS did not support a directory hierarchy, devices were distinguished from regular files by making their names reserved words, for example: the infamous CON. These were chosen for a degree of compatibility with CP/M and are still present in modern Windows for backwards compatibility.
In some Unix-like systems, most device files are managed as part of a virtual file system traditionally mounted at /dev, possibly associated with a controlling daemon, which monitors hardware addition and removal at run time, making corresponding changes to the device file system if that's not automatically done by the kernel, and possibly invoking scripts in system or user space to handle special device needs. FreeBSD, DragonFly BSD and Darwin have a dedicated devfs file system; device nodes are managed automatically by this file system, in kernel space. Linux used to have a similar devfs implementation, but it was later abandoned and then removed as of version 2.6.17; Linux now primarily uses a user-space implementation known as udev, but there are many variants.
In Unix systems which support chroot process isolation, such as Solaris Containers, typically each chroot environment needs its own /dev; these mount points will be visible on the host OS at various nodes in the global file system tree. By restricting the device nodes populated into chroot instances of /dev, hardware isolation can be enforced by the chroot environment (a program can not meddle with hardware that it can neither see nor name—an even stronger form of access control than Unix file system permissions).
MS-DOS managed hardware device contention (see TSR) by making each device file exclusive open. An application attempting to access a device already in use would discover itself unable to open the device file node. A variety of device driver semantics are implemented in Unix and Linux concerning concurrent access.
Unix and Unix-like systems
Device nodes correspond to resources that an operating system's kernel has already allocated. Unix identifies those resources by a major number and a minor number, both stored as part of the structure of a node. The assignment of these numbers occurs uniquely in different operating systems and on different computer platforms. Generally, the major number identifies the device driver and the minor number identifies a particular device (possibly out of many) that the driver controls: in this case, the system may pass the minor number to a driver. However, in the presence of dynamic number allocation, this may not be the case (e.g. on FreeBSD 5 and up).
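The major/minor pairing is easy to observe from Python's os module, which exposes the same packing that device nodes use; a small sketch (the 8,1 pair is only an illustrative value, conventionally the first partition of the first SCSI disk on Linux):

```python
import os

# Pack a (major, minor) pair into a single device number, as stored
# in a device node's st_rdev field, then unpack it again.
dev = os.makedev(8, 1)
assert os.major(dev) == 8
assert os.minor(dev) == 1

# On a real device node, the same numbers come from stat():
# st = os.stat("/dev/sda1")
# print(os.major(st.st_rdev), os.minor(st.st_rdev))
```

The exact bit layout of the packed device number varies between operating systems, which is why portable code goes through makedev/major/minor rather than shifting bits by hand.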
As with other special file types, the computer system accesses device nodes using standard system calls and treats them like regular computer files. Two standard types of device files exist; unfortunately their names are rather counter-intuitive for historical reasons, and explanations of the difference between the two are often incorrect as a result.
Character devices
Character special files or character devices provide unbuffered, direct access to the hardware device. They do not necessarily allow programs to read or write single characters at a time; that is up to the device in question. The character device for a hard disk, for example, will normally require that all reads and writes be aligned to block boundaries and most certainly will not allow reading a single byte.
Character devices are sometimes known as raw devices to avoid the confusion surrounding the fact that a character device for a piece of block-based hardware will typically require programs to read and write aligned blocks.
Block devices
Block special files or block devices provide buffered access to hardware devices, and provide some abstraction from their specifics. Unlike character devices, block devices will always allow the programmer to read or write a block of any size (including single characters/bytes) and any alignment. The downside is that because block devices are buffered, the programmer does not know how long it will take before written data is passed from the kernel's buffers to the actual device, or indeed in what order two separate writes will arrive at the physical device. Additionally, if the same hardware exposes both character and block devices, there is a risk of data corruption due to clients using the character device being unaware of changes made in the buffers of the block device.
Most systems create both block and character devices to represent hardware like hard disks. FreeBSD and Linux notably do not; the former has removed support for block devices, while the latter creates only block devices. In Linux, to get a character device for a disk, one must use the "raw" driver, though one can get the same effect as opening a character device by opening the block device with the Linux-specific O_DIRECT flag.
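The two kinds of device node can be told apart from the file-type bits in a node's mode; a minimal Python sketch (it assumes /dev/null exists, as it does on virtually every Unix-like system):

```python
import os
import stat

def device_kind(path):
    """Classify a filesystem node as a character device, block device,
    or something else, based on the file-type bits of its mode."""
    mode = os.stat(path).st_mode
    if stat.S_ISCHR(mode):
        return "character"
    if stat.S_ISBLK(mode):
        return "block"
    return "other"

print(device_kind("/dev/null"))   # a character device on Unix-like systems
```

This is the same check that ls performs when it prints a leading "c" or "b" in its long listing.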
Pseudo-devices
Device nodes on Unix-like systems do not necessarily have to correspond to physical devices. Nodes that lack this correspondence form the group of pseudo-devices. They provide various functions handled by the operating system. Some of the most commonly used (character-based) pseudo-devices include:
/dev/null: accepts and discards all input written to it; provides an end-of-file indication when read from.
/dev/zero: accepts and discards all input written to it; produces a continuous stream of null characters (zero-value bytes) as output when read from.
/dev/full: produces a continuous stream of null characters (zero-value bytes) as output when read from, and generates an ENOSPC ("disk full") error when attempting to write to it.
/dev/random: produces bytes generated by the kernel's cryptographically secure pseudorandom number generator. Its exact behavior varies by implementation, and sometimes variants such as /dev/urandom or /dev/arandom are also provided.
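The behavior of the most common pseudo-devices can be observed directly. A minimal sketch on a POSIX system (Python used here for brevity):

```python
# Observing /dev/null and /dev/zero on a POSIX system.
with open("/dev/null", "wb") as f:
    f.write(b"discarded")            # write succeeds; the data is thrown away

with open("/dev/null", "rb") as f:
    assert f.read() == b""           # immediate end-of-file on read

with open("/dev/zero", "rb") as f:
    chunk = f.read(16)               # an endless stream of zero-value bytes
assert chunk == b"\x00" * 16
```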
Additionally, BSD-specific pseudo-devices with an ioctl interface may also include:
/dev/pf: allows userland processes to control PF through an ioctl interface.
/dev/bio: provides access to devices otherwise not found as /dev nodes, used by bioctl to implement RAID management in OpenBSD and NetBSD.
/dev/sysmon: used by NetBSD's envsys framework for hardware monitoring, accessed in the userland through proplib by the envstat utility.
Node creation
Nodes are created by the mknod system call. The command-line program for creating nodes is also called mknod. Nodes can be moved or deleted by the usual filesystem system calls (rename, unlink) and commands (mv, rm).
Some Unix versions include a script named makedev or MAKEDEV to create all necessary devices in the /dev directory. It only makes sense on systems whose devices are statically assigned major numbers (e.g., by means of hardcoding them in their kernel module).
Some other Unix systems, such as FreeBSD, use kernel-based device node management via devfs only and do not support manual node creation. The mknod system call and mknod command exist to keep compatibility with POSIX, but device nodes created manually outside devfs will not function at all.
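Because mknod itself normally requires root privileges, the simplest way to explore device nodes is to inspect an existing one: the file mode carries the character/block flag, and st_rdev packs the major and minor numbers. A small sketch (Python, POSIX assumed):

```python
import os
import stat

st = os.stat("/dev/null")
assert stat.S_ISCHR(st.st_mode)   # /dev/null is a character device

# The driver (major) and per-device (minor) numbers are packed in st_rdev.
major = os.major(st.st_rdev)
minor = os.minor(st.st_rdev)
print(major, minor)               # on Linux, /dev/null is conventionally 1 3
```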
Naming conventions
The following prefixes are used for the names of some devices in the /dev hierarchy, to identify the type of device:
lp: line printers (compare lp)
pty: pseudo-terminals (virtual terminals)
tty: terminals
Some additional prefixes have come into common use in some operating systems:
fb: frame buffer
fd: (platform) floppy disks, though this same abbreviation is also commonly used to refer to file descriptor
hd: ("classic") IDE driver (previously used for ATA hard disk drives, ATAPI optical disc drives, etc.)
hda: the master device on the first ATA channel (usually identified by major number 3 and minor number 0)
hdb: the slave device on the first ATA channel
hdc: the master device on the second ATA channel
hdd: the slave device on the second ATA channel
parport, pp: parallel ports
mem: main memory (character device)
NVMe driver:
nvme0: first registered device's device controller (character device)
nvme0n1: first registered device's first namespace (block device)
nvme0n1p1: first registered device's first namespace's first partition (block device)
MMC driver:
mmcblk: storage driver for MMC media (SD cards, eMMC chips on laptops, etc.)
mmcblk0: first registered device
mmcblk0p1: first registered device's first partition
SCSI driver, also used by libATA (modern PATA/SATA driver), USB, IEEE 1394, etc.:
sd: mass-storage driver (block device)
sda: first registered device
sdb, sdc, etc.: second, third, etc. registered devices
ses: Enclosure driver
sg: generic SCSI layer
sr: "ROM" driver (data-oriented optical disc drives; scd is just a secondary alias)
st: magnetic tape driver
tty: terminals
ttyS: (platform) serial port driver
ttyUSB: USB serial converters, modems, etc.
The canonical list of the prefixes used in Linux can be found in the Linux Device List, the official registry of allocated device numbers and directory nodes for the Linux operating system.
For most devices, this prefix is followed by a number uniquely identifying the particular device. For hard drives, a letter is used to identify devices and is followed by a number to identify partitions. Thus a file system may "know" an area on a disk as /dev/sda3, for example, or "see" a networked terminal session as associated with /dev/pts/14.
On disks using the typical PC master boot record, the device numbers of primary and the optional extended partition are numbered 1 through 4, while the indexes of any logical partitions are 5 and onwards, regardless of the layout of the former partitions (their parent extended partition does not need to be the fourth partition on the disk, nor do all four primary partitions have to exist).
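This numbering rule can be sketched as a hypothetical helper (the function name and signature are illustrative only, not any real API):

```python
def mbr_partition_names(disk, n_primary, n_logical):
    """Hypothetical helper: device names under the MBR convention.

    Primary/extended partitions occupy indexes 1-4; logical partitions
    are numbered from 5 upward regardless of how many primary slots
    are actually in use.
    """
    names = ["%s%d" % (disk, i) for i in range(1, n_primary + 1)]
    names += ["%s%d" % (disk, i) for i in range(5, 5 + n_logical)]
    return names

# Two primary partitions plus three logical ones on one disk:
print(mbr_partition_names("sda", 2, 3))  # ['sda1', 'sda2', 'sda5', 'sda6', 'sda7']
```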
Device names are usually not portable between different Unix-like system variants; for example, on some BSD systems the IDE devices are named /dev/wd0, /dev/wd1, etc.
devfs
devfs is a specific implementation of a device file system on Unix-like operating systems, used for presenting device files. The underlying mechanism of implementation may vary, depending on the OS.
Maintaining these special files on a physically implemented file system (i.e., hard drive) is inconvenient, and as it needs kernel assistance anyway, the idea arose of a special-purpose logical file system that is not physically stored.
Also, defining when devices are ready to appear is not entirely trivial. The devfs approach is for the device driver to request creation and deletion of devfs entries related to the devices it enables and disables.
PC DOS, TOS, OS/2, and Windows
In PC DOS, TOS, OS/2, and Windows systems, device files are accessed through reserved keywords that allow access to certain ports and devices.
MS-DOS borrowed the concept of special files from Unix but renamed them devices. Because early versions of MS-DOS did not support a directory hierarchy, devices were distinguished from regular files by making their names reserved words. This means that certain file names were reserved for devices, and should not be used to name new files or directories.
The reserved names themselves were chosen to be compatible with "special files" handling of PIP command in CP/M. There were two kinds of devices in DOS: Block Devices (used for disk drives) and Character Devices (generally all other devices, including COM and PRN devices).
DOS uses device files for accessing printers and ports. Most versions of Windows also contain this support, which can cause confusion when trying to make files and folders of certain names, as they cannot have these names. Versions 2.x of MS-DOS provide the AVAILDEV CONFIG.SYS parameter that, if set to FALSE, makes these special names only active if prefixed with \DEV\, thus allowing ordinary files to be created with these names.
GEMDOS, the DOS-like part of Atari TOS, supported similar device names to DOS, but unlike DOS it required a trailing ":" character (on DOS, this is optional) to identify them as devices as opposed to normal filenames (thus "CON:" would work on both DOS and TOS, but "CON" would name an ordinary file on TOS but the console device on DOS). In MiNT and MagiC, a special UNIX-like unified filesystem view accessed via the "U:" drive letter also placed device files in "U:\DEV".
Using shell redirection and pipes, data can be sent to or received from a device. For example, typing the following will send the file c:\data.txt to the printer:
TYPE c:\data.txt > PRN
PIPE, MAILSLOT, and MUP are other standard Windows devices.
IOCS
The 8-bit operating system of Sharp pocket computers like the PC-E500, PC-E500S etc. consists of a BASIC interpreter, a DOS 2-like File Control System (FCS) implementing a rudimentary 12-bit FAT-like filesystem, and a BIOS-like Input/Output Control System (IOCS) implementing a number of standard character and block device drivers as well as special file devices including STDO:/SCRN: (display), STDI:/KYBD: (keyboard), COM: (serial I/O), STDL:/PRN: (printer), CAS: (cassette tape), E:/F:/G: (memory file), S1:/S2:/S3: (memory card), X:/Y: (floppy), SYSTM: (system), and NIL: (function).
Implementations
See also
devfsd
sysfs
Block size
Blocking
Buffer
File system
Hardware abstraction
Storage area network
User space
Unix file types
udev
References
Further reading
Interfaces of the Linux kernel
Pseudo file systems supported by the Linux kernel
Special-purpose file systems
Unix file system technology | Operating System (OS) | 1,002 |
Open Source Routing Machine
The Open Source Routing Machine or OSRM is a C++ implementation of a high-performance routing engine for shortest paths in road networks. Licensed under the permissive 2-clause BSD license, OSRM is a free network service. OSRM supports the Linux, FreeBSD, Windows, and Mac OS X platforms.
Overview
It combines sophisticated routing algorithms with the open and free road network data of the OpenStreetMap (OSM) project. Shortest path computation on a continental-sized network can take up to several seconds if it is done without a so-called speed-up technique. OSRM uses an implementation of contraction hierarchies and is able to compute and output a shortest path between any origin and destination within a few milliseconds, whereby the pure route computation takes much less time. Most effort is spent in annotating the route and transmitting the geometry over the network.
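For illustration, the baseline that contraction hierarchies accelerate is plain Dijkstra search, which can be sketched on a toy graph as follows (this is not OSRM's actual code or data structures):

```python
import heapq

def dijkstra(graph, source, target):
    """Plain Dijkstra shortest-path search on a weighted digraph.

    graph maps each node to a list of (neighbor, edge_weight) pairs.
    Returns (total_distance, path). Illustrative only: OSRM layers a
    contraction-hierarchy speed-up on top of searches like this one.
    """
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return dist[target], path[::-1]

roads = {"A": [("B", 4), ("C", 2)], "C": [("B", 1), ("D", 7)],
         "B": [("D", 3)], "D": []}
print(dijkstra(roads, "A", "D"))   # (6, ['A', 'C', 'B', 'D'])
```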
Since it is designed with OpenStreetMap compatibility in mind, OSM data files can be easily imported. A demo installation is sponsored by the Karlsruhe Institute of Technology and previously by Geofabrik. The screenshot shown here has been out of date since September 2015 and no longer reflects the routing-service features of the demo.
OSRM was part of the 2011 Google Summer of Code class.
Features
'Click-to-drag' dynamic routing, in the manner of Google Maps
Alternative routes
Free-to-use API
Free and open-source under the simplified two-clause BSD license
See also
GraphHopper
References
Further reading
External links
Project homepage
Demonstration from the project's homepage
Free software programmed in C++
OpenStreetMap
Route planning software
Web mapping
Software using the BSD license | Operating System (OS) | 1,003 |
Comparison of the Java and .NET platforms
Legal issues
.NET
The Mono project aims to avoid infringing on any patents or copyrights and, to the extent that they are successful, the project can be safely distributed and used under the GPL. On November 2, 2006, Microsoft and Novell announced a joint agreement whereby Microsoft promised not to sue Novell or its customers for patent infringement. According to a statement on the blog of Mono project leader Miguel de Icaza, this agreement only extends to Mono for Novell developers and users. Because of the possible threat of Microsoft patents, the FSF recommends that people avoid creating software that depends on Mono or C#.
The Microsoft–Novell agreement was criticized by some in the open source community because it violates the principles of giving equal rights to all users of a particular program (see Agreement with Microsoft and Mono and Microsoft's patents).
In response to the Microsoft–Novell agreement, the Free Software Foundation revised its GNU General Public License to close the loophole used by Microsoft and Novell to bypass the GPL's very strong and protective provisions on patent deals (considered by Microsoft as restrictive). The FSF also stated that by selling coupons for Novell's Linux software, the mechanism by which Microsoft circumvented the GNU license, it considers Microsoft to be a Linux vendor, and thereby subject to the full terms and conditions laid out in the GPL.
The .NET landscape started to change in 2013, when Microsoft decided to open-source many of its core .NET technologies under Apache License, with even more donated to newly formed .NET Foundation in 2014. Open-sourced technologies include ASP.NET MVC, Entity Framework, Managed Extensibility Framework, Roslyn compiler-as-a-service infrastructure (together with C# and Visual Basic .NET compilers), F# functional-first language compiler, and many more. Microsoft and Xamarin announced collaboration, with the intent to increase cross-platform availability of .NET on Mac OS, Linux, and mobile devices.
Microsoft released in June 2016 .NET Core 1.0, which is an open-source cross-platform environment and a lean version of the pure Windows implementation.
Traditional computer applications
Desktop applications
Although Java's AWT (Abstract Window Toolkit) and Swing libraries are not shy of features, Java has struggled to establish a foothold in the desktop market. Sun Microsystems was also slow, in the eyes of some, to promote Java to developers and end-users in a way that makes it an appealing choice for desktop software. Even technologies such as Java Web Start, which have few parallels within rival languages and platforms, have barely been promoted.
The release of Java version 6.0 on December 11, 2006, saw a renewed focus on the desktop market with an extensive set of new tools for closer integration with the desktop. At the 2007 JavaOne conference Sun made further desktop related announcements, including a new language aimed at taking on Adobe Flash (JavaFX), a new lightweight way of downloading the JRE that sees the initial footprint reduced to under 2 Mb, and a renewed focus on multimedia libraries.
An alternative to AWT and Swing is the Standard Widget Toolkit (SWT), which was originally developed by IBM and now maintained by the Eclipse Foundation. It attempts to achieve improved performance and visualization of Java desktop applications by relying on underlying native libraries where possible.
On Windows, Microsoft's .NET is a popular desktop development platform, providing Windows Forms (a lightweight wrapper around the Win32 API), Windows Presentation Foundation, and Silverlight. With the integration of .NET into the Windows platform, .NET apps are first-class citizens in the Windows environment, with tighter OS integration and a native look and feel compared to Java's Swing.
Outside of Windows, Silverlight is portable to the Mac OS X desktop. Mono is also becoming more common in open-source and free software systems due to its inclusion on many Linux desktop environments.
Server applications
This is probably the arena in which the two platforms are closest to being considered rivals. Java, through its Java EE (a.k.a. Java Platform Enterprise Edition) platform, and .NET through ASP.NET, compete to create web-based dynamic content and applications.
Both platforms are well used and supported in this market. Of the top 1,000 websites, approximately 24% use ASP.NET and also 24% use Java, whereas of all the websites approximately 17% use ASP.NET and 3% use Java.
Some of Oracle's Java-related license agreements for Java EE define aspects of the Java platform as a trade secret, and prohibit the end user from contributing to a third-party Java environment. Specifically, at least one current license for an Oracle Java EE development package contains the following terms: "You may make a single archival copy of Software, but otherwise may not copy, modify, or distribute Software." — "Unless enforcement is prohibited by applicable law, you may not decompile, or reverse engineer Software." — "You may not publish or provide the results of any benchmark or comparison tests run on Software to any third party without the prior written consent of Oracle." — "Software is confidential and copyrighted." However, while Oracle's software is subject to the above license terms, Oracle's Java EE API reference has been implemented under an open-source license by the WildFly (originally JBoss) and JOnAS projects.
Microsoft's implementation of ASP.NET is not part of the standardized CLI and, while Microsoft's runtime environment and development tools are not subject to comparable secrecy agreements to Java EE, the official Microsoft tools are not open source or free software, and require Windows servers. However, a cross-platform free software ASP.NET implementation is part of the Mono project (minus Web Parts and Web Services Enhancements). Mono supports ASP.NET 4.0 including Web Forms, Microsoft AJAX, and ASP.NET MVC.
Embedded applications
Mobile applications
Google's popular Android platform for mobile application is based on Java. Google adopted a customised virtual machine called Dalvik to optimise the execution of Java code for mobile devices.
Oracle provides Java ME, a reference implementation for mobile OEM vendors. Java ME is made up of various profiles that are subsets of the Java desktop environment with additional libraries targeted at mobile and set-top-box development. Java ME has a very large base within the mobile phone and PDA markets, with only the cheapest devices now devoid of a KVM (a cut-down JVM for use on devices with limited processing power). Java software, including many games, is commonplace.
While many feature phones include a JVM, they are not always heavily used by users (particularly in South Africa). Initially Java applications on most phones typically consisted of menuing systems, small games, or systems to download ringtones etc. However, more-powerful phones are increasingly being sold with simple applications pre-loaded, such as translation dictionaries, world clock displays (darkness/light, time zones, etc.), and calculators. Some of these are written in Java, although how often phone owners actually use them is probably unknown.
Microsoft currently ships the .NET Compact Framework that runs on Windows CE and mobile devices, set-top boxes, and PDAs as well as the Xbox 360. Microsoft also provides the .NET Micro Framework for embedded developers with limited resources.
Alternatively, Novell licenses embeddable versions of Mono to third parties to use in their devices, and Xamarin commercially distributes the MonoDroid and MonoTouch framework for Android and iPhone development, respectively.
Windows Phone 7 uses Silverlight for native apps, but Windows Phone 8 has C# and XAML as the main languages.
Home entertainment technologies
Java has found a market in digital television, where it can be used to provide software that sits alongside programming, or extends the capabilities of a given set-top box. TiVo, for example, has a facility called "Home Media Engine", which allows Java TV software to be transmitted to an appropriate TiVo device to complement programming or provide extra functionality (for example, personalized stock tickers on a business news program).
A variant of Java has been accepted as the official software tool for use on the next generation optical disc technology Blu-ray, via the BD-J interactive platform. This will mean that interactive content, such as menus, games, downloadables, etc. on all Blu-ray optical discs will be created under a variant of the Java platform.
Rather than using Java, HD DVD (the defunct high-definition successor to DVD) used a technology jointly developed by Microsoft and Disney called HDi that was based on XML, CSS, JavaScript, and other technologies that are comparable to those used by standard web browsers.
The BD-J platform API is more extensive than its iHD rival, with an alleged 8,000 methods and interfaces, as opposed to iHD's 400. And while Microsoft is pushing iHD's XML presentation layer by including it with Windows Vista, iHD is still a newcomer in a market sector where Java technologies are already commonplace.
However, the fact that the HD DVD format has been abandoned in favor of Blu-ray means that HDi is no longer supported on any optical disc format, making the BD-J format a clear winner.
Runtime inclusion in operating systems
.NET/Mono
On Windows, Microsoft has promoted .NET as its flagship development platform by including the .NET runtime in Windows XP Service Pack 2 and 3, Windows Server 2003, Windows Vista, Windows Server 2008 and Windows 7. Microsoft also distributes the Visual Studio Express development environment at no cost, and the Visual Studio Community development environment at no cost, with limited use for organizations.
.NET Framework 3.5 runtime is not pre-installed on versions of Windows prior to Vista SP1, and must be downloaded by the user, which has been criticized because of its large size (65 MB download for .NET 3.5).
While neither .NET nor Mono are installed with Mac OS X out-of-the-box, the Mono project can be downloaded and installed separately, for free, for any Mac user who wants to build or run C# and .NET software. As of 13 May 2008, Mono's System.WindowsForms 2.0 is API-complete (contains 100% of classes, methods etc. in Microsoft's System.WindowsForms 2.0); also System.WindowsForms 2.0 works natively on Mac OS X.
C# and the CLI are included and used in a number of Linux- and BSD-based operating systems by way of including the free software Mono Project.
As a result of inclusion of .NET or Mono runtimes in the distributions of Windows and Linux, non-GUI applications that use the programming interfaces that are common to both .NET and Mono can be developed in C# or any other .NET language and then deployed across many operating systems and processor architectures using a runtime environment that is available as a part of the operating system's installation. Both Microsoft .NET and the Mono project have complete support for the Ecma- and ISO-standardized C# language and .NET runtime, and many of Microsoft's non-standardized .NET programming interfaces have been implemented or are under development in Mono, but each environment includes many components that have not been implemented in the other.
Java
No current version of Windows ships with Java; Microsoft stopped bundling it after Windows XP SP1a.
Java was pre-installed on all new Apple computers beginning with Mac OS X 10.0 and ending with 10.6, after which Java 6 became an optional Apple download. Java 7 and later releases are provided by Oracle.
Java comes pre-installed with many commercial Unix flavors, including those from Hewlett Packard, IBM, and Oracle. As of June 2009, the Debian, Fedora 9, Mandriva, OpenSUSE, Slackware extra, and Ubuntu 8.04 distributions are available with OpenJDK, based completely on free and open-source code. Since June 2008, OpenJDK passed all of the compatibility tests in the Java SE 6 JCK and can claim to be a fully compatible Java 6 implementation. OpenJDK can run complex applications such as Eclipse, GlassFish, WildFly, or Netbeans.
The Operating System Distributor License for Java (DLJ) was a Sun initiative to ease distribution issues with operating systems based on Linux or OpenSolaris.
If Java is not installed on a computer by default, it may be downloaded by the user as a Web plugin. The Web plugin process has been criticized because of the size of the Java plugin. Unlike other plugins, the Java download is a full runtime environment capable of running not just applets, but full applications and dynamic WebStart apps. Because of this, the perceived download footprint is larger than some web plugins. However, compared to Java, other popular browser plugins have larger sizes: Java 6 JRE is 13 MB, but Acrobat Reader is 33 MB, QuickTime 19 MB, Windows Media Player 25 MB, the .NET Framework 3.0 runtime is 54 MB, and the .NET Framework 3.5 runtime is 197 MB (it is a unified package for x86, x64, and IA-64; each part is approximately 60 MB).
At the JavaOne event in May 2007, Sun announced that the deployment issues with Java would be solved in two major updates during the lifespan of Java 6 (the changes will not be held over to Java 7.) These include:
The introduction of a new consumer JRE edition, with an initial 2 Mb footprint and the ability to download the remaining 9 Mb in sections using an on-demand methodology.
The development of drop-in cross-platform JavaScript code, which can be used from a Web page to install the necessary JRE for a given applet or Rich Internet Application to run, if necessary.
An improvement in support for automatically downloading updates to the JRE.
Support for pre-loading of the JRE, so applets and applications written in Java start up almost instantaneously.
See also
Comparison of C# and Java languages
Java programming language
References
External links
Moving to C# and the .NET Framework at MSDN
ECMA-335 Common Language Infrastructure (CLI), 4th edition (June 2006) - free download of Ecma CLI standard
ISO/IEC 23271:2006 Common Language Infrastructure (CLI) Partitions I to VI - the official ISO/IEC CLI standard
.NET
Java (programming language)
Java and .NET platforms | Operating System (OS) | 1,004 |
Citadel (software)
Citadel is the name of a bulletin board system (BBS) computer program, and of the genre of programs it inspired. Citadels were notable for their room-based structure (see below) and relatively heavy emphasis on messages and conversation as opposed to gaming and files. The first Citadel came online in 1980 with a single 300 baud modem; eventually many versions of the software, both clones and those descended from the original code base (but all usually called "Citadels"), became popular among BBS callers and sysops, particularly in areas such as the Pacific Northwest, Northern California and Upper Midwest of the United States, where development of the software was ongoing. Citadel BBSes were most popular in the late 1980s and early 1990s, but when the Internet became more accessible for online communication, Citadels began to decline. However, some versions of the software, from small community BBSes to large systems supporting thousands of simultaneous users, are still in use today. Citadel development has always been collaborative with a strong push to keep the source code in the public domain. This makes Citadel one of the oldest surviving FOSS projects.
The Citadel user interface
The utilization of a natural metaphor, the concept of rooms devoted to topics, marked Citadel's main advancement over previous BBS packages in the area of organization. Messages are associated with rooms, to which the user moves in order to participate in discussions; similarly, a room could optionally give access to the underlying file system, permitting the organization of available files in an organic manner. Most installations permitted any user to create a room, resulting in a dynamic ebb and flow closer to true conversation than most other BBS packages achieved. Certain versions of Citadel extend the metaphor of rooms with “hallways” and/or “floors,” organizing groups of rooms according to system requirement. By contrast, previous bulletin board software emphasized the availability of files, with a single uncoupled message area that could only be read linearly, forward or backward.
Citadel further improved the user experience in the area of command and control. Based on Alan Kay’s philosophy of user-interface design, “Simple things should be simple; complex things should be possible,” and influenced by the fact that Citadel was developed in an era of 300 baud modems, the basic and most heavily used commands are accessed via single keystrokes. The most common commands are Goto (the next room with new messages), New messages (display the New messages in the room to the user), and Enter a message into the room. Other single keystroke commands exist as well, such as Known rooms, which lists the rooms known to the user.
This elegantly small command set made the system so usable that many daily users during Citadel’s golden era were never aware that Citadel also provided sophisticated capabilities. These are known as the “dot” commands and build logically from the set of single keystroke commands. A simple example would be the requirement to go directly to a specified room. The user would type [.G]oto [room name], where the text between the brackets is typed by the user, while the rest is filled in by the system. A more complex example might be .Read All rooms Zmodem New messages (.RAZN), which results in all of the new messages in all of the rooms known to the user being sent to the user via the ZMODEM protocol. Filters for users, keyword searches, and other capabilities have been implemented, depending on the version of Citadel.
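The abbreviation scheme behaves like a per-keystroke keyword expansion. A hypothetical sketch (the keyword table here is illustrative, not Citadel-86's actual command set):

```python
# Hypothetical sketch of how a Citadel-style client might expand
# single-keystroke "dot" commands into their full wording.
KEYWORDS = {
    "G": "Goto",
    "E": "Enter a message",
    "R": "Read",
    "A": "All rooms",
    "Z": "Zmodem",
    "N": "New messages",
}

def expand(dot_command):
    """Expand a dot command such as '.RAZN' into its full wording."""
    assert dot_command.startswith(".")
    return " ".join(KEYWORDS[ch] for ch in dot_command[1:])

print(expand(".RAZN"))   # Read All rooms Zmodem New messages
```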
History
Citadel was originally written for the CP/M operating system in 1981 by Jeff Prothero, known to the nascent Citadel world as Cynbe ru Taren (CrT). Unlike most BASIC-based BBS programs of the time, it was written in a fairly standard dialect of C known as BDS C, a compiler written and distributed by Leor Zolman. The first installation came online in December, 1981, running on a Heathkit H-89, and in its 6 month lifetime achieved immediate success.
Version 2 debuted on David Mitchell's ICS BBS, and with the release of 2.11, Prothero's involvement with the project ended following a conflict centered around a user called "sugar bunny". He released the source to the public domain and it became available as a download from various systems as well as through the C Users Group.
At this point, the history of Citadel becomes complex as many individuals began modifying the source to their own ends, and lacking modern distributed source tracking, innovations were never incorporated into a central source repository, as such a thing did not exist. Initially, Bruce King, David Bonn (releasing under the name Stonehenge), Caren Park, and James Shields, amongst others, picked up the opportunity in the Seattle area.
The longest lived fork from the 2.10 code started in the American Midwest, when Hue White (aka Hue, Jr.) ported the code to MS-DOS and called it Citadel-86 ("C-86"). His board, Citadel-86 Test System, served not only as a discussion board and distribution center for the software, but also was the focal point for a lively Citadel-86 community in the 612 area code (the Twin Cities), which at their peak numbered roughly forty systems, and probably more than 100 over the years. Numerous suggestions from sysops and users, both local and national, guided the growth of Citadel-86, including the addition of a network capability as well as enhancements to the command set. Hue's contributions were substantial enough that several other porting projects used Citadel-86 as source material, such as Asgard-86 (MS-DOS), Macadel (Macintosh), STadel (Atari ST, fnordadel), Citadel-68K (Amiga), and Citadel:K2NE (MS-DOS), and many of these contributed back to Hue Jr's project. Most of these ports were compatible with the growing Citadel-86 network (C86Net). Local systems would network with each other on a demand basis (due to the work of David Parsons), while the long haul network was serviced late at night.
An early fork from Citadel-86 was DragCit, written by The Dragon. DragCit also introduced networking code, but the DragCit network was not generally compatible with the Citadel-86 network. DragCit forked to several more versions, eventually leading to efforts to merge several code bases under the guidance of Matt Pfleger, Richard Goldfinder, Brent Bottles, Don Kimberlin, and Elisabeth Perrin, the end result being Citadel+, a multiuser capable version of the software, which also included advanced scripting, user control of message displays, and other features.
Other Citadel implementations
Implementations that share the familiar Citadel user interface, but are not derived from the original Citadel code base, are also common. They have ranged from vanity projects such as a Citadel-like control program to control the serial port of an advanced graphing calculator, to full-blown efforts to modernize the Citadel interface with modern protocols.
Some of the more notable ones included Glenn Gorman's TRS-80 BASIC implementation called Minibin, a clone of Cit-86 intended to run on a Unix running on Motorola processors called Cit/68, and a Unix version, technically called Citadel/UX but referred to simply as "Citadel" in the mainstream open source community. This version of Citadel is still being developed, extending the Citadel metaphor to enable what its developers call "a messaging and collaboration platform (for) connecting communities of people together": a groupware platform.
Several efforts have also been made to present the Citadel paradigm as a web service, including Webadel, written by Jarrin Jambik, a former Citadel-86 sysop, and Anansi-web, anansi-web.com hosted by former Citadel-86 Sysop, Ultravox the Muse. The only current actively developed web-enabled Citadels are Citadel/UX and PenguinCit, a PHP-based Citadel.
Active Citadels
References
External links
The Citadel Archive (archived), the largest repository of historical information about Citadel implementations. Contains archived software of many different Citadel versions, as well as the Citadel Family Tree (archived), which shows the relationship of the various code branches descending from the original Citadel.
Homepage for the modern Citadel software, an open source project
Early text file (1982), about CrT's Citadel and its earliest descendants
The release notes from Citadel 2.1 in 1982 (archived), containing interesting comments from CrT about the basic philosophy behind the Citadel user interface.
Bulletin board system software
DOS software
1981 software
Public-domain software with source code
Free software
ARM architecture family
ARM (stylised in lowercase as arm, formerly an acronym for Advanced RISC Machines and originally Acorn RISC Machine) is a family of reduced instruction set computer (RISC) instruction set architectures for computer processors, configured for various environments. Arm Ltd. develops the architectures and licenses them to other companies, who design their own products that implement one or more of those architectures, including system on a chip (SoC) and system on module (SoM) designs, that incorporate different components such as memory, interfaces, and radios. It also designs cores that implement these instruction set architectures and licenses these designs to many companies that incorporate those core designs into their own products.
There have been several generations of the ARM design. The original ARM1 used a 32-bit internal structure but had a 26-bit address space that limited it to 64 MB of main memory. This limitation was removed in the ARMv3 series, which has a 32-bit address space, and several additional generations up to ARMv7 remained 32-bit. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set. Arm Ltd. has also released a series of additional instruction sets for different roles; the "Thumb" extension adds both 32- and 16-bit instructions for improved code density, while Jazelle added instructions for directly handling Java bytecode. More recent changes include the addition of simultaneous multithreading (SMT) for improved performance or fault tolerance.
Due to their low costs, minimal power consumption, and lower heat generation than their competitors, ARM processors are desirable for light, portable, battery-powered devices, including smartphones, laptops and tablet computers, and other embedded systems. However, ARM processors are also used for desktops and servers, including the world's fastest supercomputer. With over 200 billion ARM chips produced, ARM is the most widely used family of instruction set architectures (ISA) and the ISA produced in the largest quantity. Currently, the widely used Cortex cores, older "classic" cores, and specialised SecurCore core variants are available for each of these, with options to include or exclude optional capabilities.
History
BBC Micro
Acorn Computers' first widely successful design was the BBC Micro, introduced in December 1981. This was a relatively conventional machine based on the MOS Technology 6502 CPU but ran at roughly double the performance of competing designs like the Apple II due to its use of faster dynamic random-access memory (DRAM). Typical DRAM of the era ran at about 2 MHz; Acorn arranged a deal with Hitachi for a supply of faster 4 MHz parts.
Machines of the era generally shared memory between the processor and the framebuffer, which allowed the processor to quickly update the contents of the screen without having to perform separate input/output (I/O). As the timing of the video display is exacting, the video hardware had to have priority access to that memory. Due to a quirk of the 6502's design, the CPU left the memory untouched for half of the time. Thus by running the CPU at 1 MHz, the video system could read data during those down times, taking up the total 2 MHz bandwidth of the RAM. In the BBC Micro, the use of 4 MHz RAM allowed the same technique to be used, but running at twice the speed. This allowed it to outperform any similar machine on the market.
Acorn Business Computer
1981 was also the year that the IBM Personal Computer was introduced. Using the recently introduced Intel 8088, a 16-bit CPU compared to the 6502's 8-bit design, it was able to offer higher overall performance. Its introduction changed the desktop computer market radically: what had emerged over the prior five years as a largely hobby and gaming market began to change into a must-have business tool where the earlier 8-bit designs simply could not compete. Newer 32-bit designs were also coming to market, such as the Motorola 68000 and National Semiconductor NS32016.
Acorn began considering how to compete in this market and produced a new paper design named the Acorn Business Computer. They set themselves the goal of producing a machine with ten times the performance of the BBC Micro, but at the same price. This would outperform and underprice the PC. At the same time, the recent introduction of the Apple Lisa brought the graphical user interface (GUI) concept to a wider audience and suggested the future belonged to machines with a GUI. The Lisa, however, cost $9,995, as it was packed with support chips, large amounts of memory, and a hard disk drive, all very expensive then.
The engineers then began studying all of the CPU designs available. Their conclusion about the existing 16-bit designs was that they were a lot more expensive and were still "a bit crap", offering only slightly higher performance than their BBC Micro design. They also almost always demanded a large number of support chips to operate even at that level, which drove up the cost of the computer as a whole. These systems would simply not hit the design goal. They also considered the new 32-bit designs, but these cost even more and had the same issues with support chips. According to Sophie Wilson, all the processors tested at that time performed about the same, with about a 4 Mbit/second bandwidth.
Two key events led Acorn down the path to ARM. One was the publication of a series of reports from the University of California, Berkeley, which suggested that a simple chip design could nevertheless have extremely high performance, much higher than the latest 32-bit designs on the market. The second was a visit by Steve Furber and Sophie Wilson to the Western Design Center, a company run by Bill Mensch and his sister, which had become the logical successor to the MOS team and was offering new versions like the WDC 65C02. The Acorn team saw high school students producing chip layouts on Apple II machines, which suggested that anyone could do it. In contrast, a visit to another design firm working on a modern 32-bit CPU revealed a team of more than a dozen members who were already on revision H of their design, which still contained bugs. This cemented their late 1983 decision to begin their own CPU design, the Acorn RISC Machine.
Design concepts
The original Berkeley RISC designs were in some sense teaching systems, not designed specifically for outright performance. To its basic register-heavy concept, ARM added a number of the well-received design notes of the 6502. Primary among them was the ability to quickly serve interrupts, which allowed the machines to offer reasonable input/output performance with no added external hardware. To offer interrupt handling as fast as the 6502's, the ARM design limited its physical address space to 64 MB: 24 bits of pointers addressing 4-byte words, or equivalently 26 bits of pointers addressing bytes. All ARM instructions are aligned on word boundaries, so an instruction address is a word address and the program counter (PC) thus only needed to be 24 bits. This 24-bit size allowed the PC to be stored along with eight processor flags in a single 32-bit register. That meant that on the reception of an interrupt, the entire machine state could be saved in a single operation, whereas had the PC been a full 32-bit value, it would have required separate operations to store the PC and the status flags.
Another change, and among the most important in terms of practical real-world performance, was the modification of the instruction set to take advantage of page mode DRAM. Recently introduced, page mode allowed subsequent accesses of memory to run twice as fast if they were roughly in the same location, or "page". Berkeley's design did not consider page mode, and treated all memory equally. The ARM design added special vector-like memory access instructions, the "S-cycles", that could be used to fill or save multiple registers in a single page using page mode. This doubled memory performance when they could be used, and was especially important for graphics performance.
The Berkeley RISC designs used register windows to reduce the number of register saves and restores performed in procedure calls; the ARM design did not adopt this.
Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a second 6502 processor. This convinced Acorn engineers they were on the right track. Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser gave his approval and assembled a small team to design the actual processor based on Wilson's ISA. The official Acorn RISC Machine project started in October 1983.
ARM1
Acorn chose VLSI Technology as the "silicon partner", as they were a source of ROMs and custom chips for Acorn. Acorn provided the design and VLSI provided the layout and production. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985. Known as ARM1, these versions ran at 6 MHz.
The first ARM application was as a second processor for the BBC Micro, where it helped in developing simulation software to finish development of the support chips (VIDC, IOC, MEMC), and sped up the CAD software used in ARM2 development. Wilson subsequently rewrote BBC BASIC in ARM assembly language. The in-depth knowledge gained from designing the instruction set enabled the code to be very dense, making ARM BBC BASIC an extremely good test for any ARM emulator.
ARM2
The result of the simulations on the ARM1 boards led to the late 1986 introduction of the ARM2 design running at 8 MHz, and the early 1987 speed-bumped version at 10 to 12 MHz. A significant change in the underlying architecture was the addition of a Booth multiplier, whereas formerly multiplication had to be carried out in software. Further, a new Fast Interrupt reQuest mode, FIQ for short, allowed registers 8 through 14 to be replaced as part of the interrupt itself. This meant FIQ requests did not have to save out their registers, further speeding interrupts.
The ARM2 was roughly seven times the performance of a typical 7 MHz 68000-based system like the Commodore Amiga or Macintosh SE. It was twice as fast as an Intel 80386 running at 16 MHz, and about the same speed as a multi-processor VAX-11/784 superminicomputer. The only systems that beat it were the Sun SPARC and MIPS R2000 RISC-based workstations. Further, as the CPU was designed for high-speed I/O, it dispensed with many of the support chips seen in these machines, notably, it lacked any dedicated direct memory access (DMA) controller which was often found on workstations. The graphics system was also simplified based on the same set of underlying assumptions about memory and timing. The result was a dramatically simplified design, offering performance on par with expensive workstations but at a price point similar to contemporary desktops.
The ARM2 featured a 32-bit data bus, a 26-bit address space and 27 32-bit registers. The ARM2 had a transistor count of just 30,000, compared to Motorola's six-year-older 68000 model with around 68,000. Much of this simplicity came from the lack of microcode, which represents about one-quarter to one-third of the 68000's transistors, and, like most CPUs of the day, the lack of a cache. This simplicity enabled the ARM2 to have low power consumption, yet offer better performance than the Intel 80286.
A successor, ARM3, was produced with a 4 KB cache, which further improved performance. The address bus was extended to 32 bits in the ARM6, but program code still had to lie within the first 64 MB of memory in 26-bit compatibility mode, due to the reserved bits for the status flags.
Advanced RISC Machines Ltd. – ARM6
In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the ARM core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd., which became ARM Ltd. when its parent company, Arm Holdings plc, floated on the London Stock Exchange and NASDAQ in 1998. The new Apple-ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for their Apple Newton PDA.
Early licensees
In 1994, Acorn used the ARM610 as the main central processing unit (CPU) in their RiscPC computers. DEC licensed the ARMv4 architecture and produced the StrongARM. At 233 MHz, this CPU drew only one watt (newer versions draw far less). This work was later passed to Intel as part of a lawsuit settlement, and Intel took the opportunity to supplement their i960 line with the StrongARM. Intel later developed its own high performance implementation named XScale, which it has since sold to Marvell. Transistor count of the ARM core remained essentially the same throughout these changes; ARM2 had 30,000 transistors, while ARM6 grew only to 35,000.
Market share
In 2005, about 98% of all mobile phones sold used at least one ARM processor. In 2010, producers of chips based on ARM architectures reported shipments of 6.1 billion ARM-based processors, representing 95% of smartphones, 35% of digital televisions and set-top boxes and 10% of mobile computers. In 2011, the 32-bit ARM architecture was the most widely used architecture in mobile devices and the most popular 32-bit one in embedded systems. In 2013, 10 billion were produced and "ARM-based chips are found in nearly 60 percent of the world's mobile devices".
Licensing
Core licence
Arm Ltd.'s primary business is selling IP cores, which licensees use to create microcontrollers (MCUs), CPUs, and systems-on-chips based on those cores. The original design manufacturer combines the ARM core with other parts to produce a complete device, typically one that can be built in existing semiconductor fabrication plants (fabs) at low cost and still deliver substantial performance. The most successful implementation has been the ARM7TDMI, with hundreds of millions sold. Atmel was an early design centre for ARM7TDMI-based embedded systems.
The ARM architectures used in smartphones, PDAs and other mobile devices range from ARMv5 to ARMv8-A.
In 2009, some manufacturers introduced netbooks based on ARM architecture CPUs, in direct competition with netbooks based on Intel Atom.
Arm Ltd. offers a variety of licensing terms, varying in cost and deliverables. Arm Ltd. provides to all licensees an integratable hardware description of the ARM core as well as complete software development toolset (compiler, debugger, software development kit) and the right to sell manufactured silicon containing the ARM CPU.
SoC packages integrating ARM's core designs include Nvidia Tegra's first three generations, CSR plc's Quatro family, ST-Ericsson's Nova and NovaThor, Silicon Labs's Precision32 MCU, Texas Instruments's OMAP products, Samsung's Hummingbird and Exynos products, Apple's A4, A5, and A5X, and NXP's i.MX.
Fabless licensees, who wish to integrate an ARM core into their own chip design, are usually only interested in acquiring a ready-to-manufacture verified semiconductor intellectual property core. For these customers, Arm Ltd. delivers a gate netlist description of the chosen ARM core, along with an abstracted simulation model and test programs to aid design integration and verification. More ambitious customers, including integrated device manufacturers (IDM) and foundry operators, choose to acquire the processor IP in synthesizable RTL (Verilog) form. With the synthesizable RTL, the customer has the ability to perform architectural level optimisations and extensions. This allows the designer to achieve exotic design goals not otherwise possible with an unmodified netlist (high clock speed, very low power consumption, instruction set extensions, etc.). While Arm Ltd. does not grant the licensee the right to resell the ARM architecture itself, licensees may freely sell manufactured products such as chip devices, evaluation boards and complete systems. Merchant foundries can be a special case; not only are they allowed to sell finished silicon containing ARM cores, they generally hold the right to re-manufacture ARM cores for other customers.
Arm Ltd. prices its IP based on perceived value. Lower performing ARM cores typically have lower licence costs than higher performing cores. In implementation terms, a synthesisable core costs more than a hard macro (blackbox) core. Complicating price matters, a merchant foundry that holds an ARM licence, such as Samsung or Fujitsu, can offer fab customers reduced licensing costs. In exchange for acquiring the ARM core through the foundry's in-house design services, the customer can reduce or eliminate payment of ARM's upfront licence fee.
Compared to dedicated semiconductor foundries (such as TSMC and UMC) without in-house design services, Fujitsu/Samsung charge two- to three-times more per manufactured wafer. For low to mid volume applications, a design service foundry offers lower overall pricing (through subsidisation of the licence fee). For high volume mass-produced parts, the long term cost reduction achievable through lower wafer pricing reduces the impact of ARM's NRE (Non-Recurring Engineering) costs, making the dedicated foundry a better choice.
Companies that have developed chips with cores designed by Arm Holdings include Amazon.com's Annapurna Labs subsidiary, Analog Devices, Apple, AppliedMicro (now: MACOM Technology Solutions), Atmel, Broadcom, Cavium, Cypress Semiconductor, Freescale Semiconductor (now NXP Semiconductors), Huawei, Intel, Maxim Integrated, Nvidia, NXP, Qualcomm, Renesas, Samsung Electronics, ST Microelectronics, Texas Instruments and Xilinx.
Built on ARM Cortex Technology licence
In February 2016, ARM announced the Built on ARM Cortex Technology licence, often shortened to Built on Cortex (BoC) licence. This licence allows companies to partner with ARM and make modifications to ARM Cortex designs. These design modifications will not be shared with other companies. These semi-custom core designs also have brand freedom, for example Kryo 280.
Companies that are current licensees of Built on ARM Cortex Technology include Qualcomm.
Architectural licence
Companies can also obtain an ARM architectural licence for designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture. Companies that have designed cores that implement an ARM architecture include Apple, AppliedMicro (now: Ampere Computing), Broadcom, Cavium (now: Marvell), Digital Equipment Corporation, Intel, Nvidia, Qualcomm, Samsung Electronics, Fujitsu, and NUVIA Inc.
ARM Flexible Access
On 16 July 2019, ARM announced ARM Flexible Access. ARM Flexible Access provides unlimited access to included ARM intellectual property (IP) for development. Per-product licence fees are required once a customer reaches foundry tapeout or prototyping.
75% of ARM's most recent IP from the last two years is included in ARM Flexible Access. As of October 2019:
CPUs: Cortex-A5, Cortex-A7, Cortex-A32, Cortex-A34, Cortex-A35, Cortex-A53, Cortex-R5, Cortex-R8, Cortex-R52, Cortex-M0, Cortex-M0+, Cortex-M3, Cortex-M4, Cortex-M7, Cortex-M23, Cortex-M33
GPUs: Mali-G52, Mali-G31. Includes Mali Driver Development Kits (DDK).
Interconnect: CoreLink NIC-400, CoreLink NIC-450, CoreLink CCI-400, CoreLink CCI-500, CoreLink CCI-550, ADB-400 AMBA, XHB-400 AXI-AHB
System Controllers: CoreLink GIC-400, CoreLink GIC-500, PL192 VIC, BP141 TrustZone Memory Wrapper, CoreLink TZC-400, CoreLink L2C-310, CoreLink MMU-500, BP140 Memory Interface
Security IP: CryptoCell-312, CryptoCell-712, TrustZone True Random Number Generator
Peripheral Controllers: PL011 UART, PL022 SPI, PL031 RTC
Debug & Trace: CoreSight SoC-400, CoreSight SDC-600, CoreSight STM-500, CoreSight System Trace Macrocell, CoreSight Trace Memory Controller
Design Kits: Corstone-101, Corstone-201
Physical IP: Artisan PIK for Cortex-M33 TSMC 22ULL including memory compilers, logic libraries, GPIOs and documentation
Tools & Materials: Socrates IP Tooling, ARM Design Studio, Virtual System Models
Support: Standard ARM Technical support, ARM online training, maintenance updates, credits towards onsite training and design reviews
Cores
Arm Holdings provides a list of vendors who implement ARM cores in their design (application specific standard products (ASSP), microprocessor and microcontrollers).
Example applications of ARM cores
ARM cores are used in a number of products, particularly PDAs and smartphones. Some computing examples are Microsoft's first generation Surface, Surface 2 and Pocket PC devices (following 2002), Apple's iPads and Asus's Eee Pad Transformer tablet computers, and several Chromebook laptops. Others include Apple's iPhone smartphones and iPod portable media players, Canon PowerShot digital cameras, Nintendo Switch hybrid, the Wii security processor and 3DS handheld game consoles, and TomTom turn-by-turn navigation systems.
In 2005, Arm Holdings took part in the development of Manchester University's computer SpiNNaker, which used ARM cores to simulate the human brain.
ARM chips are also used in Raspberry Pi, BeagleBoard, BeagleBone, PandaBoard and other single-board computers, because they are very small, inexpensive and consume very little power.
32-bit architecture
The 32-bit ARM architecture (ARM32), such as ARMv7-A (implementing AArch32; see section on ARMv8-A for more on it), was the most widely used architecture in mobile devices.
Since 1995, various versions of the ARM Architecture Reference Manual have been the primary source of documentation on the ARM processor architecture and instruction set, distinguishing interfaces that all ARM processors are required to support (such as instruction semantics) from implementation details that may vary. The architecture has evolved over time, and version seven of the architecture, ARMv7, defines three architecture "profiles":
A-profile, the "Application" profile, implemented by 32-bit cores in the Cortex-A series and by some non-ARM cores
R-profile, the "Real-time" profile, implemented by cores in the Cortex-R series
M-profile, the "Microcontroller" profile, implemented by most cores in the Cortex-M series
Although the architecture profiles were first defined for ARMv7, ARM subsequently defined the ARMv6-M architecture (used by the Cortex M0/M0+/M1) as a subset of the ARMv7-M profile with fewer instructions.
CPU modes
Except in the M-profile, the 32-bit ARM architecture specifies several CPU modes, depending on the implemented architecture features. At any moment in time, the CPU can be in only one mode, but it can switch modes due to external events (interrupts) or programmatically.
User mode: The only non-privileged mode.
FIQ mode: A privileged mode that is entered whenever the processor accepts a fast interrupt request.
IRQ mode: A privileged mode that is entered whenever the processor accepts an interrupt.
Supervisor (svc) mode: A privileged mode entered whenever the CPU is reset or when an SVC instruction is executed.
Abort mode: A privileged mode that is entered whenever a prefetch abort or data abort exception occurs.
Undefined mode: A privileged mode that is entered whenever an undefined instruction exception occurs.
System mode (ARMv4 and above): The only privileged mode that is not entered by an exception. It can only be entered by executing an instruction that explicitly writes to the mode bits of the Current Program Status Register (CPSR) from another privileged mode (not from user mode).
Monitor mode (ARMv6 and ARMv7 Security Extensions, ARMv8 EL3): A monitor mode is introduced to support TrustZone extension in ARM cores.
Hyp mode (ARMv7 Virtualization Extensions, ARMv8 EL2): A hypervisor mode that supports Popek and Goldberg virtualization requirements for the non-secure operation of the CPU.
Thread mode (ARMv6-M, ARMv7-M, ARMv8-M): A mode which can be specified as either privileged or unprivileged. Whether the Main Stack Pointer (MSP) or Process Stack Pointer (PSP) is used can also be specified in the CONTROL register with privileged access. This mode is designed for user tasks in an RTOS environment, but it is typically used in bare-metal applications for the main super-loop.
Handler mode (ARMv6-M, ARMv7-M, ARMv8-M): A mode dedicated to exception handling (except reset, which is handled in Thread mode). Handler mode always uses the MSP and operates at the privileged level.
Instruction set
The original (and subsequent) ARM implementation was hardwired without microcode, like the much simpler 8-bit 6502 processor used in prior Acorn microcomputers.
The 32-bit ARM architecture (and the 64-bit architecture for the most part) includes the following RISC features:
Load/store architecture.
No support for unaligned memory accesses in the original version of the architecture. ARMv6 and later, except some microcontroller versions, support unaligned accesses for half-word and single-word load/store instructions with some limitations, such as no guaranteed atomicity.
Uniform 16 × 32-bit register file (including the program counter, stack pointer and the link register).
Fixed instruction width of 32 bits to ease decoding and pipelining, at the cost of decreased code density. Later, the Thumb instruction set added 16-bit instructions and increased code density.
Mostly single clock-cycle execution.
To compensate for the simpler design, compared with processors like the Intel 80286 and Motorola 68020, some additional design features were used:
Conditional execution of most instructions reduces branch overhead and compensates for the lack of a branch predictor in early chips.
Arithmetic instructions alter condition codes only when desired.
32-bit barrel shifter can be used without performance penalty with most arithmetic instructions and address calculations.
Has powerful indexed addressing modes.
A link register supports fast leaf function calls.
A simple, but fast, 2-priority-level interrupt subsystem has switched register banks.
Arithmetic instructions
ARM includes integer arithmetic operations for add, subtract, and multiply; some versions of the architecture also support divide operations.
ARM supports 32-bit × 32-bit multiplies with either a 32-bit result or 64-bit result, though Cortex-M0 / M0+ / M1 cores don't support 64-bit results. Some ARM cores also support 16-bit × 16-bit and 32-bit × 16-bit multiplies.
The divide instructions are only included in the following ARM architectures:
ARMv7-M and ARMv7E-M architectures always include divide instructions.
ARMv7-R architecture always includes divide instructions in the Thumb instruction set, but optionally in its 32-bit instruction set.
ARMv7-A architecture optionally includes the divide instructions. The instructions might not be implemented, or implemented only in the Thumb instruction set, or implemented in both the Thumb and ARM instruction sets, or implemented if the Virtualization Extensions are included.
Registers
Registers R0 through R7 are the same across all CPU modes; they are never banked.
Registers R8 through R12 are the same across all CPU modes except FIQ mode. FIQ mode has its own distinct R8 through R12 registers.
R13 and R14 are banked across all privileged CPU modes except system mode. That is, each mode that can be entered because of an exception has its own R13 and R14. These registers generally contain the stack pointer and the return address from function calls, respectively.
Aliases:
R13 is also referred to as SP, the Stack Pointer.
R14 is also referred to as LR, the Link Register.
R15 is also referred to as PC, the Program Counter.
The Current Program Status Register (CPSR) has the following 32 bits:
M (bits 0–4) is the processor mode bits.
T (bit 5) is the Thumb state bit.
F (bit 6) is the FIQ disable bit.
I (bit 7) is the IRQ disable bit.
A (bit 8) is the imprecise data abort disable bit.
E (bit 9) is the data endianness bit.
IT (bits 10–15 and 25–26) is the if-then state bits.
GE (bits 16–19) is the greater-than-or-equal-to bits.
DNM (bits 20–23) is the do not modify bits.
J (bit 24) is the Java state bit.
Q (bit 27) is the sticky overflow bit.
V (bit 28) is the overflow bit.
C (bit 29) is the carry/borrow/extend bit.
Z (bit 30) is the zero bit.
N (bit 31) is the negative/less than bit.
Conditional execution
Almost every ARM instruction has a conditional execution feature called predication, which is implemented with a 4-bit condition code selector (the predicate). To allow for unconditional execution, one of the four-bit codes causes the instruction to be always executed. Most other CPU architectures only have condition codes on branch instructions.
Though the predicate takes up four of the 32 bits in an instruction code, and thus cuts down significantly on the encoding bits available for displacements in memory access instructions, it avoids branch instructions when generating code for small if statements. Apart from eliminating the branch instructions themselves, this preserves the fetch/decode/execute pipeline at the cost of only one cycle per skipped instruction.
An algorithm that provides a good example of conditional execution is the subtraction-based Euclidean algorithm for computing the greatest common divisor. In the C programming language, the algorithm can be written as:
int gcd(int a, int b) {
    while (a != b)  // We enter the loop when a < b or a > b, but not when a == b
        if (a > b)  // When a > b we do this
            a -= b;
        else        // When a < b we do that (no if (a < b) needed since a != b is checked in the while condition)
            b -= a;
    return a;
}
The same algorithm can be rewritten in a way closer to target ARM instructions as:
loop:
    // Compare a and b
    GT = a > b;
    LT = a < b;
    NE = a != b;
    // Perform operations based on flag results
    if (GT) a -= b;    // Subtract *only* if greater-than
    if (LT) b -= a;    // Subtract *only* if less-than
    if (NE) goto loop; // Loop *only* if compared values were not equal
return a;
and coded in assembly language as:
        ; assign a to register r0, b to r1
loop:   CMP   r0, r1      ; set condition "NE" if (a != b),
                          ; "GT" if (a > b),
                          ; or "LT" if (a < b)
        SUBGT r0, r0, r1  ; if "GT" (Greater Than), a = a - b;
        SUBLT r1, r1, r0  ; if "LT" (Less Than), b = b - a;
        BNE   loop        ; if "NE" (Not Equal), then loop
        BX    lr          ; once a == b, return via the link register
which avoids the branches around the then and else clauses. If r0 and r1 are equal then neither of the SUB instructions will be executed, eliminating the need for a conditional branch to implement the while check at the top of the loop, as would have been needed, for example, had SUBLE (subtract if less than or equal) been used.
One of the ways that Thumb code provides a more dense encoding is to remove the four-bit selector from non-branch instructions.
Other features
Another feature of the instruction set is the ability to fold shifts and rotates into the data processing (arithmetic, logical, and register-register move) instructions, so that, for example, the statement in C language:
a += (j << 2);
could be rendered as a one-word, one-cycle instruction:
ADD Ra, Ra, Rj, LSL #2
This results in the typical ARM program being denser than expected with fewer memory accesses; thus the pipeline is used more efficiently.
The ARM processor also has features rarely seen in other RISC architectures, such as PC-relative addressing (indeed, on the 32-bit ARM the PC is one of its 16 registers) and pre- and post-increment addressing modes.
The ARM instruction set has increased over time. Some early ARM processors (before ARM7TDMI), for example, have no instruction to store a two-byte quantity.
Pipelines and other implementation issues
The ARM7 and earlier implementations have a three-stage pipeline; the stages being fetch, decode and execute. Higher-performance designs, such as the ARM9, have deeper pipelines: Cortex-A8 has thirteen stages. Additional implementation changes for higher performance include a faster adder and more extensive branch prediction logic. The difference between the ARM7DI and ARM7DMI cores, for example, was an improved multiplier; hence the added "M".
Coprocessors
The ARM architecture (pre-ARMv8) provides a non-intrusive way of extending the instruction set using "coprocessors" that can be addressed using MCR, MRC, MRRC, MCRR and similar instructions. The coprocessor space is divided logically into 16 coprocessors with numbers from 0 to 15, coprocessor 15 (cp15) being reserved for some typical control functions like managing the caches and MMU operation on processors that have one.
In ARM-based machines, peripheral devices are usually attached to the processor by mapping their physical registers into ARM memory space, into the coprocessor space, or by connecting to another device (a bus) that in turn attaches to the processor. Coprocessor accesses have lower latency, so some peripherals—for example, an XScale interrupt controller—are accessible in both ways: through memory and through coprocessors.
In other cases, chip designers only integrate hardware using the coprocessor mechanism. For example, an image processing engine might be a small ARM7TDMI core combined with a coprocessor that has specialised operations to support a specific set of HDTV transcoding primitives.
Debugging
All modern ARM processors include hardware debugging facilities, allowing software debuggers to perform operations such as halting, stepping, and breakpointing of code starting from reset. These facilities are built using JTAG support, though some newer cores optionally support ARM's own two-wire "SWD" protocol. In ARM7TDMI cores, the "D" represented JTAG debug support, and the "I" represented presence of an "EmbeddedICE" debug module. For ARM7 and ARM9 core generations, EmbeddedICE over JTAG was a de facto debug standard, though not architecturally guaranteed.
The ARMv7 architecture defines basic debug facilities at an architectural level. These include breakpoints, watchpoints and instruction execution in a "Debug Mode"; similar facilities were also available with EmbeddedICE. Both "halt mode" and "monitor" mode debugging are supported. The actual transport mechanism used to access the debug facilities is not architecturally specified, but implementations generally include JTAG support.
There is a separate ARM "CoreSight" debug architecture, which is not architecturally required by ARMv7 processors.
Debug Access Port
The Debug Access Port (DAP) is an implementation of an ARM Debug Interface.
There are two different supported implementations, the Serial Wire JTAG Debug Port (SWJ-DP) and the Serial Wire Debug Port (SW-DP).
CMSIS-DAP is a standard interface that describes how various debugging software on a host PC can communicate over USB to firmware running on a hardware debugger, which in turn talks over SWD or JTAG to a CoreSight-enabled ARM Cortex CPU.
DSP enhancement instructions
To improve the ARM architecture for digital signal processing and multimedia applications, DSP instructions were added to the set. These are signified by an "E" in the name of the ARMv5TE and ARMv5TEJ architectures. E-variants also imply T, D, M, and I.
The new instructions are common in digital signal processor (DSP) architectures. They include variations on signed multiply–accumulate, saturated add and subtract, and count leading zeros.
SIMD extensions for multimedia
Introduced in the ARMv6 architecture, this was a precursor to Advanced SIMD, also named Neon.
Jazelle
Jazelle DBX (Direct Bytecode eXecution) is a technique that allows Java bytecode to be executed directly in the ARM architecture as a third execution state (and instruction set) alongside the existing ARM and Thumb-mode. Support for this state is signified by the "J" in the ARMv5TEJ architecture, and in ARM9EJ-S and ARM7EJ-S core names. Support for this state is required starting in ARMv6 (except for the ARMv7-M profile), though newer cores only include a trivial implementation that provides no hardware acceleration.
Thumb
To improve compiled code density, processors since the ARM7TDMI (released in 1994) have featured the Thumb instruction set, which has its own state. (The "T" in "TDMI" indicates the Thumb feature.) When in this state, the processor executes the Thumb instruction set, a compact 16-bit encoding for a subset of the ARM instruction set. Most of the Thumb instructions are directly mapped to normal ARM instructions. The space saving comes from making some of the instruction operands implicit and limiting the number of possibilities compared to the ARM instructions executed in the ARM instruction set state.
In Thumb, the 16-bit opcodes have less functionality. For example, only branches can be conditional, and many opcodes are restricted to accessing only half of the CPU's general-purpose registers. The shorter opcodes give improved code density overall, even though some operations require extra instructions. In situations where the memory port or bus width is constrained to less than 32 bits, the shorter Thumb opcodes allow increased performance compared with 32-bit ARM code, as less program code may need to be loaded into the processor over the constrained memory bandwidth.
Unlike processor architectures with variable length (16- or 32-bit) instructions, such as the Cray-1 and Hitachi SuperH, the ARM and Thumb instruction sets exist independently of each other. Embedded hardware, such as the Game Boy Advance, typically have a small amount of RAM accessible with a full 32-bit datapath; the majority is accessed via a 16-bit or narrower secondary datapath. In this situation, it usually makes sense to compile Thumb code and hand-optimise a few of the most CPU-intensive sections using full 32-bit ARM instructions, placing these wider instructions into the 32-bit bus accessible memory.
The first processor with a Thumb instruction decoder was the ARM7TDMI. All ARM9 and later families, including XScale, have included a Thumb instruction decoder. It includes instructions adopted from the Hitachi SuperH (1992), which was licensed by ARM. ARM's smallest processor families (Cortex M0 and M1) implement only the 16-bit Thumb instruction set for maximum performance in lowest cost applications.
Thumb-2
Thumb-2 technology was introduced in the ARM1156 core, announced in 2003. Thumb-2 extends the limited 16-bit instruction set of Thumb with additional 32-bit instructions to give the instruction set more breadth, thus producing a variable-length instruction set. A stated aim for Thumb-2 was to achieve code density similar to Thumb with performance similar to the ARM instruction set on 32-bit memory.
Thumb-2 extends the Thumb instruction set with bit-field manipulation, table branches and conditional execution. At the same time, the ARM instruction set was extended to maintain equivalent functionality in both instruction sets. A new "Unified Assembly Language" (UAL) supports generation of either Thumb or ARM instructions from the same source code; versions of Thumb seen on ARMv7 processors are essentially as capable as ARM code (including the ability to write interrupt handlers). This requires a bit of care, and use of a new "IT" (if-then) instruction, which permits up to four successive instructions to execute based on a tested condition, or on its inverse. When compiling into ARM code, this is ignored, but when compiling into Thumb it generates an actual instruction. For example:
; if (r0 == r1)
CMP r0, r1
ITE EQ ; ARM: no code ... Thumb: IT instruction
; then r0 = r2;
MOVEQ r0, r2 ; ARM: conditional; Thumb: condition via ITE 'T' (then)
; else r0 = r3;
MOVNE r0, r3 ; ARM: conditional; Thumb: condition via ITE 'E' (else)
; recall that the Thumb MOV instruction has no bits to encode "EQ" or "NE".
All ARMv7 chips support the Thumb instruction set. All chips in the Cortex-A series, Cortex-R series, and ARM11 series support both "ARM instruction set state" and "Thumb instruction set state", while chips in the Cortex-M series support only the Thumb instruction set.
Thumb Execution Environment (ThumbEE)
ThumbEE (erroneously called Thumb-2EE in some ARM documentation), which was marketed as Jazelle RCT (Runtime Compilation Target), was announced in 2005, first appearing in the Cortex-A8 processor. ThumbEE is a fourth instruction set state, making small changes to the Thumb-2 extended instruction set. These changes make the instruction set particularly suited to code generated at runtime (e.g. by JIT compilation) in managed Execution Environments. ThumbEE is a target for languages such as Java, C#, Perl, and Python, and allows JIT compilers to output smaller compiled code without impacting performance.
New features provided by ThumbEE include automatic null pointer checks on every load and store instruction, an instruction to perform an array bounds check, and special instructions that call a handler. In addition, because it utilises Thumb-2 technology, ThumbEE provides access to registers r8–r15 (where the Jazelle/DBX Java VM state is held). Handlers are small sections of frequently called code, commonly used to implement high level languages, such as allocating memory for a new object. These changes come from repurposing a handful of opcodes, and knowing the core is in the new ThumbEE state.
On 23 November 2011, Arm Holdings deprecated any use of the ThumbEE instruction set, and ARMv8 removes support for ThumbEE.
Floating-point (VFP)
VFP (Vector Floating Point) technology is a floating-point unit (FPU) coprocessor extension to the ARM architecture (implemented differently in ARMv8 – coprocessors not defined there). It provides low-cost single-precision and double-precision floating-point computation fully compliant with the ANSI/IEEE Std 754-1985 Standard for Binary Floating-Point Arithmetic. VFP provides floating-point computation suitable for a wide spectrum of applications such as PDAs, smartphones, voice compression and decompression, three-dimensional graphics and digital audio, printers, set-top boxes, and automotive applications. The VFP architecture was intended to support execution of short "vector mode" instructions but these operated on each vector element sequentially and thus did not offer the performance of true single instruction, multiple data (SIMD) vector parallelism. This vector mode was therefore removed shortly after its introduction, to be replaced with the much more powerful Advanced SIMD, also named Neon.
Some devices such as the ARM Cortex-A8 have a cut-down VFPLite module instead of a full VFP module, and require roughly ten times more clock cycles per float operation. Pre-ARMv8 architecture implemented floating-point/SIMD with the coprocessor interface. Other floating-point and/or SIMD units found in ARM-based processors using the coprocessor interface include FPA, FPE, iwMMXt, some of which were implemented in software by trapping but could have been implemented in hardware. They provide some of the same functionality as VFP but are not opcode-compatible with it. FPA10 also provides extended precision, but implements correct rounding (required by IEEE 754) only in single precision.
VFPv1: Obsolete.
VFPv2: An optional extension to the ARM instruction set in the ARMv5TE, ARMv5TEJ and ARMv6 architectures. VFPv2 has 16 64-bit FPU registers.
VFPv3 or VFPv3-D32: Implemented on most Cortex-A8 and A9 ARMv7 processors. It is backwards compatible with VFPv2, except that it cannot trap floating-point exceptions. VFPv3 has 32 64-bit FPU registers as standard, adds VCVT instructions to convert between scalar, float and double, and adds immediate mode to VMOV so that constants can be loaded into FPU registers.
VFPv3-D16: As above, but with only 16 64-bit FPU registers. Implemented on Cortex-R4 and R5 processors and the Tegra 2 (Cortex-A9).
VFPv3-F16: Uncommon; it supports IEEE 754-2008 half-precision (16-bit) floating point as a storage format.
VFPv4 or VFPv4-D32: Implemented on Cortex-A12 and A15 ARMv7 processors; Cortex-A7 optionally has VFPv4-D32 in the case of an FPU with Neon. VFPv4 has 32 64-bit FPU registers as standard, and adds both half-precision support as a storage format and fused multiply-accumulate instructions to the features of VFPv3.
VFPv4-D16: As above, but with only 16 64-bit FPU registers. Implemented on Cortex-A5 and A7 processors in the case of an FPU without Neon.
VFPv5-D16-M: Implemented on Cortex-M7 when the single- and double-precision floating-point core option exists.
In Debian Linux, and derivatives such as Ubuntu and Linux Mint, armhf (ARM hard float) refers to the ARMv7 architecture including the additional VFPv3-D16 floating-point hardware extension (and Thumb-2) above. Software packages and cross-compiler tools use the armhf vs. arm/armel suffixes to differentiate.
Advanced SIMD (Neon)
The Advanced SIMD extension (also known as Neon, or "MPE" for Media Processing Engine) is a combined 64- and 128-bit SIMD instruction set that provides standardised acceleration for media and signal processing applications. Neon is included in all Cortex-A8 devices, but is optional in Cortex-A9 devices. Neon can execute MP3 audio decoding on CPUs running at 10 MHz, and can run the GSM adaptive multi-rate (AMR) speech codec at 13 MHz. It features a comprehensive instruction set, separate register files, and independent execution hardware. Neon supports 8-, 16-, 32-, and 64-bit integer and single-precision (32-bit) floating-point data and SIMD operations for handling audio and video processing as well as graphics and gaming processing. In Neon, the SIMD supports up to 16 operations at the same time. The Neon hardware shares the same floating-point registers as used in VFP. Devices such as the ARM Cortex-A8 and Cortex-A9 support 128-bit vectors, but will execute with 64 bits at a time, whereas newer Cortex-A15 devices can execute 128 bits at a time.
A quirk of Neon in ARMv7 devices is that it flushes all subnormal numbers to zero, and as a result the GCC compiler will not use it unless -funsafe-math-optimizations, which allows losing denormals, is turned on. "Enhanced" Neon, defined since ARMv8, does not have this quirk, but as of GCC 8.2 the same flag is still required to enable Neon instructions. On the other hand, GCC does consider Neon safe on AArch64 for ARMv8.
Project Ne10 is ARM's first open-source project from its inception (ARM later acquired an older open-source project, now named Mbed TLS). The Ne10 library is a set of common, useful functions written in both Neon and C (for compatibility). The library was created to allow developers to use Neon optimisations without learning Neon, but it also serves as a set of highly optimised Neon intrinsic and assembly code examples for common DSP, arithmetic, and image processing routines. The source code is available on GitHub.
ARM Helium technology
Helium is the M-Profile Vector Extension (MVE). It adds more than 150 scalar and vector instructions.
Security extensions
TrustZone (for Cortex-A profile)
The Security Extensions, marketed as TrustZone Technology, are in ARMv6KZ and later application profile architectures. They provide a low-cost alternative to adding another dedicated security core to an SoC, by providing two virtual processors backed by hardware-based access control. This lets the application core switch between two states, referred to as worlds (to reduce confusion with other names for capability domains), to prevent information leaking from the more trusted world to the less trusted world. This world switch is generally orthogonal to all other capabilities of the processor, thus each world can operate independently of the other while using the same core. Memory and peripherals are then made aware of the operating world of the core and may use this to provide access control to secrets and code on the device.
Typically, a rich operating system is run in the less trusted world, with smaller security-specialised code in the more trusted world, aiming to reduce the attack surface. Typical applications include DRM functionality for controlling the use of media on ARM-based devices, and preventing any unapproved use of the device.
In practice, since the specific implementation details of proprietary TrustZone implementations have not been publicly disclosed for review, it is unclear what level of assurance is provided for a given threat model, but they are not immune from attack.
Open Virtualization is an open source implementation of the trusted world architecture for TrustZone.
AMD has licensed and incorporated TrustZone technology into its Secure Processor Technology. Enabled in some but not all products, AMD's APUs include a Cortex-A5 processor for handling secure processing. In fact, the Cortex-A5 TrustZone core had been included in earlier AMD products, but was not enabled due to time constraints.
Samsung Knox uses TrustZone for purposes such as detecting modifications to the kernel, storing certificates and attesting keys.
TrustZone for ARMv8-M (for Cortex-M profile)
The Security Extension, marketed as TrustZone for ARMv8-M Technology, was introduced in the ARMv8-M architecture. While containing similar concepts to TrustZone for ARMv8-A, it has a different architectural design, as world switching is performed using branch instructions instead of using exceptions. It also supports safe interleaved interrupt handling from either world regardless of the current security state. Together these features provide low latency calls to the secure world and responsive interrupt handling. ARM provides a reference stack of secure world code in the form of Trusted Firmware for M and PSA Certified.
No-execute page protection
As of ARMv6, the ARM architecture supports no-execute page protection, which is referred to as XN, for eXecute Never.
Large Physical Address Extension (LPAE)
The Large Physical Address Extension (LPAE), which extends the physical address size from 32 bits to 40 bits, was added to the ARMv7-A architecture in 2011. Physical address size is larger, 44 bits, in Cortex-A75 and Cortex-A65AE.
ARMv8-R and ARMv8-M
The ARMv8-R and ARMv8-M architectures, announced after the ARMv8-A architecture, share some features with ARMv8-A, but don't include any 64-bit AArch64 instructions.
ARMv8.1-M
The ARMv8.1-M architecture, announced in February 2019, is an enhancement of the ARMv8-M architecture. It brings new features including:
A new vector instruction set extension. The M-Profile Vector Extension (MVE), or Helium, is for signal processing and machine learning applications.
Additional instruction set enhancements for loops and branches (Low Overhead Branch Extension).
Instructions for half-precision floating-point support.
Instruction set enhancement for TrustZone management for Floating Point Unit (FPU).
New memory attribute in the Memory Protection Unit (MPU).
Enhancements in debug including Performance Monitoring Unit (PMU), Unprivileged Debug Extension, and additional debug support focus on signal processing application developments.
Reliability, Availability and Serviceability (RAS) extension.
64/32-bit architecture
ARMv8
ARMv8-A
Announced in October 2011, ARMv8-A (often called simply ARMv8, though ARMv8-R is also available) represents a fundamental change to the ARM architecture. It adds an optional 64-bit architecture (e.g. Cortex-A32 is a 32-bit ARMv8-A CPU, while most ARMv8-A CPUs support 64-bit), named "AArch64", and the associated new "A64" instruction set. AArch64 provides user-space compatibility with ARMv7-A, the 32-bit architecture, therein referred to as "AArch32", and the old 32-bit instruction set, now named "A32". The Thumb instruction set is referred to as "T32" and has no 64-bit counterpart. ARMv8-A allows 32-bit applications to be executed in a 64-bit OS, and a 32-bit OS to run under the control of a 64-bit hypervisor. ARM announced its Cortex-A53 and Cortex-A57 cores on 30 October 2012. Apple was the first to release an ARMv8-A compatible core in a consumer product (the Apple A7 in the iPhone 5S). AppliedMicro, using an FPGA, was the first to demonstrate ARMv8-A. The first ARMv8-A SoC from Samsung is the Exynos 5433 used in the Galaxy Note 4, which features two clusters of four Cortex-A57 and Cortex-A53 cores in a big.LITTLE configuration, but it runs only in AArch32 mode.
To both AArch32 and AArch64, ARMv8-A makes VFPv3/v4 and advanced SIMD (Neon) standard. It also adds cryptography instructions supporting AES, SHA-1/SHA-256 and finite field arithmetic. AArch64 was introduced in ARMv8-A and its subsequent revision. AArch64 is not included in the 32-bit ARMv8-R and ARMv8-M architectures.
ARMv8-R
Optional AArch64 support was added to the ARMv8-R profile, with the first ARM core implementing it being the Cortex-R82. It adds the A64 instruction set.
ARMv9
ARMv9-A
Announced in March 2021, the updated architecture places a focus on secure execution and compartmentalisation.
Arm SystemReady
Arm SystemReady, formerly named Arm ServerReady, is a certification program that helps generic off-the-shelf operating systems and hypervisors run on Arm-based systems, from datacenter servers to industrial edge and IoT devices. The key building blocks of the program are specifications for minimum hardware and firmware requirements that operating systems and hypervisors can rely upon. These specifications are:
Base System Architecture (BSA) and the market segment specific supplements (e.g., Server BSA supplement)
Base Boot Requirements (BBR) and Base Boot Security Requirements (BBSR)
These specifications are co-developed by Arm Holdings and its partners in the System Architecture Advisory Committee (SystemArchAC).
The Architecture Compliance Suite (ACS) is a set of test tools that help check compliance with these specifications. The Arm SystemReady Requirements Specification documents the requirements of the certifications.
This program was introduced by Arm Holdings in 2020 at the first DevSummit event. Its predecessor Arm ServerReady was introduced in 2018 at the Arm TechCon event. This program currently includes four bands:
SystemReady SR: this band is for servers that support operating systems and hypervisors that expect UEFI, ACPI and SMBIOS interfaces.
SystemReady LS: this band is for servers that hyperscalers use to support Linux operating systems that expect LinuxBoot firmware along with the ACPI and SMBIOS interfaces.
SystemReady ES: this band is for the industrial edge and IoT devices that support operating systems and hypervisors that expect UEFI, ACPI and SMBIOS interfaces.
SystemReady IR: this band is for the industrial edge and IoT devices that support operating systems that expect UEFI and devicetree interfaces.
PSA Certified
PSA Certified, formerly named Platform Security Architecture, is an architecture-agnostic security framework and evaluation scheme. It is intended to help secure Internet of Things (IoT) devices built on system-on-a-chip (SoC) processors. It was introduced to increase security where a full trusted execution environment is too large or complex.
The architecture was introduced by Arm Holdings in 2017 at the annual TechCon event. Although the scheme is architecture agnostic, it was first implemented on Arm Cortex-M processor cores intended for microcontroller use. PSA Certified includes freely available threat models and security analyses that demonstrate the process for deciding on security features in common IoT products. It also provides freely downloadable application programming interface (API) packages, architectural specifications, open-source firmware implementations, and related test suites.
Following the development of the architecture security framework in 2017, the PSA Certified assurance scheme launched two years later at Embedded World in 2019. PSA Certified offers a multi-level security evaluation scheme for chip vendors, OS providers and IoT device makers. The Embedded World presentation introduced chip vendors to Level 1 Certification. A draft of Level 2 protection was presented at the same time. Level 2 certification became a useable standard in February 2020.
The certification was created by PSA Joint Stakeholders to enable a security-by-design approach for a diverse set of IoT products. PSA Certified specifications are implementation and architecture agnostic, as a result they can be applied to any chip, software or device. The certification also removes industry fragmentation for IoT product manufacturers and developers.
Operating system support
32-bit operating systems
Historical operating systems
The first 32-bit ARM-based personal computer, the Acorn Archimedes, was originally intended to run an ambitious operating system called ARX. The machines shipped with RISC OS which was also used on later ARM-based systems from Acorn and other vendors. Some early Acorn machines were also able to run a Unix port called RISC iX. (Neither is to be confused with RISC/os, a contemporary Unix variant for the MIPS architecture.)
Embedded operating systems
The 32-bit ARM architecture is supported by a large number of embedded and real-time operating systems, including:
A2
Android
ChibiOS/RT
Deos
DRYOS
eCos
embOS
FreeRTOS
Integrity
Linux
Micro-Controller Operating Systems
Mbed
MINIX 3
MQX
Nucleus PLUS
NuttX
Operating System Embedded (OSE)
OS-9
Pharos
Plan 9
PikeOS
QNX
RIOT
RTEMS
RTXC Quadros
SCIOPTA
ThreadX
TizenRT
T-Kernel
VxWorks
Windows Embedded Compact
Windows 10 IoT Core
Zephyr
Mobile device operating systems
The 32-bit ARM architecture is the primary hardware environment for most mobile device operating systems such as:
Android
BlackBerry OS/BlackBerry 10
Chrome OS
Mobian
Sailfish
postmarketOS
Tizen
Ubuntu Touch
webOS
Formerly, but now discontinued:
Bada
Firefox OS
MeeGo
iOS 10 and earlier
Symbian
Windows 10 Mobile
Windows RT
Windows Phone
Windows Mobile
Desktop/server operating systems
The 32-bit ARM architecture is supported by RISC OS and by multiple Unix-like operating systems including:
FreeBSD
NetBSD
OpenBSD
OpenSolaris
several Linux distributions, such as:
Debian
Armbian
Gentoo
Ubuntu
Raspberry Pi OS (formerly Raspbian)
Slackware
64-bit operating systems
Embedded operating systems
Integrity
OSE
SCIOPTA
seL4
Pharos
FreeRTOS
QNX
Zephyr
Mobile device operating systems
Android supports ARMv8-A in Android Lollipop (5.0) and later.
iOS supports ARMv8-A in iOS 7 and later on 64-bit Apple SoCs. iOS 11 and later only supports 64-bit ARM processors and applications.
Mobian
PostmarketOS
Desktop/server operating systems
Support for ARMv8-A was merged into the Linux kernel version 3.7 in late 2012. ARMv8-A is supported by a number of Linux distributions, such as:
Debian
Armbian
Alpine Linux
Ubuntu
Fedora
openSUSE
SUSE Linux Enterprise
RHEL
Raspberry Pi OS (formerly Raspbian. Beta version as of early 2022)
Support for ARMv8-A was merged into FreeBSD in late 2014.
OpenBSD has experimental ARMv8 support as of 2017.
NetBSD has ARMv8 support as of early 2018.
Windows 10 – runs 32-bit "x86 and 32-bit ARM applications", as well as native ARM64 desktop apps. Support for 64-bit ARM apps in the Microsoft Store has been available since November 2018.
macOS has ARM support starting with macOS Big Sur as of late 2020. Rosetta 2 adds support for x86-64 applications but not virtualization of x86-64 computer platforms.
Porting to 32- or 64-bit ARM operating systems
Windows applications recompiled for ARM and linked with Winelib, from the Wine project, can run on 32-bit or 64-bit ARM in Linux, FreeBSD, or other compatible operating systems. x86 binaries, i.e. those not specially compiled for ARM, have been demonstrated on ARM using QEMU with Wine (on Linux and other systems), but they do not run at full speed or with the same capability as with Winelib.
Notes
See also
RISC
RISC-V
Apple silicon
ARM big.LITTLE – ARM's heterogeneous computing architecture
DynamIQ
ARM Accredited Engineer – certification program
ARMulator – an instruction set simulator
Amber (processor core) – an open-source ARM-compatible processor core
AMULET microprocessor – an asynchronous implementation of the ARM architecture
Comparison of ARMv7-A cores
Comparison of ARMv8-A cores
Unicore – a 32-register architecture based heavily on a 32-bit ARM
Meltdown (security vulnerability)
Spectre (security vulnerability)
References
Citations
Bibliography
Further reading
External links
Architecture manuals
- covers ARMv4, ARMv4T, ARMv5T, (ARMv5TExP), ARMv5TE, ARMv5TEJ, and ARMv6
ARM Virtualization Extensions
Quick Reference Cards
Instructions: Thumb, ARM and Thumb-2, Vector Floating Point
Opcodes: Thumb, Thumb, ARM, ARM, GNU Assembler Directives
ELM327
The ELM327 is a programmed microcontroller produced by ELM Electronics for translating the on-board diagnostics (OBD) interface found in most modern cars. The ELM327 command protocol is one of the most popular PC-to-OBD interface standards and is also implemented by other vendors.
The original ELM327 is implemented on the PIC18F2480 microcontroller from Microchip Technology.
ELM327 is one of a family of OBD translators from ELM Electronics. Other variants implement only a subset of the OBD protocols.
In June 2020 ELM Electronics announced it was closing the business in June 2022.
Uses
The ELM327 abstracts the low-level protocol and presents a simple interface that can be called via a UART, typically by a hand-held diagnostic tool or a computer program connected by USB, RS-232, Bluetooth or Wi-Fi. New applications include smartphones.
There are a large number of programs available that connect to the ELM327.
The functions of such software may include supplementary vehicle instrumentation, and reporting and clearing of error codes.
ELM327 Functions:
Read diagnostic trouble codes, both generic and manufacturer-specific.
Clear some trouble codes and turn off the MIL ("Malfunction Indicator Light", more commonly known as the "Check Engine Light")
Display current sensor data
Engine RPM
Calculated Load Value
Coolant Temperature
Fuel System Status
Vehicle Speed
Short Term Fuel Trim
Long Term Fuel Trim
Intake Manifold Pressure
Timing Advance
Intake Air Temperature
Air Flow Rate
Absolute Throttle Position
Oxygen sensor voltages/associated short term fuel trims
Fuel Pressure
Protocols supported
The protocols supported by ELM327 are:
SAE J1850 PWM (41.6 kbit/s)
SAE J1850 VPW (10.4 kbit/s)
ISO 9141-2 (5 baud init, 10.4 kbit/s)
ISO 14230-4 KWP (5 baud init, 10.4 kbit/s)
ISO 14230-4 KWP (fast init, 10.4 kbit/s)
ISO 15765-4 CAN (11 bit ID, 500 kbit/s)
ISO 15765-4 CAN (29 bit ID, 500 kbit/s)
ISO 15765-4 CAN (11 bit ID, 250 kbit/s)
ISO 15765-4 CAN (29 bit ID, 250 kbit/s)
SAE J1939 (250 kbit/s)
SAE J1939 (500 kbit/s)
Command set
The ELM327 command set is similar to the Hayes AT commands.
Other versions
The ELM327 is a PIC microcontroller that has been customized with ELM Electronics' proprietary code that implements the testing protocols. When ELM Electronics sold version 1.0 of its ELM327, it did not enable the copy protection feature of the PIC microcontroller. Consequently, anyone could buy a genuine ELM327, and read ELM's proprietary binary microcontroller software using a device programmer. With this software, pirates could trivially produce ELM327 clones by purchasing the same microcontroller chips and programming them with the copied code. ELM327 copies were widely sold in devices claiming to contain an ELM327 device, and problems have been reported with the copies. The problems reflect bugs that were present in ELM's version 1.0 microcode; those making the clones may continue to sell the old version.
Although these copies may contain the ELM327 v1.0 code, they may falsely report the version number as the current version provided by the genuine ELM327, and in some cases report an as-yet non-existent version. Released software versions for the ELM327 are 1.0, 1.0a, 1.1, 1.2, 1.2a, 1.3, 1.3a, 1.4, 1.4b, 2.0, 2.1, 2.2 and 2.3 only. The actual functions of these copies are nonetheless limited to the functions of the original ELM327 v1.0, with their inherent deficiencies.
Version outline
v1.0
Initial public release, the ELM327 v1.0 supported:
– SAE J1850 PWM and VPW,
– ISO 9141-2 (10.4 and 9.6 kbps),
– ISO 14230-4 (10.4 and 9.6 kbps),
– ISO 15765-4 CAN (250 and 500 kbps)
The RS232 baud rates were only 9.6 kbps or 38.4 kbps.
v1.0a
– J1850 VPW timing adjustment for some ’99 – ’00 GM trucks.
v1.1
– Introduced Programmable Parameters
– Added Flow Control commands
v1.2
– RS232 baud rates are adjustable to 500 kbps
– Programmable Parameters can be reset with a jumper
– Introduced Adaptive Timing
– Added SAE J1939 support (protocol A)
– Added user defined CAN protocols B and C
– Modified KWP protocols to allow four byte headers
v1.2a
– Changed error detection to catch KWP 4 byte headers if no data or checksum
– Added check to prevent CAN mask corruption on certain Flow Control sends
v1.3
– Adaptive Timing tuned a little differently
– Several J1939 improvements
– New CAN CRA commands to help setting masks and filters
– New CAN D0/D1 commands for printing of message dlc
– New CAN RTR command for sending same
– Added space character control in responses
– New STOPPED message for user interrupts during searches
– Introduced LV RESET message for resets from low voltage
– New @2 and @3 commands for storing of unique identifier
– Added ability to state the number of responses desired
v1.3a (still available)
– Added wiring checks for when the J1962 CAN pins are used for other functions
v1.4
– Added Low Power mode (‘sleep’ function)
– Added extended addressing mode for CAN protocols
– Added 4800 baud ISO 9141 and ISO 14230 support
– Allow manual control over ISO 9141 and ISO 14230 initiation
– Provided a single EEPROM byte for user data storage
– All interrupts now say STOPPED (not just when searching)
– Many new Programmable Parameters and additions
v1.4a
Elm Electronics never made a v1.4a
v1.4b (no longer available)
– New CSM command to have active or passive CAN monitoring
– New CRA command to quickly reset changed masks and filters
– Several SAE J1939 updates
v1.5
Elm Electronics never made a v1.5
v2.0
– New Activity Monitor watches OBD pins
– Wake from Low Power now retains settings
– AT CRAs accept ‘don’t care’s (X’s)
– New PP’s provide extensive ISO/KWP control
– Increased the RS232 Tx buffer to 512 bytes
– Brownout reset voltage reduced to 2.8V
v2.1
– Speed increases
– Processes ‘Response Pending’ (7F xx 78) replies
– CAN searches now measure frequency and require a match
v2.2
– AT CS command now shows CAN frequency
– Added 12500 and 15625 bps ISO/KWP baud rates
– New AT CER hh command allows defining the CEA Rx address
– New IFR modes 4,5,6 control J1850 IFR sending while monitoring
– Added PP 1F to allow KWP length to include the checksum byte
– Increased PP19 from 31 to 4F
v2.3 (latest release)
– New AT FT command adds another layer of filtering
– Added three CAN Flow Control modes for experimenters
– Response Pending now works with CAN Extended Addressing
– New AT IA, and C0/C1 commands
– Better noise tolerance on RS232 Rx
More detailed change notes may be found in the Version History chapter of the latest ELM Electronics datasheet (pages 94–95).
See also
On-Board Diagnostics
OBD-II PIDs
OBDuino
OpenXC
References
External links
ELM327 Basics and types of ELM327 adapters
ELM327 product page
ELM Electronics Scan Tools References
Microcontrollers
Automotive electronics
Computer multitasking
In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state (partial results, memory contents and computer register contents) and loading the saved state of another program and transferring control to it. This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking).
Multitasking does not require parallel execution of multiple tasks at exactly the same time; instead, it allows more than one task to advance over a given period of time. Even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs.
Multitasking is a common feature of computer operating systems since at least the 1960s. It allows more efficient use of the computer hardware; where a program is waiting for some external event such as a user input or an input/output transfer with a peripheral to complete, the central processor can still be used with another program. In a time-sharing system, multiple human operators use the same processor as if it was dedicated to their use, while behind the scenes the computer is serving many users by multitasking their individual programs. In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and user interface.
Often multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program.
A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection, and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors.
The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Romanian, Czech, Danish and Norwegian.
Multiprogramming
In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient.
The first computer using a multiprogramming system was the British Leo III owned by J. Lyons and Co. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running.
The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, nonexistent.
Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed.
Cooperative multitasking
Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems.
As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile.
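The yield-based discipline described above can be modeled with Python generators. This is an illustrative sketch of the scheduling idea, not any particular operating system's scheduler:

```python
from collections import deque

def run(tasks):
    """Round-robin cooperative scheduler: each task runs until it
    voluntarily yields control back to the scheduler."""
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)            # resume the task until its next `yield`
        except StopIteration:
            continue              # task finished; drop it
        ready.append(task)        # otherwise put it back in the queue

def worker(name, steps, log):
    for i in range(steps):
        log.append((name, i))
        yield                     # cede the processor

log = []
run([worker("A", 2, log), worker("B", 2, log)])
print(log)   # [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```

A worker that loops without ever reaching `yield` would monopolize `run()` forever, which is exactly the fragility described above: one misbehaving program hangs the whole system.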
Preemptive multitasking
Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and MULTICS in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives, as well as modern versions of Windows.
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busywait" while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.
The earliest preemptive multitasking OS available to home users was Sinclair QDOS on the Sinclair QL, released in 1984, but very few people bought the machine. Commodore's Amiga, released the following year, was the first commercially successful home computer to use the technology, and its multimedia abilities make it a clear ancestor of contemporary multitasking personal computers. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. It was later adopted on the Apple Macintosh by Mac OS X, which, as a Unix-like operating system, uses preemptive multitasking for all native applications.
A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively. 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer support legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications.
Real time
Another reason for multitasking was in the design of real-time computing systems, where there are a number of possibly unrelated external activities needed to be controlled by a single processor system. In such systems a hierarchical interrupt system is coupled with process prioritization to ensure that key activities were given a greater share of available process time.
Multithreading
As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e. g., one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data.
Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as lightweight processes because switching between threads does not involve changing the memory context.
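The shared-memory model of threads can be illustrated with a short Python sketch; the lock is one of the synchronization tools that such sharing requires, since all threads update the very same variable:

```python
import threading

counter = 0                        # state shared by all threads
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:                 # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4000: every thread incremented the same memory location
```

Without the lock, the four interleaved read-modify-write sequences could lose updates, which is the classic hazard of sharing a memory context.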
While threads are scheduled preemptively, some operating systems provide a variant to threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors.
Some systems directly support multithreading in hardware.
Memory protection
Essential to any multitasking system is to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside the process's address space. This is done for the purpose of general system stability and data integrity, as well as data security.
In general, memory access management is a responsibility of the operating system kernel, in combination with hardware mechanisms that provide supporting functionalities, such as a memory management unit (MMU). If a process attempts to access a memory location outside its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and the specific error in question, the user may receive an access violation error message such as "segmentation fault".
In a well designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL.
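The shared-memory exception described above can be sketched with Python's `multiprocessing.shared_memory` module (Python 3.8+), a POSIX-style analogue of the System V mechanism mentioned; the segment name and the value written are arbitrary choices for illustration:

```python
from multiprocessing import Process, shared_memory

def child(name):
    # Attach to the existing segment by name and write into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42
    shm.close()

def demo():
    # The parent creates a segment that both processes can map.
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        p = Process(target=child, args=(shm.name,))
        p.start()
        p.join()
        return shm.buf[0]          # value written by the child
    finally:
        shm.close()
        shm.unlink()               # remove the segment from the system

if __name__ == "__main__":
    print(demo())                  # 42
```

The kernel deliberately maps the same physical memory into both address spaces here; everywhere else, the MMU-backed protection described above keeps the processes apart.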
Inadequate memory protection mechanisms, either due to flaws in their design or poor implementations, allow for security vulnerabilities that may be potentially exploited by malicious software.
Memory swapping
Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage.
Programming
Processes that are entirely independent are not much trouble to program in a multitasking environment. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of co-operating tasks.
Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource.
Bigger systems were sometimes built with one or more central processors and some number of I/O processors, a kind of asymmetric multiprocessing.
Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.
See also
Process state
Task switching
References
Concurrent computing
Operating system technology
WinFixer
WinFixer was a family of scareware rogue security programs developed by Winsoftware which claimed to repair computer system problems on Microsoft Windows computers if a user purchased the full version of the software. The software was mainly installed without the user's consent. McAfee claimed that "the primary function of the free version appears to be to alarm the user into paying for registration, at least partially based on false or erroneous detections." The program prompted the user to purchase a paid copy of the program.
The WinFixer web page (see the image) said it "is a useful utility to scan and fix any system, registry and hard drive errors. It ensures system stability and performance, frees wasted hard-drive space and recovers damaged Word, Excel, music and video files."
However, these claims were never verified by any reputable source. In fact, most sources considered this program to actually reduce system stability and performance. The sites went defunct in December 2008 after actions taken by the Federal Trade Commission.
Installation methods
The WinFixer application was known to infect users using the Microsoft Windows operating system, and was browser independent. One infection method involved the Emcodec.E trojan, a fake codec scam. Another involves the use of the Vundo family of trojans.
Typical infection
The infection usually occurred during a visit to a distributing website using a web browser. A message appeared in a dialog box or popup asking the user if they wanted to install WinFixer, or claimed a user's machine was infected with malware, and requested the user to run a free scan. When the user chose any of the options or tried to close this dialog (by clicking 'OK' or 'Cancel' or by clicking the corner 'X'), it would trigger a pop-up window and WinFixer would download and install itself, regardless of the user's wishes.
"Trial" offer
A free "trial" offer of this program was sometimes found in pop-ups. If the "trial" version was downloaded and installed, it would execute a "scan" of the local machine and a couple of non-existent trojans and viruses would be "discovered", but no further action would be undertaken by the program. To obtain a quarantine or removal, WinFixer required the purchase of the program. However, the alleged unwanted bugs were bogus, only serving to persuade the owner to buy the program.
WinFixer application
Once installed, WinFixer frequently launched pop-ups and prompted the user to follow its directions. Because of the intricate way in which the program installed itself into the host computer (including making dozens of registry edits), successful removal would have taken a fairly long time if done manually. When running, its process could be found in the task manager and be stopped, but would automatically relaunch itself after a period of time.
WinFixer was also known to modify the Windows Registry so that it started up automatically with every reboot, and scanned the user's computer.
Firefox popup
The Mozilla Firefox browser was vulnerable to initial infection by WinFixer. Once installed, WinFixer was known to exploit the SessionSaver extension for the Firefox browser. The program caused popups on every startup asking the user to download WinFixer, by adding lines containing the word 'WinFixer' to the prefs.js file.
Removal
Removal of WinFixer proved difficult because it actively undid whatever the user attempted. Frequently, procedures that worked on one system would not work on another because there were a large number of variants. Some sites provided manual techniques to remove infections that automated cleanup tools could not remove.
Domain ownership
The company that made WinFixer, Winsoftware Ltd., claimed to be based in Liverpool, England (Stanley Street, postcode: 13088.) However, this address was proven to be false.
The domain WINFIXER.COM on the whois database showed it was owned by a void company in Ukraine and another in Warsaw, Poland. According to Alexa Internet, the domain was owned by Innovative Marketing, Inc., 1876 Hutson St, Honduras.
According to the public key certificate provided by GTE CyberTrust Solutions, Inc., the server secure.errorsafe.com was operated by ErrorSafe Inc. at 1878 Hutson Street, Belize City, BZ.
Running traceroute on Winfixer domains showed that most of the domains were hosted from servers at setupahost.net, which used Shaw Business Solutions AKA Bigpipe as their backbone.
Technical information
Technical
WinFixer was closely related to Aurora Network's Nail.exe hijacker/spyware program. In worst-case scenarios, it would embed itself in Internet Explorer and become part of the program, thus being nearly impossible to remove. The program was also closely related to the Vundo trojan.
Variants
Windows Police Pro
Windows Police Pro was a variant of WinFixer. David Wood wrote in Microsoft TechNet that in March 2009, the Microsoft Malware Protection Center saw ASC Antivirus, the virus' first version. Microsoft did not detect any changes to the virus until the end of July that year when a second variant, Windows Antivirus Pro, appeared. Although multiple new virus versions have since appeared, the virus has been renamed only once, to Windows Police Pro. Microsoft added the virus to its Malicious Software Removal Tool in October 2009.
The virus generated numerous persistent popups and messages displaying false scan reports intended to convince users that their computers were infected with various forms of malware that do not exist. When users attempted to close the popup message, they received confirmation dialog boxes that switched the "Purchase full version" and "Continue evaluating" buttons. Windows Police Pro generated a counterfeit Windows Security Center that warned users about the fake malware.
Bleeping Computer and the syndicated "Propeller Heads" column recommended using Malwarebytes' Anti-Malware to remove Windows Police Pro permanently. Microsoft TechNet and Softpedia recommended using Microsoft's Malicious Software Removal Tool to get rid of the malware.
Effects on the public
Class action lawsuit
On September 29, 2006, a San Jose woman filed a lawsuit over WinFixer and related "fraudware" in Santa Clara County Superior Court; however, in 2007 the lawsuit was dropped. In the lawsuit, the plaintiffs charged that the WinFixer software "eventually rendered her computer's hard drive unusable. The program infecting her computer also ejected her CD-ROM drive and displayed Virus warnings."
Ads on Windows Live Messenger
On February 18, 2007, a blog called "Spyware Sucks" reported that the popular instant messaging application Windows Live Messenger had inadvertently promoted WinFixer by displaying a WinFixer advertisement from one of Messenger's ad hosts. A similar occurrence was also reported on some MSN Groups pages. There were other reports before this one (including one from Patchou, the creator of Messenger Plus!), and people had contacted Microsoft about the incidents. Whitney Burk of Microsoft addressed the problem in an official statement.
Federal Trade Commission
On December 2, 2008, the Federal Trade Commission requested and received a temporary restraining order against Innovative Marketing, Inc., ByteHosting Internet Services, LLC, and individuals Daniel Sundin, Sam Jain, Marc D’Souza, Kristy Ross, and James Reno, the creators of WinFixer and its sister products. The complaint alleged that the products' advertising, as well as the products themselves, violated United States consumer protection laws. However, Innovative Marketing flouted the court order and was fined $8,000 per day in civil contempt.
On September 24, 2012, Kristy Ross was fined $163 million by the Federal Trade Commission for her part in this.
The article goes on to say that the WinFixer family of software was simply a con but does not acknowledge that it was in fact a program that made many computers unusable.
Notes
References
External links
McAfee's Entry on WinFixer
Symantec’s Entry on WinFixer and removal instructions
Symantec's entry on ErrorSafe - a sister spyware application
FTC complaint
Rogue software
Scareware
Hacking in the 2000s
Michael J. Karels
Michael J. (Mike) Karels is an American software engineer and one of the key figures in the history of BSD UNIX.
A graduate of the University of Notre Dame with a Bachelor of Science in microbiology, Mike went on to the University of California, Berkeley, for his advanced degree in microbiology.
Mike had access to the department's PDP-11, and since its administrator did not have enough time, Mike started helping him and eventually began making changes to the system.
Mike started his contribution to Unix with the 2.9BSD release, distributed for the PDP-11.
When Mike saw a job posting with the Computer Systems Research Group in the BSD project, he decided to jump in.
In 1982, Mike took over Bill Joy's responsibilities when Joy left the CSRG, and was the system architect for 4.3BSD, arguably the most important BSD release and the basis of development for a number of commercial Unix flavors available today, including Solaris. This release was described in depth in the well-known book The Design and Implementation of the 4.3BSD UNIX Operating System, with its black cover and smiling BSD daemon. Mike was a CSRG principal programmer for 8 years.
Mike worked closely with Van Jacobson on a number of widely accepted algorithms in the TCP implementation; the Jacobson/Karels algorithm, TCP slow start, and the routing radix tree are probably the most famous ones.
Mike spends little time taking credit for this work; instead, he uses every opportunity to mention the names of people who in one way or another contributed to the TCP/IP implementation in Unix.
In 1993, the USENIX Association gave a Lifetime Achievement Award (Flame) to the Computer Systems Research Group at University of California, Berkeley, honoring 180 individuals, including Karels, who contributed to the CSRG's 4.4BSD-Lite release.
Later, Mike moved to BSDi (Berkeley Software Design) and designed BSD/OS, which was for years the only commercially available BSD-style Unix on the Intel platform. BSD/OS is a very reliable OS platform designed for Internet services. BSDi's software assets were bought by Wind River in April 2001, and Mike joined Wind River as the Principal Technologist for the BSD/OS platform.
In 2009, Mike was Sr Principal Engineer at McAfee. In 2015 he worked for Intel and later for Forcepoint LLC.
Bibliography
S. Leffler, M. McKusick, M. Karels, J. Quarterman: The Design and Implementation of the 4.3BSD UNIX Operating System, Addison-Wesley, January 1989. German translation published June 1990. Japanese translation published June 1991 (out of print).
S. Leffler, M. McKusick: The Design and Implementation of the 4.3BSD UNIX Operating System Answer Book, Addison-Wesley, April 1991. Japanese translation published January 1992.
M. McKusick, K. Bostic, M. Karels, J. Quarterman: The Design and Implementation of the 4.4BSD Operating System, Addison-Wesley, April 1996. French translation published 1997, International Thomson Publishing, Paris, France.
References
External links
Mike Karels at Unix Guru Universe's Unix Contributors
Mike Karels Linkedin Page
American computer programmers
American computer scientists
BSD people
Living people
Year of birth missing (living people)
List of technology terms
This is an alphabetical list of notable technology terms, including terms used with the Internet, computers, and other electronic devices.
A
Accelerometer
ADSL
Android
Archive
Artificial Intelligence
ATX
Apple Inc.
Data
B
Backup
Bandwidth
Benchmark
Barcode
Booting or Boot loader
BIOS
Bitmap
Bitcoin
BitTorrent
Blacklist
Bluetooth
Binary
Backlink
Bloatware
Bus
Burn
C
Cache
Compression
Content
CMOS
Cookie
Cyber crime
Cybersecurity
D
Daemon
Debug
Developer
Dock
DOS
Driver
Device driver
DPI
DRM
E
Encryption
Emulator
Ethernet
End user
F
FAT32
Framework
Freeware
Firewall
Firmware
FTP
G
GIF
Git
GPS
GUI
H
HTML
HTTPS
I
I/O
IEEE
IP Address
ISO
IMEI
ISP
Internet
J
Java
JavaScript
JPEG
K
Kernel
M
Macintosh
MP3
Malware
MMS
MIDI
Machine
MP4
N
Newbie
O
OEM
OS
OCR
Overclock
Overheat
P
PDF
Phishing
Python
Plug-in
Processor
Q
QWERTY
R
Remote access
Registry
Read-only
RAID
Rooting
RAM
S
Safe mode
SSID
SEO
Service pack
Server
Source code
Spam
Search engine
Search engine optimization
Swype
T
Trash
U
Underclock
Unix
V
Virus
VGA
VoIP (Voice over Internet Protocol); not to be confused with "VOYP", a telephone service provider in the US
W
WEB
Wi-Fi and Hotspot (Wi-Fi)
Wikipedia Zero
Windows
Wireless LAN
World Wide Web
WYSIWYG
WPA
See also
List of computer term etymologies
List of HTTP status codes
List of information technology acronyms
List of operating systems
Technical terminology
8.3 filename
An 8.3 filename (also called a short filename or SFN) is a filename convention used by old versions of DOS and versions of Microsoft Windows prior to Windows 95 and Windows NT 3.5. It is also used in modern Microsoft operating systems as an alternate filename to the long filename for compatibility with legacy programs. The filename convention is limited by the FAT file system. Similar 8.3 file naming schemes have also existed on earlier CP/M, TRS-80, Atari, and some Data General and Digital Equipment Corporation minicomputer operating systems.
Overview
8.3 filenames are limited to at most eight characters (after any directory specifier), followed optionally by a filename extension consisting of a period and at most three further characters. For systems that only support 8.3 filenames, excess characters are ignored. If a file name has no extension, a trailing period has no significance (a name with a trailing period is equivalent to the same name without it). Furthermore, file and directory names are uppercase in this system, even though systems that use the 8.3 standard are usually case-insensitive, so a name written in any mixture of case refers to the same file. However, on non-8.3 operating systems (such as almost any modern operating system) accessing 8.3 file systems (including DOS-formatted diskettes, but also including some modern memory cards and networked file systems), the underlying system may alter filenames internally to preserve case and avoid truncating letters in the names, for example in the case of VFAT.
VFAT and computer-generated 8.3 filenames
VFAT, a variant of FAT with an extended directory format, was introduced in Windows 95 and Windows NT 3.5. It allowed mixed-case Unicode long filenames (LFNs) in addition to classic 8.3 names by using multiple 32-byte directory entry records for long filenames (in such a way that only one will be recognised by old 8.3 system software as a valid directory entry).
To maintain backward-compatibility with legacy applications (on DOS and Windows 3.1), on FAT and VFAT filesystems an 8.3 filename is automatically generated for every LFN, through which the file can still be renamed, deleted or opened, although the generated name may show little similarity to the original. On NTFS filesystems the generation of 8.3 filenames can be turned off. The 8.3 filename can be obtained using the Kernel32.dll function GetShortPathName.
Although there is no compulsory algorithm for creating the 8.3 name from an LFN, Windows uses the following convention:
If the LFN is 8.3 uppercase, no LFN will be stored on disk at all.
Example:
If the LFN is 8.3 mixed case, the LFN will store the mixed-case name, while the 8.3 name will be an uppercased version of it.
Example: becomes .
If the filename contains characters not allowed in an 8.3 name (including space, which was disallowed by convention though not by the APIs) or either part is too long, the name is stripped of invalid characters such as spaces and extra periods. If the name begins with periods, the leading periods are removed. Other invalid characters are changed to the underscore, and letters are put in uppercase. The stripped name is then truncated to the first 6 letters of its basename, followed by a tilde, followed by a single digit, followed by a period, followed by the first 3 characters of the extension.
Example: TextFile.Mine.txt becomes TEXTFI~1.TXT (or TEXTFI~2.TXT, should TEXTFI~1.TXT already exist). ver +1.2.text becomes VER_12~1.TEX. .bashrc.swp becomes BASHRC~1.SWP.
On all NT versions including Windows 2000 and later, if at least 4 files or folders already exist with the same extension and first 6 characters in their short names, the stripped LFN is instead truncated to the first 2 letters of the basename (or 1 if the basename has only 1 letter), followed by 4 hexadecimal digits derived from an undocumented hash of the filename, followed by a tilde, followed by a single digit, followed by a period, followed by the first 3 characters of the extension.
Example: TextFile.Mine.txt might become TE021F~1.TXT (the four hexadecimal digits depend on the hash).
On Windows 95, 98 and ME, if more than 9 files or folders exist with the same extension and first 6 characters in their short names (so that the ~1 through ~9 suffixes aren't enough to resolve the collision), the name is further truncated to 5 letters, followed by a tilde, followed by two digits starting from 10, followed by a period and the first 3 characters of the extension.
Example: TextFile.Mine.txt becomes TEXTF~10.TXT if TEXTFI~1.TXT through TEXTFI~9.TXT all exist already.
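The basic convention above (leaving out the NT hash variant and the Windows 9x two-digit fallback) can be sketched in Python. The function name, the simplified invalid-character set, and the collision handling are illustrative, not a faithful reimplementation:

```python
import re

def to_83(longname, existing=()):
    """Sketch of the basic Windows 8.3 name generation convention."""
    upper = longname.upper()
    name = upper.lstrip(".")                 # leading periods are removed
    base, sep, ext = name.rpartition(".")    # only the last period counts
    if not sep:
        base, ext = name, ""
    # Map a few invalid characters to "_", then drop spaces and extra periods.
    strip = lambda s: re.sub(r"[ .]", "", re.sub(r"[+,;=\[\]]", "_", s))
    sbase, sext = strip(base), strip(ext)[:3]
    dotext = "." + sext if sext else ""
    if sbase + dotext == upper and len(sbase) <= 8:
        return sbase + dotext                # already a valid 8.3 name
    for i in range(1, 10):                   # NT switches to a hash scheme on collisions
        short = f"{sbase[:6]}~{i}{dotext}"
        if short not in existing:
            break
    return short
```

This reproduces the examples above, e.g. to_83("ver +1.2.text") yields VER_12~1.TEX.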
NTFS, a file system used by the Windows NT family, supports LFNs natively, but 8.3 names are still available for legacy applications. This can optionally be disabled to improve performance in situations where large numbers of similarly named files exist in the same folder.
The ISO 9660 file system (mainly used on compact discs) has similar limitations at the most basic Level 1, with the additional restriction that directory names cannot contain extensions and that some characters (notably hyphens) are not allowed in filenames. Level 2 allows filenames of up to 31 characters, more compatible with classic AmigaOS and classic Mac OS filenames.
Compatibility
This legacy technology is used in a wide range of products and devices, as a standard for interchanging information, such as CompactFlash cards used in cameras. The VFAT long filenames (LFNs) introduced by Windows 95/98/ME retained compatibility, but NT-based systems (Windows NT/2000/XP) generate the 8.3 short name using a modified algorithm.
If a filename contains only lowercase letters, or is a combination of a lowercase basename with an uppercase extension, or vice versa; and has no special characters, and fits within the 8.3 limits, a VFAT entry is not created on Windows NT and later versions such as XP. Instead, two bits in byte 0x0C of the directory entry are used to indicate that the filename should be considered as entirely or partially lowercase. Specifically, bit 4 means lowercase extension and bit 3 lowercase basename, which allows for combinations such as readme.TXT or README.txt, but not rEadme.txt. Few other operating systems support this. This creates a backward-compatibility filename-mangling problem with older Windows versions (95, 98, ME), which see all-uppercase filenames if this extension has been used, and can therefore change the capitalization of a file when it is transported, such as on a USB flash drive. This can cause problems for operating systems that do not exhibit the case-insensitive filename behavior that DOS and Windows do. Current (>2.6) versions of Linux recognize this extension when reading; the mount option shortname determines whether this feature is used when writing.
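A minimal sketch of how these two case bits could be honoured when rendering the name from a raw 32-byte directory entry (the helper name is illustrative; only the name fields and byte 0x0C are examined):

```python
def display_name(raw_entry: bytes) -> str:
    """Render the 8.3 name from a raw 32-byte FAT directory entry,
    honouring the NT lowercase bits in byte 0x0C."""
    base = raw_entry[0:8].decode("ascii").rstrip(" ")
    ext = raw_entry[8:11].decode("ascii").rstrip(" ")
    flags = raw_entry[0x0C]
    if flags & 0x08:        # bit 3: basename is stored lowercase
        base = base.lower()
    if flags & 0x10:        # bit 4: extension is stored lowercase
        ext = ext.lower()
    return base + ("." + ext if ext else "")
```

For example, an entry whose on-disk name is README TXT with bit 3 set displays as readme.TXT.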
For MS-DOS you may use Henrik Haftmann's DOSLFN.
Directory table
A directory table is a special type of file that represents a directory. Each file or directory stored within it is represented by a 32-byte entry in the table. Each entry records the name, extension, attributes (archive, directory, hidden, read-only, system and volume), the date and time of creation, the address of the first cluster of the file/directory's data and finally the size of the file/directory.
Legal characters for DOS filenames include the following:
Upper case letters A–Z
Numbers 0–9
Space (though trailing spaces in either the base name or the extension are considered padding and not part of the filename; filenames with spaces in them must be enclosed in quotes to be used on a DOS command line, and if the DOS command is built programmatically, the filename must be enclosed in double double-quotes, "" "", when viewed as a variable within the program building the DOS command)
! # $ % & ' ( ) - @ ^ _ ` { } ~
Values 128–255 (though if NLS services are active in DOS, some characters interpreted as lowercase are invalid and unavailable)
This excludes the following ASCII characters:
" * + , / : ; < = > ? \ [ ] | (Windows/MS-DOS has no shell escape character)
Period (.) within the name and extension fields, except in the . and .. entries (see below)
Lower case letters a–z, stored as A–Z on FAT12/FAT16
Control characters 0–31
Value 127 (DEL)
The DOS filenames are in the OEM character set.
The value 0xE5 as the first byte of an entry (see below) causes problems when characters outside ASCII are used.
Directory entries, both in the Root Directory Region and in subdirectories, are of the following format:
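The key fields of a 32-byte entry can be read with Python's struct module. The offsets below follow the classic FAT12/FAT16 layout; timestamp details, the high cluster word used on FAT32, and the 0xE5 deleted-entry marker are omitted from this sketch:

```python
import struct

def parse_entry(raw: bytes) -> dict:
    """Parse selected fields of a classic 32-byte FAT directory entry.
    Offsets: 0x00 name (8s), 0x08 extension (3s), 0x0B attributes (B),
    0x16 time (H), 0x18 date (H), 0x1A first cluster low word (H),
    0x1C file size (I)."""
    name, ext, attrs = struct.unpack_from("<8s3sB", raw, 0x00)
    mtime, mdate, cluster, size = struct.unpack_from("<HHHI", raw, 0x16)
    return {
        "name": name.decode("ascii").rstrip(),
        "ext": ext.decode("ascii").rstrip(),
        "attributes": attrs,
        "first_cluster": cluster,
        "size": size,
    }
```

An entry for a 1234-byte archive file named FILE.TXT starting at cluster 5 would decode accordingly.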
Working with short filenames in a command prompt
Sometimes it may be desirable to convert a long filename to a short filename, for example when working with the command prompt. A few simple rules can be followed to attain the correct 8.3 filename.
An SFN filename can have at most 8 characters before the dot. If it has more than that, the first 6 must be written, then a tilde as the seventh character and a number (usually 1) as the eighth. The number distinguishes the file from others with both the same first six letters and the same extension.
Dots are important and must be used even for folder names (if there is a dot in the folder name). If there are multiple dots in the long file/directory name, only the last one is used. The preceding dots should be ignored. If there are more characters than three after the final dot, only the first three are used.
Generally:
Any spaces in the filenames should be ignored when converting to SFN.
Ignore all periods except the last one; like the spaces, the others are not included. Keep the last period, if any, and up to 3 of the characters that follow it. For instance, for a .manifest extension, only .man would be used.
Commas, square brackets, semicolons, = signs and + signs are changed to underscores.
Case is not important, upper case and lower case characters are treated equally.
To find out the SFN or 8.3 names of the files in a directory,
use: DIR /X, which shows the short names, if any, alongside the long names.
or: DIR /-N, which shows only the short names, in the original DIR listing format.
In Windows NT-based operating systems, command prompt (cmd.exe) applets accept long filenames with wildcard characters (question mark ? and asterisk *); long filenames with spaces in them need to be escaped (i.e. enclosed in single or double quotes).
Starting with Windows Vista, console commands and PowerShell applets perform limited pattern matching by allowing wildcards in the filename and each subdirectory in the file path and silently substituting the first matching directory entry (for example, CD \pro*\inter* will change the current directory to \Program Files\Internet Explorer).
See also
File Allocation Table (FAT)
Design of the FAT file system
File system
Filename extension
References
Filenames
CP/M technology
DOS technology
IBM System Object Model
In computing, the System Object Model (SOM) is an object-oriented shared library system developed by IBM. DSOM, a distributed version based on CORBA, allowed objects on different computers to communicate.
SOM defines an interface between programs, or between libraries and programs, so that an object's interface is separated from its implementation. SOM allows classes of objects to be defined in one programming language and used in another, and it allows libraries of such classes to be updated without requiring client code to be recompiled.
A SOM library consists of a set of classes, methods, static functions, and data members. Programs that use a SOM library can create objects of the types defined in the library, use the methods defined for an object type, and derive subclasses from SOM classes, even if the language of the program accessing the SOM library does not support class typing. A SOM library and the programs that use objects and methods of that library need not be written in the same programming language. SOM also minimizes the impact of revisions to libraries. If a SOM library is changed to add new classes or methods, or to change the internal implementation of classes or methods, one can still run a program that uses that library without recompiling. This is not the case for C++ libraries in general, which may require recompiling all programs that use them whenever the libraries are changed, a problem known as the fragile binary interface problem.
SOM provides an application programming interface (API) that gives programs access to information about a SOM class or SOM object. Any SOM class inherits a set of virtual methods that can be used, for example, to find the class name of an object, or to determine whether a given method is available for an object.
Applications
OS/2
OpenDoc
SOM was intended to be used universally from IBM's mainframe computers right down to the desktop in OS/2, allowing programs to be written that would run on the desktop but use mainframes for processing and data storage. IBM produced versions of SOM/DSOM for OS/2, Microsoft Windows and various Unix flavours (notably IBM's own AIX). For some time after the formation of the AIM alliance, SOM/DSOM was also used by Apple Computer for similar purposes. It was most widely used in their OpenDoc framework, but saw limited use in other roles as well.
Perhaps the most widespread uses of SOM within IBM were in later versions of OS/2, which used it for most code, including the Workplace Shell. Object REXX for OS/2 is able to deal with SOM classes and objects including WPS.
IBM did not completely shut down SOMobjects. It was ported to OS/390 and is still available on that OS, and its documentation can be read on IBM's website. In 1996 Tandem Computers Inc. obtained the SOMobjects technology. Tandem was sold to Compaq, and Compaq was sold to Hewlett-Packard. NonStop DOM and some other technologies eventually merged into NonStop CORBA, but current documentation of NonStop products contains no sign of SOM technology still powering them.
Fading away
With the "death" of OS/2 in the mid-1990s, the raison d'être for SOM/DSOM largely disappeared; if users would not be running OS/2 on the desktop, there would be no universal object library anyway. In 1997, when Steve Jobs returned to Apple and ended many development efforts including Copland and OpenDoc, SOM was replaced with Objective-C already being in use in OPENSTEP (to become Mac OS X later). SOM/DSOM development faded, and is no longer actively developed, although it continues to be included and used in OS/2-based systems such as ArcaOS.
Despite the effective death of OS/2 and OpenDoc, SOM could have had yet another niche: Windows and cross-platform development. SOM 3.0 for WinNT was generally available in December 1996. The reasons it did not advance in these directions go beyond market-adoption problems; they involve opportunities missed by IBM and destructive incompatible changes:
The first version of VisualAge C++ for Windows was 3.5. It was the first and the last version to support SOM. It had SOM 2.1 bundled in and Direct-to-SOM support in the compiler. Versions 3.6.5 and later had no trace of SOM.
SOMobjects largely relied on makefiles. VisualAge C++ 4.0 introduced .icc projects and removed the icc.exe and ilink.exe command-line compiler and linker from the distribution. It is impossible to build any SOM DTK sample out of the box with VAC++ 4.0. VisualAge C++ comes with its own samples, but there are no .icc SOM samples even in VAC++ 4.0 for OS/2, and vacbld.exe, the only command-line compilation tool, doesn't support SOM.
VisualAge C++ bundled-in Object Component Library (OCL) was not based on SOM. It was probably meant to be ported to SOM using C++ Direct-to-SOM mode, but in VAC v3.6.5 this mode was abandoned, and OCL has no SOM interface so far.
Near the end of the 1990s, IBM shut down SOMobjects download sites and never put them back online. SOM 3.0 DTK for WinNT can't be found on IBM FTP, despite much other legacy stuff lying around freely. Despite general availability of SOM 3.0 for WinNT, it was nearly impossible to locate until the end of 2012.
Finally, IBM never open-sourced SOM (as was done with Object REXX), despite several articles and petitions.
Alternative implementations
Two projects of open-source SOM implementations exist. One is Netlabs Object Model (NOM), which is technically the same, but binary incompatible. Another is somFree, which is a clean room design of IBM SOM, and binary compatible.
Comparison of support for compiled class libraries
Historically, SOM was compared by IBM to Microsoft's Component Object Model (COM). However, from the point of view of release-to-release transformations, COM sits at the procedural level, which is why Table 1 in the Release-to-Release Binary Compatibility (RRBC) article contains no COM column at all. Instead, SOM is compared to:
compiled Smalltalk
compiled Common Lisp Object System (CLOS)
generic C++
SGI Delta/C++
Sun Object Binary Interface
Objective-C
Java
Most information in this table is still applicable to modern versions (as of 2015), except that Objective-C 2.0 gained so-called non-fragile instance variables. Some solutions remained experimental: SGI Delta/C++ or Sun OBI. Most approaches based on a single programming language were phased out or were never actively used in this way. For instance, Netscape Plugin Application Programming Interface (NPAPI) browser plugins were initially written using the Java API (LiveConnect), but the Java Virtual Machine (JVM) was later excluded from the chain; in effect, Java was replaced with the Cross Platform Component Object Model (XPCOM). Common Lisp Object System (CLOS) and Smalltalk are not known to have served as chain links the way Java did in LiveConnect. Objective-C is also little known in this role and not known to be marketed this way, but its runtime is one of the friendliest to similar use cases.
Generic C++ is still being used in Qt and the K Desktop Environment (KDE). Qt and KDE are notable for describing efforts it takes to maintain binary compatibility without special support in development tools.
GObject only aimed to avoid dependence on C++ compiler, but RRBC issues are the same as in generic C++.
Without a special runtime, many other programming languages will have the same issues, e.g. Delphi and Ada. This can be illustrated by the so-called unprecedented approach taken to make the Delphi 2007 release binary compatible with Delphi 2006: How to add a "published" property without breaking DCU compatibility
Objective-C is the most promising competitor to SOM (although it is not actively marketed as a multi-language platform), and SOM should preferably be compared to Objective-C rather than to COM, as happened historically. With non-fragile instance variables in Objective-C 2.0, it is the best actively supported alternative.
COM and XPCOM are used actively, but they only manage interfaces, not implementations, and thus are not on the same level as SOM, GObject and Objective-C. On closer inspection, Windows Runtime behaves much like COM. Its metadata description is based on .NET, but since WinRT does not contain a special runtime to resolve RRBC issues, as Objective-C or SOM do, several restrictions had to be applied that limit WinRT at the procedural level:
Type System (C++/CX)
A ref class that has a public constructor must be declared as sealed, to prevent further derivation.
Windows Runtime Components - Windows Runtime Components in a .NET World
Another restriction is that no generic public classes or interfaces can be exposed. Polymorphism isn’t available to WinRT types, and the closest you can come is implementing WinRT interfaces; you must declare as sealed any classes that are publicly exposed by your Windows Runtime Component.
Comparison to COM
SOM is similar in concept to COM. Both systems address the problem of producing a standard library format that can be called from more than one language. SOM can be considered more robust than COM. COM offers two methods of invoking methods on an object, and an object can implement either one of them or both. The first is dynamic, late binding (IDispatch), and is language-neutral, similar to what is offered by SOM. The second, called a Custom Interface, uses a function table which can be built in C but is also directly compatible with the binary layout of the virtual table of C++ objects in Microsoft's C++ compiler. With compatible C++ compilers, Custom Interfaces can therefore be defined directly as pure virtual C++ classes. The resulting interface can then be called by languages that can call C functions through pointers. Custom Interfaces trade robustness for performance. Once an interface is published in a released product, it cannot be changed, because client applications of this interface were compiled against its specific binary layout. This is an example of the fragile base class problem, which can lead to DLL hell, as a new version of a shared library is installed and all programs based on the older version can stop functioning properly. To prevent this problem, COM developers must remember to never change an interface once it is published, and new interfaces need to be defined if new methods or other changes are required.
SOM prevents these issues by providing only late binding, to allow the run-time linker to re-build the table on the fly. This way, changes to the underlying libraries are resolved when they are loaded into programs, although there is a performance cost.
SOM is also much more robust in terms of fully supporting a wide variety of OO languages. Whereas basic COM essentially defines a cut-down version of C++ to program to, SOM supports almost all common features and even some more esoteric ones. For instance SOM supports multiple inheritance, metaclasses and dynamic dispatching. Some of these features are not found in most languages, which had led most SOM/COM-like systems to be simpler at the cost of supporting fewer languages. The full flexibility of multi-language support was important to IBM, however, as they had a major effort underway to support both Smalltalk (single inheritance and dynamic dispatch) with C++ (multiple inheritance and fixed dispatch).
The most notable difference between SOM and COM is support for inheritance—COM does not have any. It might seem odd that Microsoft produced an object library system that could not support one of the most fundamental concepts of OO programming; the main reason for this is that it is difficult to know where a base class exists in a system where libraries are loaded in a potentially random order. COM demands that the programmer specify the exact base class at compile time, making it impossible to insert other derived classes in the middle (at least in other COM libraries).
SOM instead uses a simple algorithm, looking for potential base classes by following the inheritance tree and stopping at the first one that matches; this is the basic idea behind inheritance in most cases. The downside to this approach is that it is possible that new versions of this base class may no longer work even if the API remains the same. This possibility exists in any program, not only those using a shared library, but a problem can become very difficult to track down if it exists in someone else's code. In SOM, the only solution is extensive testing of new versions of libraries, which is not always easy.
While SOM and COM were positioned against each other by IBM, they were not mutually exclusive. In 1995 Novell contributed the ComponentGlue technology to OpenDoc for Windows. This technology provided different means of integrating COM- and SOM-based components. In particular, SOM objects can be made available to OLE2 applications either through a late binding bridge (based on IDispatch) or through COM interfaces with higher performance. In essence, SOM classes implement COM interfaces this way.
The flexibility offered by SOM was considered worth the trouble by almost all, but similar systems, such as Sun Microsystems' Distributed Objects Everywhere, also supported full inheritance. NeXT's Portable Distributed Objects avoided these issues via a strong versioning system, allowing library authors to ship new versions along with the old, thereby guaranteeing backward compatibility for the small cost of disk space.
See also
Component Object Model
GObject
Objective-C
XPCOM
Windows Runtime
References
External links
IBM SOMobjects Developer's Toolkit Version 3.0 for Windows NT, OS/2 Warp, and AIX Documentation
System Object Model
SOM
Object-oriented programming
Image Packaging System
The Image Packaging System, also known as IPS or pkg(5), is a cross-platform package management system created by the OpenSolaris community in coordination with Sun Microsystems. It is used by Solaris 11, several illumos-based distributions (OpenIndiana, OmniOS, XStreamOS), and a growing number of layered applications, including GlassFish, across a variety of OS platforms. IPS is coded in the Python programming language.
Features
Features include:
Use of ZFS, allowing multiple boot environments and easy package operation rollbacks
Transactional actions
Support for multiple platform architectures within a single package
Legacy support for SVR4 packages
Extensive search grammar
Remote search capability
Changes-only based package updates
Network package repository
File and network-based package publication
Package operation history
On-disk package format (p5p)
Multi-platform ports for layered applications:
Broad platform support: Windows, Linux, OS X, Darwin, Solaris, OpenSolaris, illumos and AIX
Cross-platform update notification and package management Graphical user interfaces.
Advantages
Because IPS delivers each individual file separately, with its own checksum, a package update only needs to replace files that have actually been modified. For ELF binaries, it computes checksums only over the loaded parts of the binary; this makes it possible, for example, to avoid updating an ELF binary in which only the ELF comment section changed.
Trade offs
Because IPS delivers each individual file separately, operation is slow when the input source is on a medium with high latency (e.g. the Internet with a high round-trip time, or CD/DVD media with slow seeks).
References
External links
Github project: Image Packaging System
Multi-platform Packaging for Layered Distros
GlassFish Update Center Toolkit
Update Center 2.0 (multiplatform IPS)
OpenSolaris
Free package management systems
Sun Microsystems software
Unix package management-related software
Backward compatibility
Backward compatibility (sometimes known as backwards compatibility) is a property of an operating system, product, or technology that allows for interoperability with an older legacy system, or with input designed for such a system, especially in telecommunications and computing.
Modifying a system in a way that does not allow backward compatibility is sometimes called "breaking" backward compatibility.
A complementary concept is forward compatibility. A design that is forward-compatible usually has a roadmap for compatibility with future standards and products.
A related term from programming jargon is hysterical reasons or hysterical raisins (near-homophones for "historical reasons"), as the purpose of some software features may be solely to support older hardware or software versions.
Usage
In hardware
A simple example of both backward and forward compatibility is the introduction of FM radio in stereo. FM radio was initially mono, with only one audio channel represented by one signal. With the introduction of two-channel stereo FM radio, many listeners had only mono FM receivers. Forward compatibility for mono receivers with stereo signals was achieved by sending the sum of both left and right audio channels in one signal and the difference in another signal. That allows mono FM receivers to receive and decode the sum signal while ignoring the difference signal, which is necessary only for separating the audio channels. Stereo FM receivers can receive a mono signal and decode it without the need for a second signal, and they can separate a sum signal to left and right channels if both sum and difference signals are received. Without the requirement for backward compatibility, a simpler method could have been chosen.
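The sum-and-difference scheme can be sketched in a few lines of Python. The sample values are illustrative amplitudes, not real FM multiplexing:

```python
def encode(left, right):
    # Broadcast the mono-compatible sum and the stereo difference.
    return left + right, left - right

def decode_mono(sum_sig, diff_sig):
    # A mono receiver uses only the sum signal and ignores the difference.
    return sum_sig

def decode_stereo(sum_sig, diff_sig):
    # A stereo receiver recovers both channels from sum and difference.
    return (sum_sig + diff_sig) / 2, (sum_sig - diff_sig) / 2
```

A mono set thus hears L+R, while a stereo set separates the original left and right channels exactly.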
Full backward compatibility is particularly important in computer instruction set architectures, one of the most successful being the x86 family of microprocessors. Their full backward compatibility spans back to the 16-bit Intel 8086/8088 processors introduced in 1978. (The 8086/8088, in turn, were designed with easy machine-translatability of programs written for their predecessor in mind, although they were not instruction-set compatible with the 8-bit Intel 8080 processor of 1974. The Zilog Z80, however, was fully backward compatible with the Intel 8080.)
Fully backward compatible processors can process the same binary executable software instructions as their predecessors, allowing the use of a newer processor without having to acquire new applications or operating systems. Similarly, the success of the Wi-Fi digital communication standard is attributed to its broad forward and backward compatibility; it became more popular than other standards that were not backward compatible.
In software
Compiler backward compatibility may refer to the ability of a compiler of a newer version of the language to accept programs or data that worked under the previous version.
A data format is said to be backward compatible with its predecessor if every message or file that is valid under the old format is still valid, retaining its meaning, under the new format.
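As an illustration, a parser for version 2 of a hypothetical JSON message format stays backward compatible with version 1 by giving any newly added field a default, so every valid v1 message remains valid under the v2 format (the field names here are invented):

```python
import json

def parse_v2(message: str) -> dict:
    """Parse a v2 message; v1 messages (without "priority") remain valid."""
    data = json.loads(message)
    # v2 added "priority"; supplying a default keeps old v1 messages valid.
    data.setdefault("priority", "normal")
    return data
```

An old message such as {"id": 1, "body": "hello"} parses cleanly and simply receives the default priority.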
Video games
The earliest cases of backward compatibility in video games came through console add-ons. The Atari 2600's library is playable on its direct successor, the Atari 5200, as well as competitors Intellivision and ColecoVision in such a manner. The Japanese version of the Master System and its predecessor, the Sega Mark III, were compatible with most software and peripherals designed for the SC-3000 and SG-1000 series of platforms, Sega's earliest gaming platforms. Likewise, the Mega Drive/Genesis can play Master System cartridges and cards via a peripheral known as the Master System Converter in Europe and the Power Base Converter in North America.
The first console in North America to widely support backward compatibility without additional hardware is the third-generation Atari 7800, which could play most 2600 games. Most Nintendo handhelds since the Game Boy Advance (which could play original Game Boy and Game Boy Color game cartridges) have backwards compatibility with their immediate predecessor, with some exceptions such as the Game Boy Micro and Nintendo DSi, while the Neo-Geo Pocket and Wonderswan would receive "Color" refreshes. The PlayStation 2 was compatible with original PlayStation software, as well as most peripherals due to employing the same controller ports and memory card slots. Early models of the PlayStation 3 console came equipped with the Emotion Engine, allowing it to play both, original PlayStation and PlayStation 2 discs, but this component would be removed in later models, leaving only compatibility with original PlayStation discs through software emulation. The original Xbox's first two sequential successors, the Xbox 360 and the Xbox One, can support a fraction of games released for their respective, immediate predecessors via emulation, although some supported Xbox games may not function properly on the Xbox 360. The Wii features full compatibility with GameCube software and peripherals thanks to inclusion of four GameCube controller ports and two memory card slots, but this features was excised from later revised models as a cost-reducing measurement. Its successor, the Wii U, has a legacy mode for full compatibility with original Wii software, including digital WiiWare and Virtual Console titles. The PlayStation 5 and Xbox Series X/S can play almost all games designed for their respective, immediate predecessors, the PlayStation 4 and Xbox One, and can even optimize their performance.
As Sega planned its exit from the hardware market, chairman Isao Okawa approached Microsoft chairman Bill Gates to implement Dreamcast on their upcoming Xbox, but negotiations fell through when Gates refused to provide Internet connectivity, a feature that Okawa felt was essential.
Tradeoffs
Benefits
There are several incentives for a company to implement backward compatibility. Backward compatibility can be used to preserve older software that would have otherwise been lost when a manufacturer decides to stop supporting older hardware. Classic video games are a common example used when discussing the value of supporting older software. The cultural impact of video games is a large part of their continued success, and some believe ignoring backward compatibility would cause these titles to disappear. Backward compatibility also acts as an additional selling point for new hardware, as an existing player base can more affordably upgrade to subsequent generations of a console. This also helps to make up for lack of content in the early launch of new systems, as users can pull from the previous console's large library of games while developers slowly transition to the new hardware.
One example of this is the Sony PlayStation 2 (PS2) which was backward compatible with games for its predecessor PlayStation (PS1). While the selection of PS2 games available at launch was small, sales of the console were nonetheless strong in 2000–2001 thanks to the large library of games for the preceding PS1. This bought time for the PS2 to grow a large installed base and developers to release more quality PS2 games for the crucial 2001 holiday season.
Additionally, and despite not being included at launch, Microsoft slowly incorporated backward compatibility for select titles on the Xbox One several years into its product life cycle. Players have racked up over a billion hours with backward compatible games on Xbox, and the newest generation of consoles such as the PlayStation 5 and Xbox Series X/S also support this feature. A large part of the success and implementation of this feature is that the hardware within newer generation consoles is both powerful and similar enough to legacy systems that older titles can be broken down and re-configured to run on the Xbox One. The backward compatibility program not only supports the previous generation Xbox 360, but also titles from the original Xbox system. Some titles are even given slight visual improvements and additional levels at no cost to the user. This program has proven incredibly popular with Xbox players and goes against the recent trend of studio-made remasters of classic titles, creating what some believe to be an important shift in console makers' strategies.
Costs
The literal costs of supporting old software are considered a large drawback to the use of backward compatibility. The associated costs of backward compatibility are a larger bill of materials if hardware is required to support the legacy systems; increased complexity of the product that may lead to longer time to market, technological hindrances, and slowed innovation; and increased expectations from users in terms of compatibility. Because of this, several console manufacturers chose to phase out backward compatibility toward the end of a console generation in order to reduce cost and briefly re-invigorate sales before the arrival of newer hardware.
A notable example is the contrast between Sony's hardware-based implementation of backward compatibility in earlier versions of the PlayStation 2 versus the PlayStation 3. In the PS3, PS2 hardware served no purpose in PS3 mode. In the PS2, a CPU core identical to that of the PS1 served a dual purpose, either as the main CPU in PS1 mode, or upclocking itself to offload I/O in PS2 mode. Such an approach can backfire, however, as in the case of the Super Nintendo, which opted for the peculiar 65C816 over more popular 16-bit microprocessors, on the basis that it would allow easy compatibility with the earlier Nintendo Entertainment System, but NES compatibility ultimately did not prove workable once the rest of the SNES's architecture was designed.
However, with the current decline in physical game sales and the rise of digital storefronts and downloads, some believe backward compatibility will soon be as obsolete as the phased-out consoles it supports. Many game studios are re-mastering and re-releasing their most popular titles, improving the quality of graphics and adding new content. These remasters have found success by appealing both to nostalgic players who remember enjoying the original versions when they were younger, and to newcomers who may never have had the original system a title was released on. For most consumers, digital remasters are more appealing than hanging on to bulky cartridges and obsolete hardware. For the manufacturers of consoles, digital re-releases of classic titles are a large benefit: they not only remove the financial drawbacks of supporting older hardware, but also shift the costs of updating software to the developers. The manufacturer gets a new addition to its system with strong name recognition, and the studio does not have to develop a game from the ground up. Officially licensed "plug and play mini" emulators of classic consoles, with built-in classic games, have also become more common in recent years, from companies like Sony, Sega and Nintendo.
See also
References
External links
Interoperability
OBS
OBS or obs. may refer to:
Organisations
Office of Boating Safety, the division of Transport Canada responsible for boating safety
Optus Broadband Satellite, a satellite broadband service offered by the Australian ISP Optus
Orange Belt Stages, a bus company based in California, US
Orange Business Services, a worldwide business communications company
Organization for Black Struggle, an activist organization in St. Louis, Missouri, US
Coop Obs!, a chain of hypermarkets in Norway, formerly known as Obs
Outward Bound School; See Outward Bound
Outward Bound Singapore, part of the network of Outward Bound centres
Océ Business Services, the outsourcing business of Océ
Oporto British School, a school in northern Portugal
Science and technology
Organic brain syndrome, a medical condition resulting from brain injury
Omnidirectional bearing selector, an aircraft navigation instrument; See VHF omnidirectional range
Obstetrics and gynaecology, commonly abbreviated Obs/Gyn
Ocean-bottom seismometer, a seismic tool to record earthquakes underwater
Old Body Style, a style of GM pickup trucks manufactured between 1988 and 2000
Computing
Open Broadcaster Software, an open source streaming and recording program
Open Build Service, a software distribution development platform
Optical burst switching, a switching technology in optical networks
Television, film, literature, and music
On Basilisk Station, the first novel in David Weber's Honor Harrington series
One Buck Short, a punk-rock band from Kuala Lumpur, Malaysia
"Orange Blossom Special" (song), bluegrass song written by Ervin T. Rouse
Broadcasters
OBS (South Korean broadcaster), a broadcast television station based in Bucheon, Gyeonggi-do, South Korea
Oita Broadcasting System, a broadcasting station in Ōita Prefecture, Japan
Olympic Broadcasting Services, an organization responsible for the broadcast of the Olympic Games since the 2010 Vancouver Winter Games
Business
Off-balance-sheet, financing activity not on the company's balance sheet
Organisation breakdown structure, a global hierarchy that represents the different levels of responsibility within a project or enterprise
Other uses
Operation Blue Star
Original British Standard, a bullhead rail profile
See also
OBSS (disambiguation)
Computer-controlled Vehicle System
The Computer-controlled Vehicle System, almost universally referred to as CVS, was a personal rapid transit (PRT) system developed by a Japanese industrial consortium during the 1970s. Like most PRT systems under design at the same time, CVS was based around a small four-person electric vehicle similar to a small minivan that could be requested on demand and drive directly to the user's destination. Unlike other PRT systems, however, CVS also offered cargo vehicles, included "dual-use" designs that could be manually driven off the PRT network, and included the ability to stop at intersections in a conventional road-like network.
Work on CVS started in the late 1960s as a demonstration system for a "traffic game" at Expo '70. This demonstration was successful and led to a further development project in 1970, which expanded several times and eventually produced a large test track outside of Tokyo. However, in 1978, the Ministry of Land, Infrastructure and Transport declined to grant CVS a license under existing safety regulations, citing issues with the short headway distances. As other proposed CVS deployments also dried up, work on the project ended some time that year.
History
Background
The concept of personal rapid transit (PRT) developed in the 1950s as a solution to the problem of providing mass transit in smaller urban areas and the suburbs of larger cities. Existing systems, heavy rail and subways, required major infrastructure and had high capital costs that limited their use to only the densest urban areas. Buses could run on existing roadways, but were thus subject to traffic problems and could not offer the high-speed services that made subways so attractive to riders. Modern PRT really began around 1953 when Donn Fichter, a city transportation planner, began research on PRT and alternative transportation methods. In 1964, Fichter published a book, which proposed an automated public transit system for areas of medium to low population density.
The solution appeared to be a "mini-subway", one that was small enough that the routes did not require the same sort of capital costs as a conventional system. However, using traditional technology to implement such a system would not work, as the required distance between vehicles on a subway system, known as headway, was often several minutes. This would mean a low vehicle density, and, if this was combined with a small number of passengers per vehicle, a very low overall passenger capacity. If such a system was to be practical, the distance between the vehicles had to be reduced, something that the emerging computer market appeared able to address.
During the 1950s the United States underwent a period of intense urban decay. Planners pointed to the construction of the interstate highway system as the culprit; people were able to buy houses at low prices farther and farther from their jobs in the downtown cores, leading to a flight of capital out of the cities. Only those cities with well-developed mass transit systems, like New York and Boston, seemed to be avoiding these problems. If mass transit was the solution, there was a need for a system that could be built in smaller cities at reasonable prices. This led, naturally, to the PRT concept.
PRT development was given a major boost in 1967 with the start of what would be delivered as the "HUD reports", a series of industry studies funded by the US Department of Housing and Urban Development (HUD), which gave strong support to the PRT concept. The publication of the reports in 1968 as Tomorrow's Transportation sparked off a wave of developments around the world, as it appeared PRT was going to be "the next big thing". By the early 1970s there were dozens of PRT efforts underway, with a wide variety of solutions from what were essentially small subway systems to more complex systems that the HUD reports referred to as "dial-a-cab".
Traffic Game
As part of the Expo '70 program in Osaka, starting in 1968 a university and industry team built a "traffic game" in the Automobile Industries Pavilion. The network consisted of a grid of guideways on a 5 m grid carrying ten two-seat electrically powered cars. The cars communicated with a central computer using wires under the "roadway", which allowed the computer to start and stop the vehicles at the intersections if there was crossing traffic. If there wasn't, the vehicles could travel through the intersection non-stop. This greatly increased passenger throughput by eliminating the unneeded stops that occur in a fixed-schedule system (like traffic lights), increasing the average vehicle speed.
In spite of being a show-floor demonstration, the system was quite advanced compared to most PRT systems then under study. Most systems had been designed in the era of second-generation computers (the PDP-8 was common), which were large and relatively slow. These systems normally limited themselves to planning the route in a fixed network with no stops, which greatly simplified the routing task. Vehicles on the network were assumed to be running at a fixed speed or stopped completely in emergencies; there were no on-route stops that could complicate timing. This meant that the guideway network could not be built into existing infrastructure like roads, where there are stops at crossing points along the route, and stations had to be built "off-line" to allow other vehicles to pass by at full speed.
The "traffic game" demonstration system was much more flexible. The computer system knew the location of all of the vehicles at all times, and was able to speed up and slow down vehicles as needed at fixed points in the network. This meant the guideway system could be built in a fashion much more similar to conventional roadways, without the need to separate tracks at crossing points, or building offline stations. Although these types of infrastructure would improve performance of the system, in areas of less demand or traffic they could be eliminated to save on capital costs.
When the "traffic game" system proved successful, the designers suggested that a similar but more complex system be presented at the 18th Tokyo Motor Show late in 1971. A formal presentation was submitted to the Ministry of International Trade and Industry (MITI) in July 1970, and accepted that autumn. Built between April and October 1971, the new system used 1:20 scale cars on a network representing a 300 m wide area of the Ginza district in Tokyo, with the centralized computer system able to control up to 1,000 vehicles.
CVS
Following the successful demonstration at the Tokyo Motor Show, MITI provided funding for development of a full-sized version of the same system at Higashimurayama, built on top of an existing car test track and former racetrack. Several other Japanese companies were already in the process of developing PRT systems, either self-designed or using licensed US designs, but the "traffic game" design, with its crossing guideway network and ability to deal with traffic made it uniquely advanced.
Basic track layout was completed by the middle of 1972 and construction of the short guideway section for the maintenance yard was completed by that autumn. Testing of frameless chassis started soon after. Construction of the rest of the track was completed by the autumn of 1973. The test track was 2 km long and about 200 m across, in the form of a large oval loop. In the center of the loop was a grid of crossing lines and several passenger stations at 100 m spacing, along with the maintenance and control facilities. The top portion of the loop was used for high-speed tests, while the bottom included two parallel tracks for lane-changing experiments. In total, the track contained 4.8 km of guideway.
The system originally envisioned a 100-vehicle mixed fleet, but rampant inflation in the 1970s led to budget cutbacks that were met by reducing the fleet to 60. The basic passenger vehicle emerged as a four-person design that looked like a minivan with no "hood" area for the engine. Since the emergency braking was extremely powerful, passengers were seated facing the rear, and Japanese law already precluded standing in automated vehicles. In some versions, two of the four seats could be folded to allow larger loads, like prams or bicycles. CVS also tested light cargo vehicles, carrying between 300 and 400 kg. Three types of cargo bodies were tried: a flatbed version for palletized cargo that was loaded using two conveyor belts at a trackside "station", one similar to a pickup truck with a box end, and an enclosed postal van.
CVS also developed a dual-mode version of the vehicle, which they demonstrated at Expo '75 on Okinawa in July 1975. This version allowed potential customers to purchase a vehicle and drive it like a normal car for short distances at low speeds using battery power. For longer distances and higher speeds, the car would be driven onto the guideway, which would provide the higher power and automated guidance needed for higher speeds. Expo also hosted a larger group rapid transit system from Kobe Steel, which was a licensed version of the Alden staRRcar being built by Boeing Vertol.
Cancellation
A two-phase testing program was carried out. Phase I covered the basic construction and operation at various speeds with large headways, in order to work on the mechanical design. This phase was completed in 1976, and was followed by Phase II, a "system demonstration" at one-second headways (considerably less than a car). Phase II testing was completed in 1978 and the consortium started looking for deployment opportunities, developing a serious proposal for an installation in Baltimore.
However, CVS ran into the same difficulties as the many other PRT systems of the era. A combination of falling gas prices, changing attitudes toward major public projects of this size, cost overruns in the demonstration system in Morgantown, and a lack of progress within the Urban Mass Transit Administration in the US all led to a souring of opinion on PRT systems. For example, the California Public Utilities Commission stated that its rail regulations applied to PRT, and these required railway-sized headways. The degree to which the CPUC would hold PRT to "light rail" and "rail fixed guideway" safety standards was not clear, because it could grant particular exemptions and revise regulations. Although by this point in time there were numerous fully developed systems ready to be installed, a lack of interest and funding meant no new PRT systems were installed, and only the much larger Canadian Bombardier ART and French VAL systems saw any deployment projects during the 1980s.
J. Edward Anderson, a long-time PRT advocate and critic, noted that the guideway was very large and had a major visual impact. However, many other systems used similar or larger guideways, including the Morgantown PRT, and the guideway was smaller than a conventional roadway. He also noted that the stations only had a single berth, which would limit capacity, and that the vehicles had a rough ride (they were unsprung).
Description
CVS vehicles were built like contemporary vans, with a chassis holding the mechanical systems and a metal monocoque body placed on top. They were 3 m long, 1.6 m wide and 1.85 m high, and weighed about 1 ton. Motive power was provided by a conventional 200 VAC electric motor driving the rear wheels, which also provided regenerative braking at up to 0.2 G. Conventional brakes could increase this to 0.4 G. Emergency stopping at up to 2 G could be provided through an explosively fired device. The standard four-seat passenger vehicle weighed 2000 lbs.
The guideway consisted of parallel steel I-beams providing the running surface, with a third steel channel running down the middle of the two providing the guide rail, emergency stopping surface, vehicle power and communications. Due to the rubber-on-steel running surfaces, the maximum climbing grade was about 10 degrees, and would be reduced in wet or snowy weather. In good weather the vehicles normally ran at 40 km/h in the low-speed sections, but could run as high as 80 km/h in high-speed sections.
Vehicle control used a moving block control system, similar to those used on automated railways. Each vehicle had a small computer on board that communicated with the external scheduling systems every 1/2 second or less, sending in its current position with a resolution of less than 2 m. The position was measured by small spiral antennas running in the guide track, which also sent position information to the scheduling computers at 1,200 bit/s over an inductive loop in the track.
In addition to the "quantum" computers on the vehicles, three separate control systems were tested: Hitachi built a system for control at high speed on the outer loop based on a HIDIC-350 computer, allowing speeds up to 60 km/h; Toshiba provided a system based on the TOSBAC-40 that ran the lower-speed network area at speeds under 40 km/h; and Fujitsu added a third system based on the FACOM 230-35 that supervised the other two and switched traffic between them.
Vehicles normally operated at a one-second headway, meaning a single lane could carry as many as 3,600 vehicles per hour, for 14,400 seats per hour. In operation it was expected to run at about 1/3 of this capacity. This placed CVS right in the middle of the PRT/GRT spectrum, between buses, which normally deliver about 3,000 passengers per direction per hour (PPDPH), and conventional subways, which operate around 50,000 PPDPH.
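The capacity arithmetic above follows directly from the headway: dividing the 3,600 seconds in an hour by the headway gives vehicles per hour. The sketch below checks those figures (the seat count and the 1/3 utilization factor come from the text; the function name is illustrative):

```python
def lane_capacity(headway_s: float, seats_per_vehicle: int):
    """Vehicles per hour and seats per hour for one guideway lane."""
    vehicles_per_hour = int(3600 / headway_s)  # 3,600 seconds per hour
    return vehicles_per_hour, vehicles_per_hour * seats_per_vehicle

vehicles, seats = lane_capacity(headway_s=1.0, seats_per_vehicle=4)
print(vehicles, seats)   # 3600 vehicles/h, 14400 seats/h
print(round(seats / 3))  # about 4800 seats/h at the expected 1/3 load
```

At a one-second headway this reproduces the 3,600 vehicles and 14,400 seats per hour quoted above; halving the headway would double both figures.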
Bibliography
Notes
References
Automated guideway transit
MITI projects
Personal rapid transit
PC-Write
PC-Write was a word processor and one of the first three widely popular software products sold via the marketing method that became known as shareware. It was originally written by Bob Wallace in early 1983.
Overview
PC-Write was a modeless editor, using control characters and special function keys to perform various editing operations. By default it accepted many of the same control-key commands as WordStar while adding many features of its own. It could produce plain ASCII text files, but there were also features that embedded control characters in a document to support automatic section renumbering, bold and italic fonts, and the like. A feature useful in list processing (as in AutoLISP) was its ability to find the matching open or close parenthesis "( )"; this matching operation also worked for the other paired characters: { }, [ ] and < >.
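The paired-character matching described above is conventionally implemented with a depth counter; the following is a generic sketch of the idea, not PC-Write's actual code:

```python
# Closing delimiter -> its opening partner, covering the pairs PC-Write matched.
PAIRS = {")": "(", "]": "[", "}": "{", ">": "<"}

def find_match(text: str, pos: int) -> int:
    """Given the index of an opening delimiter in text, return the index
    of its matching closing delimiter, or -1 if the text is unbalanced."""
    opener = text[pos]
    closer = {v: k for k, v in PAIRS.items()}[opener]
    depth = 0
    for i in range(pos, len(text)):
        if text[i] == opener:
            depth += 1
        elif text[i] == closer:
            depth -= 1
            if depth == 0:
                return i
    return -1

print(find_match("(defun f (x) (car x))", 0))  # 20, the final ")"
```

Called on the opening parenthesis of a Lisp form, it returns the index of the closing parenthesis, which is exactly the operation that made the feature useful for list processing.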
Lines beginning with particular control characters and/or a period (.) contained commands that were evaluated when the document was printed, e.g. to specify margin sizes, select elite or pica type (typically implemented via printer escape sequences), or to specify the number of lines of text that would fit on a page.
While Quicksoft distributed copies of PC-Write for $10, the company encouraged users to make copies of the program for others in an early example of shareware. Quicksoft asked those who liked PC-Write to send it $75. The sum provided a printed manual (notable for its many pictures of cats, drawn by Megan Dana-Wallace), telephone technical support, source code, and a registration number that the user entered into his copy of the program. If anyone else paid the company $75 to purchase an already-registered copy of the software, the company paid a $25 commission back to the original registrant, and then issued a new number to the new buyer, thereby giving a financial incentive for buyers to distribute and promote the software.
A configuration file allowed customizing PC-Write, including remapping the keyboard. Later versions of the registered (paid-for) version of the program included a thesaurus (which was not shareware) along with the editor. In addition, vocabularies were available in other languages, such as German. Utilities were also provided to convert PC-Write files to and from other file formats that were common at the time. One limitation of the software was its inability to print directly from memory: because the print function was a separate subprogram, a document had to be saved to a file before it could be printed.
Bob Wallace found that running Quicksoft used so much of his time he could not improve the PC-Write software. In early 1991, he sold the firm to another Microsoft alumnus, Leo Nikora, the original product manager for Windows 1.0 (1983–1985). Wallace returned to full programming and an updated version of PC-Write was released in June 1991.
One unusual feature of PC-Write was its implementation of free form editing: it could copy and paste a block of text anywhere. For instance, if one had a block of information, one per line, in the format Name (spaces) Address, one could highlight only the addresses section and paste that into the right-hand part of a page. Today, Emacs and jEdit are also capable of performing this function.
When the market changed to multi-program software (office suites combining word processing, spreadsheet, and database programs), Quicksoft went out of business in 1993.
The first Trojan horse (appearing in 1986), PC-Write Trojan, masqueraded as "version 2.72" of the shareware word processor PC-Write. Quicksoft did not release a version 2.72.
PC-Write had one of the first real-time, as-you-type spell checkers; earlier spell checkers only worked in batch mode.
The Brown Bag Word Processor is based on PC-Write's source code, licensed by Brown Bag Software, with some minor modifications and additions.
Reception
PC Magazine stated that version 1.3 of "PC-Write rates extremely well and compares favorably with many word processors costing much more". It cited very fast performance, good use of color, and availability of source code as advantages, while lack of built-in support for printing bold or underline and keyboard macros was a disadvantage. Compute! complimented the software's "clean implementation of standard editing features", cited its "truly staggering" level of customization, and after mentioning a few flaws stated that they should be "viewed in context of the program's overall excellence".
See also
Andrew Fluegelman
Jim Knopf, also known as Jim Button
PC-File
PC-Talk
References
External links
PC-WRITE: Quality Word Processing at a Price That's Hard to Beat Review of PC-Write in COMPUTERS and COMPOSITION 2(4), August 1985, page 78.
1983 software
Shareware
Word processors
DOS text editors
ISO/IEC 19770
International standards in the ISO/IEC 19770 family of standards for IT asset management (ITAM) address both the processes and technology for managing software assets and related IT assets. Broadly speaking, the standard family belongs to the set of Software Asset Management (or SAM) standards and is integrated with other Management System Standards.
Day-to-day management of ISO/IEC 19770 comes under ISO/IEC JTC1/SC7/WG21, or Working Group 21 (WG21), chaired by Ron Brill as convener with Trent Allgood as secretary. WG21 is responsible for developing and improving these standards and ensuring they meet market needs.
Purpose
The ISO 19770 family standardizes ITAM processes and data formats within an organization, building on related ISO/IEC standards.
The objective of the standard is to give organizations of all sizes information and assistance to minimize the risk and cost of managing IT assets. Through implementation, these organizations can acquire a competitive advantage through:
Management of the risk of interrupted IT service delivery, breach of legal agreements and audit;
Reducing overall software costs through the implementation of various processes; and
Better information availability leading to improved decision-making based on accurate data.
The major parts of this ITAM standard are detailed below.
ISO/IEC 19770-1 is a process framework to enable an organization to prove that it is performing ITAM to a standard sufficient to satisfy corporate governance requirements and ensure effective support for IT service management overall.
ISO/IEC 19770-2 provides an ITAM data standard for software identification tags ("SWID").
ISO/IEC 19770-3 provides an ITAM data standard for software entitlements, including usage rights, limitations and metrics ("ENT").
ISO/IEC 19770-4 provides an ITAM data standard for Resource Utilization Measurement ("RUM")
ISO/IEC 19770-5 provides the overview and vocabulary.
ISO/IEC 19770-1: Processes
ISO/IEC 19770-1 is a framework of ITAM processes to enable an organization to prove that it is performing software asset management to a standard sufficient to satisfy corporate governance requirements and ensure effective support for IT service management overall. ISO/IEC 19770-1:2017 specifies the requirements for the establishment, implementation, maintenance and improvement of a management system for IT asset management (ITAM), referred to as an “IT asset management system” (ITAMS).
While ISO 55001:2014 specifies the requirements for the establishment, implementation, maintenance and improvement of a management system for asset management, referred to as an “asset management system”, it is primarily focused on physical assets with little provision for the management of software assets. There are a number of characteristics of IT assets which create additional or more detailed requirements. As a result of these characteristics of IT assets, the 19770-1 management system for IT assets has explicit additional requirements dealing with:
controls over software modification, duplication and distribution, with particular emphasis on access and integrity controls;
audit trails of authorizations and of changes made to IT assets;
controls over licensing, underlicensing, overlicensing, and compliance with licensing terms and conditions;
controls over situations involving mixed ownership and responsibilities, such as in cloud computing and with ‘Bring-Your-Own-Device’ (BYOD) practices; and
reconciliation of IT asset management data with data in other information systems when justified by business value, in particular with financial information systems recording assets and expenses.
Updates to 19770-1
The first generation was published in 2006.
The second generation was published in 2012. It retained the original content (with only minor changes) but split the standard into four tiers which can be attained sequentially. These tiers are:
Tier 1: Trustworthy Data
Tier 2: Practical Management
Tier 3: Operational Integration
Tier 4: Full ISO/IEC ITAM Conformance
ISO 19770-1 Edition 3 (current version)
The most recent version, known as ISO 19770-1:2017 and published in December 2017, specifies the requirements for the establishment, implementation, maintenance, and improvement of a management system for IT asset management (ITAM), referred to as an IT asset management system. ISO 19770-1:2017 was a major update that rewrote the standard to conform to the ISO Management System Standards (MSS) format. The tiered structure from 19770-1:2012 was moved to an appendix within the updated standard.
Intended Users
This document can be used by any organization and can be applied to all types of IT assets. The organization determines to which of its IT assets this document applies.
This document is primarily intended for use by:
those involved in the establishment, implementation, maintenance, and improvement of an IT asset management system;
those involved in delivering IT asset management activities, including service providers;
internal and external parties to assess the organization’s ability to meet legal, regulatory and contractual requirements and the organization’s own requirements.
Preview of 19770-1
An overview of the standard is available from ISO in English.
ISO/IEC 19770-2: software identification tag
ISO/IEC 19770-2 provides an ITAM data standard for software identification (SWID) tags. Software ID tags provide authoritative identifying information for installed software or other licensable item (such as fonts or copyrighted papers).
Overview of SWID tags in use
There are three primary methods that may be used to ensure SWID tags are available on devices with installed software:
SWID tags created by a software creator or publisher which are installed with the software are the most authoritative for identification purposes.
Organizations can create their own SWID tags for any software title that does not include a tag, allowing the organization to more accurately track software installations in their network environment
Third party discovery tools may optionally add tags to a device as software titles are discovered
Providing accurate software identification data improves organizational security, and lowers the cost and increases the capability of many IT processes such as patch management, desktop management, help desk management, software policy compliance, etc.
Discovery tools and processes can utilize SWID tag data to determine the normalized names and values associated with a software application, ensuring that all tools and processes used by an organization refer to software products with exactly the same names and values.
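A SWID tag is a small XML document identifying one software product. The sketch below assembles an illustrative tag with Python's standard library; the product, publisher, and identifier values are hypothetical, and the element and attribute names follow the general shape of the 19770-2:2015 schema rather than constituting a validated instance:

```python
import xml.etree.ElementTree as ET

# Namespace of the ISO/IEC 19770-2:2015 schema.
NS = "http://standards.iso.org/iso/19770/-2/2015/schema.xsd"

# Root element: one SoftwareIdentity per installed product.
tag = ET.Element("SoftwareIdentity", {
    "xmlns": NS,
    "name": "Example Editor",           # hypothetical product name
    "tagId": "example.com-editor-9.1",  # hypothetical unique tag identifier
    "version": "9.1",
})
# Entity records who created the tag and/or the software.
ET.SubElement(tag, "Entity", {
    "name": "Example Corp",             # hypothetical publisher
    "regid": "example.com",
    "role": "tagCreator softwareCreator",
})
print(ET.tostring(tag, encoding="unicode"))
```

A discovery tool that finds this file alongside the installed software can report the product's normalized name, version, and publisher without guessing from file paths or registry entries.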
Standards development information
This standard was first published in November 2009. A revision of this standard was published in October 2015.
Steve Klos is the editor of 19770-2 and works for 1E, Inc as a SAM Subject Matter Expert.
Non-profit organizational support
In 2009, a non-profit organization called TagVault.org was formed under IEEE-ISTO to promote the use of SWID tags. TagVault.org acts as a registration and certification authority for ISO/IEC 19770-2 software identification tags (SWID tags) and provides tools and services allowing all SAM ecosystem members to take advantage of SWID tags faster, at lower cost, and with more industry compatibility than would otherwise be possible. SWID tags can be created by anyone, so individuals and organizations are not required to be part of TagVault.org to create or distribute tags.
Commercial organizational support
Numerous Windows installation packaging tools utilize SWID tags including:
Caphyon's Advanced Installer
Flexera Software's InstallShield
Flexera Software's InstallAnywhere
Open Source - Windows Installer XML Toolset (WiX)
Many software discovery tools already utilize SWID tags, including Altiris, Aspera SmartCollect, DeskCenter Management Suite, Belarc's BelManage, Sassafras Software's K2-KeyServer, Snow Inventory, CA Technologies discovery tools, Eracent's EnterpriseAM, Flexera Software's FlexNet Manager Platform, HP's Universal Discovery, IBM Endpoint Manager, Microsoft's System Center 2012 R2 Configuration Manager, and Loginventory.
Adobe has released multiple versions of their Creative Suites and Creative Cloud products with SWID tags.
Symantec has also released multiple products that include SWID tags and is committed to helping move the software community to a more consistent and normalized approach to software identification and eventually to a more automated approach to compliance.
Microsoft Corporation has been adding SWID tags to all new releases of software products since Windows 8 was released.
IBM started shipping tags with some software products in early 2014, but as of November, all releases of IBM software include SWID tags. This equates to approximately 300 product releases a month that include SWID tags.
Governmental support
The US federal government has identified 19770-2 SWID tags as an important aspect of the efforts necessary to manage compliance, logistics and security software processes. The 19770-2 standard is included on the US Department of Defense Information Standards Registry (DISR) as an emerging standard as of September 2012. The National Institute of Standards and Technology (NIST) and the National Cybersecurity Center of Excellence (NCCoE) in 2015 discussed the need for SWIDs in the marketplace.
Standards development organization support
The Trusted Computing Group (TCG) is developing a standard TNC SWID Messages and Attributes for IF-M Specification that utilizes tag data for security purposes.
The National Cybersecurity Center of Excellence (NCCoE) has documented the Software Asset Management Continuous Monitoring building block that specifies how SWID tags are used for the near real-time identification of software.
The National Institute of Standards and Technology (NIST) is in the process of creating documentation that specifies how SWID tags will be used by governmental organizations including the Department of Homeland Security. David Waltermire presented information describing the NIST Security Automation Program and how SWID tags can support that effort.
The National Institute of Standards and Technology (NIST) published "Guidelines for the Creation of Interoperable Software Identification (SWID) Tags", NISTIR 8060, April 2016.
Preview of ISO 19770-2:2015
An overview of the standard is available from ISO in English.
ISO/IEC 19770-3: software entitlement schema (ENT)
This part of ISO/IEC 19770 provides a technical definition of an XML schema that can encapsulate the details of software entitlements, including usage rights, limitations and metrics.
The primary intentions of 19770-3 are:
To provide a basis for common terminology to be used when describing entitlement rights, limitations and metrics
To provide a schema which allows effective description of rights, limitations and metrics attaching to a software license.
The specific information provided by an entitlement schema (ENT) may be used to help ensure compliance with license rights and limits, to optimize license usage and to control costs. Though ENT creators are encouraged to provide data that allow for automatic processing, it is not mandated that the data be automatically measurable. The data structure is intended to be capable of containing any kind of terms and conditions included in a software license agreement.
This part of ISO/IEC 19770 supports ITAM processes as defined in ISO/IEC 19770-1. It is also designed to work together with software identification tags as defined in ISO/IEC 19770-2. Standardization in the field of software entitlements provides uniform, measurable data for both the license compliance and license optimization processes of SAM practice.
This part of ISO/IEC 19770 does not provide requirements or recommendations for processes related to software asset management or ENTs. The software asset management processes are in the scope of ISO/IEC 19770-1.
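The kind of entitlement data described above can be pictured with a small sketch. The element and attribute names below are invented for illustration (they are not the actual ISO/IEC 19770-3 schema), but they suggest the rights, limitations and metrics an ENT record encapsulates:

```python
import xml.etree.ElementTree as ET

# Illustrative only: these element and attribute names are invented to
# suggest the kind of data an ENT record carries (rights, limitations,
# metrics); they are not the actual ISO/IEC 19770-3 schema.
ent = ET.Element("Entitlement", productName="ExampleApp",
                 licensorName="Example Corp")
ET.SubElement(ent, "RightsGranted", use="install-and-run")
ET.SubElement(ent, "Limit", metric="concurrent-users", value="25")
print(ET.tostring(ent, encoding="unicode"))
```

Because a limit such as the user count above is expressed as structured data rather than contract prose, a SAM tool could compare it automatically against measured deployments.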
Standards development information
The ISO/IEC 19770-3 Other Working Group ("OWG") was convened by teleconference call on 9 September 2008.
John Tomeny of Sassafras Software Inc served as the convener and lead author of the ISO/IEC 19770-3 "Other Working Group" (later renamed the ISO/IEC 19770-3 Development Group). Mr. Tomeny was appointed by Working Group 21 (ISO/IEC JTC 1/SC 7/WG 21) together with Krzysztof Bączkiewicz of Eracent, who served as Project Editor concurrent with Mr. Tomeny's leadership. In addition to WG21 members, other participants in the 19770-3 Development Group served as "individuals considered to have relevant expertise by the Convener".
Jason Keogh of 1E, part of the delegation from Ireland, is the current editor of 19770-3.
ISO/IEC 19770-3 was published on April 15, 2016.
Principles
This part of ISO/IEC 19770 has been developed with the following practical principles in mind:
Maximum possible usability with legacy entitlement information
The ENT, or software entitlement schema, is intended to provide the maximum possible usability with existing entitlement information, including all historical licensing transactions. While the specifications provide many opportunities for improvement in entitlement processes and practices, they must be able to handle existing licensing transactions without imposing requirements which would prevent such transactions being codified into ENT records.
Maximum possible alignment with the software identification tag specification (ISO/IEC 19770-2)
This part of ISO/IEC 19770 (entitlement schema) is intended to align closely with part 2 of the standard (software identification tags). This should facilitate both understanding and their joint use. Furthermore, any of the elements, attributes, or other specifications of part 2 which the ENT creator may wish to utilize may be used in this part as well.
Stakeholder benefits
It is intended that this standardized schema will be of benefit to all stakeholders involved in the creation, licensing, distribution, release, installation, and ongoing management of software and software entitlements.
Benefits to software licensors who provide ENTs include, but are not limited to:
Immediate software customer recognition of details of the usage rights derived from their software entitlement.
Ability to specify details to customers that allow software assets to be measured and reported for license compliance purposes.
Increased awareness of software license compliance issues on the part of end-customers.
Improved software customer relationships through quicker and more effective license compliance audits.
Benefits to SAM tool providers, deployment tool providers, re-sellers, value-added re-sellers, packagers and release managers include, but are not limited to:
Receipt of consistent and uniform data from software licensors and ENT creators.
More consistent and structured entitlement information, supporting the use of automated techniques to determine the need for remediation of software licensing.
Improved reporting from additional categorization made possible by the use of ENTs.
Improved SAM tool entitlement reconciliation capabilities resulting from standardization in location and format of software entitlement data.
Ability to deliver value-added functionality for compliance management through the consumption of entitlement data.
The benefits for software customers, SAM practitioners, IT support professionals and end users of a given software configuration item include, but are not limited to:
Receipt of consistent and uniform data from software licensors, resellers and SAM tools providers.
More consistent and structured entitlement information supporting the use of automated techniques to determine the need for remediation of software licensing.
Improved reporting from additional categorization made possible by the use of ENTs.
Improved SAM and software license compliance capabilities stemming from standardized, software licensor-supplied, ISO/IEC 19770-2 software identification tags to reconcile with these ENTs.
Improved ability to avoid software license under-procurement or over-procurement with subsequent cost optimization.
Standardized usage across multiple platforms, rendering heterogeneous computing environments more manageable.
The ITAM Review produced a podcast with the 19770-3 project editor on how end-user organizations can leverage this standard to their benefit.
ISO/IEC 19770-3: Entitlement Management
ISO 19770-3 relates to Entitlement tags - encapsulations of licensing terms, rights and limitations in a machine-readable, standardized format. The transport method (XML, JSON, etc.) is not defined, rather the meaning and name of specific data stores is outlined to facilitate a common schema between vendors and customers and tools providers.
The first commercial SAM tool to implement ISO 19770-3 was AppClarity by 1E; K2 by Sassafras Software has since followed. As of February 2018, other tool vendors had indicated interest in the standard but had not yet implemented it.
It is of note that Jason Keogh, editor of the released 19770-3, works for 1E, and John Tomeny (initial editor of 19770-3) worked for Sassafras Software.
19770-3 was released in 2016 and can be downloaded from the main ISO web store.
ISO/IEC 19770-4: Resource Utilization Measurement
This document provides an International Standard for Resource Utilization Measurement (RUM). A RUM is a standardized structure containing usage information about the resources that are related to the use of an IT asset. A RUM will often be provided in an XML data file, but the same information may be accessible through other means depending on the platform and the IT asset/product.
This document contains information structures that are designed to align with the identification information defined in ISO/IEC 19770-2, and with the entitlement information defined in ISO/IEC 19770-3. When used together, these three types of information have the capability to significantly enhance and automate the processes of IT asset management.
This document supports the IT asset management processes defined in ISO/IEC 19770-1. This document also supports the other parts of the ISO/IEC 19770 series of standards that define information structures.
The RUM is specifically designed to be general-purpose and usable in a wide variety of situations. Like other information structures defined in the ISO/IEC 19770 series of standards, the consumer of a RUM may be an organization and/or a tool or other consumers. In contrast to the other information structures in the ISO/IEC 19770 series, the entity creating RUM data on a periodic basis will likely be an IT asset or an automation tool monitoring an IT asset.
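As a rough illustration of such a periodic record, the sketch below builds a RUM-like XML fragment. The element and attribute names are invented (they are not the actual ISO/IEC 19770-4 schema); they only suggest the shape of the usage data a monitoring tool might emit each measurement interval:

```python
import xml.etree.ElementTree as ET

# Illustrative only: invented names, not the real ISO/IEC 19770-4 schema.
# A monitoring tool might emit one such record per measurement interval.
rum = ET.Element("ResourceUtilization", assetId="example-db-server")
ET.SubElement(rum, "Metric", name="peak-concurrent-users", value="42")
ET.SubElement(rum, "Interval",
              start="2024-01-01T00:00:00Z", end="2024-01-01T01:00:00Z")
print(ET.tostring(rum, encoding="unicode"))
```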
The definition of a RUM will benefit all stakeholders involved in the creation, licensing, distribution, releasing, installation, and on-going management of IT assets. Key benefits associated with a RUM for three specific groups of stakeholders include:
IT asset users
— RUM data will typically be generated and processed by IT assets and automation tools, within the consumer's enterprise boundary, for purposes of IT asset compliance and optimization;
— RUM data is human-readable and can provide improved visibility into resource utilization within IT assets, independent of vendor- or third-party-supplied tools;
— the ability to combine identification, entitlement, and resource utilization information together to perform quantitative and authoritative IT asset management, for example, to meet compliance requirements;
— a much-improved ability to perform IT asset management in support of green data center strategies such as optimization of the use of power and air conditioning;
IT asset manufacturers
— the ability to consistently and authoritatively generate resource utilization information for consumption by a central facility that is maintained by the creator, or one or more third-party tools, or by the IT asset users;
— the ability to support multiple instances and types of third-party tools with a single set of functionality within the IT asset;
— the ability to offer a service to track real-time IT asset usage in the field and, when combined with identification and entitlement information, the ability to give advance warning as resource limits are approached;
— the ability to offer an alternative approach to asset utilization measurement to traditional techniques that employ key-based, or platform-restricted licenses;
Tool vendors
— the ability to support multiple IT assets, and types of IT asset, without having to create and maintain unique instrumentation that is associated with each asset;
— the ability to more easily aggregate usage information across multiple instances of an asset;
— a much-improved ability to track resource utilization and IT assets in near real-time.
Preview ISO/IEC 19770-4: Resource Utilization Measurement
An overview of the standard is available from ISO in English.
ISO/IEC 19770-5: overview and vocabulary
ISO/IEC 19770-5:2015 provides an overview of ITAM, which is the subject of the ISO/IEC 19770 family of standards, and defines related terms. ISO/IEC 19770-5:2015 is applicable to all types of organization (e.g. commercial enterprises, government agencies, non-profit organizations).
ISO/IEC 19770-5:2015 contains:
an overview of the ISO/IEC 19770 family of standards;
an introduction to SAM;
a brief description of the foundation principles and approaches on which SAM is based; and
consistent terms and definitions for use throughout the ISO/IEC 19770 family of standards.
Free copy of ISO/IEC 19770-5
A free copy of the overview and vocabulary is available from ISO.
ISO/IEC 19770-8: Guidelines for mapping of industry practices to/from the ISO/IEC 19770 family of standards
ISO/IEC 19770-8 defines requirements, guidelines, formats and approaches for use when producing a mapping document that defines how industry practices map to/from the ISO/IEC 19770 series. The 19770-8:2020 edition is focused solely on mappings to/from either the second edition of ISO/IEC 19770-1, published in 2012, or the third edition, published in 2017.
There are currently two mappings publicly available using the 19770-8:2020 standard:
PROZM ITAM Framework to/from ISO/IEC 19770-1:2017
SAMAC 4.1 to/from ISO/IEC 19770-1:2012
References
External links
Official WG21 web site
Business Software Alliance
International Association of Information Technology Asset Managers
National Cybersecurity Center of Excellence
National Institute for Standards and Technology
Trusted Computing Group
ITAM.ORG - Organization for IT Asset Management Professionals and ITAM Providers
Australian Software Asset Management Association (ASAMA)
Information technology management
19770 | Operating System (OS) | 1,019 |
SystemRescue
SystemRescue (previously known as "SystemRescueCD") is a Linux distribution for x86-64 and x86 computers. The primary purpose of SystemRescue is to repair unbootable or otherwise damaged computer systems after a system crash. SystemRescue is not intended to be used as a permanent operating system. It runs from a Live CD, a USB flash drive or any type of hard drive. It was designed by a team led by François Dupoux and, since version 6.0, has been based on Arch Linux, which also brought systemd as its init system.
Features
SystemRescue can display graphics using the Linux framebuffer, for tools such as GParted. It offers options such as connecting to the Internet through an ADSL modem or Ethernet, and includes graphical web browsers such as Mozilla Firefox.
SystemRescue features include:
GNU Parted and GParted to partition disks and resize partitions, including FAT32 and NTFS
fdisk to edit the disk partition table
PartImage - disk imaging software which copies only used sectors
TestDisk - to recover lost partitions, and PhotoRec to recover lost data
smartmontools - a S.M.A.R.T. suite for HDD health reporting and data loss prevention
ddrescue - to extract recoverable data from a physically damaged HDD and list damaged sectors
FSArchiver - a system tool that allows you to save the contents of a file-system to a compressed archive file
nwipe - a secure data erasure tool (fork of DBAN) for hard drives to remove data remanence; supports the Gutmann method plus other standard overwriting algorithms and patterns
A CD and DVD burner - dvd+rw-tools
Two bootloaders - GRUB and SYSLINUX
Web browsers - Firefox, ELinks
File manager - emelFM2
Archiving and unarchiving abilities
File system tools - file system create, delete, resize, move
Support for many file systems, including full NTFS read/write access (via NTFS-3G) as well as FAT32 and Mac OS HFS
Support for Intel x86 and PowerPC systems, including Macs
Ability to create a boot disk for operating systems
Support for Windows Registry editing and password changing from Linux
Can boot FreeDOS, Memtest86+, hardware diagnostics and other boot disks from a single CD
Burning DVDs and system backup
The CD can also boot from a customized DVD which has almost 4.6 GB of free space for backed-up files. This makes it good for storing all the information that is needed from a hard drive and then formatting it. To burn the DVD, one must burn the image file first and then add all the separate files and folders. This should not affect the general way in which the DVD works. The DVD can then be used to insert those files into the hard drive using Midnight Commander.
See also
Parted Magic
List of bootable data recovery software
References
External links
Arch-based Linux distributions
Operating system distributions bootable from read-only media
Live USB
Free security software
Free data recovery software
Linux distributions | Operating System (OS) | 1,020 |
IBM PL/S
PL/S, short for Programming Language/Systems, is a "machine-oriented" programming language based on PL/I. It was developed by IBM in the late 1960s, under the name Basic Systems Language (BSL), as a replacement for assembly language on internal software projects; it included support for inline assembly and explicit control over register usage.
Early projects using PL/S were the batch utility, IEHMOVE, and the Time Sharing Option of MVT, TSO.
By the 1970s, IBM was rewriting its flagship operating system in PL/S. Although users frequently asked IBM to release PL/S for their use, IBM refused, saying that the product was proprietary. Their concern was that open PL/S would give competitors, Amdahl, Itel (National Advanced Systems), Storage Technology Corporation, Trilogy Systems, Magnuson Computer Systems, Fujitsu, Hitachi, and other PCM vendors, a competitive advantage. However, even though they refused to make a compiler available, they shipped the PL/S source code for large parts of the OS to customers, many of whom thus became familiar with reading it.
Closed PL/S meant that only IBM could easily modify and enhance the operating system.
PL/S was succeeded by PL/S II, PL/S III and PL/AS (Programming Language/Advanced Systems), and then PL/X (Programming Language/Cross Systems). PL/DS (Programming Language/Distributed Systems) was a closely related language used to develop the DPPX operating system, and PL/DS II was a port of the S/370 architecture for the DPPX/370 port.
As the market for computers and software shifted away from IBM mainframes and MVS, IBM recanted and has offered the current versions of PL/S to select customers (ISVs through the Developer Partner program.)
Fujitsu "Developments"
A fully compliant PL/S compiler was "developed" by Fujitsu Ltd in the late-1970s, adapting IBM's PL/I Optimizer compiler source code as its starting point. This PL/S compiler was used internally by Fujitsu, and also by Fujitsu's external affiliates. Whether IBM was aware of this unlicensed use of its intellectual property is not known. The phase names of this PL/S compiler were the same as the corresponding phase names of IBM's PL/I Optimizer compiler, with the initial "I" (IBM) in the phase name being replaced by an initial "J" (Japan). All IBM copyright notices within the modules were deleted to hide its true origin and ownership.
See also
PL360
High-level assembler
References
BSL Language Specifications, International Business Machines Corp., 1968, Z28-6642-0. Note that BSL was renamed PL/S and replaced by PL/S II
W.R. Brittenham, "PL/S, Programming Language/Systems", Proc GUIDE Intl, GUIDE 34, May 14, 1972, pp. 540–556
W.R. Brittenham and B.F. Melkun, "The Systems Programming Language Problem", Proceedings of the IFIP Working Conference on Machine Oriented Higher Level Languages, Trondheim, Norway, August 29–31, 1973, pp. 29–47. Amsterdam: North-Holland Publishing Co.; New York: American Elsevier, 1974. This paper explores the technical and psychological problems encountered in implementing PL/S. The language and compiler are described. The discussion that followed presentation of the paper is included.
Gio Wiederhold and John Ehrman, "Inferred SYNTAX and SEMANTICS of PL/S", Proceedings of the SIGPLAN symposium on Languages for system implementation 1971, in SIGPLAN Notices 6(10) October 1971
Guide to PL/S II, International Business Machines Corp., 1974. GC28-6794-0 Note that this manual is very out of date with respect to the PL/X language in use today.
PL/I programming language family
PL S
Systems programming languages
IBM System/360 mainframe line | Operating System (OS) | 1,021 |
Open-source robotics
Open-source robotics (OSR) is the branch of robotics in which the physical artifacts of robots are shared under the principles of the open design movement. It makes use of open-source hardware and free and open-source software, providing blueprints, schematics, and source code. The term usually means that information about the hardware is easily discerned, so that others can make it from standard commodity components and tools, coupling it closely to the maker movement and open science.
Advantages
Long-term availability. Many non-open robots and components, especially at the hobbyist level, are designed and sold by tiny startups which can disappear overnight, leaving customers without support. Open-source systems are guaranteed to have their designs available forever so communities of users can, and do, continue support after the manufacturer has disappeared.
Avoiding lock-in. A company relying on any particular non-open component exposes itself to business risk that the supplier could ratchet up prices after it has invested time and technology building on it. Open hardware can be manufactured by anyone, creating competition or at least the potential for competition, both of which reduce this risk.
Interchangeable software and/or hardware with common interfaces.
Ability to modify and fork designs more easily for customisation.
Scientific reproducibility - guarantees that other labs can replicate and extend work, leading to increased impact, citations and reputation for the designer.
Lower cost. The cost of a robot can be decreased dramatically when all components and tools are commodities. No component seller can hold a project to ransom by ratcheting up the price of a critical component, as competing suppliers can easily be interchanged.
Drawbacks
For commercial organisations, open-sourcing their own designs obviously means they can no longer make large profits through the traditional engineering business model of acting as the monopoly manufacturer or seller, because the open design can be manufactured and sold by anyone including direct competitors. Profit from engineering can come from three main sources: design, manufacturing, and support. As with other open source business models, commercial designers typically make profit via their association with the brand, which may still be trademarked. A valuable brand allows them to command a premium for their own manufactured products, as it can be associated with high quality and provide a quality guarantee to customers. The same brand is also used to command a premium on associated services, such as providing installation, maintenance, and integration support for the product. Again customers will typically pay more for the knowledge that this support is provided directly by the original designer, who therefore knows the product better than competitors.
Some customers associate open source with amateurism, the hacker community, low quality and poor support. Serious companies using this business model may need to work harder to overcome this perception by emphasising their professionalism and brand to differentiate themselves from amateur efforts.
Examples
This is a non-exhaustive list of open-source robots: Plen2, the Eiro robot, Poppy (a complete humanoid robot), and InMoov.
Popularity
A first sign of the increasing popularity of building robots yourself can be found with the DIY community. What began with small competitions for remotely operated vehicles (e.g. robot combat) soon developed into the building of autonomous telepresence robots such as Sparky, and then true robots (able to make decisions themselves) such as the Open Automaton Project and Leaf Project. Certain commercial companies now also produce kits for making simple robots.
A recurring problem in the community has been projects, especially on Kickstarter, promising to fully open-source their hardware and then reneging on this promise once funded, in order to profit from being the sole manufacturer and seller.
Popular applications include:
Domestic tasks: vacuum cleaning, floor washing and automated mowing.
The use of RepRaps and other 3-D printers for rapid prototyping, art, toy manufacturing, educational aids, and open-source appropriate technology
metalworks automation
building electronic circuitry (printing and component placing of PCB-boards)
transportation, i.e. self-driving vehicles
combat robots, including manual controlled and autonomous contests
See also
Accelerometer
Bluetooth
How-to
Internet of Things
Khepera mobile robot III
Maker culture
Modular design
Open-source computing hardware
OpenStructures
Phase-change material
Pulse-width modulation (PWM)
Robot software
Robotics suite
WiDi
References
Robotics | Operating System (OS) | 1,022 |
Loader (computing)
In computer systems a loader is the part of an operating system that is responsible for loading programs and libraries. It is one of the essential stages in the process of starting a program, as it places programs into memory and prepares them for execution. Loading a program involves reading the contents of the executable file containing the program instructions into memory, and then carrying out other required preparatory tasks to prepare the executable for running. Once loading is complete, the operating system starts the program by passing control to the loaded program code.
All operating systems that support program loading have loaders, apart from highly specialized computer systems that only have a fixed set of specialized programs. Embedded systems typically do not have loaders, and instead, the code executes directly from ROM or similar. In order to load the operating system itself, as part of booting, a specialized boot loader is used. In many operating systems, the loader resides permanently in memory, though some operating systems that support virtual memory may allow the loader to be located in a region of memory that is pageable.
In the case of operating systems that support virtual memory, the loader may not actually copy the contents of executable files into memory, but rather may simply declare to the virtual memory subsystem that there is a mapping between a region of memory allocated to contain the running program's code and the contents of the associated executable file. (See memory-mapped file.) The virtual memory subsystem is then made aware that pages within that region of memory need to be filled on demand if and when program execution actually hits those areas of unfilled memory. This may mean parts of a program's code are not actually copied into memory until they are actually used, and unused code may never be loaded into memory at all.
Responsibilities
In Unix, the loader is the handler for the system call execve(). The Unix loader's tasks include:
validation (permissions, memory requirements etc.);
copying the program image from the disk into main memory;
copying the command-line arguments on the stack;
initializing registers (e.g., the stack pointer);
jumping to the program entry point (_start).
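The steps above can be observed from user space. The sketch below (Python on a Unix-like system, assuming /bin/echo exists) forks and then asks the kernel's loader, via execve(), to replace the child's process image:

```python
import os

pid = os.fork()
if pid == 0:
    # Child: the loader validates /bin/echo, maps its image into memory,
    # places argv on the new stack, and jumps to its entry point.
    os.execve("/bin/echo", ["echo", "loaded"], {})
    os._exit(127)  # reached only if execve() itself failed
else:
    _, status = os.waitpid(pid, 0)
    print("child exit code:", os.waitstatus_to_exitcode(status))
```

A successful execve() never returns: the old program image is gone, which is why the fallback _exit(127) runs only on failure.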
In Microsoft Windows 7 and above, the loader is the LdrInitializeThunk function contained in ntdll.dll, that does the following:
initialisation of structures in the DLL itself (i.e. critical sections, module lists);
validation of executable to load;
creation of a heap (via the function RtlCreateHeap);
allocation of environment variable block and PATH block;
addition of executable and NTDLL to the module list (a doubly-linked list);
loading of KERNEL32.DLL to obtain several important functions, for instance BaseThreadInitThunk;
loading of executable's imports (i.e. dynamic-link libraries) recursively (check the imports' imports, their imports and so on);
in debug mode, raising of system breakpoint;
initialisation of DLLs;
garbage collection;
calling NtContinue on the context parameter given to the loader function (i.e. jumping to RtlUserThreadStart, that will start the executable)
Relocating loaders
Some operating systems need relocating loaders, which adjust addresses (pointers) in the executable to compensate for variations in the address at which loading starts. The operating systems that need relocating loaders are those in which a program is not always loaded into the same location in the address space and in which pointers are absolute addresses rather than offsets from the program's base address. Some well-known examples are IBM's OS/360 for their System/360 mainframes, and its descendants, including z/OS for the z/Architecture mainframes.
OS/360 & Derivatives
In OS/360 and descendant systems, the (privileged) operating system facility is called IEWFETCH, and is an internal component of the OS Supervisor, whereas the (non-privileged) LOADER application can perform many of the same functions, plus those of the Linkage Editor, and is entirely external to the OS Supervisor (although it certainly uses many Supervisor services).
IEWFETCH utilizes highly specialized channel programs, and it is theoretically possible to load and to relocate an entire executable within one revolution of the DASD media (about 16.6 ms maximum, 8.3 ms average, on "legacy" 3,600 rpm drives). For load modules which exceed a track in size, it is also possible to load and to relocate the entire module without losing a revolution of the media.
IEWFETCH also incorporates facilities for so-called overlay structures, and which facilitates running potentially very large executables in a minimum memory model (as small as 44 KB on some versions of the OS, but 88 KB and 128 KB are more common).
The OS's nucleus (the always resident portion of the Supervisor) itself is formatted in a way that is compatible with a stripped-down version of IEWFETCH. Unlike normal executables, the OS's nucleus is "scatter loaded": parts of the nucleus are loaded into different portions of memory; in particular, certain system tables are required to reside below the initial 64 KB, while other tables and code may reside elsewhere.
The system's Linkage Editor application is named IEWL. IEWL's main function is to associate load modules (executable programs) and object modules (the output from, say, assemblers and compilers), including "automatic calls" to libraries (high-level language "built-in functions"), into a format which may be most efficiently loaded by IEWFETCH. There are a large number of editing options, but for a conventional application only a few of these are commonly employed.
The load module format includes an initial "text record", followed immediately by the "relocation and/or control record" for that text record, followed by more instances of text record and relocation and/or control record pairs, until the end of the module.
The text records are usually very large; the relocation and/or control records are small as IEWFETCH's three relocation and/or control record buffers are fixed at 260 bytes (smaller relocation and/or control records are certainly possible, but 260 bytes is the maximum possible, and IEWL ensures that this limitation is complied with, by inserting additional relocation records, as required, before the next text record, if necessary; in this special case, the sequence of records may be: ..., text record, relocation record, ..., control record, text record, ...).
A special byte within the relocation and/or control record buffer is used as a "disabled bit spin" communication area, and is initialized to a unique value. The Read CCW for that relocation and/or control record has the Program Controlled Interrupt bit set. The processor is thereby notified when that CCW has been accessed by the channel via a special IOS exit. At this point the processor enters the "disabled bit spin" loop (sometimes called "the shortest loop in the world"). Once that byte changes from its initialized value, the CPU exits the bit spin, and relocation occurs, during the "gap" within the media between the relocation and/or control record and the next text record. If relocation is finished before the next record, the NOP CCW following the Read will be changed to a TIC, and loading and relocating will proceed using the next buffer; if not, then the channel will stop at the NOP CCW, until it is restarted by IEWFETCH via another special IOS exit. The three buffers are in a continuous circular queue, each pointing to its next, and the last pointing to the first, and three buffers are constantly reused as loading and relocating proceeds.
IEWFETCH can, thereby, load and relocate a load module of any practical size, and in the minimum possible time.
Dynamic linkers
Dynamic linking loaders are another type of loader that load and link shared libraries (like .so files, .dll files or .dylib files) to already loaded running programs.
See also
Compile and go system
DLL hell
Direct binding
Dynamic binding (computing)
Dynamic dead code elimination
Dynamic dispatch
Dynamic library
Dynamic linker
Dynamic loading
Dynamic-link library
GNU linker
Library (computing)
Linker (computing)
Name decoration
Prebinding
Prelinking
Relocation (computer science)
Relocation table
Shebang (Unix)
Static library
gold (linker)
prelink
Bug compatibility
References
Operating system kernels
Computer libraries
SCSI command
In SCSI computer storage, computers and storage devices use a client-server model of communication. The computer is a client which requests the storage device to perform a service, e.g., to read or write data. The SCSI command architecture was originally defined for parallel SCSI buses but has been carried forward with minimal change for use with Fibre Channel, iSCSI, Serial Attached SCSI, and other transport layers.
In the SCSI protocol, the initiator sends a SCSI command information unit to the target device. Data information units may then be transferred between the computer and device. Finally, the device sends a response information unit to the computer.
SCSI commands are sent in a command descriptor block (CDB), which consists of a one-byte operation code (opcode) followed by five or more bytes containing command-specific parameters. Upon receiving and processing the CDB, the device returns a status code byte and other information.
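As an illustration of this layout, the sketch below builds a 10-byte READ(10) CDB in Python: the opcode 0x28 followed by a flags byte, a four-byte big-endian logical block address, a group-number byte, a two-byte transfer length, and a control byte. The function name is invented for the example; field meanings follow the standard READ(10) layout.

```python
import struct


def build_read10_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a 10-byte READ(10) CDB: a one-byte opcode followed by
    command-specific parameters (multi-byte fields are big-endian,
    per SCSI convention)."""
    return struct.pack(
        ">BBIBHB",
        0x28,         # byte 0: operation code, READ(10)
        0x00,         # byte 1: flags (RDPROTECT/DPO/FUA), left clear here
        lba,          # bytes 2-5: 32-bit logical block address
        0x00,         # byte 6: group number
        num_blocks,   # bytes 7-8: 16-bit transfer length in blocks
        0x00,         # byte 9: control byte
    )


cdb = build_read10_cdb(lba=0x12345678, num_blocks=8)
print(len(cdb), cdb.hex())  # → 10 28001234567800000800
```

The device identifies the command from the first byte alone; everything after it is interpreted according to that opcode's defined parameter layout.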
The rest of this article contains a list of SCSI commands, sortable by opcode or alphabetically by description. In the published SCSI standards, commands are designated as "mandatory", "optional" or "vendor-unique". Only the mandatory commands are required of all devices. There are links to detailed descriptions for the more common SCSI commands. Some opcodes produce different, though usually comparable, effects in different device types; for example, the same opcode that recalibrates a disk drive by seeking back to physical sector zero rewinds the medium in a tape drive.
SCSI command lengths
Originally the most significant 3 bits of a SCSI opcode specified the length of the CDB. However, when variable-length CDBs were created this correspondence was changed, and the entire opcode must be examined to determine the CDB length.
The lengths are as follows:
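As a sketch of the original group-code convention, the mapping from an opcode's most significant 3 bits (its group code) to CDB length can be expressed as below. This is a simplified model based on the standard SCSI group codes; real implementations must also special-case the variable-length CDB opcode (0x7F) and vendor-specific groups.

```python
from typing import Optional


def cdb_length(opcode: int) -> Optional[int]:
    """Return the CDB length implied by the opcode's group code
    (the most significant 3 bits of the opcode), or None where the
    length is not fixed by the group (reserved/vendor-specific)."""
    group = (opcode >> 5) & 0x7
    return {
        0: 6,    # group 0: 6-byte CDBs
        1: 10,   # group 1: 10-byte CDBs
        2: 10,   # group 2: 10-byte CDBs
        4: 16,   # group 4: 16-byte CDBs
        5: 12,   # group 5: 12-byte CDBs
    }.get(group)  # groups 3, 6, 7: reserved or vendor-specific


# READ(6) is 0x08, READ(10) is 0x28, READ(16) is 0x88:
print(cdb_length(0x08), cdb_length(0x28), cdb_length(0x88))  # → 6 10 16
```

With variable-length CDBs, this shortcut no longer holds universally, which is why the standards now require examining the entire opcode.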
List of SCSI commands
When a command is defined in multiple CDB sizes, the length of the CDB is given in parentheses after the command name, e.g., READ(6) and READ(10).
External links
Summary of SCSI command operation codes
SCSI
Unified interoperability
Unified interoperability is the property of a system that allows for the integration of real-time and non-real time communications, activities, data, and information services (i.e., unified) and the display and coordination of those services across systems and devices (i.e., interoperability). Unified interoperability provides the capability to communicate and exchange processing across different applications, data, and infrastructure.
Unified communications
Unified communications has been led by the business world, which has a need for efficiency, simplicity, and speed. Rather than a single tool or product, unified communications is a set of products that deliver a nearly identical user experience across multiple devices and media types. The system begins with “presence information” - a feature of telecommunications technology that “senses” where a user is in relation to the technology. This shift has been driven by telecommunications providers integrating video, instant messaging, voice, and collaboration.
Unified Communications Interoperability Forum
In May 2010, a number of communications technology vendors founded a nonprofit organization for the advancement of interoperability. The goal of the Unified Communications Interoperability Forum is to enable complete interoperability of hardware and software across huge networks of systems. The UCIF relies on existing standards rather than the authoring of new ones.
Members of the UCIF include (*founding member):
HP*
Microsoft*
Polycom*
Logitech*
Juniper Networks*
Acme Packet
Huawei
Aspect Software
AudioCodes
Broadcom
BroadSoft
Brocade Communications Systems
ClearOne
Jabra
Plantronics
Siemens Enterprise Communications
Teliris
Interoperability
In the broadest sense, interoperability is the ability of multiple systems (usually computer systems) to work together seamlessly. In the Information Age, interoperability is a highly desirable trait for most business systems. Likewise, as homes become more infused with networked technologies (desktop PCs, tablet computers, smartphones, Internet-ready television), interoperability becomes an issue even for the average consumer.
Computer operating systems are a prime example of interoperability, wherein several programs from different vendors are able to co-exist and, in many cases, exchange data in a meaningful way. An operating system is also “unified” in the sense that it presents the user with a common, easy to understand computer interface for executing numerous tasks. The unified interoperability of computers means that users need not have specialized knowledge about how computers function.
A system designed for interoperability is intended to retain that property well into the future, adapting to rapid changes in technology with only minor adjustments.
Syntactic interoperability
The most fundamental level of interoperability is syntactic interoperability. At this level, systems can exchange data without loss or corruption. Certain data formats are especially suited to the exchange of data between diverse systems. XML (extensible markup language), for instance, allows data to be transmitted in a comprehensible format for people and machines. SQL (structured query language), on the other hand, is an industry-standard, nearly universal format for compiling information in a database. SQL databases are essential for a business such as Amazon.com, with its vast catalog of products, attributes, and consumer reviews.
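As a minimal illustration of syntactic interoperability, the snippet below serializes a record to XML on one system and parses it back as a second system would; any two systems that agree on the format can exchange the data without loss. The element names and values here are invented for the example.

```python
import xml.etree.ElementTree as ET

# Serialize a product record to XML on the producing system...
product = ET.Element("product", attrib={"id": "42"})
ET.SubElement(product, "name").text = "Widget"
ET.SubElement(product, "price").text = "19.99"
payload = ET.tostring(product, encoding="unicode")

# ...and parse it back on a completely different consuming system.
parsed = ET.fromstring(payload)
print(parsed.get("id"), parsed.find("name").text, parsed.find("price").text)
# → 42 Widget 19.99
```

Because both sides interpret the same well-defined syntax, the data survives the exchange intact, which is exactly the guarantee syntactic interoperability provides.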
Semantic interoperability
Semantic interoperability goes a step further than syntactic interoperability. Systems with semantic interoperability can not only exchange data effortlessly, but also interpret and communicate that data to human users in a meaningful, actionable way.
Distributed functions and processing interoperability
Distributed functions and processing interoperability focuses on the ability to create new products, applications and operating models without traditional intermediaries such as data models, databases or large system integrations, by establishing a unified interoperability framework between normally diverse and distributed sources, data, technology and other assets.
It enables business problems to be solved by connecting interoperable components of any characteristic into a single, uniform, global “instruction chain” of functionality. Components use existing IP or applications, and so integrate disparate technology onto a uniform platform. Configuration models combine runtime processing infrastructure for common and predictable performance, security, resiliency, and availability with the whole process, enabling the uniform exchange of data and consistent processing across components, irrespective of technology, format or location.
Benefits
Unified interoperability offers benefits for every stakeholder in a system. For customers and end-users of a system, unified interoperability offers a more convenient, satisfying experience. In business, interoperability helps lower costs and improves overall efficiency. As businesses strive to maximize the efficiency of their integrated systems, they encourage innovation and problem solving.
References
External links
UCIF Official Website
Strategic management
Framework (office suite)
Framework, launched in 1984, was an office suite that ran on the (x86) IBM PC and compatibles with the MS-DOS operating system.
Unlike other integrated products, Framework was not created as "plug-in" modules with a similar look and feel, but as a single windowing workspace representing a desktop metaphor that could manage and outline "Frames" sharing a common underlying format.
Framework could be considered a predecessor to the present graphical user interface window metaphor: it was the first all-in-one package to run on any PC platform to offer a GUI, WYSIWYG typography on the display and printer output, as well as integrated interpreters.
History
Background
ValDocs, an even earlier integrated suite, and actually comparable to the original Macintosh of 1984 and Apple Lisa of 1982, was produced by Epson, a complete integrated work station that ran on the previous-generation Zilog Z80 CPU and CP/M operating system, with a graphical user interface (GUI) and "WYSIWYG" typography on the monitor and printing. Despite several iterations, ValDocs was too slow on the hardware that it was released on.
A few months before Framework, its close rival Lotus Symphony was released.
Framework offered all of the above ValDocs' functionality in the first all-in-one package to run on any PC platform.
Programmers at Work credits Robert Carr as the designer and principal developer of Framework.
Forefront Corporation
Robert Carr and Marty Mazner founded Forefront Corporation to develop Framework in 1983. In July of that year, they approached Ashton-Tate to provide the capital and to later market the product. Together with a team of six other individuals, Carr and company released the original Framework.
The initial release of Framework included about a dozen frame types (identified by a FRED function, @frametype). Frame types included containers which could be filled with other frames; empty frames which could become other types of frame based on user input, formulas embedded in them, or program output targeting them; word processor frames; flat-database frames; and spreadsheet and graphic frames.
The product proved successful enough that, in 1985, Ashton-Tate bought Forefront a year sooner than planned.
Ashton-Tate era
The original team, now working for Ashton-Tate, continued to enhance the product.
Later Framework versions included a frame type that could hold compiled executable code.
Beginning with Framework II (1985), the company also produced the Framework II Runtime and the Framework II Developer's Toolkit. These products allowed application developers to create business applications using the built-in FRED programming language.
Framework III was produced in 1988–1989, and in 1991, Framework IV emerged as the last Ashton-Tate-released version.
Although Ashton-Tate humorously advertised that "Lotus uses Framework", Framework failed to gain more than a fraction of the market share needed to become a workplace standard. Lotus 1-2-3 was able to successfully capture most of the spreadsheet market, and after a number of setbacks regarding Ashton-Tate's flagship product dBASE, Borland bought Ashton-Tate, and later sold Framework to Selections & Functions, Inc.
Selections and Functions, Inc
Beginning with Framework V (Framework 5), Selections and Functions introduced only a few features — mainly features required to prevent the office suite from becoming out-of-date.
For example, Framework VII (Framework 7) introduced long file names, the Euro symbol and the ability to display pictures in Framework.
Framework VIII (Framework 8) introduced the ability to display JPEG and .BMP files and to load such files into Framework databases.
Of particular importance, all of the Selections and Functions' versions of Framework added the ability to share "cut and paste" (memory buffer data) between Windows and Framework. For detailed feature lists and screen shots see the Framework homepage listed below.
Selections and Functions continues to sell Framework, though no price is available publicly.
Components
In addition to frame types with compiled executable code, the current version includes an external frame type that is handled by separate applications running on the host operating system.
The spreadsheet program was superior in its day, offering true 3D capability, where spreadsheets could form an outline which could be "opened" to reveal a separate spreadsheet, as well as other frame types — a convenience never again seen in rival products, and further enhanced in much later versions.
Framework's built-in interpreter, the FRED (Frame Editor) computer language, was based on Lisp, and included an Eval function. It applied to all text and frame types across the product.
Present versions include the FrameworkPascal compiler, which extends Framework with the Windows API.
Compatibility
Framework works on most versions of Microsoft Windows. Framework 7 was the last version that could run on Windows 95/98/ME or on DOS. Framework 8 and 9 were designed to run on Windows XP, but not on Windows 9x or DOS. Official updates are provided to run Framework on Windows 7 and 8.
See also
Comparison of office suites
References
External links
Framework home page
Interview of Robert Carr about framework (archive)
An early Framework and FRED book by Adam Green.
DOS software
Office suites
Universal Time-Sharing System
The Universal Time-Sharing System (UTS) is a discontinued operating system for the XDS Sigma series of computers, succeeding Batch Processing Monitor (BPM)/Batch Time-Sharing Monitor (BTM). UTS was announced in 1966, but because of delays did not actually ship until 1971. It was designed to provide multi-programming services for online (interactive) user programs in addition to batch-mode production jobs, symbiont (spooled) I/O, and critical real-time processes. System daemons, called "ghost jobs", were used to run monitor code in user space. The final release, D00, shipped in January, 1973. It was succeeded by the CP-V operating system, which combined UTS with the heavily batch-oriented Xerox Operating System (XOS).
CP-V
The CP-V (pronounced sea-pea-five) operating system, the compatible successor to UTS, was released in August 1973. CP-V supported the same CPUs as UTS plus the Xerox 560. CP-V offers "single-stream and multiprogrammed batch; timesharing; and the remote processing mode, including intelligent remote batch." Realtime processing was added in release B00 in April 1974, and transaction processing in release C00 in November 1974.
CP-V versions C00 and F00, and Telefile's TCP-V version I00, still run on a Sigma emulator developed in 1997.
CP-R
CP-R (Control Program for Real-Time) is a discontinued realtime operating system for Xerox 550 and Sigma 9 computer systems. CP-R supports three types of tasks: Foreground Primary Tasks, Foreground Secondary Tasks, and Batch Tasks.
CP-6
In 1975, Xerox decided to exit the computer business which it had purchased from Scientific Data Systems in 1969. Honeywell offered to purchase Xerox Data Systems, initially to provide field service support to the existing customer base.
The CP-6 system including OS and program products was developed, beginning in 1976, by Honeywell to convert Xerox CP-V users to run on Honeywell equipment. The first beta site was installed at Carleton University in Ottawa Canada in June 1979, and three other sites were installed before the end of 1979.
Support for CP-6 was transferred to ACTC in Canada in 1993. CP-6 systems continued to run for many years in the US, Canada, Sweden, the UK, and Germany. The final system shut down was at Carleton University in 2005.
CP-6 and its accomplishments, its developers, and its customers are commemorated with a plaque on the community wall at the Computer History Museum in Mountain View, California.
Software
CP-V Software as of release B00, 1974. CP-V was supported by the CP-6 team at the Honeywell Los Angeles Development Center (LADC) until 1977 and thereafter.
Bundled Software
TEL – Terminal Executive Language.
EASY – Simple interactive environment for FORTRAN and BASIC programs and data files.
CCI – Control Command (or Card) Interpreter. The batch counterpart of TEL.
BATCH – Submit jobstream to batch queue.
PCL – Peripheral Conversion Language (pronounced "pickle"). Data file device to device copy.
EDIT – Line Editor.
LINK – One-pass linking loader.
LOAD – Two-pass overlay loader.
DELTA – Instruction-level debugger.
SORT/MERGE.
Extended FORTRAN IV.
FDP – FORTRAN Debug Package.
META-SYMBOL – Macro assembler.
BASIC.
FLAG – Load-and-go FORTRAN compatible with IBM Fortran-H.
ANS COBOL.
COBOL On-Line debugger.
APL.
SL-1 – Simulation Language.
IBM 1400 Series Simulator.
SYSGEN – System Generation.
DEFCOM – Export external definitions from a load module.
SYMCON – Manipulate symbols in a load module.
ANALYZE – System dump analyzer.
Separately Priced Software
MANAGE – A generalized file management and reporting tool.
EDMS – Database Management System.
GPDS – General Purpose Discrete Simulator.
CIRC – Electronic Circuit Analysis.
Contributed Software
Xerox maintained a library of other Xerox and user-written software from the EXCHANGE user group.
References
Further reading
Bryan, G. Edward, "Not All Programmers Are Created Equal --Redux," 2012 IEEE Aerospace Conference Proceedings, March 2012
P.A. Crisman and Bryan, G. Edward, "Management of Software Development for CP 6 at LADC", Proceedings of the Fifth Annual Honeywell International Software Conference, March 1981.
Bryan, G. Edward, "CP-6: Quality and Productivity Measures in the 15-Year Life Cycle of an Operating System," Software Quality Journal 2, 129–144, June 1993.
Frost, Bruce, “APL and I-D-S/II APL access to large databases,” APL '83 Proceedings of the international conference on APL, pages 103–107.
Fielding, Roy T., "An Empirical Microanalysis of Software Failure Data from a 12-Year Software Maintenance Process," Masters thesis, University of California Irvine, 1992
External links
UTS Documentation at Bitsavers
CP-V Documentation at Bitsavers
CP-R Documentation at Bitsavers
The COMPUTER That Will Not Die: The SDS Sigma 7
A working Sigma 9 running CP-V at Living Computers: Museum + Labs: request a login
Time-sharing operating systems
Discontinued operating systems
Proprietary operating systems
Xerox computers
MacOS Catalina
macOS Catalina (version 10.15) is the sixteenth major release of macOS, Apple Inc.'s desktop operating system for Macintosh computers. It is the successor to macOS Mojave and was announced at WWDC 2019 on June 3, 2019 and released to the public on October 7, 2019. Catalina is the first version of macOS to support only 64-bit applications and the first to include Activation Lock. It is also the last version of macOS to have the major version number of 10; its successor, Big Sur, released on November 12, 2020, is version 11. To preserve web compatibility, Safari, Chromium and Firefox freeze the macOS version reported in their user agent strings at 10.15.7 Catalina in subsequent releases of macOS.
The operating system is named after Santa Catalina Island, which is located off the coast of southern California.
System requirements
macOS Catalina officially runs on all standard configuration Macs that supported Mojave. 2010–2012 Mac Pros, which could run Mojave only with a GPU upgrade, are no longer supported. Catalina requires 4 GB of memory, an increase over the 2 GB required by Lion through Mojave.
iMac: Late 2012 or newer
iMac Pro: Late 2017
Mac Pro: Late 2013 or newer
Mac Mini: Late 2012 or newer
MacBook: Early 2015 or newer
MacBook Air: Mid 2012 or newer
MacBook Pro: Mid 2012 or newer, Retina display not needed
It is possible to install Catalina on many older Macintosh computers that are not officially supported by Apple. This requires using a patch to modify the install image.
Applications
AirPort Utility
App Store
Archive Utility
Audio MIDI Setup
Automator
Bluetooth File Exchange
Books
Boot Camp Assistant
Calculator
Calendar
Chess
ColorSync Utility
Console
Contacts
Dictionary
Digital Color Meter
Disk Utility
DVD Player
FaceTime
Find My
Font Book
GarageBand (may not be pre-installed)
Grapher
Home
iMovie (may not be pre-installed)
Image Capture
Keychain Access
Keynote (may not be pre-installed)
Mail
Migration Assistant
Music
News (only available for Australia, Canada, United Kingdom, and United States)
Notes
Numbers (may not be pre-installed)
Pages (may not be pre-installed)
Photo Booth
Podcasts
Preview
QuickTime Player
Reminders
Screenshot (succeeded Grab since Mojave)
Script Editor
Stickies
Stocks
System Information
Terminal
TextEdit
Time Machine
TV
Voice Memos
VoiceOver Utility
X11/XQuartz (may not be pre-installed)
Changes
System
Catalyst
Catalyst is a new software-development tool that allows developers to write apps that can run on macOS, iOS and iPadOS. Apple demonstrated several ported apps, including Jira and Twitter (after the latter discontinued its macOS app in February 2018).
System extensions
System extensions are an upgrade from kernel extensions (kexts) and avoid their problems. There are three kinds of system extensions: Network Extensions, Endpoint Security Extensions, and Driver Extensions. System extensions run in userspace, outside of the kernel. Catalina will be the last version of macOS to support legacy system extensions.
DriverKit
A replacement for IOKit device drivers, driver extensions are built using DriverKit. DriverKit is a new SDK with all-new frameworks based on IOKit, but updated and modernized. It is designed for building device drivers in userspace, outside of the kernel.
Gatekeeper
Mac apps, installer packages, and kernel extensions that are signed with a Developer ID must be notarized by Apple to run on macOS Catalina.
Activation Lock
Activation Lock helps prevent the unauthorized use and drive erasure of devices with an Apple T2 security chip (2018, 2019, and 2020 MacBook Pro; 2020 5K iMac; 2018 MacBook Air, iMac Pro; 2018 Mac Mini; 2019 Mac Pro).
Dedicated system volume
The system runs on its own read-only volume, separate from all other data on the Mac.
Voice control
Users can give detailed voice commands to applications. On-device machine learning is used to offer better navigation.
Sidecar
Sidecar allows a Mac to use an iPad (running iPadOS) as a wireless external display. With Apple Pencil, the device can also be used as a graphics tablet for software running on the computer. Sidecar requires a Mac with Intel Skylake CPUs and newer (such as the fourth-generation MacBook Pro), and an iPad that supports Apple Pencil.
Support for wireless game controllers
The Game Controller framework adds support for two major console game controllers: the PlayStation 4's DualShock 4 and the Xbox One controller.
Time Machine
A number of under-the-hood changes were made to Time Machine, the backup software. For example, the manner in which backup data is stored on network-attached devices was changed, and this change is not backwards-compatible with earlier versions of macOS.
Apple declined to document these changes, but some of them have been noted.
Applications
iTunes
iTunes is replaced by separate Music, Podcasts, TV and Books apps, in line with iOS. iOS device management is now conducted via Finder. The TV app on Mac supports Dolby Atmos, Dolby Vision, and HDR10 on MacBooks released in 2018 or later, while 4K HDR playback is supported on Macs released in 2018 or later when connected to a compatible display.
iTunes can still be installed and will work separately from the new apps (according to support information on Apple’s website).
Find My
Find My Mac and Find My Friends are merged into an application called Find My.
Notes
The Notes application was enhanced to allow better management of checklists and the ability to share folders with other users. The application version was incremented from 4.6 (in macOS 10.14 Mojave) to 4.7.
Reminders
Among other visual and functional overhauls, attachments can be added to reminders and Siri can intelligently estimate when to remind the user about an event.
Voice Memos
The Voice Memos application, first ported from iOS to the Mac in macOS 10.14 Mojave as version 2.0, was incremented to version 2.1.
Removed or changed components
macOS Catalina exclusively supports 64-bit applications. 32-bit applications no longer run (including all software that utilizes the Carbon API as well as QuickTime 7 applications, image, audio and video codecs). Apple has also removed all 32-bit-only apps from the Mac App Store.
Zsh is the default login shell and interactive shell in macOS Catalina, replacing Bash, the default shell since Mac OS X Panther in 2003. Bash continues to be available in macOS Catalina, along with other shells such as csh/tcsh and ksh.
Dashboard has been removed in macOS Catalina.
The ability to add Backgrounds in Photo Booth was removed in macOS Catalina.
The command-line interface GNU Emacs application was removed in macOS Catalina.
Built-in support for Perl, Python 2.7 and Ruby is included in macOS for compatibility with legacy software. Future versions of macOS will not include scripting language runtimes by default, possibly requiring users to install additional packages.
Legacy AirDrop for connecting with Macs running Mac OS X Lion, Mountain Lion and Mavericks, or 2011 and older Macs has been removed.
Reception
Catalina received favourable reviews on release for some of its features. However, some critics found the OS version distinctly less reliable than earlier versions. The broad addition of user-facing security measures (somewhat analogous to the addition of User Account Control dialog boxes with Windows Vista a decade earlier) was criticised as intrusive and annoying.
Release history
References
External links
– official site
macOS Catalina download page at Apple
2019 software
Computer-related introductions in 2019
X86-64 operating systems
Commodore 64x
The Commodore 64x is a replica PC based on the original Commodore 64, powered by standard x86 Intel processors ranging from the Intel Atom to the Intel Core i7. It was sold by Commodore USA starting in April 2011. Because Commodore USA went out of business after the death of its founder, Barry Altman, this machine is no longer available.
History
The case of the C64x was designed to resemble the popular Commodore 64 in response to "overwhelming demand" from Commodore USA's customer base.
Volume production started in May 2011, with machines being released on to the market in June 2011 at a starting price of US $595 for the Basic model and up to US $895 for an Ultimate model, and as of August 13, 2011, an Extreme version fitted with an Intel Core i7 chip with 8 GB DDR3 RAM and 3 TB hard drive for US $1499. There was a case-only version of the C64x called the Barebones available for US $295.
It shipped initially with Ubuntu 10.10 Desktop Edition, and in November 2011, Commodore USA released their own Linux derivative called Commodore OS.
Software
The C64x is said to come bundled with Ubuntu 10.10. There is no hardware compatibility with the original C64, with software compatibility provided through the use of an emulator. Ubuntu is able to run VICE, an open source program which emulates 8-bit computers, such as the Commodore 64. VICE is available for free for almost all operating systems currently in use.
As of 18 August 2011, Commodore USA announced it would be providing international keyboards and keys for its customers worldwide for the C64x, with new keyboards made with additional keys for countries/languages if it is needed. For customers outside of its main US support area, keys and keycap pullers will be provided for easy self-installation.
References
Future models, planned for 2011
See also
Comm5
MacTech
MacTech is the journal of Apple technology, a monthly magazine for consultants, IT Pros, system administrators, software developers, and other technical users of the Apple Macintosh line of computers.
The magazine was called "MacTech" for its first two issues, starting in 1984, after which its name was changed to MacTutor. At the time the magazine defined itself as "a technical publication devoted to advancing programming knowledge of the Macintosh for both hacker and professional alike".
In the spring of 1989 a new and separate magazine called MacTech was launched by TechAlliance, a global Apple users group headquartered in Renton, WA that hosted the Apple Programmers and Developers Association (APDA). The founding editor of MacTech was Andrew Himes, and Himes described the magazine as "The journal designed by people who program and develop for the Apple Macintosh. You hold in your hands what is designed to be a legendary publication for a legendary computer. In these wee, early hours of the computer revolution, just a scant half decade after the introduction of Apple's flagship computer, the world needs a magazine that focuses on the needs of the Mac developer community with the intensity, vision, and utility of no other existing publication. Nuts-and-bolts programming solutions and tutorials. In-depth looks at future technologies and present opportunities. An emphasis on both object-oriented and procedural languages, on database programming and spreadsheet macros, on HyperTalking and multimedia applications. Feature articles, tutorials, reviews, and commentary by some of the most important, creative, and eloquent programmers in the Macintosh universe. MacTech will be preeminently useful and intellectually provocative. An essential desktop reference for today's serious programmers, as well as a tool that will help pave your way to the future of programming."
In 1993, MacTutor Magazine purchased MacTech, and the name of the consolidated publication became MacTech.
In 2010, MacTech launched its event series -- first with MacTech Conference, then MacTech Boot Camp and then MacTech InDepth. 2010 had a single event, 2011 had seven events, and 2012 had fifteen events.
MacTech.com also provides a news aggregator service.
The magazine is privately owned, held by Xplain Corporation.
On June 5, 2008, MacTech announced it would act as caretaker to the archives of MacMinute News and Forums. At these forums, Stan's Lounge is crowded with MacMinute and MacCentral faithful in honor of Stan Flack's contributions to the Macintosh community.
On November 6, 2009, MacTech announced that it was acquiring MacMod, a community site dedicated to modding Apple hardware and software. As part of the acquisition, the MacMod forums have been integrated into MacTech's, and MacTech now has a podcast dedicated to Mac technical topics as well as industry news, event coverage, and interviews.
References
External links
MacTech Live podcast
Xplain Corporation
Archived MacTech magazines on the Internet Archive
Computer magazines published in the United States
Monthly magazines published in the United States
Macintosh magazines
Magazines established in 1984
Magazines published in California
Windows Live OneCare
Windows Live OneCare (previously Windows OneCare Live, codenamed A1) was a computer security and performance enhancement service developed by Microsoft for Windows. A core technology of OneCare was the multi-platform RAV (Reliable Anti-virus), which Microsoft purchased from GeCAD Software Srl in 2003, but subsequently discontinued. The software was available as an annual paid subscription, which could be used on up to three computers.
On 18 November 2008, Microsoft announced that Windows Live OneCare would be discontinued on 30 June 2009 and that it would instead offer users a new free anti-malware suite, called Microsoft Security Essentials, to be available before then. However, virus definitions and support for OneCare would continue until a subscription expired. In the end-of-life announcement, Microsoft noted that Windows Live OneCare would not be upgraded to work with Windows 7 and would also not work in Windows XP Mode.
History
Windows Live OneCare entered a beta state in the summer of 2005. The managed beta program was launched before the public beta, and was located on BetaPlace, Microsoft's former beta delivery system. On 31 May 2006, Windows Live OneCare made its official debut in retail stores in the United States.
The beta version of Windows Live OneCare 1.5 was released in early October 2006 by Microsoft. Version 1.5 was released to manufacturing on 3 January 2007 and was made available to the public on 30 January 2007. On 4 July 2007, beta testing started for version 2.0, and the final version was released on 16 November 2007.
Microsoft acquired Komoku on 20 March 2008 and merged its computer security software into Windows Live OneCare.
Windows Live OneCare 2.5 (build 2.5.2900.28) final was released on 3 July 2008. On the same day, Microsoft also released Windows Live OneCare for Server 2.5.
Features
Windows Live OneCare features integrated anti-virus, personal firewall, and backup utilities, as well as a tune-up utility, with the functionality of Windows Defender built in for malware protection. A future addition of a registry cleaner was considered but not added because "there are not significant customer advantages to this functionality". Version 2 added features such as multi-PC and home network management, printer sharing support, start-time optimizer, proactive fixes and recommendations, monthly reports, centralized backup, and online photo backup.
Windows Live OneCare is built for ease of use and is designed for home users. OneCare also uses a minimal interface to reduce user confusion and resource use. It adds an icon to the notification area that tells the user at a glance the status of the system's health by using three alert colors: green (good), yellow (fair), and red (at risk).
Compatibility
Version 1.5 of OneCare is only compatible with the 32-bit versions of Windows XP and Windows Vista. Version 2 added 64-bit support for Windows Vista. In version 2.5, Microsoft released Windows Live OneCare for Server, which supports Windows Server 2008 Standard 64-bit and Windows Small Business Server 2008 Standard and Premium editions. No edition of OneCare operates in safe mode. Windows Live OneCare does not support Windows 7, as its development was discontinued in favor of Microsoft Security Essentials.
Activation
Windows Live OneCare requires users to activate the product if they wish to continue using it after the free trial period (90 days) through a valid Windows Live ID. When the product is activated, the grey message bar at the top of the program disappears. The subscription remains active for 1 year from the date of activation. Windows Live OneCare does not require the operating system to be checked with Windows Genuine Advantage.
Protection
Windows Live OneCare Protection Plus is the security component in the OneCare suite. It consists of three parts:
A personal firewall capable of monitoring and blocking both incoming and outgoing traffic (The built-in Windows Firewall in Windows XP only monitors and blocks incoming traffic)
An anti-virus tool that uses regularly updated anti-virus definition files to protect against malicious software
An anti-spyware tool that uses the Windows Defender engine as a core to protect against potentially unwanted software (In version 1.0, this required the separate installation of Windows Defender and was not integrated into the OneCare interface, although it could be managed and launched from OneCare. Version 1.5 integrated the Windows Defender engine into OneCare and no longer requires separate installation.)
Windows Live OneCare 1.5 onwards also monitors Internet Explorer 7 and 8 security settings and ensures that the automatic website checking feature of the Phishing Filter is enabled.
Performance
Windows Live OneCare Performance Plus is the component that performs monthly PC tune-up related tasks, such as:
Disk cleanup and defragmentation.
A full virus scan using the anti-virus component in the suite.
User notification if files are in need of backing up.
Check for Windows updates by using the Microsoft Update service.
Backup
Windows Live OneCare Backup and Restore is the component that aids in backing up important files. Files can be backed up to various recordable media, such as external hard disks, CDs, and DVDs. When restoring files, the entirety or a subset of them can also be restored to a networked computer, provided it is also running OneCare. The Backup and Restore component supports backup software features such as incremental backups and scheduling.
Criticism
Windows Live OneCare has been criticized by both users and competing security software companies.
Microsoft's acquisition of GeCAD RAV, a core technology of OneCare, and their subsequent discontinuation of that product, deprived the Linux platform (and others) of one of its leading virus scanning tools for e-mail servers, bringing Microsoft's ultimate intentions into question.
On 26 January 2006, Windows Live OneCare was criticized by Foundstone (a division of the competing McAfee anti-virus) because the integrated firewall's default white lists allow Java applications and digitally signed software to bypass user warnings, since neither category carries any assurance of being free of security flaws or malicious intent. Microsoft has since responded to the criticism, justifying its decision on the grounds that Java is "widely used by third party applications, and is a popular and trusted program among our users", and that "it is highly unusual for malware to be signed."
Windows Live OneCare has also been criticized for the lack of adherence to industry firewall standards concerning intrusion detection. Tests conducted by Agnitum (the developers of Outpost Firewall) have shown OneCare failing to detect trojans and malware which hijack applications already resident on an infected machine.
In February 2007, the first Windows Vista anti-virus product testing by Virus Bulletin magazine (a sister company of Sophos, the developers of Sophos Anti-Virus) found that Windows Live OneCare failed to detect 18.6% of viruses. Fifteen anti-virus products were tested. To pass the Virus Bulletin's VB100 test, an anti-virus product has to detect 100% of the viruses.
AV-Comparatives also released results that placed Windows Live OneCare last in its testing of seventeen anti-virus products. In response, Jimmy Kuo of the Microsoft Security Research and Response (MSRR) team pledged to add "truly important" ("actively being spread") malware as soon as possible, while "[test detection] numbers will get better and better" for other malware "until they are on par with the other majors in this arena." He also expressed confidence in these improvements: "Soon after, [other majors] will need to catch up to us!"
As of April 2008, Windows Live OneCare has passed the VB100 test under Windows Vista SP1 Business Edition. As of August 2008, Windows Live OneCare placed 14th out of 16 anti-virus products in on-demand virus detection rates. On the other hand, as of May 2009, Windows Live OneCare placed 2nd in a proactive/retrospective performance test conducted by AV-Comparatives. AV-Comparatives.org, the test issuer, denotes that it had "very few false alarms, which is a very good achievement." The publisher also points out that false positives can cause as much harm as genuine infections, and furthermore, anti-virus scanners prone to false alarms essentially achieve higher detection scores.
Community Revival
After Windows Live OneCare was discontinued, end users could no longer install it because the installer checked Microsoft's OneCare site for updates, failing with the error message 'Network problems are preventing Windows Live OneCare Installation from continuing at this time'.
A YouTuber named 'Michael MJD' posted a review of the software to his second channel, 'mjdextras', after finding it at a thrift store, but was unable to install it due to the above error; MJD community member 'Cobs Server Closet' requested a copy of the installation media in order to fix the issue.
On 20 October 2020, 'Cobs Server Closet' posted to their website that they had successfully recreated a functioning version of the installer, allowing users with existing installation media to reinstall the software. The project was named 'OneCare Rewritten'.
While OneCare Rewritten allows OneCare to install successfully, many notable features, such as OneCare Circles and the built-in backup feature, remain non-functional because they depend on Microsoft's Windows Live OneCare servers.
See also
Windows Defender
Windows Live
Comparison of antivirus software
References
External links
OneCare
Antivirus software
Firewall software
Backup software
Spyware removal
2006 software | Operating System (OS) | 1,031 |
Comparison of embedded computer systems on board the Mars rovers
The embedded computer systems on board the Mars rovers sent by NASA must withstand the high radiation levels and large temperature changes in space. For this reason their computational resources are limited compared to systems commonly used on Earth.
In operation
Direct teleoperation of a Mars rover is impractical, since the round trip communication time between Earth and Mars ranges from 8 to 42 minutes and the Deep Space Network system is only available a few times during each Martian day (sol). Therefore, a rover command team plans, then sends, a sol of operational commands to the rover at one time.
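The quoted delay range is simply the speed-of-light travel time over the varying Earth–Mars distance. A quick check in Python (the distance figures below are approximate close-approach and near-conjunction values, not taken from this article):

```python
# Round-trip light time between Earth and Mars at approximate
# minimum and maximum separations.
SPEED_OF_LIGHT_KM_S = 299_792.458

def round_trip_minutes(distance_km: float) -> float:
    """Round-trip signal delay in minutes for a given one-way distance."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S / 60

near = round_trip_minutes(72e6)    # ~72 million km at a close approach
far = round_trip_minutes(378e6)    # ~378 million km near conjunction
print(f"{near:.1f} to {far:.1f} minutes")  # → 8.0 to 42.0 minutes
```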
A rover uses autonomy software to make decisions based on observations from its sensors. From each stereo image pair, the Sojourner rover could generate 20 navigation 3D points (with the initial software version it landed with). The MER rovers can generate 15,000 (nominal) to 40,000 (survey mode) 3D points.
Performance comparisons
With the exception of Curiosity and Perseverance, each Mars rover has only one on-board computer. Both Curiosity and Perseverance carry two identical computers for redundancy. As of February 2013, Curiosity is operating on its redundant computer while engineers investigate why its primary computer began to fail.
Mars Rovers
See also
Radiation hardening
References
External links
The CPUs of Spacecraft Computers in Space
Space technology
Exploration of Mars
Computing comparisons | Operating System (OS) | 1,032 |
Atal Bihari Vajpayee Indian Institute of Information Technology and Management
Atal Bihari Vajpayee Indian Institute of Information Technology and Management, Gwalior (ABV-IIITM Gwalior), commonly known as the Indian Institute of Information Technology and Management, Gwalior (IIITM Gwalior), is a university located in Gwalior, Madhya Pradesh, India. Established in 1997 and named for Atal Bihari Vajpayee, it was recognized as an Institute of National Importance.
Campus
The institute is located on a campus near Gwalior Fort. It is a residential campus; faculty and students live on site. The campus includes an academic block housing lecture theatres, seminar halls, a library, laboratories and faculty offices, as well as an administrative block, an open amphitheatre, an indoor sports complex and student hostels. There are three hostels for boys and one for girls. The campus has a variety of plants, including some with medicinal properties. The institute initially operated from a temporary site at the Madhav Institute of Technology and Science, Gwalior (MITS) and later shifted to its own facility.
Library
The college was initially equipped with a main reference library, with a capacity of 24,000 books and an adjacent reading room, inside the academic block itself. The college has since opened a large three-story central library next to the academic block, with centralized air conditioning and a capacity of 80,000 books. E-resources and online journals are maintained as part of a digital library for students and scholars. The library subscribes to journals, periodicals, and magazines in the areas of IT and management, and holds videotapes and video CDs for use by students.
The Ministry of Human Resource Development (MHRD) set up the "Indian National Digital Library in Engineering Sciences and Technology" (INDEST) consortium. This provides students with a collection of journals and industrial databases, including IEEE, EBSCO, CME, ABI/Inform Complete, the Association for Computing Machinery Digital Library, the IEEE/IET Electronic Library (IEL Online), J-Gate Engineering and Technology, ProQuest Science Journals, and SpringerLink.
Academics
Academic programmes
The institute offers various graduate and postgraduate programs, including Master of Technology (MTech) degrees in various information technology fields, a Master of Business Administration (MBA), PhDs, and five-year integrated BTech/MTech and BTech/MBA programs. In 2017, the institute launched a BTech program in Computer Science and Engineering, its first undergraduate course. Management Development Programmes and Faculty Development Programmes are also offered.
Achievements
A team from the institute won the 2008 ACM ICPC Asia Regional Contest, an IT programming contest held in Kanpur.
Rankings
It was ranked 100th by the National Institutional Ranking Framework (NIRF) in 2020. Among business schools in India, it was ranked in the 76–100 band by NIRF in 2020.
References
External links
Gwalior
Universities in Madhya Pradesh
Universities and colleges in Gwalior
Educational institutions established in 1997
1997 establishments in Madhya Pradesh
Memorials to Atal Bihari Vajpayee | Operating System (OS) | 1,033 |
Features new to Windows 7
Some of the new features included in Windows 7 are advancements in touch, speech and handwriting recognition, support for virtual hard disks, support for additional file formats, improved performance on multi-core processors, improved boot performance, and kernel improvements.
Shell and user interface
Windows 7 retains the Windows Aero graphical user interface and visual style introduced in its predecessor, Windows Vista, but many areas have seen enhancements. Unlike Windows Vista, window borders and the taskbar do not turn opaque when a window is maximized while Windows Aero is active; instead, they remain translucent.
Desktop
Themes
Support for themes has been extended in Windows 7. In addition to providing options to customize colors of window chrome and other aspects of the interface including the desktop background, icons, mouse cursors, and sound schemes, the operating system also includes a native desktop slideshow feature. A new theme pack extension has been introduced, .themepack, which is essentially a cabinet file containing theme resources including background images, color preferences, desktop icons, mouse cursors, and sound schemes. The new theme extension simplifies sharing of themes and can also display desktop wallpapers via RSS feeds provided by the Windows RSS Platform. Microsoft provides additional themes for free through its website.
The default theme in Windows 7 consists of a single desktop wallpaper named "Harmony" and the default desktop icons, mouse cursors, and sound scheme introduced in Windows Vista; however, none of the desktop backgrounds included with Windows Vista are present in Windows 7. New themes include Architecture, Characters, Landscapes, Nature, and Scenes, and an additional country-specific theme that is determined based on the defined locale when the operating system is installed; although only the theme for a user's home country is displayed within the user interface, the files for all of these other country-specific themes are included in the operating system. All themes included in Windows 7—excluding the default theme—include six wallpaper images. A number of new sound schemes (each associated with an included theme) have also been introduced: Afternoon, Calligraphy, Characters, Cityscape, Delta, Festival, Garden, Heritage, Landscape, Quirky, Raga, Savana, and Sonata. Themes may introduce their own custom sounds, which can be used with other themes as well.
Desktop Slideshow
Windows 7 introduces a desktop slideshow feature that periodically changes the desktop wallpaper based on a user-defined interval; the change is accompanied by a smooth fade transition with a duration that can be customized via the Windows Registry. The desktop slideshow feature supports local images and images obtained via RSS.
Gadgets
With Windows Vista, Microsoft introduced gadgets to display information such as image slideshows and RSS feeds on the user's desktop; the gadgets could optionally be displayed on a sidebar docked to a side of the screen. In Windows 7, the sidebar has been removed, but gadgets can still be placed on the desktop. Gadgets can be brought to the foreground on top of active applications by pressing . Several new features for gadgets are introduced, including new desktop context menu options to access gadgets and hide all active gadgets; high DPI support; and a feature that can automatically rearrange a gadget based on the position of other gadgets. Additional new features include cached gadget content; optimizations for touch-based devices; and a gadget for Windows Media Center.
Gadgets are more closely integrated with Windows Explorer, but the gadgets themselves continue to operate in a single sidebar.exe process, unlike in Windows Vista where gadgets could operate in multiple sidebar.exe processes. Active gadgets can also be hidden via a new desktop menu option; Microsoft has stated that this option can result in power-saving benefits.
Branding and customization
For original equipment manufacturers and enterprises, Windows 7 natively supports the ability to customize the wallpaper that is displayed during user login. Because the settings to change the wallpaper are available via the Windows Registry, users can also customize this wallpaper. Options to customize the appearance of interface lighting and shadows are also available.
Windows Explorer
Libraries
Windows Explorer in Windows 7 supports file libraries that aggregate content from various locations – including shared folders on networked systems if the shared folder has been indexed by the host system – and present them in a unified view. The libraries hide the actual location the file is stored in. Searching in a library automatically federates the query to the remote systems, in addition to searching on the local system, so that files on the remote systems are also searched. Unlike search folders, Libraries are backed by a physical location which allows files to be saved in the Libraries. Such files are transparently saved in the backing physical folder. The default save location for a library may be configured by the user, as can the default view layout for each library. Libraries are generally stored in the Libraries special folder, which allows them to be displayed on the navigation pane.
By default, a new user account in Windows 7 contains four libraries for different file types: Documents, Music, Pictures, and Videos. They are configured to include the user's profile folders for these respective file types, as well as the computer's corresponding Public folders. The Public folder also contains a hidden Recorded TV library that appears in the Windows Explorer sidepane when TV is set up in Media Center for the first time.
In addition to aggregating multiple storage locations, Libraries enable Arrangement Views and Search Filter Suggestions. Arrangement Views allow users to pivot the view of a library's contents based on metadata. For example, selecting the "By Month" view in the Pictures library displays photos in stacks, where each stack represents a month of photos based on the date they were taken. In the Music library, the "By Artist" view displays stacks of albums from the artists in the collection, and browsing into an artist stack then displays the relevant albums.
Search Filter Suggestions are a new feature of the Windows 7 Explorer's search box. When the user clicks in the search box, a menu shows up below it showing recent searches as well as suggested Advanced Query Syntax filters that the user can type. When one is selected (or typed in manually), the menu will update to show the possible values to filter by for that property, and this list is based on the current location and other parts of the query already typed. For example, selecting the "tags" filter or typing "tags:" into the search box will display the list of possible tag values which will return search results.
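For illustration, the filters the search box suggests are written in Advanced Query Syntax (AQS); the property keywords below are real AQS vocabulary, while the values are made-up examples:

```
tags:vacation
kind:music artist:"Bob Dylan"
datemodified:last week
ext:.docx author:(john OR jane)
```

Typing a property name followed by a colon (e.g. tags:) prompts Explorer to list the values actually present in the current location.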
Arrangement Views and Search Filter Suggestions are database-backed features which require that all locations in the Library be indexed by the Windows Search service. Local disk locations must be indexed by the local indexer, and Windows Explorer will automatically add locations to the indexing scope when they are included in a library. Remote locations can be indexed by the indexer on another Windows 7 machine, on a Windows machine running Windows Search 4 (such as Windows Vista or Windows Home Server), or on another device that implements the MS-WSP remote query protocol.
Federated search
Windows Explorer also supports federating search to external data sources, such as custom databases or web services, that are exposed over the web and described via an OpenSearch definition. The federated location description (called a Search Connector) is provided as an .osdx file. Once installed, the data source becomes queryable directly from Windows Explorer. Windows Explorer features, such as previews and thumbnails, work with the results of a federated search as well.
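A minimal search connector is essentially an OpenSearch description document. The sketch below shows the general shape; the URL and names are placeholders, and real .osdx files typically also carry Windows-specific extension elements:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Description>Searches example.com from Windows Explorer.</Description>
  <!-- Explorer substitutes the query into the template and
       renders the RSS results as search hits. -->
  <Url type="application/rss+xml"
       template="http://example.com/search?q={searchTerms}"/>
</OpenSearchDescription>
```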
Miscellaneous shell enhancements
Windows Explorer has received numerous minor enhancements that improve its overall functionality. The Explorer's search box and the address bar can be resized. Folders such as those on the desktop or user profile folders can be hidden in the navigation pane to reduce clutter. A new Content view is added, which shows thumbnails and metadata together. A new button to toggle the Preview Pane has been added to the toolbar. The button to create a new folder has been moved from the Organize menu and onto the toolbar. List view provides more space between items than in Windows Vista. Finally, storage space consumption bars that were only present for hard disks in Windows Vista are now shown for removable storage devices.
Other areas of the shell have also received similar fine-tunings: Progress bars and overlay icons may now appear on an application's button on the taskbar to better alert the user of the status of the application or the work in progress. File types for which property handlers or iFilters are installed are re-indexed by default. Previously, adding submenus to shell context menus or customizing the context menu's behavior for a certain folder was only possible by installing a form of plug-in known as shell extensions. In Windows 7, however, computer-savvy users can do so by editing the Windows Registry and/or desktop.ini files. Additionally, a new shell API was introduced designed to simplify the writing of context menu shell extensions by software developers.
Windows 7 includes native support for burning ISO files. The functionality is available when a user selects the Burn disc image option within the context menu of an ISO file. Support for disc image verification is also included. In previous versions of Windows, users were required to install third-party software to burn ISO images.
Start menu
The start orb now has a fade-in highlight effect when the user hovers the mouse cursor over it. The Start Menu's right column is now the Aero glass color. In Windows Vista, it was always black.
Windows 7's Start menu retains the two-column layout of its predecessors, with several functional changes:
The "Documents", "Pictures" and "Music" buttons now link to the Libraries of the same name.
A "Devices and Printers" option has been added that displays a new device manager.
The "shut down" icon in Windows Vista has been replaced with a text link indicating what action will be taken when the icon is clicked. The default action (switch user, log off, lock, restart, sleep, hibernate or shut down) to take is now configurable through the Taskbar and Start Menu Properties window.
Taskbar Jump Lists are presented in the Start Menu via a guillemet; when the user moves the mouse cursor over the guillemet, or presses the right-arrow key, the right-hand side of the Start menu is widened and replaced with the application's Jump List.
Links to "Videos", "Downloads", and "Recorded TV", as well as the Connect To menu, the Homegroup and Network menus, and the Favorites and Recent Items folders and menus, can now be added to the Start menu, and the Administrative Tools folder can be added to the All Programs menu.
The Start Search field, introduced in Windows Vista, has been extended to support searching for keywords of Control Panel items. For example, clicking the Start button then typing "wireless" will show Control Panel options related to configuring and connecting to wireless network, adding Bluetooth devices, and troubleshooting. Group Policy settings for Windows Explorer provide the ability for administrators of an Active Directory domain, or an expert user to add up to five Internet web sites and five additional "search connectors" to the Search Results view in the Start menu. The links, which appear at the bottom of the pane, allow the search to be executed again on the selected web site or search connector. Microsoft suggests that network administrators could use this feature to enable searching of corporate Intranets or an internal SharePoint server.
Taskbar
The Windows Taskbar has seen its most significant revision since its introduction in Windows 95 and combines the previous Quick Launch functionality with open application window icons. The taskbar is now rendered as an Aero glass element whose color can be changed via the Personalization Control Panel. It is 10 pixels taller than in Windows Vista to accommodate touch screen input and a new larger default icon size (although a smaller taskbar size is available), as well as maintain proportion to newer high resolution monitor modes. Running applications are denoted by a border frame around the icon. Within this border, a color effect (dependent on the predominant color of the icon) that follows the mouse cursor also indicates the opened status of the application. The glass taskbar is more translucent than in Windows Vista. Taskbar buttons show icons by default, not application titles, unless they are set to 'not combine', or 'combine when taskbar is full.' In this case, only icons are shown when the application is not running. Programs running or pinned on the taskbar can be rearranged. Items in the notification area can also be rearranged.
Pinned applications
The Quick Launch toolbar has been removed from the default configuration, but may be easily added. The Windows 7 taskbar is more application-oriented than window-oriented, and therefore doesn't show window titles (these are shown when an application icon is clicked or hovered over). Applications can now be pinned to the taskbar allowing the user instant access to the applications they commonly use. There are a few ways to pin applications to the taskbar. Icons can be dragged and dropped onto the taskbar, or the application's icon can be right-clicked to pin it to the taskbar.
Thumbnail previews
Thumbnail previews which were introduced in Windows Vista have been expanded to not only preview the windows opened by the application in a small-sized thumbnail view, but to also interact with them. The user can close any window opened by clicking the X on the corresponding thumbnail preview. The name of the window is also shown in the thumbnail preview. A "peek" at the window is obtained by hovering over the thumbnail preview. Peeking brings up only the window of the thumbnail preview over which the mouse cursor hovers, and turns any other windows on the desktop transparent. This also works for tabs in Internet Explorer: individual tabs may be peeked at in the thumbnail previews. Thumbnail previews integrate Thumbnail Toolbars which can control the application from the thumbnail previews themselves. For example, if Windows Media Player is opened and the mouse cursor is hovering on the application icon, the thumbnail preview will allow the user the ability to Play, Stop, and Play Next/Previous track without having to switch to the Windows Media Player window.
Jump lists
Jump lists are menu options available by right-clicking a taskbar icon or holding the left mouse button and sliding towards the center of the desktop on an icon. Each application has a jump list corresponding to its features: Microsoft Word's displays recently opened documents; Windows Media Player's lists recent tracks and playlists; Windows Explorer's shows frequently opened directories; Internet Explorer's offers recent browsing history and options for opening new tabs or starting InPrivate Browsing; Windows Live Messenger's provides common tasks such as instant messaging, signing off, and changing online status. Third-party software can add custom actions through a dedicated API. Up to 10 menu items may appear on a list, partially customizable by the user. Frequently used files and folders can be pinned by the user so that they are not displaced from the list when other items are opened more frequently.
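The interaction between the capped recent-items list and user pinning can be modelled conceptually; this is an illustrative sketch, not Windows code (the real lists are maintained by the shell, and the class and method names here are made up):

```python
class JumpList:
    """Conceptual model of a jump list: a capped recent-items list
    plus pinned items that are never displaced by new entries."""

    def __init__(self, max_items=10):
        self.max_items = max_items
        self.pinned = []   # items the user pinned; keep their slots
        self.recent = []   # most recently opened items, newest first

    def open_item(self, item):
        """Record a recently opened item, most recent first."""
        if item in self.recent:
            self.recent.remove(item)
        self.recent.insert(0, item)
        # Pinned items keep their slots; only the remainder is recency-based.
        del self.recent[self.max_items - len(self.pinned):]

    def pin(self, item):
        """Pin an item so later activity cannot push it off the list."""
        if item not in self.pinned:
            self.pinned.append(item)
        if item in self.recent:
            self.recent.remove(item)

    def displayed(self):
        """Items shown on the jump list: pinned first, then recents."""
        return self.pinned + self.recent[: self.max_items - len(self.pinned)]
```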
Task progress
A progress bar in a taskbar button allows users to track the progress of a task without switching to the associated window. Task progress is used in Windows Explorer, Internet Explorer, and third-party software.
Notification area
The notification area has been redesigned; the standard Volume, Network, Power and Action Center status icons are present, but no other application icons are shown unless the user has chosen them to be shown. A new "Notification Area Icons" control panel has been added which replaces the "Customize Notification Icons" dialog box in the "Taskbar and Start Menu Properties" window first introduced in Windows XP. In addition to being able to configure whether the application icons are shown, the ability to hide each application's notification balloons has been added. The user can then view the notifications at a later time.
A triangle to the left of the visible notification icons displays the hidden notification icons. Unlike Windows Vista and Windows XP, the hidden icons are displayed in a window above the taskbar, instead of on the taskbar. Icons can be dragged between this window and the notification area.
Aero Peek
In previous versions of Windows, the taskbar ended with the notification area on the right-hand side. Windows 7, however, introduces a show desktop button on the far right side of the taskbar which can initiate an Aero Peek feature that makes all open windows translucent when hovered over by a mouse cursor. Clicking this button shows the desktop, and clicking it again brings all windows to focus. The new button replaces the show desktop shortcut located in the Quick Launch toolbar in previous versions of Windows.
On touch-based devices, Aero Peek can be initiated by pressing and holding the show desktop button; touching the button itself shows the desktop. The button also increases in width to accommodate being pressed by a finger.
Window management mouse gestures
Aero Snap
Windows can be dragged to the top of the screen to maximize them and dragged away to restore them. Dragging a window to the left or right of the screen makes it take up half the screen, allowing the user to tile two windows next to each other. Also, dragging a window's top or bottom edge to the corresponding screen edge extends the window to full height while retaining its width. These features can be disabled via the Ease of Access Center if users do not wish windows to be automatically resized.
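The edge-detection behaviour described above can be sketched roughly as follows. This is a conceptual illustration only: the threshold value and the function and action names are made up, not taken from Windows internals.

```python
from typing import Optional

def snap_action(x: int, y: int, screen_width: int,
                threshold: int = 5) -> Optional[str]:
    """Map a window drag's final cursor position to a snap action.

    Illustrative sketch: real Aero Snap is implemented inside the
    shell, and these action names are hypothetical.
    """
    if y <= threshold:
        return "maximize"       # dragged to the top edge
    if x <= threshold:
        return "tile-left"      # take up the left half of the screen
    if x >= screen_width - threshold:
        return "tile-right"     # take up the right half of the screen
    return None                 # no snap; ordinary window move
```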
Aero Shake
Aero Shake allows users to clear up any clutter on their screen by shaking (dragging back and forth) a window of their choice with the mouse. All other windows will minimize, while the window the user shook stays active on the screen. When the window is shaken again, all previously minimized windows are restored, similar to desktop preview.
Keyboard shortcuts
A variety of new keyboard shortcuts have been introduced.
Global keyboard shortcuts:
Win+Space operates as a keyboard shortcut for Aero Peek.
Win+Up maximizes the current window.
Win+Down restores the current window if it is maximized; otherwise it minimizes the window.
Win+Shift+Up extends the current window to the full height of the screen while retaining its width.
Win+Shift+Down restores the original size of the current window.
Win+Left snaps the current window to the left half of the screen.
Win+Right snaps the current window to the right half of the screen.
Win+Shift+Left and Win+Shift+Right move the current window to the left or right display.
Win+Plus functions as a zoom-in command wherever applicable.
Win+Minus functions as a zoom-out command wherever applicable.
Win+Esc turns off zoom once enabled.
Win+Home operates as a keyboard shortcut for Aero Shake.
Win+Tab shows open applications and windows in a 3D stack view (Flip 3D).
Win+P opens Connect to a Network Projector, which has been updated from previous versions of Windows and allows one to choose where the desktop is displayed: on the main monitor, an external display, or both; or to display two independent desktops on two separate monitors.
Taskbar:
Shift + Click, or Middle click starts a new instance of the application, regardless of whether it's already running.
Ctrl + Shift + Click starts a new instance with Administrator privileges; by default, a User Account Control prompt will be displayed.
Shift + Right-click (or right-clicking the program's thumbnail) shows the titlebar's context menu which, by default, contains "Restore", "Move", "Size", "Maximize", "Minimize" and "Close" commands. If the icon being clicked on is a grouped icon, a specialized context menu with "Restore All", "Minimize All", and "Close All" commands is shown.
Ctrl + Click on a grouped icon cycles between the windows (or tabs) in the group.
Font management
The user interface for font management has been overhauled in Windows 7. As with Windows Vista, the collection of installed fonts is displayed in a Windows Explorer window, but fonts from the same font family appear as stacked icons that display font previews within the interface. Windows 7 also introduces the option to hide installed fonts; certain fonts are automatically removed from view based on a user's regional settings, and an option to manually hide installed fonts is also available. Hidden fonts remain installed but are not enumerated when an application asks for a list of available fonts, thus reducing the number of fonts to scroll through within the interface and also reducing memory usage. Windows 7 includes over 40 new fonts, including a new "Gabriola" font.
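The enumeration behaviour described above amounts to a simple filter: hidden fonts stay installed but are skipped when applications ask for the font list. A sketch, with illustrative data structures and font choices (not the actual Windows font APIs):

```python
# Sketch of Windows 7's "hidden fonts" enumeration: hidden fonts remain
# installed but are filtered out of the list applications see. The
# dictionary layout here is a hypothetical simplification.

installed_fonts = {
    "Segoe UI":  {"hidden": False},
    "Gabriola":  {"hidden": False},
    "MS Mincho": {"hidden": True},  # e.g. hidden for the user's locale
}

def enumerate_fonts(fonts):
    """Return only the fonts an application should be shown."""
    return sorted(name for name, meta in fonts.items()
                  if not meta["hidden"])

print(enumerate_fonts(installed_fonts))  # ['Gabriola', 'Segoe UI']
```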
The dialog box for fonts in Windows 7 has also been updated to display font previews within the interface, which allows users to preview fonts before selecting them. Previous versions of Windows only displayed the name of the font.
The ClearType Text Tuner, which was previously available as a Microsoft PowerToy for earlier Windows versions, has been integrated into, and updated for, Windows 7.
Microsoft would later backport Windows 8 Emoji features to Windows 7.
Devices
There are two major new user interface components for device management in Windows 7, "Devices and Printers" and "Device Stage". Both of these are integrated with Windows Explorer, and together provide a simplified view of what devices are connected to the computer, and what capabilities they support.
Devices and Printers
Devices and Printers is a new Control Panel interface that is directly accessible from the Start menu. Unlike the Device Manager Control Panel applet, which is still present, the icons shown on the Devices and Printers screen are limited to components of the system that a non-expert user will recognize as plug-in devices. For example, an external monitor connected to the system will be displayed as a device, but the internal monitor on a laptop will not. Device-specific features are available through the context menu for each device; an external monitor's context menu, for example, provides a link to the "Display Settings" control panel.
This new Control Panel applet also replaces the "Printers" window in prior versions of Windows; common printer operations such as setting the default printer, installing or removing printers, and configuring properties such as paper size are done through this control panel.
Windows 7 and Server 2008 R2 introduce print driver isolation, which improves the reliability of the print spooler by running printer drivers in a process separate from the spooler service. If a third-party print driver fails while isolated, it does not impact other drivers or the print spooler service.
Device Stage
Device Stage provides a centralized location for an externally connected multi-function device to present its functionality to the user. When a device such as a portable music player is connected to the system, the device appears as an icon on the task bar, as well as in Windows Explorer.
Windows 7 ships with high-resolution images of a number of popular devices, and is capable of connecting to the Internet to download images of devices it doesn't recognize. Opening the icon presents a window that displays actions relevant to that device. Screenshots of the technology presented by Microsoft suggest that a mobile phone could offer options for two-way synchronization, configuring ring-tones, copying pictures and videos, managing the device in Windows Media Player, and using Windows Explorer to navigate through the device. Other device status information such as free memory and battery life can also be shown. The actual per-device functionality is defined via XML files that are downloaded when the device is first connected to the computer, or are provided by the manufacturer on an installation disc.
Mobility enhancements
Multi-touch support
Hilton Locke, who worked on the Tablet PC team at Microsoft, reported on December 11, 2007 that Windows 7 would have new touch features on devices supporting multi-touch. An overview and demonstration of the multi-touch capabilities, including a virtual piano program, a mapping and directions program and a touch-aware version of Microsoft Paint, was given at the All Things Digital Conference on May 27, 2008; a video of the multi-touch capabilities was made available on the web later the same day.
Sensors
Windows 7 introduces native support for sensors, including accelerometer sensors, ambient light sensors, and location-based sensors; the operating system also provides a unified driver model for sensor devices. A notable use of this technology in Windows 7 is the operating system's adaptive display brightness feature, which automatically adjusts the brightness of a compatible computer's display based on environmental light conditions and factors. Gadgets developed for Windows 7 can also display location-based information. Applications for certain sensor capabilities can be developed without the requisite hardware.
Because data acquired by some sensors can be considered personally identifiable information, all sensors are disabled by default in Windows 7, and an account in Windows 7 requires administrative permissions to enable a sensor. Sensors also require user consent to share location data.
Power management
Battery notification messages
Unlike previous versions of Windows, Windows 7 is able to report when a laptop battery is in need of a replacement. The operating system works with design capabilities present in modern laptop batteries to report this information.
Hibernation improvements
The powercfg command enables the customization of the hibernation file size. By default, Windows 7 automatically sets the size of the hibernation file to 75% of a computer's total physical memory. The operating system also compresses the contents of memory during the hibernate process to minimize the possibility that the contents exceed the default size of the hibernation file.
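The default sizing rule is simple arithmetic. In the sketch below, the 50-100% bounds mirror the range powercfg accepts for the hiberfile size; the helper function itself is a hypothetical illustration, not Windows code:

```python
# Sketch of the default hiberfile sizing described above: 75% of
# physical RAM by default, configurable via powercfg within 50-100%.

def hiberfile_size(physical_ram_bytes, percent=75):
    """Return the hibernation file size for a given RAM size."""
    if not 50 <= percent <= 100:
        raise ValueError("percentage must be between 50 and 100")
    return physical_ram_bytes * percent // 100

eight_gib = 8 * 1024**3
print(hiberfile_size(eight_gib))  # 6442450944 bytes, i.e. 6 GiB
```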
Power analysis and reporting
Windows 7 introduces a new /Energy parameter for the powercfg command, which generates an HTML report of a computer's energy efficiency and displays information related to devices or settings.
USB suspension
Windows 7 can individually suspend USB hubs and supports selective suspend for all in-box USB class drivers.
Graphics
DirectX
Direct3D 11 is included with Windows 7. It is a strict superset of Direct3D 10.1, which was introduced in Windows Vista Service Pack 1 and Windows Server 2008.
Direct2D and DirectWrite are new hardware-accelerated vector graphics and font rendering APIs built on top of Direct3D 10, intended to replace GDI/GDI+ for screen-oriented native-code graphics and text drawing. They can be used from managed applications with the Windows API Code Pack.
Windows Advanced Rasterization Platform (WARP), a software rasterizer component for DirectX that provides all of the capabilities of Direct3D 10.0 and 10.1 in software.
DirectX Video Acceleration-High Definition (DXVA-HD)
Direct3D 11, Direct2D, DirectWrite, DXGI 1.1, WARP and several other components are currently available for Windows Vista SP2 and Windows Server 2008 SP2 by installing the Platform Update for Windows Vista.
Desktop Window Manager
First introduced in Windows Vista, the Desktop Window Manager (DWM) in Windows 7 has been updated to use version 10.1 of Direct3D API, and its performance has been improved significantly.
The Desktop Window Manager still requires at least a Direct3D 9-capable video card (now supported through a new device type introduced with the Direct3D 11 runtime).
With a video driver conforming to Windows Display Driver Model v1.1, the DXGI kernel in Windows 7 provides 2D hardware acceleration to APIs such as GDI, Direct2D and DirectWrite (though GDI+ was not updated to use this functionality). This allows the DWM to use significantly less system memory, and its usage no longer grows with the number of open windows, as it did in Windows Vista. Systems equipped with a WDDM 1.0 video card operate in the same fashion as in Windows Vista, using software-only rendering.
The Desktop Window Manager in Windows 7 also adds support for systems using multiple heterogeneous graphics cards from different vendors.
Other changes
Support for color depths of 30 and 48 bits is included, along with the wide color gamut scRGB (which for HDMI 1.3 can be converted and output as xvYCC). The video modes supported in Windows 7 are 16-bit sRGB, 24-bit sRGB, 30-bit sRGB, 30-bit with extended color gamut sRGB, and 48-bit scRGB.
Each user of Windows 7 and Server 2008 R2 has individual DPI settings, rather than the machine having a single setting as in previous versions of Windows. DPI settings can be changed by logging on and off, without needing to restart.
File system
Solid state drives
Over time, several technologies have been incorporated into subsequent versions of Windows to improve the performance of the operating system on traditional hard disk drives (HDDs) with rotating platters. Since solid-state drives (SSDs) differ from mechanical HDDs in some key areas (no moving parts, write amplification, a limited number of erase cycles allowed for reliable operation), it is beneficial to disable certain optimizations and add others specifically for SSDs.
Windows 7 incorporates many engineering changes to reduce the frequency of writes and flushes, which benefit SSDs in particular since each write operation wears the flash memory.
Windows 7 also makes use of the TRIM command. If supported by the SSD (not implemented on early devices), this optimizes when erase cycles are performed, reducing the need to erase blocks before each write and increasing write performance.
Several tools and techniques that were implemented in the past to reduce the impact of the rotational latency of traditional HDDs, most notably disk defragmentation, SuperFetch, ReadyBoost, and application launch prefetching, involve reorganizing (rewriting) the data on the platters. Since SSDs have no moving platters, this reorganization has no advantages, and may instead shorten the life of the solid state memory. Therefore, these tools are by default disabled on SSDs in Windows 7, except for some early generation SSDs that might still benefit.
Finally, partitions made with Windows 7's partition-creating tools are created with the SSD's alignment needs in mind, avoiding unwanted systematic write amplification.
Virtual hard disks
The Enterprise and Ultimate editions of Windows 7 incorporate support for the Virtual Hard Disk (VHD) file format. VHD files can be mounted as drives, created, and booted from, in the same way as WIM files. Furthermore, an installed version of Windows 7 can be booted and run from a VHD drive, even on non-virtual hardware, thereby providing a new way to multi boot Windows. Some features such as hibernation and BitLocker are not available when booting from VHD.
Disk partitioning
By default, a computer's disk is partitioned into two partitions: one of limited size for booting, BitLocker and running the Windows Recovery Environment and the second with the operating system and user files.
Removable media
Windows 7 has also seen improvements to the Safely Remove Hardware menu, including the ability to eject just one camera card at a time (from a single hub) and retain the ports for future use without a reboot; the labels of removable media are now also listed, rather than just the drive letter. Windows Explorer now by default only shows memory card reader ports in My Computer if they contain a card.
BitLocker to Go
BitLocker brings encryption support to removable disks such as USB drives. Such devices can be protected by a passphrase, a recovery key, or be automatically unlocked on a computer.
Boot performance
According to data gathered from the Microsoft Customer Experience Improvement Program (CEIP), 35% of Vista SP1 installations boot up in 30 seconds or less. The lengthier boot times on the remainder of the machines are mainly due to services or programs that are loaded but are not required when the system is first started. Microsoft's Mike Fortin, a distinguished engineer on the Windows team, noted in August 2008 that Microsoft had set aside a team to work solely on the issue, and that team aimed to "significantly increase the number of systems that experience very good boot times". They "focused very hard on increasing parallelism of driver initialization". Also, Microsoft aimed to "dramatically reduce" the number of system services, along with their demands on processors, storage, and memory.
Kernel and scheduling improvements
User-mode scheduler
The 64-bit versions of Windows 7 and Server 2008 R2 introduce a user-mode scheduling framework. On Microsoft Windows operating systems, scheduling of threads inside a process is handled by the kernel, ntoskrnl.exe. While for most applications this is sufficient, applications with large concurrent threading requirements, such as a database server, can benefit from having a thread scheduler in-process. This is because the kernel no longer needs to be involved in context switches between threads, and it obviates the need for a thread pool mechanism, as threads can be created and destroyed much more quickly when no kernel context switches are required.
Prior to Windows 7, Windows used a one-to-one relationship between user threads and kernel threads. It was always possible to build a rough many-to-one user-level scheduler (using user-level timer interrupts), but if a system call blocked on any one of the user threads, it would block the kernel thread and therefore all other user threads on the same scheduler. A many-to-one model also could not take full advantage of symmetric multiprocessing.
With Windows 7's user-mode scheduling, a program may configure one or more kernel threads as schedulers (one per logical processor desired), supplied by a programming language library, and then create a user-mode thread pool from which these UMS schedulers can draw. The kernel maintains a list of outstanding system calls, which allows the UMS threads to continue running without blocking the kernel thread. This configuration can be used as either many-to-one or many-to-many.
There are several benefits to a user-mode scheduler: context switching in user mode can be faster, UMS introduces cooperative multitasking, and a customizable scheduler gives the application more control over thread execution.
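The cooperative, in-process flavour of user-mode scheduling can be illustrated in miniature with generators standing in for user-mode threads; this is an analogy to the idea, not the Windows UMS API:

```python
# Toy cooperative user-mode scheduler: "threads" are Python generators
# that yield control voluntarily, so switching between them never
# involves the kernel. An analogy to UMS, not the real API.

from collections import deque

def scheduler(tasks):
    """Run generator-based tasks round-robin until all complete."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))  # run until the task yields
            ready.append(task)        # cooperative: re-queue it
        except StopIteration:
            pass                      # task finished
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"

print(scheduler([worker("A", 2), worker("B", 2)]))
# ['A:0', 'B:0', 'A:1', 'B:1']
```

The round-robin interleaving shows the scheduler switching between tasks entirely in user code; a blocking call inside one generator would, as in the pre-Windows 7 many-to-one model, stall every task on the same scheduler.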
Memory management and CPU parallelism
The memory manager is optimized to mitigate the problem of total memory consumption in the event of excessive cached read operations, which occurred on earlier releases of 64-bit Windows.
Support for up to 256 logical processors
Fewer hardware locks and greater parallelism
Timer coalescing: modern processors and chipsets can switch to very low power usage levels while the CPU is idle. In order to reduce the number of times the CPU enters and exits idle states, Windows 7 introduces the concept of "timer coalescing"; multiple applications or device drivers which perform actions on a regular basis can be set to occur at once, instead of each action being performed on its own schedule. This facility is available in both kernel mode, via the KeSetCoalescableTimer API (which would be used in place of KeSetTimerEx), and in user mode with the SetWaitableTimerEx Windows API call (which replaces SetWaitableTimer).
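The idea can be sketched as batching timers whose tolerance windows overlap onto a single wakeup; the data layout below is a hypothetical simplification of what the kernel does with coalescable timers:

```python
# Sketch of timer coalescing: each timer has a due time and a tolerance,
# and timers whose tolerance windows overlap share one wakeup, reducing
# how often the CPU leaves its idle state. Illustrative only; the real
# KeSetCoalescableTimer / SetWaitableTimerEx APIs act on timer objects.

def coalesce(timers):
    """timers: list of (due_ms, tolerance_ms) pairs.
    Return wakeups as (wake_time_ms, [timer indices]) pairs."""
    order = sorted(range(len(timers)), key=lambda i: timers[i][0])
    wakeups = []
    for i in order:
        due, tol = timers[i]
        # Reuse the previous wakeup if it falls inside [due - tol, due].
        if wakeups and due - tol <= wakeups[-1][0] <= due:
            wakeups[-1][1].append(i)
        else:
            wakeups.append((due, [i]))
    return wakeups

# Three timers (100ms exact, 110ms with 15ms slack, 200ms with 10ms
# slack) collapse into two wakeups instead of three:
print(coalesce([(100, 0), (110, 15), (200, 10)]))
# [(100, [0, 1]), (200, [2])]
```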
Multimedia
Windows Media Center
Windows Media Center in Windows 7 has retained much of the design and feel of its predecessor, but with a variety of user interface shortcuts and browsing capabilities. Playback of H.264 video both locally and through a Media Center Extender (including the Xbox 360) is supported.
Some notable enhancements in Windows 7 Media Center include a new mini guide, a new scrub bar, the option to color code the guide by show type, and internet content that is more tightly integrated with regular TV via the guide. All Windows 7 versions now support up to four tuners of each type (QAM, ATSC, CableCARD, NTSC, etc.).
When browsing the media library, items that don't have album art are shown in a range of foreground and background color combinations instead of using white text on a blue background. When the left or right remote control buttons are held down to browse the library quickly, a two-letter prefix of the current album name is prominently shown as a visual aid. The Picture Library includes new slideshow capabilities, and individual pictures can be rated.
Also, while browsing a media library, a new column appears at the top named "Shared." This allows users to access shared media libraries on other Media Center PCs from directly within Media Center.
For television support, the Windows Media Center "TV Pack" released by Microsoft in 2008 is incorporated into Windows Media Center. This includes support for CableCARD and North American (ATSC) clear QAM tuners, as well as creating lists of favorite stations.
A gadget for Windows Media Center is also included.
Format support
Windows 7 includes AVI, WAV, AAC/ADTS file media sinks to read the respective formats, an MPEG-4 file source to read MP4, M4A, M4V, MP4V MOV and 3GP container formats and an MPEG-4 file sink to output to MP4 format. Windows 7 also includes a media source to read MPEG transport stream/BDAV MPEG-2 transport stream (M2TS, MTS, M2T and AVCHD) files.
Transcoding (encoding) support is not exposed through any built-in Windows application but codecs are included as Media Foundation Transforms (MFTs). In addition to Windows Media Audio and Windows Media Video encoders and decoders, and ASF file sink and file source introduced in Windows Vista, Windows 7 includes an H.264 encoder with Baseline profile level 3 and Main profile support and an AAC Low Complexity (AAC-LC) profile encoder.
For playback of various media formats, Windows 7 also introduces an H.264 decoder with Baseline, Main, and High profile support up to level 5.1; AAC-LC and HE-AAC v1 (SBR) multichannel and HE-AAC v2 (PS) stereo decoders; MPEG-4 Part 2 Simple Profile and Advanced Simple Profile decoders, which can decode popular codec implementations such as DivX, Xvid and Nero Digital; and MJPEG and DV MFT decoders for AVI. Windows Media Player 12 uses the built-in Media Foundation codecs to play these formats by default.
Windows 7 also updates the DirectShow filters introduced in Windows Vista for playback of MPEG-2 and Dolby Digital to decode H.264, AAC, HE-AAC v1 and v2 and Dolby Digital Plus (including downmixing to Dolby Digital).
Security
Action Center, formerly Windows Security Center, now encompasses both security and maintenance. It was called Windows Health Center and Windows Solution Center in earlier builds.
A new user interface for User Account Control has been introduced, which provides the ability to select from four different levels of notification; one of these notification settings, Default, is new to Windows 7. Geo-tracking capabilities are also available in Windows 7. The feature is disabled by default; when enabled, the user has only limited control over which applications can track their location.
The Encrypting File System (EFS) supports elliptic-curve cryptographic (ECC) algorithms in Windows 7. For backward compatibility with previous releases of Windows, Windows 7 supports a mixed-mode operation of ECC and RSA algorithms. EFS self-signed certificates, when using ECC, use a 256-bit key by default. EFS can be configured to use 1024-, 2048-, 4096-, 8192- or 16384-bit keys when using self-signed RSA certificates, or 256-, 384- or 512-bit keys when using ECC certificates.
In Windows Vista, the Protected User-Mode Audio (PUMA) content protection facilities are only available to applications that are running in a Protected Media Path environment. Because only the Media Foundation application programming interface could interact with this environment, a media player application had to be designed to use Media Foundation. In Windows 7, this restriction is lifted. PUMA also incorporates stricter enforcement of "Copy Never" bits when using Serial Copy Management System (SCMS) copy protection over an S/PDIF connection, as well as with High-bandwidth Digital Content Protection (HDCP) over HDMI connections.
Biometrics
Windows 7 includes the new Windows Biometric Framework. This framework consists of a set of components that standardizes the use of fingerprint biometric devices. In prior releases of Microsoft Windows, biometric hardware device manufacturers were required to provide a complete stack of software to support their device, including device drivers, software development kits, and support applications. Microsoft noted in a white paper on the Windows Biometric Framework that the proliferation of these proprietary stacks resulted in compatibility issues, compromised the quality and reliability of the system, and made servicing and maintenance more difficult. By incorporating the core biometric functionality into the operating system, Microsoft aims to bring biometric device support on par with other classes of devices.
A new Control Panel called Biometric Device Control Panel is included which provides an interface for deleting stored biometric information, troubleshooting, and enabling or disabling the types of logins that are allowed using biometrics. Biometric settings can also be configured using Group Policy.
Networking
DirectAccess, a VPN tunnel technology based on IPv6 and IPsec. DirectAccess requires domain-joined machines, Windows Server 2008 R2 on the DirectAccess server, at least Windows Server 2008 domain controllers and a PKI to issue authentication certificates.
BranchCache, a WAN optimization technology.
The Bluetooth stack includes improvements introduced in the Windows Vista Feature Pack for Wireless, namely, Bluetooth 2.1+EDR support and remote wake from S3 or S4 support for self-powered Bluetooth modules.
NDIS 6.20 (Network Driver Interface Specification)
WWAN (Mobile broadband) support (driver model based on NDIS miniport driver for CDMA and GSM device interfaces, Connection Manager support and Mobile Broadband COM and COM Interop API).
Wireless Hosted Network capabilities: The Windows 7 wireless LAN service supports two new functions – Virtual Wi-Fi, which allows a single wireless network adapter to act like two client devices, or a software-based wireless access point (SoftAP), which acts as both a wireless hotspot in infrastructure mode and a wireless client at the same time. This feature is not exposed through the GUI; however, the Virtual WiFi Miniport adapter can be installed and enabled for wireless adapters with drivers that support a hosted network by using the command netsh wlan set hostednetwork mode=allow "ssid=<network SSID>" "key=<wlan security key>" keyusage=persistent|temporary at an elevated command prompt. The wireless SoftAP can afterwards be started using the command netsh wlan start hostednetwork. Windows 7 also supports WPA2-PSK/AES security for the hosted network, but DNS resolution for clients requires it to be used with Internet Connection Sharing or a similar feature.
SMB 2.1, which includes minor performance enhancements over SMB2, such as a new opportunistic locking mechanism.
RDP 7.0
Background Intelligent Transfer Service 4.0
HomeGroup
Alongside the workgroup system used by previous versions, Windows 7 adds a new ad hoc home networking system known as HomeGroup. The system uses a password to join computers into the group, and allows users' libraries, along with individual files and folders, to be shared between multiple computers. Only computers running Windows 7 to Windows 10 version 1709 can create or join a HomeGroup; however, users can make files and printers shared in a HomeGroup accessible to Windows XP and Windows Vista through a separate account, dedicated to sharing HomeGroup content, that uses traditional Windows sharing. HomeGroup support was deprecated in Windows 10 and has been removed from Windows 10 version 1803 and later.
HomeGroup as a concept is very similar to a feature slated for Windows Vista, known as Castle, which would have made it possible to have an identification service for all members on the network, without a centralized server.
HomeGroup was created in response to the need for a simple sharing model for inexperienced users who need to share files without wrestling with user accounts, security descriptors and share permissions. To that end, Microsoft previously created Simple File Sharing mode in Windows XP which, once enabled, caused all connected computers to be authenticated as Guest. Under this model, a given file or folder was either shared with anyone who connected to the network (even unauthorized parties in range of the wireless network) or was not shared at all. In a HomeGroup, however:
Communication between HomeGroup computers is encrypted with a pre-shared password.
A certain file or folder can be shared with the entire HomeGroup (anyone who joins) or a certain person only.
HomeGroup computers can also be a member of a Windows domain or Windows workgroup at the same time and take advantage of those file sharing mechanisms.
Only computers that support HomeGroup (Windows 7 to Windows 10 version 1709) can join the network.
Windows Firewall
Windows 7 adds support for multiple firewall profiles. The Windows Firewall in Windows Vista dynamically changes which network traffic is allowed or blocked based on the location of the computer (based on which network it is connected to). This approach falls short if the computer is connected to more than one network at the same time (as for a computer with both an Ethernet and a wireless interface). In this case, Vista applies the profile that is more secure to all network connections. This is often not desirable; Windows 7 resolves this by being able to apply a separate firewall profile to each network connection.
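The difference between the Vista and Windows 7 behaviour can be sketched as follows; the profile names come from Windows, but the restrictiveness ordering and data layout are an illustrative simplification:

```python
# Sketch of per-connection firewall profiles: Vista applied the single
# most restrictive profile to all interfaces, while Windows 7 applies a
# profile per connection. Restrictiveness ranking is illustrative.

RESTRICTIVENESS = {"domain": 0, "private": 1, "public": 2}

def vista_profile(connections):
    """One profile for everything: the most restrictive network wins."""
    return max(connections.values(), key=lambda p: RESTRICTIVENESS[p])

def windows7_profiles(connections):
    """Each connection keeps the profile of its own network."""
    return dict(connections)

nets = {"Ethernet": "domain", "Wi-Fi": "public"}
print(vista_profile(nets))      # 'public' is applied to both interfaces
print(windows7_profiles(nets))  # each interface keeps its own profile
```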
DNSSEC
Windows 7 and Windows Server 2008 R2 introduce support for Domain Name System Security Extensions (DNSSEC), a set of specifications for securing certain kinds of information provided by the Domain Name System (DNS) as used on Internet Protocol (IP) networks. DNSSEC employs digital signatures to ensure the authenticity of DNS data received from a DNS server, which protect against DNS cache poisoning attacks.
Management features
Windows 7 contains Windows PowerShell 2.0 out-of-the-box, which is also available as a download to install on older platforms:
Windows Troubleshooting Platform
Windows PowerShell Integrated Scripting Environment
PowerShell Remoting
Other new management features include:
AppLocker (a set of Group Policy settings that evolved from Software Restriction Policies, to restrict which applications can run on a corporate network, including the ability to restrict based on the application's version number or publisher)
Group Policy Preferences (also available as a download for Windows XP and Windows Vista).
The Windows Automation API (also available as a download for Windows XP and Windows Vista).
Upgraded components
Windows 7 includes Internet Explorer 8, .NET Framework 3.5 SP1, Internet Information Services (IIS) 7.5, Windows Installer 5.0 and a standalone XPS Viewer. Paint, Calculator, Resource Monitor, on-screen keyboard, and WordPad have also been updated.
Paint and WordPad feature a Ribbon interface similar to the one introduced in Office 2007, with both sporting several new features. WordPad supports Office Open XML and ODF file formats.
Calculator has been rewritten, with multiline capabilities including Programmer and Statistics modes, unit conversion, and date calculations. Calculator was also given a graphical facelift, the first since Windows 95 in 1995 and Windows NT 4.0 in 1996.
Resource Monitor includes an improved RAM usage display and supports display of TCP/IP ports being listened to, filtering processes using networking, filtering processes with disk activity and listing and searching process handles (e.g. files used by a process) and loaded modules (files required by an executable file, e.g. DLL files).
Microsoft Magnifier, an accessibility utility for low-vision users, has been dramatically improved. Magnifier now supports a full-screen zoom feature, whereas previous Windows versions docked the Magnifier to the top of the screen. The new full-screen feature is enabled by default; however, it requires Windows Aero. If Windows is set to the Windows 7 Basic, Windows Classic, or High Contrast themes, Magnifier functions as it did in Windows Vista and earlier.
Windows Installer 5.0 supports installing and configuring Windows services, and provides developers with more control over setting permissions during software installation. Neither of these features is available on prior versions of Windows; custom actions will continue to be required for Windows Installer packages that need to implement these features there.
Other features
Windows 7 improves the Tablet PC Input Panel to make faster corrections using new gestures, supports text prediction in the soft keyboard and introduces a new Math Input Panel for inputting math into programs that support MathML. It recognizes handwritten math expressions and formulas. Additional language support for handwriting recognition can be gained by installing the respective MUI pack for that language (also called language pack).
Windows 7 introduces a new Problem Steps Recorder tool that enables users to record their interaction with software for analysis and support. The feature can be used to replicate a problem to show support when and where a problem occurred.
As opposed to the blank start-up screen in Windows Vista, Windows 7's start-up screen consists of an animation featuring four colored light balls (one red, one yellow, one green, and one blue). They twirl around for a few seconds and then join together to form a glowing Windows logo. This only occurs on displays with a vertical resolution of 768 pixels or higher, as the animation is 1024x768. Any screen with a resolution below this displays the same startup screen that Vista used.
The Starter Edition of Windows 7 can run an unlimited number of applications, compared to only 3 in Windows Vista Starter. Microsoft had initially intended to ship Windows 7 Starter Edition with this limitation, but announced after the release of the Release Candidate that this restriction would not be imposed in the final release.
For developers, Windows 7 includes a new networking API with support for building SOAP-based web services in native code (as opposed to .NET-based WCF web services), new features to shorten application install times, reduced UAC prompts, simplified development of installation packages, and improved globalization support through a new Extended Linguistic Services API.
If an application crashes twice in a row, Windows 7 will automatically attempt to apply a shim. If an application fails to install a similar self-correcting fix, a tool launches that asks the user some questions about the application.
Windows 7 includes an optional TIFF IFilter that enables indexing of TIFF documents by reading them with optical character recognition (OCR), thus making their text content searchable. The TIFF IFilter supports the Adobe TIFF Revision 6.0 specification and four compression schemes: LZW, JPEG, CCITT v4, and CCITT v6.
The Windows Console now adheres to the current Windows theme, instead of showing controls from the Windows Classic theme.
The games Internet Spades, Internet Backgammon and Internet Checkers, which were removed from Windows Vista, were restored in Windows 7.
Users can disable many more Windows components than was possible in Windows Vista. The components that can now be disabled include Handwriting Recognition, Internet Explorer, Windows DVD Maker, Windows Fax and Scan, Windows Gadget Platform, Windows Media Center, Windows Media Player, Windows Search, and the XPS Viewer (with its services).
Windows XP Mode is a fully functioning copy of 32-bit Windows XP Professional SP3 running in a virtual machine in Windows Virtual PC (as opposed to Hyper-V) running on top of Windows 7. Through the use of the RDP protocol, it allows applications incompatible with Windows 7 to be run on the underlying Windows XP virtual machine, but still to appear to be part of the Windows 7 desktop, thereby sharing the native Start Menu of Windows 7 as well as participating in file type associations. It is not distributed with Windows 7 media, but is offered as a free download to users of the Professional, Enterprise and Ultimate editions from Microsoft's web site. Users of Home Premium who want Windows XP functionality on their systems can download Windows Virtual PC free of charge, but must provide their own licensed copy of Windows XP. XP Mode is intended for consumers rather than enterprises, as it offers no central management capabilities. Microsoft Enterprise Desktop Virtualization (Med-V) is available for the enterprise market.
Native support for Hyper-V virtual machines through the inclusion of VMBus integration drivers.
AVCHD camera support and Universal Video Class 1.1
Protected Broadcast Driver Architecture (PBDA) for TV tuner cards, first implemented in Windows Media Center TV Pack 2008 for Windows Vista.
Multi-function devices and Device Containers: Prior to Windows 7, every device attached to the system was treated as a single functional end-point, known as a devnode, that has a set of capabilities and a "status". While this is appropriate for single-function devices (such as a keyboard or scanner), it does not accurately represent multi-function devices such as a combined printer, fax machine, and scanner, or web-cams with a built-in microphone. In Windows 7, the drivers and status information for multi-function device can be grouped together as a single "Device Container", which is presented to the user in the new "Devices and Printers" Control Panel as a single unit. This capability is provided by a new Plug and Play property, ContainerID, which is a Globally Unique Identifier that is different for every instance of a physical device. The Container ID can be embedded within the device by the manufacturer, or created by Windows and associated with each devnode when it is first connected to the computer. In order to ensure the uniqueness of the generated Container ID, Windows will attempt to use information unique to the device, such as a MAC address or USB serial number. Devices connected to the computer via USB, IEEE 1394 (FireWire), eSATA, PCI Express, Bluetooth, and Windows Rally's PnP-X support can make use of Device Containers.
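The grouping behaviour can be sketched in Python. The field names and the UUID derivation below are illustrative assumptions for the sketch, not the actual Windows Plug and Play implementation:

```python
import uuid
from collections import defaultdict

def container_id(devnode):
    """Return the embedded ContainerID if the manufacturer supplied one,
    otherwise derive a stable ID from device-unique data (e.g. a MAC
    address or USB serial number), as the text describes."""
    if devnode.get("embedded_container_id"):
        return devnode["embedded_container_id"]
    unique = devnode.get("mac") or devnode.get("usb_serial")
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, unique))

# Four devnodes: three functions of one multi-function device, plus a keyboard.
devnodes = [
    {"function": "printer",  "usb_serial": "MFD-0042"},
    {"function": "scanner",  "usb_serial": "MFD-0042"},
    {"function": "fax",      "usb_serial": "MFD-0042"},
    {"function": "keyboard", "usb_serial": "KB-7"},
]

# Devnodes sharing a ContainerID are presented as one Device Container.
containers = defaultdict(list)
for node in devnodes:
    containers[container_id(node)].append(node["function"])

for cid, functions in containers.items():
    print(cid, functions)
```

Applied to the sample devnodes above, the three functions of the combined printer/fax/scanner collapse into a single container, while the keyboard remains its own.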
Windows 7 also contains a new FireWire (IEEE 1394) stack that fully supports IEEE 1394b with S800, S1600 and S3200 data rates.
The ability to join a domain offline.
Service Control Manager in conjunction with the Windows Task Scheduler supports trigger-start services.
See also
References
External links
What's New in Windows 7 for IT Pros (RC)
Windows 7 Support
Windows 7
Windows 7
PL360
PL360 (or PL/360) is a system programming language designed by Niklaus Wirth and written by Wirth, Joseph W. Wells Jr., and Edwin Satterthwaite Jr. for the IBM System/360 computer at Stanford University. A description of PL360 was published in early 1968, although the implementation was probably completed before Wirth left Stanford in 1967.
Description
PL/360 is compiled by a one-pass compiler and has a syntax similar to ALGOL. It provides facilities for specifying exact machine instructions and registers, similar to assembly language, but also provides features commonly found in high-level programming languages, such as complex arithmetic expressions and control structures. Wirth used PL360 to create ALGOL W.
Data types are:
Byte or character – 1 byte
Short integer – 2 bytes, interpreted as an integer in two's complement binary notation
Integer or logical – 4 bytes, interpreted as an integer in two's complement binary notation
Real – 4 bytes, interpreted as a base-16 (hexadecimal) short floating-point arithmetic number
Long real – 8 bytes, interpreted as a base-16 long floating-point number
Registers can contain integer, real, or long real.
Individual System/360 instructions can be generated inline using the PL360 "function statement", which defined an instruction by format and operation code. Function arguments were assigned sequentially to fields in the instruction.
Example
R0, R1, and R2, and FLAG are predeclared names.
BEGIN INTEGER BUCKET;
IF FLAG THEN
BEGIN BUCKET := R0; R0 := R1; R1 := R2;
R2 := BUCKET;
END ELSE
BEGIN BUCKET := R2; R2 := R1; R1 := R0;
R0 := BUCKET;
END
RESET(FLAG);
END
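The effect of the PL360 fragment above can be restated in Python; this is an illustrative analogue of its semantics, not generated code:

```python
def rotate(flag, r0, r1, r2):
    """Rotate three register values one step, direction chosen by FLAG,
    via a temporary BUCKET; the flag is then reset, as RESET(FLAG) does."""
    if flag:
        bucket = r0                  # BUCKET := R0; R0 := R1; R1 := R2; R2 := BUCKET
        r0, r1, r2 = r1, r2, bucket
    else:
        bucket = r2                  # BUCKET := R2; R2 := R1; R1 := R0; R0 := BUCKET
        r0, r1, r2 = bucket, r0, r1
    return False, r0, r1, r2

print(rotate(True, 1, 2, 3))   # (False, 2, 3, 1)
print(rotate(False, 1, 2, 3))  # (False, 3, 1, 2)
```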
Implementation
Wirth was at Stanford between 1963 and 1967, during the earlier part of which he was developing his Euler compiler and interpreter, the sources of which are dated 1965. Also in 1965, Stanford updated their drum memory based Burroughs large systems B5000 to a disk storage based B5500.
Since the target IBM S/360 (which was to replace an existing IBM 7090) was not installed until 1967, the initial implementation of PL360 was written in ALGOL and tested on Stanford's B5500. Once working, the compiler was then recoded in PL360, recompiled on the Burroughs system, and moved as a binary file to the S/360.
The B5500 is programmed in a high-level ALGOL-derived language, Executive Systems Problem Oriented Language (ESPOL), and PL360 was intended to bring a comparable facility to the IBM mainframe architecture, although it lacked major facilities of both Assembler F and ESPOL. This intent was largely ignored, with programmers continuing to use implementations of IBM's macro assemblers.
However, in the early 1970s, PL360 was extended to provide more capabilities, and was the programming language of choice for developing Stanford Physics Information Retrieval System (SPIRES), Stanford's Database Management System.
See also
IBM PL/S
High-level assembler
Notes
References
External links
PL360 Textbook
PL360@Everything2
Procedural programming languages
IBM System/360 mainframe line
Programming languages created in 1966
Olivetti P6040
The Olivetti P6040 was a personal computer, described by its maker as a personal minicomputer. The P6040 was programmable in Mini BASIC and featured a floppy disk drive that used proprietary 2.5-inch sleeveless disks called "Minidisk". Produced from 1977, it was the first Olivetti computer based on a microprocessor (the Intel 8080) rather than on a TTL-logic CPU.
Designed by Pier Giorgio Perotto, it was presented at the Hannover Messe in April 1975 together with the P6060, whose hardware used TTL technology. Both had a brown-colored case.
The P6040 was small in size and weight, thanks to the introduction of the microprocessor (a first for Olivetti).
Another innovation in the model was the introduction of the light-emitting diode display. The design was by Mario Bellini.
See also
Olivetti
Bibliography
(it) La minimizzazione delle grammatiche libere da contesto, Angelo Monfroglio - Politecnico di Milano, 20 dicembre 1974
References
Olivetti personal computers
Computer-related introductions in 1975
Vkernel
A virtual kernel architecture (vkernel) is an operating system virtualisation paradigm where kernel code can be compiled to run in the user space, for example, to ease debugging of various kernel-level components, in addition to general-purpose virtualisation and compartmentalisation of system resources. It is used by DragonFly BSD in its vkernel implementation since DragonFly 1.7, and was first released in the stable branch with DragonFly 1.8.
The long-term goal, in addition to easing kernel development, is to make it easier to support internet-connected computer clusters without compromising local security.
Similar concepts exist in other operating systems as well; in Linux, a similar virtualisation concept is known as user-mode Linux; whereas in NetBSD since the summer of 2007, it has been the initial focus of the rump kernel infrastructure.
The virtual kernel concept is nearly the exact opposite of the unikernel concept — with vkernel, kernel components get to run in userspace to ease kernel development and debugging, supported by a regular operating system kernel; whereas with a unikernel, userspace-level components get to run directly in kernel space for extra performance, supported by baremetal hardware or a hardware virtualisation stack. However, both vkernels and unikernels can be used for similar tasks as well, for example, to self-contain software to a virtualised environment with low overhead. In fact, NetBSD's rump kernel, originally having a focus of running kernel components in userspace, has since shifted into the unikernel space as well (going after the anykernel moniker for supporting both paradigms).
The vkernel concept is different from FreeBSD jail in that jail is only meant for resource isolation, and cannot be used to develop and test new kernel functionality in the userland, because each jail is sharing the same kernel. (DragonFly, however, still has FreeBSD jail support as well.)
In DragonFly, the vkernel can be thought of as a first-class computer architecture, comparable to i386 or amd64, and, according to Matthew Dillon circa 2007, can be used as a starting point for porting DragonFly BSD to new architectures.
DragonFly's vkernel is supported by the host kernel through new system calls that help manage virtual memory address space (vmspace) — vmspace_create() et al., as well as extensions to several existing system calls like mmap's madvise — mcontrol.
See also
user-mode Linux
rump kernel
References
External links
2006 software
BSD software
Computer architecture
Computer performance
DragonFly BSD
Free software programmed in C
Free virtualization software
Operating system kernels
Operating system security
Operating system technology
System administration
Virtual machines
Virtualization software
Software using the BSD license
OSI protocols
The Open Systems Interconnection protocols are a family of information exchange standards developed jointly by the ISO and the ITU-T. The standardization process began in 1977.
While the seven-layer OSI model is often used as a reference for teaching and documentation, the protocols originally conceived for the model did not gain popularity, and only X.400, X.500, and IS-IS have achieved lasting impact. The goal of an open-standard protocol suite instead has been met by the Internet protocol suite, maintained by the Internet Engineering Task Force (IETF).
Overview
The OSI protocol stack is structured into seven conceptual layers. The layers form a hierarchy of functionality starting with the physical hardware components to the user interfaces at the software application level. Each layer receives information from the layer above, processes it and passes it down to the next layer. Each layer adds encapsulation information (header) to the incoming information before it is passed to the lower layer. Headers generally include address of source and destination, error control information, protocol identification and protocol parameters such as flow control options and sequence numbers.
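The per-layer encapsulation described above can be sketched as follows; the header contents are placeholders, not real protocol headers:

```python
def encapsulate(payload: bytes, layers) -> bytes:
    """Each layer prepends its own header to the data handed down
    from the layer above (header contents here are placeholders for
    addresses, error-control fields, sequence numbers, etc.)."""
    for name in layers:
        header = f"[{name} hdr]".encode()
        payload = header + payload
    return payload

# Walking down the stack, so the data-link header ends up outermost.
stack = ["presentation", "session", "transport", "network", "data-link"]
frame = encapsulate(b"user data", stack)
print(frame)
```

The outermost header belongs to the lowest layer traversed, which is why a frame on the wire begins with data-link fields and ends with the user's payload.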
Layer 1: physical layer
This layer deals with the physical plugs and sockets and electrical specification of signals only.
This is the medium over which the digital signals are transmitted. It can be twisted pair, coaxial cable, optical fiber, wireless, or other transmission media.
Layer 2: data link layer
The data link layer packages raw bits from the physical layer into frames (logical, structured packets for data). It is specified in ITU-T Rec. X.212 [ISO/IEC 8886], ITU-T Rec. X.222 and others. This layer is responsible for transferring frames from one host to another and may perform error checking. It consists of two sublayers: Media Access Control (MAC) and Logical Link Control (LLC).
Layer 3: network layer
Connectionless Network Service (CLNS) – ITU-T Rec. X.213 [ISO/IEC 8348]. SCCP is based on X.213.
Connectionless Network Protocol (CLNP) – ITU-T Rec. X.233 [ISO/IEC 8473-1].
Connection-Oriented Network Service (CONS) – ITU-T Rec. X.213 [ISO/IEC 8348].
Connection-Oriented Network Protocol (X.25) – ITU-T Rec. X.223 [ISO/IEC 8878]. This is the use of the X.25 protocol to provide the CONS.
Network Fast Byte Protocol – ISO/IEC 14700
End System to Intermediate System Routing Exchange Protocol (ES-IS) - ISO/IEC 9452 (reprinted in RFC 995).
Intermediate System to Intermediate System Intra-domain Routing Protocol (IS-IS) - ISO/IEC 10589 (reprinted in RFC 1142), later adapted for the TCP/IP model.
End System Routing Information Exchange Protocol for use with ISO/IEC 8878 (SNARE) – ITU-T Rec. X.116 [ISO/IEC 10030].
This layer is in charge of transferring data between systems in a network, using network-layer addresses of machines to keep track of destinations and sources. Routers (and layer-3 switches) operate here, making routing decisions and managing traffic (flow control, error checking, routing). The layer thus handles end-to-end transmission of data across the network.
Layer 4: transport layer
The connection-mode and connectionless-mode transport services are specified by ITU-T Rec. X.214 [ISO/IEC 8072]; the protocol that provides the connection-mode service is specified by ITU-T Rec. X.224 [ISO/IEC 8073], and the protocol that provides the connectionless-mode service is specified by ITU-T Rec. X.234 [ISO/IEC 8602].
Transport Protocol Class 0 (TP0)
Transport Protocol Class 1 (TP1)
Transport Protocol Class 2 (TP2)
Transport Protocol Class 3 (TP3)
Transport Protocol Class 4 (TP4)
Transport Fast Byte Protocol – ISO 14699
The transport layer transfers data between source and destination processes. Generally, two connection modes are recognized, connection-oriented or connectionless. Connection-oriented service establishes a dedicated virtual circuit and offers various grades of guaranteed delivery, ensuring that data received is identical to data transmitted. Connectionless mode provides only best-effort service without the built-in ability to correct errors, which includes complete loss of data without notifying the data source of the failure. No logical connection, and no persistent state of the transaction exists between the endpoints, lending the connectionless mode low overhead and potentially better real-time performance for timing-critical applications such as voice and video transmissions.
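The two transport modes have direct analogues in the Internet protocol suite. The sketch below uses TCP and UDP sockets on the loopback interface (not the OSI TP0–TP4 protocols themselves) to illustrate connection-oriented versus connectionless transfer:

```python
import socket

# Connectionless mode (UDP as the analogue): no connection setup,
# best-effort delivery, no persistent state between the endpoints.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"datagram", rx.getsockname())
data, _ = rx.recvfrom(1024)

# Connection-oriented mode (TCP as the analogue): a virtual circuit
# is established first; delivery is ordered and error-checked.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()
cli.sendall(b"stream")
stream_data = conn.recv(1024)

print(data, stream_data)
for s in (rx, tx, srv, cli, conn):
    s.close()
```

Note the asymmetry: the UDP sender transmits without knowing whether anyone is listening, while the TCP client cannot send at all until the connection is established.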
Layer 5: session layer
Session service – ITU-T Rec. X.215 [ISO/IEC 8326]
Connection-oriented Session protocol – ITU-T Rec. X.225 [ISO/IEC 8327-1]
Connectionless Session protocol – ITU-T Rec. X.235 [ISO/IEC 9548-1]
The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, and half-duplex or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls.
Layer 6: presentation layer
Presentation service – ITU-T Rec. X.216 [ISO/IEC 8822]
Connection-oriented Presentation protocol – ITU-T Rec. X.226 [ISO/IEC 8823-1]
Connectionless Presentation protocol – ITU-T Rec. X.236 [ISO/IEC 9576-1]
This layer translates data between the formats used by the application layer and those used on the network, and performs encryption and decryption. Formats such as MIDI, MPEG, and GIF are presentation-layer formats shared by different applications.
Layer 7: application layer
Common-Application Service Elements (CASEs)
Association Control Service Element (ACSE) – ITU-T Rec. X.217 [ISO/IEC 8649], ITU-T Rec. X.227 [ISO/IEC 8650-1], ITU-T Rec. X.237 [ISO/IEC 10035-1].
Reliable Transfer Service Element (RTSE) – ITU-T Rec. X.218 [ISO/IEC 9066-1], ITU-T Rec. X.228 [ISO/IEC 9066-2].
Remote Operations Service Element (ROSE) – ITU-T Rec. X.219 [ISO/IEC 9072-1], ITU-T Rec. X.229 [ISO/IEC 9072-2]. TCAP is related to X.219.
Commitment, Concurrency, and Recovery service element (CCRSE)
Security Exchange Service Element (SESE)
This keeps track of how each application talks to another application. Destination and source addresses are linked to specific applications.
Application processes
Common management information protocol (CMIP) – ISO 9596 / X.700
Directory services (DS) – X.500, later modified for the TCP/IP stack as LDAP
File transfer, access, and management (FTAM)
Message handling system (MHS) – X.400
Virtual terminal protocol (VT) - ISO 9040/9041
Remote Database Access (RDA)
Distributed Transaction Processing (OSI TP)
Interlibrary Loan Application Protocol (ILAP)
Document Transfer And Manipulation (DTAM)
Document Printing Application (DPA)
Document Filing and Retrieval (DFR)
Routing protocols
Intermediate System to Intermediate System (IS-IS) – ISO 10589 (reprinted in RFC 1142)
End System to Intermediate System (ES-IS) – ISO 9542 (reprinted in RFC 995)
Interdomain Routing Protocol (IDRP) – ISO 10747
See also
Protocol stack
Protocol Wars
WAP protocol suite
References
Network architecture
Reference models
Serviceability (computer)
In software engineering and hardware engineering, serviceability (also known as supportability) is one of the -ilities or aspects (from IBM's RAS(U) (Reliability, Availability, Serviceability, and Usability)). It refers to the ability of technical support personnel to install, configure, and monitor computer products, identify exceptions or faults, debug or isolate faults to root cause analysis, and provide hardware or software maintenance in pursuit of solving a problem and restoring the product into service. Incorporating serviceability facilitating features typically results in more efficient product maintenance and reduces operational costs and maintains business continuity.
Examples of features that facilitate serviceability include:
Help desk notification of exceptional events (e.g., by electronic mail or by sending text to a pager)
Network monitoring
Documentation
Event logging / Tracing (software)
Logging of program state, such as
Execution path and/or local and global variables
Procedure entry and exit, optionally with incoming and return variable values (see: subroutine)
Exception block entry, optionally with local state (see: exception handling)
Software upgrade
Graceful degradation, where the product is designed to allow recovery from exceptional events without intervention by technical support staff
Hardware replacement or upgrade planning, where the product is designed to allow efficient hardware upgrades with minimal computer system downtime (e.g., hot-swap components)
Serviceability engineering may also incorporate some routine system maintenance related features (see Operations, Administration and Maintenance (OA&M)).
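Procedure entry/exit logging of the kind listed above can be sketched with a Python decorator; this is an illustrative sketch, not tied to any particular product:

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("serviceability")

def traced(fn):
    """Log procedure entry and exit with incoming and return values,
    and log exception-block entry, so support staff can reconstruct
    the execution path from the event log."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.debug("enter %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        try:
            result = fn(*args, **kwargs)
        except Exception:
            log.exception("exception in %s", fn.__name__)
            raise
        log.debug("exit %s -> %r", fn.__name__, result)
        return result
    return wrapper

@traced
def divide(a, b):
    return a / b

print(divide(10, 4))  # 2.5
```

A failed call (say, division by zero) leaves an "exception in divide" record with a traceback in the log before the exception propagates, which is exactly the diagnostic trail serviceability features aim to provide.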
A service tool is defined as a facility or feature, closely tied to a product, that provides capabilities and data so as to service (analyze, monitor, debug, repair, etc.) that product. Service tools can provide broad ranges of capabilities. Regarding diagnosis, a proposed taxonomy of service tools is as follows:
Level 1: Service tool that indicates if a product is functional or not functional. Describing computer servers, the states are often referred to as ‘up’ or ‘down’. This is a binary value.
Level 2: Service tool that provides some detailed diagnostic data. Often the diagnostic data is referred to as a problem ‘signature’, a representation of key values such as system environment, running program name, etc. This level of data is used to compare one problem’s signature to another problem’s signature: the ability to match the new problem to an old one allows one to use the solution already created for the prior problem. The ability to screen problems is valuable when a problem does match a pre-existing problem, but it is not sufficient to debug a new problem.
Level 3: Provides detailed diagnostic data sufficient to debug a new and unique problem.
As a rough rule of thumb for these taxonomies, there are multiple ‘orders of magnitude’ of diagnostic data in level 1 vs. level 2 vs. level 3 service tools.
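Level-2 signature screening amounts to comparing key values of a new problem against previously recorded ones; the field names in this sketch are illustrative:

```python
def match_signature(new, known):
    """Return the first known problem whose signature (key/value pairs
    such as environment and running program name) matches the new
    problem's data, or None if the problem appears to be new."""
    for old in known:
        if all(new.get(k) == old["signature"].get(k) for k in old["signature"]):
            return old
    return None

known_problems = [
    {"id": "PRB-101",
     "signature": {"program": "payroll", "errno": 13},
     "solution": "grant read access to the payroll data directory"},
]

hit = match_signature({"program": "payroll", "errno": 13, "host": "prod1"},
                      known_problems)
print(hit["id"] if hit else "new problem: collect level-3 diagnostics")
```

A match reuses the solution already created for the prior problem; a miss signals that level-3 diagnostic data must be collected to debug the problem as new.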
Additional characteristics and capabilities that have been observed in service tools:
Time of data collection: some tools can collect data immediately, as soon as problem occurs, others are delayed in collecting data.
Pre-analyzed, or not-yet-analyzed data: some tools collect ‘external’ data, while others collect ‘internal’ data. This is seen when comparing system messages (natural language-like statements in the user’s native language) vs. ‘binary’ storage dumps.
Partial or full set of system state data: some tools collect a complete system state vs. a partial system state (user or partial ‘binary’ storage dump vs. complete system dump).
Raw or analyzed data: some tools display raw data, while others analyze it (examples storage dump formatters that format data, vs. ‘intelligent’ data formatters (“ANALYZE” is a common verb) that combine product knowledge with analysis of state variables to indicate the ‘meaning’ of the data.
Programmable tools vs. ‘fixed function’ tools. Some tools can be altered to get varying amounts of data, at varying times. Some tools have only a fixed function.
Automatic or manual? Some tools are built into a product, to automatically collect data when a fault or failure occurs. Other tools have to be specifically invoked to start the data collection process.
Repair or non-repair? Some tools collect data as a fore-runner to an automatic repair process (self-healing/fault tolerant). These tools have the challenge of quickly obtaining unaltered data before the desired repair process starts.
See also
FURPS
Maintainability
External links
Excellent example of Serviceability Feature Requirements:
Sun Gathering Debug Data (Sun GDD). This is a set of tools developed by the Sun's support guys aimed to provide the right approach to problem resolution by leveraging proactive actions and best practices to gather the debug data needed for further analysis.
"Carrier Grade Linux Serviceability Requirements Definition Version 4," Copyright (c) 2005-2007 by Open Source Development Labs, Inc. Beaverton, OR 97005 USA
Design for X
Computers for Schools (Canada)
Computers For Schools (CFS) (in French, OPÉ) is a pan-Canadian program founded in 1993 by Industry Canada and the TelecomPioneers. CFS is a national initiative led by Innovation, Science and Economic Development Canada, with offshoots in all provinces and territories. The organizations operating the program collect and refurbish donated surplus computers from both public and private sector sources, and redistribute them to schools, public libraries, not-for-profit organizations and Aboriginal communities throughout Canada.
Since 1993, CFS has donated over 1.5 million refurbished computers nationwide, reducing the overall impact on the environment.
In 2015, $2 million was announced to expand the Computers for Schools program to include non-profit organizations that support low-income Canadians and new Canadians with access to refurbished equipment.
The CFS program has workshops throughout Canada, in every province and territory.
See also
Computer recycling
References
External links
Innovation, Science and Economic Development Canada
Educational organizations based in Canada
Intel 8008
The Intel 8008 ("eight-thousand-eight" or "eighty-oh-eight") is an early byte-oriented microprocessor designed by Computer Terminal Corporation (CTC), implemented and manufactured by Intel, and introduced in April 1972. It is an 8-bit CPU with an external 14-bit address bus that could address 16 KB of memory. Originally known as the 1201, the chip was commissioned by Computer Terminal Corporation (CTC) to implement an instruction set of their design for their Datapoint 2200 programmable terminal. As the chip was delayed and did not meet CTC's performance goals, the 2200 ended up using CTC's own TTL-based CPU instead. An agreement permitted Intel to market the chip to other customers after Seiko expressed an interest in using it for a calculator.
History
CTC formed in San Antonio in 1968 under the direction of Austin O. "Gus" Roche and Phil Ray, both NASA engineers. Roche, in particular, was primarily interested in producing a desktop computer. However, given the immaturity of the market, the company's business plan mentioned only a Teletype Model 33 ASR replacement, which shipped as the Datapoint 3300. The case was deliberately designed to fit in the same space as an IBM Selectric typewriter and used a video screen shaped to have the same aspect ratio as an IBM punched card. Although commercially successful, the 3300 had ongoing heat problems due to the amount of circuitry packed into such a small space.
In order to address the heating and other issues, a re-design started that featured the CPU part of the internal circuitry re-implemented on a single chip. Looking for a company able to produce their chip design, Roche turned to Intel, then primarily a vendor of memory chips. Roche met with Bob Noyce, who expressed concern with the concept; John Frassanito recalls that "Noyce said it was an intriguing idea, and that Intel could do it, but it would be a dumb move. He said that if you have a computer chip, you can only sell one chip per computer, while with memory, you can sell hundreds of chips per computer." Another major concern was that Intel's existing customer base purchased their memory chips for use with their own processor designs; if Intel introduced their own processor, they might be seen as a competitor, and their customers might look elsewhere for memory. Nevertheless, Noyce agreed to a $50,000 development contract in early 1970. Texas Instruments (TI) was also brought in as a second supplier.
TI was able to make samples of the 1201 based on Intel drawings, but these proved to be buggy and were rejected. Intel's own versions were delayed. CTC decided to re-implement the new version of the terminal using discrete TTL instead of waiting for a single-chip CPU. The new system was released as the Datapoint 2200 in the spring 1970, with their first sale to General Mills on May 25, 1970. CTC paused development of the 1201 after the 2200 was released, as it was no longer needed. Six months later, Seiko approached Intel, expressing an interest in using the 1201 in a scientific calculator, likely after seeing the success of the simpler Intel 4004 used by Busicom in their business calculators. A small re-design followed, under the leadership of Federico Faggin, the designer of the 4004, now project leader of the 1201, expanding from a 16-pin to 18-pin design, and the new 1201 was delivered to CTC in late 1971.
By that point, CTC had once again moved on, this time to the Datapoint 2200 II, which was faster. The 1201 was no longer powerful enough for the new model. CTC voted to end their involvement with the 1201, leaving the design's intellectual property to Intel instead of paying the $50,000 contract. Intel renamed it the 8008 and put it in their catalog in April 1972, priced at $120. The renaming sought to trade on the success of the 4004 by presenting the 8008 as simply a port of that chip from 4 bits to 8, but the 8008's design is not based on the 4004. The 8008 went on to be a commercially successful design, followed by the Intel 8080 and then the hugely successful Intel x86 family.
One of the first teams to build a complete system around the 8008 was Bill Pentz' team at California State University, Sacramento. The Sac State 8008 was possibly the first true microcomputer, with a disk operating system built with IBM Basic assembly language in PROM, all driving a color display, hard drive, keyboard, modem, audio/paper tape reader and printer. The project started in the spring of 1972, and with key help from Tektronix the system was fully functional a year later. Bill assisted Intel with the MCS-8 kit and provided key input to the Intel 8080 instruction set, which helped make it useful for the industry and hobbyists.
In the UK, a team at S. E. Laboratories Engineering (EMI) led by Tom Spink in 1972 built a microcomputer based on a pre-release sample of the 8008. Joe Hardman extended the chip with an external stack. This, among other things, gave it power-fail save and recovery. Joe also developed a direct screen printer. The operating system was written using a meta-assembler developed by L. Crawford and J. Parnell for a Digital Equipment Corporation PDP-11. The operating system was burnt into a PROM. It was interrupt-driven, queued, and based on a fixed page size for programs and data. An operational prototype was prepared for management, who decided not to continue with the project.
The 8008 was the CPU for the very first commercial non-calculator personal computers (excluding the Datapoint 2200 itself): the US SCELBI kit and the pre-built French Micral N and Canadian MCM/70. It was also the controlling microprocessor for the first several models in Hewlett-Packard's 2640 family of computer terminals.
Intel offered an instruction set simulator for the 8008 named INTERP/8. It was written in FORTRAN.
Design
The 8008 was implemented in 10 μm silicon-gate enhancement-mode PMOS logic. Initial versions could work at clock frequencies up to 0.5 MHz. This was later increased in the 8008-1 to a specified maximum of 0.8 MHz. Instructions take between 5 and 11 T-states, where each T-state is 2 clock cycles.
Register–register loads and ALU operations take 5T (20 μs at 0.5 MHz), register–memory 8T (32 μs), while calls and jumps (when taken) take 11 T-states (44 μs).
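The timings above follow directly from the clock arithmetic. The sketch below (the function and constant names are ours, not Intel's) reproduces the quoted figures for the 0.5 MHz part:

```python
# Instruction timing for the 8008, using the figures quoted above:
# each T-state is 2 clock cycles, and initial parts ran at 0.5 MHz.
CLOCK_HZ = 500_000        # 0.5 MHz
CYCLES_PER_T_STATE = 2

def instruction_time_us(t_states: int, clock_hz: int = CLOCK_HZ) -> float:
    """Time in microseconds to execute an instruction of t_states T-states."""
    return t_states * CYCLES_PER_T_STATE * 1_000_000 / clock_hz

print(instruction_time_us(5))   # register-register / ALU: 20.0 us
print(instruction_time_us(8))   # register-memory:         32.0 us
print(instruction_time_us(11))  # taken call or jump:      44.0 us
```

At the faster 0.8 MHz rating of the 8008-1, the same call with `clock_hz=800_000` scales each figure down proportionally.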
The 8008 is a little slower in terms of instructions per second (36,000 to 80,000 at 0.8 MHz) than the 4-bit Intel 4004 and Intel 4040, but since the 8008 processes data 8 bits at a time and can access significantly more RAM, in most applications it has a significant speed advantage over these processors. The 8008 has 3,500 transistors.
The chip (limited by its 18-pin DIP) has a single 8-bit bus and requires a significant amount of external support logic. For example, the 14-bit address, which can access "16 K × 8 bits of memory", needs to be latched by some of this logic into an external memory address register (MAR). The 8008 can access 8 input ports and 24 output ports.
For controller and CRT terminal use, this is an acceptable design, but it is rather cumbersome to use for most other tasks, at least compared to the next generations of microprocessors. A few early computer designs were based on it, but most would use the later and greatly improved Intel 8080 instead.
Related processor designs
The subsequent 40-pin NMOS Intel 8080 expanded upon the 8008's registers and instruction set and implemented a more efficient external bus interface (using the 22 additional pins). Despite a close architectural relationship, the 8080 was not made binary compatible with the 8008, so an 8008 program would not run on an 8080. However, as two different assembly syntaxes were used by Intel at the time, the 8080 could be used in an 8008 assembly-language backward-compatible fashion.
The Intel 8085 is an electrically modernized version of the 8080 that uses depletion-mode transistors and adds two new instructions.
The Intel 8086, the original x86 processor, is a non-strict extension of the 8080, so it loosely resembles the original Datapoint 2200 design as well. Almost every Datapoint 2200 and 8008 instruction has an equivalent not only in the instruction set of the 8080, 8085, and Z80, but also in the instruction set of modern x86 processors (although the instruction encodings are different).
Features
The 8008 architecture includes the following features:
Seven 8-bit "scratchpad" registers: The main accumulator (A) and six other registers (B, C, D, E, H, and L).
14-bit program counter (PC).
Seven-level push-down address call stack. Eight registers are actually used, with the top-most register being the PC.
Four condition code status flags: carry (C), even parity (P), zero (Z), and sign (S).
Indirect memory access using the H and L registers (HL) as a 14-bit data pointer (the upper two bits are ignored).
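The indirect-addressing rule in the last item can be expressed in a couple of lines. This sketch is illustrative (the function name is ours):

```python
def hl_address(h: int, l: int) -> int:
    """Effective 14-bit address formed from the H and L registers.

    The upper two bits of H are ignored, so the reachable space is
    2**14 = 16384 bytes -- the "16 K x 8 bits of memory" noted earlier.
    """
    return ((h & 0x3F) << 8) | (l & 0xFF)

print(hl_address(0xFF, 0xFF))  # 16383, the top of the 16 K space
print(hl_address(0xC0, 0x00))  # 0; the two high bits of H do nothing
```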
Example code
The following 8008 assembly source code is for a subroutine named MEMCPY that copies a block of data bytes of a given size from one location to another.
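The listing itself is not reproduced in this text. As a stand-in, the fragment below is a hedged sketch of how such a routine has to be structured on the 8008, using its documented mnemonics; it is not the original code. The label names, the \HB\ and \LB\ high/low-byte operators, and the parameter layout are assumptions; the byte count is simplified to a single byte in register C (rather than the full BC pair); and pointer increments that cross a 256-byte page boundary are ignored for brevity.

```asm
; MEMCPY sketch (illustrative only).  Parameters live in memory:
;   SRC: 2 bytes, source address       (little-endian)
;   DST: 2 bytes, destination address  (little-endian)
;   CNT: 1 byte, count (1-255 in this simplified version)
MEMCPY: LHI \HB\CNT    ; HL -> CNT
        LLI \LB\CNT
        LCM            ; C = byte count ("M" is the byte at HL)
        LHI \HB\SRC    ; HL -> SRC
        LLI \LB\SRC
        LEM            ; E = source address, low byte
        INL
        LDM            ; D = source address, high byte
LOOP:   LHD            ; HL = source pointer (copied from D and E)
        LLE
        LAM            ; A = byte fetched from the source
        INE            ; advance the source pointer (low byte only)
        LHI \HB\DST    ; HL -> DST, the stored destination pointer
        LLI \LB\DST
        LBM            ; B = destination address, low byte
        INB
        LMB            ; store back the advanced low byte
        DCB            ; recover this iteration's low byte
        INL
        LHM            ; H = destination address, high byte
        LLB            ; L = destination address, low byte
        LMA            ; write the byte to the destination
        DCC            ; count down ...
        JFZ LOOP       ; ... and loop until the count reaches zero
        RET
```

Every memory access in the loop must be staged through the HL pair, which is part of why the chip is described above as cumbersome compared with its successors.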
In the listing, all values are given in octal. The source address, destination address, and byte count are 16-bit parameters for the MEMCPY subroutine. In actuality, only 14 bits of each value are used, since the CPU has only a 14-bit addressable memory space. The values are stored in little-endian format, although this is an arbitrary choice, since the CPU is incapable of reading or writing more than a single byte of memory at a time. Since there is no instruction to load a register directly from a given memory address, the HL register pair must first be loaded with the address, and the target register can then be loaded from the M operand, which is an indirect load from the memory location pointed to by HL. The BC register pair is loaded with the count parameter and decremented at the end of the loop until it becomes zero. Note that most of the instructions used occupy a single 8-bit opcode.
Designers
CTC (Instruction set and architecture): Victor Poor and Harry Pyle.
Intel (Implementation in silicon):
Ted Hoff, Stan Mazor and Larry Potter (IBM Chief Scientist) proposed a single-chip implementation of the CTC architecture, using RAM-register memory rather than shift-register memory, and also added a few instructions and an interrupt facility. The 8008 (originally called the 1201) chip design started before the 4004 development. Hoff and Mazor, however, could not and did not develop a "silicon design" because they were neither chip designers nor process developers, and furthermore the necessary silicon-gate-based design methodology and circuits, under development by Federico Faggin for the 4004, were not yet available.
Federico Faggin, having finished the design of the 4004, became leader of the project from January 1971 until its successful completion in April 1972, after it had been suspended – for lack of progress – for about seven months.
Hal Feeney, project engineer, did the detailed logic design, circuit design, and physical layout under Faggin's supervision, employing the same design methodology that Faggin had originally developed for the Intel 4004 microprocessor, and utilizing the basic circuits he had developed for the 4004. A combined "HF" logo was etched onto the chip about halfway between the D5 and D6 bonding pads.
Second sources
See also
Mark-8 and SCELBI, 8008-based computer kits
MCM/70 and Micral, pioneering microcomputers
PL/M, the first programming language targeting a microprocessor, the Intel 8008, developed by Gary Kildall
References
External links
MCS-8 User Manual with 8008 data sheet (1972)
The Intel 8008 support page unofficial
The DigiBarn Computer Museum's page on Bill Pentz' Sacramento State machine, a full microcomputer built around the 8008
8008 Assembly Language Reference Card
Computer-related introductions in 1972
8-bit microprocessors
Productivity software
Productivity software (also called personal productivity software or office productivity software) is application software used for producing information (such as documents, presentations, worksheets, databases, charts, graphs, digital paintings, electronic music and digital video). The name arose from the software's role in increasing productivity, especially that of individual office workers, from typists to knowledge workers, although its scope is now wider than that. Office suites, which brought word processing, spreadsheet, and relational database programs to the desktop in the 1980s, are the core example of productivity software. They revolutionized the office with the magnitude of the productivity increase they brought as compared with the pre-1980s office environments of typewriters, paper filing, and handwritten lists and ledgers. In the United States, some 78% of "middle-skill" occupations (those that call for more than a high school diploma but less than a bachelor's degree) now require the use of productivity software. In the 2010s, productivity software has become even more consumerized than it already was, as computing becomes ever more integrated into daily personal life.
Details
Productivity software traditionally runs directly on a computer. For example, the Commodore Plus/4 shipped with productivity applications built into ROM. Productivity software is one of the reasons people use personal computers.
Office suite
An office suite is a bundle of productivity software (a software suite) intended to be used by office workers. The components are generally distributed together, have a consistent user interface and usually can interact with each other, sometimes in ways that the operating system would not normally allow.
The earliest office suite for personal computers was Starburst in the early 1980s, comprising the WordStar word processor, the CalcStar spreadsheet and the DataStar database software. Other suites arose in the 1980s, and Microsoft Office came to dominate the market in the 1990s, a position it retains.
Office suite components
The base components of office suites are:
Word processor
Spreadsheet
Presentation program
Other components include:
Database software
Graphics suite (raster graphics editor, vector graphics editor, image viewer)
Desktop publishing software
Formula editor
Diagramming software
Email client
Communication software
Personal information manager
Notetaking software
Groupware
Project management software
Web log analysis software
See also
Integrated software
List of office suites
List of collaborative software
List of personal information managers
List of software that supports Office Open XML
Comparison of word processors
Comparison of spreadsheet software
Comparison of notetaking software
Online office suite
Online spreadsheet
OpenDocument software
References
External links
Personal information managers
Alpha 21464
The Alpha 21464 is an unfinished microprocessor that implements the Alpha instruction set architecture (ISA) developed by Digital Equipment Corporation and later by Compaq after it acquired Digital. The microprocessor was also known as EV8 (codenamed Araña). Slated for a 2004 release, it was canceled on 25 June 2001 when Compaq announced that Alpha would be phased out in favor of Itanium by 2004. When it was canceled, the Alpha 21464 was at a late stage of development but had not been taped out.
The 21464's origins began in the mid-1990s when computer scientist Joel Emer was inspired by Dean Tullsen's research into simultaneous multithreading (SMT) at the University of Washington. Emer had researched the technology in the late 1990s and began to promote it once he was convinced of its value. Compaq made the announcement that the next Alpha microprocessor would use SMT in October 1999 at Microprocessor Forum 1999. At that time, it was expected that systems using the Alpha 21464 would ship in 2003.
Description
The microprocessor was an eight-issue superscalar design with out-of-order execution, four-way SMT and a deep pipeline. It fetched 16 instructions from a 64 KB two-way set-associative instruction cache. The branch predictor then selected the "good" instructions and entered them into a collapsing buffer. (This allowed for a fetch bandwidth of up to 16 instructions per cycle, depending on the taken-branch density.) The front-end had significantly more stages than previous Alpha implementations and, as a result, the 21464 had a significant minimum branch misprediction penalty of 14 cycles. The microprocessor used an advanced branch prediction algorithm to minimize these costly penalties.
Implementing SMT required the replication of certain resources such as the program counter. Instead of one program counter, there were four program counters, one for each thread. However, very little logic after the front-end needed to be expanded for SMT support. The register file contained 512 entries, but its size was determined by the maximum number of in-flight instructions, not SMT. Access to the register file required three pipeline stages due to the physical size of the circuit. Up to eight instructions from four threads could be dispatched to eight integer and four floating-point execution units every cycle. The 21464 had a 64 KB data cache (Dcache), organized as eight banks to support dual-porting. This was backed by an on-die 3 MB, six-way set-associative unified secondary cache (Scache).
The integer execution unit made use of a new structure: the register cache. The register cache was not meant to mitigate the three tick register file latency (as some reports have claimed), but to reduce the complexity of operand bypass management. The register cache held all the results produced by the ALU and Load pipes for the previous N cycles. (N was something like 8.) The register cache structure was an architectural relabeling of what previous processors had implemented as a distributed mux.
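The described behavior can be modeled in a few lines. This is an illustrative functional model of the structure as characterized above (a window over the results of the last N cycles), not Compaq's actual design; all names are invented, and as noted, the real motivation was bypass management rather than latency:

```python
from collections import deque

class RegisterCache:
    """Toy model of a register cache: holds the results produced
    in each of the last n_cycles cycles (N was around 8)."""

    def __init__(self, n_cycles: int = 8):
        self.window = deque(maxlen=n_cycles)   # one dict per cycle

    def end_cycle(self, results: dict) -> None:
        """results maps a physical register name to the value the
        ALU/load pipes produced for it this cycle."""
        self.window.append(dict(results))

    def lookup(self, reg: str):
        """Return the most recent value for reg, or None on a miss
        (a miss falls back to the slower full register file)."""
        for cycle in reversed(self.window):
            if reg in cycle:
                return cycle[reg]
        return None
```

For instance, a value written this cycle is found immediately, while one older than the window misses and must come from the 512-entry register file instead.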
The system interface was similar to that of the Alpha 21364. There were integrated memory controllers that provided ten RDRAM channels. Multiprocessing was facilitated by a router that provided links to other 21464s, and it architecturally supported 512-way multiprocessing without glue logic.
It was to be implemented in a 0.125 μm (sometimes referred to as 0.13 μm) complementary metal–oxide–semiconductor (CMOS) process with seven layers of copper interconnect, partially depleted silicon-on-insulator (PD-SOI), and low-K dielectric. The transistor count was estimated to be 250 million and the die size was estimated to be 420 mm².
Tarantula
Tarantula was the code-name for an extension of the Alpha architecture under consideration and a derivative of the Alpha 21464 that implemented the aforementioned extension. It was canceled while still in development, before any implementation work had started, and before the 21464 was finished. The extension was to provide Alpha with a vector processing capability. It specified thirty-two 64 by 128-bit (8,192-bit or 1 KB) vector registers, approximately 50 vector instructions, and an unspecified number of instructions for moving data to and from the vector registers. Other EV8 follow-up candidates included a multicore design with two EV8 cores and a 4.0 GHz operating frequency.
Notes
References
Further reading
DEC microprocessors
Superscalar microprocessors
IBM PS/2 L40 SX
The IBM Personal System/2 Model L40 SX (stylized as PS/2 L40 SX) was a portable computer made by IBM, as part of the IBM PS/2 series. It was the successor to the IBM PC Convertible. The "SX" in the name refers to its CPU, the Intel 80386SX.
Development
The L40 SX was designed and manufactured over the course of thirteen months between 1990 and 1991. By 1990, IBM were already late to the market for 386SX-powered laptops. Faced with releasing an obsolete product had they followed their normal two-year lead time, IBM hastened development of the L40 SX.
The L40 SX's case and keyboard assemblies took roughly five months to produce and involved novel methods to achieve this time frame. IBM hired their former subsidiary Lexmark of Lexington, Kentucky, and Leap Technologies of Otsego, Michigan, to carry out this production. Both companies used IBM's own Catia CAD–CAM system to design the models of the parts for the aforementioned assemblies. Lexmark were responsible for drafting these models, sending them electronically to Leap for revisions. Once revised, Leap used these models to machine the injection molds for each part. The two companies' electronic exchange of models was novel for the time and accelerated production by eliminating the need for mocking up and prototyping. It also posed a risk, however, as any design flaws realized after manufacturing would set production back up to a year and compel IBM to cancel the laptop. Because of this, both Leap and Lexmark used specialized software to predict how the parts would result from Leap's molds.
Before designing began, however, Leap and Lexmark had to source suitable plastic. They settled on a polycarbonate–ABS polyblend by Dow Chemical that was durable, colorable, and plateable. The latter quality was necessary for compliance with the FCC's regulations on electromagnetic interference. Integrated circuits, such as microprocessors, cause such interference; most companies at the time compensated by spraying a thick layer of metallic paint on their cases' interiors. Because the 386SX's power overrode such shielding, however, IBM turned to electroless plating—a method that was novel for laptops. This provided the case with stronger shielding and not much more weight but also considerable expense for IBM. Research on the method was also costly: as electroless plating had seldom been used on their polyblends, Dow had to perform rigorous laboratory tests. After designing ended and the molds were machined, Leap performed injection only on the molds for the case assembly parts, shipping the molds for the keyboard assembly parts to Lexmark. Leap performed ultrasonic welding on their parts where necessary and handed the responsibility of plating to a company in Michigan. Leap then sent the completed case assemblies to Lexmark.
Toshiba of Japan provided IBM with the L40 SX's liquid-crystal display, a 10-inch, sidelit, passive-matrix panel. Final assembly of these panels was performed in Raleigh, North Carolina. IBM considered using Toshiba's active-matrix LCD, which provided better response times, wider viewing angles, and no blotching, but these displays drew too much power. IBM also teamed with Western Digital of Irvine to design the L40 SX's motherboard. Western Digital assembled the L40 SX's entire motherboard, supplied their 7600LP series of video and hard disk drive controller chipsets, and provided the means for IBM to assemble the motherboard themselves further down the line.
Manufacture of the L40 SX was plagued with parts shortages, but IBM were able to produce roughly 4,000 pre-release units which were sold to select members of the public. Hard disk drives were the latest shortage in April 1991, with IBM having to look at producing its own 2.5-inch 60 MB drives instead of waiting for Conner Peripherals.
The substantial price raise of the L40 SX in March 1991 drew criticism from potential buyers who had enthusiastically praised it at IBM's last press briefing. IBM justified this price raise by classifying the L40 SX as a desktop replacement. The L40 SX's larger-than-notebook dimensions were advantageous for IBM in raising its technical capability, fitting its coveted full-sized keyboard, and meeting the expectations of buyers specifically looking for a desktop replacement machine. Potential buyers felt the L40 SX's exceptionally comfortable keyboard and low power consumption failed to justify its launch price, however. At the time of the company's announcement of the price raise for the L40 SX, IBM were evaluating demand for a low-priced notebook computer in the United States after releasing the PS/55 Note in Japan.
At the time, the L40 SX differed from most other laptops in operation by offering a suspend mode, a dynamic CPU clock that slows down when the processor is idle, and the use of LCDs for status indicators, as opposed to LEDs. The latter two features lower the L40 SX's power draw. The back of the L40 SX sports one serial port, one parallel port, an external AT expansion port, a VGA port, and a PS/2 mouse port. IBM provided an optional modem that can receive fax transmissions.
Specifications
CPU: 20 MHz Intel 386SX
Screen: 10" monochrome VGA (640×480)
OS: DOS 3.3 or 4.0, Windows 3.0, OS/2 1.31
Disk: 2.5" IDE hard drive
Bus: ISA
Optional peripherals
Trackpoint (Trackball/Mouse convertible)
Quick Charger
Car Battery Adapter
Internal Data Fax Modem
Deluxe Carrying Case
Recall
The Wall Street Journal reported that IBM had received 15 complaints of a short circuit occurring between the circuitry and a conductive coating inside the case which, in some instances, had melted a small hole in the case. The short occurs when the laptop is run on batteries, and IBM reported that it would install a fuse to stop the overheating. IBM had to issue a recall for 150,000 machines.
Successors
One year after the announcement of the L40 SX, on 24 March 1992, IBM announced three notebooks (the N51SX, N51SCL and N45SL, part of the IBM PS/2 Note series) and a laptop, the CL57SX. The CL57SX was the first laptop from IBM that featured a color TFT display.
References
External links
German Thinkwiki about the L40SX
PS/2 L40SX Reference Guide
Teardown video by EEVblog
PS 2 L40SX
Computer-related introductions in 1991
Operations, administration and management
Operations, administration and management or operations, administration and maintenance (OA&M or OAM) are the processes, activities, tools, and standards involved with operating, administering, managing and maintaining any system. This commonly applies to telecommunication, computer networks, and computer hardware.
In particular, Ethernet operations, administration and maintenance (EOAM) is the protocol for installing, monitoring and troubleshooting Ethernet metropolitan area networks (MANs) and Ethernet WANs. It relies on a new, optional sublayer in the data link layer of the OSI model. The OAM features covered by this protocol are discovery, link monitoring, remote fault detection, and remote loopback.
Standards
Fault management and performance monitoring (ITU-T Y.1731) - Defines performance monitoring measurements such as frame loss ratio, frame delay and frame delay variation to assist with SLA assurance and capacity planning. For fault management the standard defines continuity checks, loopbacks, link trace, and alarm suppression (AIS, RDI) for effective fault detection, verification, isolation, and notification in carrier networks.
Connectivity fault management (IEEE 802.1ag) - Defines standardized continuity checks, loopbacks and link trace for fault management capabilities in enterprise and carrier networks. This standard also partitions the network into 8 hierarchical administrative domains.
Link layer discovery (IEEE 802.1AB) - Defines discovery for all provider edges (PEs) supporting a common service instance and/or discovery for all edge devices and P routers common to a single network domain.
Ethernet in the First Mile (IEEE 802.3ah) - Defines mechanisms for monitoring and troubleshooting Ethernet access links. Specifically, it defines tools for discovery, remote failure indication, remote and local loopbacks, and status and performance monitoring.
Ethernet protection switching (ITU G.8031) - Brings SONET APS / SDH MSP-like protection switching to Ethernet trunks.
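The Y.1731 performance measurements listed above reduce to simple arithmetic over counters and timestamps. The sketch below is illustrative only; the function names and argument conventions are ours, not the standard's:

```python
def frame_loss_ratio(tx_frames: int, rx_frames: int) -> float:
    """Fraction of transmitted frames that never arrived."""
    return (tx_frames - rx_frames) / tx_frames

def frame_delay(tx_time: float, rx_time: float) -> float:
    """One-way delay of a single timestamped measurement frame."""
    return rx_time - tx_time

def frame_delay_variation(delays: list) -> list:
    """Differences between consecutive delay measurements."""
    return [abs(b - a) for a, b in zip(delays, delays[1:])]

print(frame_loss_ratio(1000, 990))          # 0.01
print(frame_delay_variation([10, 12, 11]))  # [2, 1]
```

An operator would compare such figures against SLA thresholds, which is the "SLA assurance and capacity planning" role the standard describes.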
OAMP
OAMP, traditionally OAM&P, stands for operations, administration, maintenance, and provisioning. The addition of 'T' in recent years stands for troubleshooting, and reflects its use in network operations environments. The term is used to describe the collection of disciplines generally, as well as whatever specific software package(s) or functionality a given company uses to track these things.
Though the term, and the concept, originated in the wired telephony world, the discipline (if not the term) has expanded to other spheres in which the same sorts of work are done, including cable television and many aspects of Internet services and network operations. 'Ethernet OAM' is another recent concept in which the terminology is used.
Operations encompass automatic monitoring of the environment, detecting and determining faults, and alerting administrators. Administration typically involves collecting performance statistics, accounting data for the purpose of billing, capacity planning using usage data, and maintaining system reliability. It can also involve maintaining the service databases which are used to determine periodic billing.
Maintenance involves upgrades, fixes, new feature enablement, backing up and restoring data, and monitoring the media health. The major task is Diagnostics and troubleshooting. Provisioning is the setting up of the user accounts, devices, and services.
Although both target the same set of markets, OAMP covers much more than the five specific areas targeted by FCAPS (see FCAPS for more details; that term has historically been more popular than OAMP outside telecom environments). In NOC environments, OAMP and OAMPT are increasingly used to describe the problem management life cycle, and especially with the dawn of Carrier-Grade Ethernet, telco terminology is becoming more and more embedded in traditionally IP-termed worlds.
O - Operations
A - Administration
M - Maintenance
P - Provisioning
T - Troubleshooting
Procedures
Operational
Basically, these are the procedures you use during normal network operations.
They are day-to-day organisational procedures: handover, escalation, major issue management, call-out, support procedures, and regular updates including emails and meetings. In this section group, you will find things like daily checklists, on-call and shift rotas, call response and ticket-opening procedures, manufacturer documentation such as technical specifications and operator handbooks, and OOB procedures.
Administration
These are support procedures that are necessary for day-to-day operations - things like common passwords, equipment and tools access, organisational forms and timesheets, meeting minutes and agendas, and customer Service Reports.
This is not just 'network admin' but also 'network operations admin'.
Maintenance
These are tasks that, if not done, will affect service or system operation, but are not necessarily the result of a failure, such as configuration and hardware changes made in response to system deterioration. They involve scheduling provider maintenance, standard network equipment configuration changes as a result of policy or design, routine equipment checks, hardware changes, and software/firmware upgrades. Maintenance tasks can also involve the removal of administrative privileges as a security policy.
Provisioning
Introducing a new service, creating new circuits, setting up new equipment, and installing new hardware. Provisioning processes will normally include 'how to' guides and checklists that need to be strictly adhered to and signed off. They can also involve integration and commissioning processes, which require sign-off to other parts of the business life cycle.
Troubleshooting
Troubleshooting is carried out as a result of a fault or failure and may result in maintenance procedures, or in emergency workarounds until such time as a maintenance procedure can be carried out. Troubleshooting procedures will involve knowledge databases, guides, and processes to cover the role of network operations engineers from initial diagnostics to advanced troubleshooting. This stage often involves problem simulation and is the traditional interface to design.
See also
Carrier Ethernet
FCAPS
IEEE 802.1
International Telecommunication Union
Metro Ethernet Forum
NComm
Operations support system
Provider Backbone Transport
References
External links
EFM OAM Tutorial
Ethernet Operations, Administration, and Maintenance
Get IEEE 802.3 LAN/MAN CSMA/CD Access Method—Download 802.3 specifications.
ITU-T M.3020 "TMN INTERFACE SPECIFICATION METHODOLOGY"
Lee, Cheng-yin (Ottawa, CA); Elkady, Amr (Ottawa, CA) (2007). "Extensible OAM support in MPLS/ATM networks". United States patent, assigned to Alcatel Canada Inc. (Kanata, ON, CA).
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=544990
http://www.faqs.org/rfcs/rfc3429.html
http://www.cisco.com/en/US/docs/routers/access/as5850/software/operations/guide/ehrsc/pref.html
Ethernet
System administration
Telephony
Network management
Self-booting disk
A self-booting disk is a floppy disk for home or personal computers that loads directly into a standalone application when the system is turned on, bypassing the operating system. This was common, even standard, on some computers in the late 1970s to early 1990s. Video games were the type of application most commonly distributed using this technique.
The term PC booter is also used, primarily in reference to self-booting software for IBM PC compatibles. On other computers, like the Apple II and Atari 8-bit family, almost all software is self-booting. On the IBM PC, the distinction is between self-booting software and software that runs under DOS-compatible operating systems. The term "PC booter" was not in contemporary use when self-booting games were being released.
Benefits
The software starts automatically, without any further action required by the user.
Copy prevention, because self-booting floppies often use a nonstandard filesystem or format.
Bypassing the normal operating system to use a specialized replacement.
Drawbacks
The user needs to reboot the system to run other software.
The application cannot co-exist with other data or applications stored on a hard disk.
Hardware normally supported by the operating system may not work.
Examples
Between 1983 and 1984, Digital Research offered several of their business and educational applications for the IBM PC on bootable floppy diskettes bundled with SpeedStart CP/M, a reduced version of CP/M-86 as a bootable runtime environment.
Infocom offered the only third-party games for the Macintosh at launch by distributing them with its own bootable operating system.
A scaled down version of GeoWorks was used by America Online for their AOL client software until the late 1990s. AOL was distributed on a single 3.5-inch floppy disk, which could be used to boot GeoWorks as well.
In 1998, Caldera distributed a demo version of their 32-bit DPMI web-browser and mail client DR-WebSpyder on a bootable fully self-contained 3.5-inch floppy. On 386 PCs with a minimum of 4 MB of RAM, the floppy would boot a minimal DR-DOS 7.02 system complete with memory manager, RAM disk, dial-up modem, LAN, mouse and display drivers and automatically launch into the graphical browser, without ever touching the machine's hard disk. Users could start browsing the web immediately after entering their access credentials.
See also
Boot diskette
List of self-booting IBM PC compatible games
Live CD
Live USB
Portable application
Self-extracting archive
Executable compression
References
Further reading
Video game distribution
Windows Aero
Windows Aero (a backronym for Authentic, Energetic, Reflective, and Open) is a design language introduced in the Windows Vista operating system. The changes made in the Aero interface affected many elements of the Windows interface, including the incorporation of a new look, along with changes in interface guidelines reflecting appearance, layout, and the phrasing and tone of instructions and other text in applications.
Windows Aero was in force during the development of Windows Vista and Windows 7. In 2012, with the development of Windows 8 and Windows Server 2012, Microsoft moved on to a design language codenamed "Metro".
Features
For the first time since the release of Windows 95, Microsoft completely revised its user interface guidelines, covering aesthetics, common controls such as buttons and radio buttons, task dialogs, wizards, common dialogs, control panels, icons, fonts, user notifications, and the "tone" of text used.
Aero Glass theme
On Windows Vista and Windows 7 computers that meet certain hardware and software requirements, the Aero Glass theme is used by default, primarily incorporating various animation and transparency effects into the desktop using hardware acceleration and the Desktop Window Manager (DWM). In the "Personalize" section added to the Control Panel in Windows Vista, users can customize the "glass" effects to be either opaque or transparent, and change the color with which they are tinted. Enabling Aero Glass also enables other new features, including an enhanced Alt-Tab menu and taskbar thumbnails with live previews of windows, and "Flip 3D", a window switching mechanism which cascades windows with a 3D effect.
Windows 7 features refinements in Aero Glass, including larger window buttons by default (minimize, maximize, close and query), revised taskbar thumbnails, the ability to manipulate windows by dragging them to the top or sides of the screen (to the side to make it fill half the screen, and to the top to maximize), the ability to hide all windows by hovering the Show Desktop button on the taskbar, and the ability to minimize all other windows by shaking one.
Use of DWM, and by extension the Aero Glass theme, requires a video card with 128MB of graphics memory (or at least 64MB of video RAM and 1GB of system RAM for on-board graphics) supporting pixel shader 2.0, and with WDDM-compatible drivers. Aero Glass is also not available in Windows 7 Starter, only available to a limited extent on Windows Vista Home Basic, and is automatically disabled if a user is detected to be running a non-genuine copy of Windows. Windows Server 2008 and Windows Server 2008 R2 also support Aero Glass as part of the "Desktop Experience" component, which is disabled by default.
Aero Wizards
Wizard 97 had been the prevailing standard for wizard design, visual layout, and functionality used in Windows 98 through to Windows Server 2003, as well as most Microsoft products in that time frame. Aero Wizards are the replacement for Wizard 97, incorporating visual updates to match the aesthetics of the rest of Aero, as well as changing the interaction flow.
More specifically:
To increase the efficiency of the wizard, the "Welcome" pages in Wizard 97 are no longer used. (A precursor to this change was implied in a number of wizards in products such as SQL Server 2005 where a check-box was added to welcome pages, allowing a user to disable the welcome page in future uses of the wizard.)
Aero Wizards can be resized, whereas the Wizard 97 guidelines defined exact sizes for wizard window and content sizes.
The purpose of an Aero Wizard is more clearly stated at the top.
A new kind of control called a "Command link" provides a single-click operation to choose from a short list of options.
The notion of "Commit pages" is introduced, where it is made clear that the next step will be the actual process that the wizard is being used to enact. If no follow-up information needs to be communicated, these are the last pages in a wizard. Typically a commit page has a button at the bottom-right that is labeled with the action to be taken, such as "Create account".
The "Back" button has moved to the top-left corner of the wizard window and matches the visual style of the back button in other Vista applications. This is done to give more focus to the commit choices. The "Next" button is only shown on pages where it is necessary.
At the end of a wizard, a "Follow-up page" can be used to direct the user to related tasks that they may be interested in after completing the wizard. For example, a follow-up for a CD burning wizard may present options like "Duplicate this disc" and "Make a disc label".
Notifications
Notifications allow an application or operating system component with an icon in the notification area to create a pop-up window with some information about an event or problem. These windows, first introduced in Windows 2000 and known colloquially as "balloons", are similar in appearance to the speech balloons that are commonly seen in comics. Balloons were often criticized in prior versions of Windows due to their intrusiveness, especially with regard to how they interacted with full-screen applications such as games (the entire application was minimized as the bubble came up). Notifications in Aero aim to be less intrusive by gradually fading in and out, and not appearing at all if a full-screen application or screensaver is being displayed—in these cases, notifications are queued until an appropriate time. Larger icons and multiple font sizes and colors are also introduced with Aero's notification windows.
Font
The Segoe UI typeface is the default font for Aero with languages that use Latin, Greek, and Cyrillic character sets. The default font size is also increased from 8pt to 9pt to improve readability. In the Segoe UI typeface prior to Windows 8, the numeral zero ("0") is narrow, while capital letter "O" is wider (Windows 8's Segoe UI keeps this difference), and numeral one ("1") has a top hook, while capital letter "I" has equal crown and base (Windows 8's "1" has no base, and the "I" does not have a crown or base).
Icons
Aero's base icons were designed by The Iconfactory, which had previously designed Windows XP icons.
Phrasing tone
The Vista User Experience Guidelines also address the issue of "tone" in the writing of text used with the Aero user interface. Prior design guidelines from Microsoft had not done much to address the issue of how user interface text is phrased, and as such, the way that information and requests are presented to the user had not been consistent between parts of the operating system.
The guidelines for Vista and its applications suggest messages that present technically accurate advice concisely, objectively, and positively, and assume an intelligent user motivated to solve a particular problem. Specific advice includes the use of the second person and the active voice (e.g. "Print the photos on your camera") and avoidance of words like "please", "sorry" and "thank you".
History
Windows Vista
The Aero interface was unveiled for Windows Vista as a complete redesign of the Windows interface, replacing Windows XP's "Luna" theme. Until the release of Windows Vista Beta 1 in July 2005, little had been shown of Aero in public or leaked builds, with alpha builds containing interim designs such as "Plex".
Windows Aero incorporated the following features in Windows Vista.
Aero Glass theme: The main component of Aero, it is the successor of Windows XP's "Luna" and changes the look and feel of graphical control elements, including but not limited to buttons, checkboxes, radio buttons, menus, progress bars and default Windows icons. Even message boxes are changed.
Windows Flip improvements: Windows Flip (Alt+Tab) in Windows Vista now shows a live preview of each open window instead of the application icons.
Windows Flip 3D: Windows Flip 3D (Windows key+Tab) renders live images of open windows, allowing one to switch between them while displaying them in a three-dimensional view.
Taskbar live thumbnails – Hovering over the taskbar button of a window displays a preview of that window in the taskbar.
Desktop Window Manager (DWM) – Due to the significant impact of the new changes on hardware and performance, the Desktop Window Manager was introduced to achieve hardware acceleration, transferring the duty of UI rendering from the CPU to the graphics subsystem. DWM in Windows Vista required compatible hardware.
Task Dialogs: Dialog boxes meant to help communicate with the user and receive simple user input. Task Dialogs are more complex than traditional message boxes that only bear a message and a set of command buttons. Task Dialogs may have expandable sections, hyperlinks, checkboxes, progress bars and graphical elements.
Windows 7
Windows Aero is revised in Windows 7, with many UI changes such as a more touch-friendly interface, and many new visual effects and features, including pointing-device gestures:
Aero Peek: Hovering over a taskbar thumbnail shows a preview of the entire window. Aero Peek is also available through the "Show desktop" button at the right end of the taskbar, which makes all open windows transparent for a quick view of the desktop. A similar feature was patented during Windows Vista development.
Aero Shake: Shaking (quickly dragging back and forth) a window minimizes all other windows. Shaking it again brings them back.
Aero Snap: Dragging a window to the right or left side of the desktop causes the window to fill the respective half of the screen. Snapping a window to the top of the screen maximizes it. Windows can also be resized by stretching them to touch the top or bottom of the screen, which maximizes their height while retaining their width; such windows can then slide horizontally if moved by the title bar, or be pulled off the edge, which returns the window to its original height.
Windows Aero was revised to be more touch-friendly. For example, touch gestures and support for high DPI on displays were added.
Title bars of maximized windows remain transparent instead of becoming opaque.
The outline of non-maximized windows is completely white, rather than having a cyan outline on the right side and bottom.
When hovering over the taskbar button of an open program, the button glows the dominant RGB color of its icon, with the effect following the mouse cursor.
Progress indicators are present in taskbar buttons. For example, downloading a program through Internet Explorer causes the button to fill with color as the operation progresses.
In later versions of Windows
Some of the features introduced in Aero remain in modified forms in later versions of Windows.
Windows 8
While pre-release versions of Windows 8 used an updated version of Aero Glass with a flatter, squared look, the Glass theme was replaced prior to release by a new flat design theme based on the Metro design language. Transparency effects were removed from the interface, aside from the taskbar, which maintained transparency but no longer had a blur effect.
Flip 3D was removed; its Windows key+Tab shortcut was reassigned to switch between the desktop and Windows Store apps.
Windows 10
The OS initially maintained an updated version of Metro, but began to increasingly reinstate detailed lighting effects (including those that follow the cursor) and Aero Glass-like transparency via its migration to Fluent Design System as a successor to Metro.
Virtual desktops have been added via "Task View", which takes over the Windows key+Tab keyboard shortcut.
Window snapping has been extended to allow applications to be snapped to quadrants of the screen. When a window is snapped to half of the screen, the user is prompted to select a second window to occupy the other half of the screen.
Aero Shake is now referred to as "Title bar window shake".
See also
Aqua (user interface)
Compiz
Compositing window manager
Desktop Window Manager
Development of Windows 7
Development of Windows Vista
Features new to Windows 7
Features new to Windows Vista
KWin
References
External links
Windows 7 Aero Peek Feature
Microsoft Docs - Win32 Apps UX Guidelines
3D GUIs
Design language
Graphical user interfaces
Windows components
Windows Vista
Windows 7
Useware
Useware is a generic term introduced in 1998 that denotes all hardware and software components of a technical system which serve its interactive use. The main idea behind the term Useware is to focus technical design on human abilities and needs. The only promising method (Zuehlke, 2007) to design future technical products and systems is to understand human abilities and limitations and to focus the technology on them.
Today Useware requires a development effort of its own, in some cases greater than in the classical development fields (Zuehlke, 2004). Thus usability is increasingly recognized as a value-adding factor. Often the Useware of machines with similar or identical technical functions is the only characteristic that sets them apart (Zuehlke, 2002).
Useware-Engineering
Similar to software engineering, Useware engineering denotes the standardized production of Useware by engineers and the associated processes (see figure 1). The aim of Useware engineering is to develop interfaces which are easy to understand and efficient to use. These interfaces are adapted to the human work task and represent machine functionality without overemphasizing it.
Systematic Useware engineering thus aims to guarantee high usability based on the actual tasks of the users. However, it requires an approach which comprises active and iterative participation of different groups of people.
Therefore, the professional associations GfA (Gesellschaft für Arbeitswissenschaft), GI (Gesellschaft für Informatik), VDE-ITG (The Information Technology Society in VDE) and VDI/VDE GMA (The Society for Measurement and Automatic Control in the VDI/VDE) agreed in 1998 on defining Useware as a new term. The term Useware was intentionally selected in linguistic analogy to hard- and software.
Consequently, Useware engineering developed in a similar way to the development of engineering processes (see figure 2). This reinforces the principal demand for structured development of user-centered user interfaces expressed e.g. by Ben Shneiderman (Shneiderman, 1998). After many years of function-oriented development, human abilities and needs are brought into focus. The only promising method to develop future technical products and systems is to understand the users' abilities and limitations and to aim the technology in that direction (Zuehlke, 2007).
The Useware development process (see figure 1) distinguishes the following steps: analysis, structure design, design, realisation and evaluation.
Each of these steps should not be regarded separately but rather overlapping. The continuity of the process as well as the use of suitable tools, e.g. on the basis of the Extensible Markup Language (XML) make it possible to avoid information losses and media breaks.
Analysis
Humans learn, think and work in completely different ways. Therefore the first step in the development of a user interface is to analyze the users, their tasks and their work environment in order to identify the requirements and needs of these users. This step forms the basis for a user- and task-oriented user interface. Both humans and machines are considered as interaction partners. The analysis of the users and their behavior employs different methods, e.g. structured interviews, observations, card sorting, etc. Together these should give a preferably complete image of the working task, the various groups of users and their working environment. To apply these methods, experts from several professions, e.g. engineers, computer scientists and psychologists, should be involved. Especially in the analysis phase, task models are generated for documentation and user interface generation, which implicitly contain a function model of the process and/or of the machine (Meixner and Goerlich, 2008).
Structure design
The results of the analysis phase are refined within the structuring phase. An abstract use model (Zuehlke and Thiels, 2008) is developed on the basis of this information, which is platform independent. The result of the structuring phase is the basic structure of the future user interface. The use model is a formal model of the use contexts, tasks and information demands regarding the functionality of the machine. The use model is modeled using the Useware Markup Language, useML (Reuther, 2003), within a model-based development environment.
Design
Parallel to the structuring phase, a hardware platform for the Useware has to be selected. This selection is based on the environmental requirements of the machine usage (pollution, noise, vibration, ...) on the one hand and the user's requirements (display size, optimal interaction device, ...) on the other. Furthermore, economic factors have to be considered. If the use model is highly networked or composed of a large number of elements, a display size sufficient for visualizing the information structure should be provided. These factors partly depend on user groups and contexts of use (Goerlich et al., 2008).
Realisation/Prototyping
During prototyping, developers must select a development tool. If the selected development environment provides import facilities, the developed use model can be imported and the derivation of the user interface can proceed. In most cases the post-processing concerns the realisation of dynamic components as well as the fine design of dialogues. Often there is a media break between the structuring and the (fine) design phase. Today's development tools use a wide variety of notations. Developers need to represent the Useware in the form of prototypes, e.g. paper prototypes or Microsoft PowerPoint prototypes.
Evaluation
A continuous evaluation during the development process allows an early detection of product problems and thus reduces development costs (Bias and Mayhew, 1994). It is relevant to include structural aspects e.g. navigational concepts etc. in the evaluation and not only design aspects.
Some tests have shown that 60% of all use errors are not the result of bad design but due to structural deficiencies. The evaluation phase needs to be considered as a cross-sectional task in the entire development process. Thus it is very important to integrate users in the development of the product.
References
Bias, R. G.; Mayhew, D. J. (1994). Cost-justifying usability. Boston, MA: Academic Press
Goerlich, D.; Thiels, N.; Meixner, G. (2008): Personalized Use Models in Ambient Intelligence Environments. Proc. of the 17th IFAC World Congress, Seoul, Korea, 2008
Meixner, G.; Goerlich, D. (2008): Aufgabenmodellierung als Kernelement eines nutzerzentrierten Entwicklungsprozesses für Bedienoberflächen. Workshop "Verhaltensmodellierung: Best Practices und neue Erkenntnisse", Fachtagung Modellierung, Berlin, Germany, März 2008
Reuther, A. (2003): useML–Systematische Entwicklung von Maschinenbediensystemen mit XML. Fortschritt-Berichte pak, Band 8. Kaiserslautern: Technische Universität Kaiserslautern
Shneiderman, B. (1998): Designing the user interface: Strategies for Effective Human-Computer-Interaction. Massachusetts/USA: Addison-Wesley
Zuehlke, D. (2002): Useware–Herausforderung der Zukunft. Automatisierungstechnische Praxis (atp), 9/2002, S.73-78
Zuehlke, D. (2004): Useware-Engineering für technische Systeme. Berlin, Heidelberg, New York: Springer-Verlag
Zuehlke, D. (2007): Useware. In: K. Landau (Hrsg.): Lexikon Arbeitsgestaltung. Best Practice im Arbeitsprozess. Stuttgart: Gentner Verlag; ergonomia Verlag
Zuehlke, D.; Thiels, N. (2008): Useware engineering: a methodology for the development of user-friendly interfaces. Library Hi Tech, 26(1):126-140
Further literature
Oberquelle, H. (2002): Useware Design and Evolution: Bridging Social Thinking and Software Construction. In: Y. Dittrich, C. Floyd, R. Klischewski (Hrsg.): Social Thinking–Software Practice, S. 391-408, Cambridge, London: MIT-Press
For further information see the Useware-Forum, 17 March 2009
Computing terminology
Zonbu
Zonbu was a technology company that marketed a computing platform combining a web-centric service, a small-form-factor PC, and an open-source-based software architecture. Zonbu was founded by Alain Rossmann (previously founder and CEO of Openwave) and Gregoire Gentil (previously co-founder of Twingo Systems).
Hardware
The first-generation Zonbox hardware was the eBox-4854 sold by DMP Electronics of Taiwan.
Called the Zonbu Mini, it was a nettop computer. It was flash-based and fanless, and thus effectively silent. The official specifications for the device in 2007 were:
1.2 GHz Via Eden CPU (C7 Esther core),
512 MB RAM,
Ethernet over twisted pair 10/100 Mbit/s,
PS/2 keyboard and mouse ports, VGA display port and 6 USB 2.0 ports,
4 GB CompactFlash local storage, and
Graphics up to 2048 x 1536 with 16 million colors – hardware graphics and MPEG2 acceleration.
Disassembly by Zonbu owners has shown that the Zonbu includes options for internal expansion:
Mini PCI slot (for an optional wireless card),
IDE connector, with room in the case for a 2.5" hard drive, and
serial and parallel ports.
Service
The Zonbu subscription plans include online storage (using Amazon S3), automatic upgrades, online support and remote file access. The subscription service is promoted as reducing the hassle of typical computer maintenance tasks, such as hardware repair, software installation, updates and upgrades, and malicious software removal.
Software architecture
The Zonbu OS is a customized version of Linux based on the Gentoo distribution using the Xfce desktop environment. It is geared towards non-technical users, and the user interface focuses more on simplicity than advanced features.
The filesystem architecture combines a transparent overlay filesystem (pioneered by Linux live distributions) with an online backup service. User data is locally cached on the CompactFlash card, then transparently encrypted with 128-bit encryption and transferred to remote storage servers at Amazon S3.
Applications
Zonbu comes pre-loaded with a number of software applications. These include the Firefox web browser, OpenOffice productivity suite and Skype IP phone service, and 30 casual games.
With the default OS, the user is not allowed to install any third-party software. The default set of applications is supposed to fit the needs of non-technical users or of a second home PC. However, Zonbu provides instructions to unlock the operating system and install additional software. The procedure is technical and intended for a minority of users, though its steps are standard for many Linux users: unlocking GRUB, gaining root access, and installing software with emerge/portage.
See also
Everex, provider of the hardware used on some Zonbu models
Linutop Linux PC
Linpus Linux
OLPC laptop
gOS
References
External links
Zonbu official website
EPATec, distributor of same hardware in Germany and Spain
Zonbu First Look Video
Climate Trust
Norhtec MicroClient Jr.
Media
Zonbu Launches Subscription Laptops
A Holiday Helper for the Computer Savvy
Hassle-Free PC
A PC That Uses Less Energy, but Charges a Monthly Fee
Engadget: Zonbu launches subscription-based PC, service plans
Zonbu Desktop Mini
Everex Zonbu Notebook
Sneak Peek at the Zonbu PC
Defunct computer companies of the United States
Linux-based devices
Cloud clients
Armbian
Armbian is a computing build framework that allows users to create ready-to-use images with working kernels in variable user-space configurations for various single-board computers (SBCs). It also provides pre-built images for some supported boards, usually Debian- or Ubuntu-flavored.
Supported hardware
Banana Pi
Banana Pi M2
Banana Pi M2+
Banana Pi Pro
Beelink X2
Clearfog base
Clearfog pro
Cubieboard
Cubieboard2
Cubietruck
Outernet Dreamcatcher
Cubox-i
Lemaker Guitar
Libre Computer Project AML-S905X-CC (Le Potato)
Libre Computer Project ALL-H3-CC (Tritium) H2+/H3/H5
Lamobo R1
Olimex Lime
Olimex Lime 2
Olimex Lime A10
Olimex Lime A33
Olimex Micro
Xunlong Orange Pi 2
Xunlong Orange Pi 3
Xunlong Orange Pi Lite
Xunlong Orange Pi One
Xunlong Orange Pi PC
Xunlong Orange Pi PC+
Xunlong Orange Pi PC2
Xunlong Orange Pi R1
Xunlong Orange Pi Win
Xunlong Orange Pi Zero
Xunlong Orange Pi Zero 2+ H3
Xunlong Orange Pi Zero 2+ H5
Xunlong Orange Pi Zero+
Xunlong Orange Pi+
Xunlong Orange Pi+ 2
Xunlong Orange Pi+ 2e (Plus2e)
MQmaker Miqi
Friendlyarm NanoPC T4
Friendlyarm Nanopi Air
Friendlyarm Nanopi M1
Friendlyarm Nanopi M1+
Friendlyarm Nanopi Neo
Friendlyarm Nanopi Neo2
Odroid C1
Odroid C2
Odroid C4
Odroid HC4
Odroid XU4
Odroid N2/N2+
LinkSprite Pcduino 2
LinkSprite Pcduino 3
LinkSprite Pcduino 3 nano
Pine64 (a.k.a. Pine A64)
Pine64so
Pinebook64
Rock Pi 4
RockPro64
Roseapple Pi
Asus Tinkerboard
Udoo
Udoo Neo
Helios4
Helios64
See also
Raspberry Pi OS
References
ARM Linux distributions
ARM operating systems
Debian-based distributions
Ubuntu derivatives
Operating systems based on the Linux kernel
Free software culture and documents
Linux distributions
Post-WIMP
In computing, post-WIMP ("windows, icons, menus, pointer") comprises work on user interfaces, mostly graphical user interfaces, which attempt to go beyond the paradigm of windows, icons, menus and a pointing device, i.e. WIMP interfaces.
The reason WIMP interfaces have become so prevalent since their conception at Xerox PARC is that they are very good at abstracting work-spaces, documents, and their actions. Their desktop metaphor, which presents documents as paper sheets or folders, makes WIMP interfaces easy to introduce to new users. Furthermore, their basic representation as rectangular regions on a 2D flat screen makes them a good fit for system programmers, thus favoring the abundance of commercial widget toolkits in this style.
However, WIMP interfaces are not optimal for working with certain tasks or through input devices which differ from a mouse and keyboard. WIMPs are usually pixel-hungry, so given limited screen real estate they can distract attention from the task at hand. Thus, other interfaces can better encapsulate workspaces, actions, and objects for such tasks.
Interfaces based on these considerations, now called "post-WIMP", have made their way to the general public in mobile and embedded applications. Meanwhile, software for desktop computer workstations still uses WIMP interfaces, but has started undergoing major operational changes as desktop marketshare declines. These include the exploration of virtual 3D space, interaction techniques for window/icon sorting, focus, and embellishment.
The seminal paper for post-WIMP interfaces is "Non Command User Interfaces" by Jakob Nielsen (1993), followed by "The Anti-Mac Interface". Updated proposals are discussed in "Post-WIMP user interfaces" by Andries van Dam. Michel Beaudouin-Lafon subsequently proposed a framework called instrumental interaction, which defines a design space for post-WIMP interaction techniques and a set of properties for comparing them. Examples of post-WIMP interaction include 3D interaction and reality-based interaction.
Examples
Computer game
Virtual reality systems
Gesture-based interfaces
Voice user interfaces
See-through tools
Smartphones and mobile apps
Zooming user interfaces
Tangible user interfaces
Web applications
See also
10-foot user interface
Natural user interface
References
Graphical user interfaces
User interface techniques
OMNIWRITER
OMNIWRITER is a word processor published by HESware for the Commodore 64 home computer. Called the "Dean of Commodore 64 word-processing programs" in Stewart Brand's Whole Earth Software Catalog, OMNIWRITER sold at a list price of US$79. It was specifically recommended for home business use.
The package included a reference card that fit around the Commodore's function keys, customizable colors, and scrolling 80-column width display. The maximum file size is 23 pages, but multiple files can be chained for printing. OMNIWRITER can import up to 240 columns of data from Microsoft Multiplan.
References
Commodore 64 software
Word processors
Discontinued software
XaAES
XaAES is a graphical user interface for the OS kernel MiNT (now known as FreeMiNT), and is aimed at systems that are compatible with 16/32 bit (hence ST) Atari computers such as the ST, TT or Falcon. The combination of MiNT and XaAES is the natural successor to MultiTOS.
History
XaAES - The beginning
XaAES is a free AES (Application Environment Service) written with MiNT in mind, originally developed by Craig Graham (Data Uncertain Software) back in September 1995. Taken from the XaAES beta6, here is a snippet of the readme.txt in which Craig explains his motives for initiating XaAES:
"After using MultiTOS, then AES4.1, I became frustrated at the lack of a decent GUI to use the real power of the MiNT kernel - X Windows is all very well, but I can't run GEM programs on it. MultiTOS (even AES 4.1) is too slow. Geneva didn't run with MiNT (and, having tried the new MiNT compatible version, I can say it wasn't very compatible - at least AES 4.1 is quite stable, if a little slow). MagiC lives in a very fast, very small world all its own, with no networking support , few programs written to exploit it."
NOTE: MagiC later became available on Mac OS (and still later on the x86 PC) with built-in networking, and network drivers also began to appear for the Atari ST. A lot of MagiC software was MiNT compatible, and vice versa, but that came later than the time period of the above quote.
Craig worked actively on XaAES until 1997 when he stopped the development, at that time a plethora of applications were already usable under XaAES.
In 1998 the project was taken up by Swedish programmer Johan Klockars. He had been involved already during Craig's maintainership and at this point he stepped forward after a period of inactivity.
Johan's work resulted in several bugfixes which eventually were released as Beta7+. Shortly after this beta Johan also made the decision to hand over the project to someone else. This time it really seemed like XaAES had hit the end of the road, with no one interested in taking up the project again.
After a period of complete standstill, Dutch coder Henk Robbers took over the project in November 1999. During Henk's maintainership considerable progress was made, and XaAES went from interesting to rather usable, showing great potential. The visual appearance was made to look closer to that of N.AES, as this was the obvious reference target - the AES that at the time was the GUI for FreeMiNT. XaAES also became a lot more robust, although the response to key and mouse input was still somewhat of a problem.
Odd Skancke (aka Ozk) continued the development of XaAES, and together with Frank Naumann (then FreeMiNT maintainer) released graphical improvements (skinning) with FreeMiNT 1.16. Alan Hourihane, as the subsequent FreeMiNT maintainer, was left to do bug fixes until around 2009, when, after a resurgence of interest in the FreeMiNT OS, XaAES was maintained and extended considerably by Helmut Karlowski (who maintains his own branch), especially in the area of Atari TOS application compatibility.
XaAES goes CVS
In early 2003 Henk Robbers (of AHCC fame, also makes XaAES beta6 source available) decided it was time to let someone else carry on his work, as he wanted to move on to other computing issues. When Henk went looking for someone who could take care of the continued development, the idea that XaAES should be part of the FreeMiNT project was suggested. After all, it was developed to be an AES for MiNT exclusively, and since FreeMiNT is being administrated via CVS, anyone could access the sources and contribute.
The move to CVS was made possible thanks to great efforts from the FreeMiNT maintainer Frank Naumann, who made the necessary changes to allow XaAES to compile under gcc. In earlier XaAES builds, one of the major problems has been the somewhat irregular response to mouse buttons. This was reworked by Odd Skancke (aka Ozk), something that also resulted in a complete rewrite of the XDD. The moose.xdd (mouse device driver) is now coded in C too, just like the rest of the XaAES code.
Development was later moved from AtariForge to an SVN repository at SourceForge, and from there to the publicly browsable FreeMiNT GIT repository on GitHub.
XaAES - A FreeMiNT kernel module
In order to get a clean and fast XaAES, the best solution turned out to be changing XaAES into a kernel module. To achieve this goal a completely new API was constructed, and it was quickly apparent that the new kernel module offered massively improved performance. Most noticeably, the response time was significantly improved, resulting in a much more snappy and responsive experience when trying to click a button to see live window redraws, etc. All in all, XaAES reached a whole new level after being integrated this tightly with FreeMiNT and as of the 1.16.1 FreeMiNT release it must be considered highly usable. With the implementation of window shading the list of missing features was getting short.
(This section is used on Wikipedia with permission from http://xaaes.atariforge.net)
See also
EmuTOS
Atari TOS
MultiTOS
MiNT
FreeMiNT
References
External links
The Unofficial XaAES page
XaAES source
Free windowing systems
Access Systems Americas
ACCESS Systems Americas, Inc. (formerly PalmSource) is a subsidiary of ACCESS which develops the Palm OS PDA operating system and its successor, the Access Linux Platform, as well as BeOS. PalmSource was spun off from Palm Computing, Inc.
Palm OS runs on 38 million devices that have been sold since 1996 from hardware manufacturers including Palm, Inc., Samsung, IBM, Aceeca, AlphaSmart, Fossil, Inc., Garmin, Group Sense PDA (Xplore), Kyocera, PiTech, Sony, and Symbol. PalmSource also develops several programs for the Palm OS, and as of December 2005, PalmGear claims to offer 28,769 software titles of varying genres. Palm OS software programs can also be downloaded from CNET, PalmSource, Handango, and Tucows.
PalmSource also owns BeOS, which it purchased from Be Inc. in August 2001.
History
In January 2002, Palm, Inc. set up a wholly owned subsidiary to develop and license Palm OS, which was named PalmSource in February. In October 2003, PalmSource was spun off from Palm as an independent company, and Palm renamed itself palmOne. palmOne and PalmSource set up a holding company that owned the Palm trademark.
In January 2004, PalmSource announced the successor to classic Palm OS called Palm OS Cobalt. However, it failed to gain support from hardware licensees. That December, PalmSource acquired China MobileSoft, a software company with a mobile Linux offering. As a result, PalmSource announced that they would extend Palm OS to run on top of the Linux architecture.
In May 2005, palmOne purchased PalmSource's share of the Palm trademark for US$30 million and two months later renamed itself Palm, Inc. As part of the agreement, palmOne granted certain rights to Palm trademarks to PalmSource and licensees for a four-year transition period. Later that year, ACCESS, which specializes in mobile and embedded web browser technologies, including NetFront, acquired PalmSource for US$324 million. In October 2006, PalmSource announced that it would rename itself to ACCESS, to match its parent company's name.
See also
List of Palm OS Devices
References
External links
PalmSource Historical SEC Filings
Electronics companies of the United States
Companies formerly listed on the Nasdaq
Palm, Inc.
IBM System/360 Model 75
The IBM System/360 Model 75 is a discontinued high end/high performance system that was introduced on April 22, 1965. Although it played many roles in IBM's System/360 lineup, it accounted for a small fraction of a percent of the 360 systems sold. Five Model 75 computers housed at NASA's Real Time Computer Complex were used during the Apollo program.
Models
Three models, the H75, I75, and J75, were configured with one, two, or four IBM 2365 Model 3 Processor Storage units respectively, each providing 262,144 (256K) bytes of core memory; thus the H75 had 256K bytes of core, the I75 524,288 (512K), and the J75 1,048,576 (1 MB).
Performance
The high performance of the Model 75 was attributed to half a dozen advanced features, including parallel arithmetic, overlapped memory fetch, and parallel addition for address calculation.
Furthermore, independent storage sections provided two-way (H75) or four-way (I75, J75) interleaving of memory access. Even with only two-way interleaving, "an effective sequential access rate of 400 nanoseconds per double word (eight bytes) is possible".
Features
The Model 75 implements the complete System/360 "universal instruction set" architecture, including floating-point, decimal, and character operations as standard features.
References
External links
http://ibiblio.org/comphist/node/105
http://www-03.ibm.com/ibm/history/exhibits/mainframe/images/overlay/2423PH2075.jpg
System 360 Model 75
Computer-related introductions in 1965
Orthogonal instruction set
In computer engineering, an orthogonal instruction set is an instruction set architecture where all instruction types can use all addressing modes. It is "orthogonal" in the sense that the instruction type and the addressing mode vary independently. An orthogonal instruction set does not impose a limitation that requires a certain instruction to use a specific register so there is little overlapping of instruction functionality.
Orthogonality was considered a major goal for processor designers in the 1970s, and the VAX-11 is often used as the benchmark for this concept. However, the introduction of RISC design philosophies in the 1980s significantly reversed the trend toward ever greater orthogonality.
Modern CPUs often simulate orthogonality in a preprocessing step before performing the actual tasks in a RISC-like core. This "simulated orthogonality" is in general a broader concept, encompassing the notions of decoupling and completeness in function libraries, as in the mathematical concept: an orthogonal function set is easy to use as a basis for expanding other functions, since changing one component does not affect the others.
Basic concepts
At their core, all general purpose computers work in the same underlying fashion; data stored in a main memory is read by the central processing unit (CPU) into a fast temporary memory (e.g. CPU registers), acted on, and then written back to main memory. Memory consists of a collection of data values, encoded as numbers and referred to by their addresses, also a numerical value. This means the same operations applied to the data can be applied to the addresses themselves. While being worked on, data can be temporarily held in processor registers, scratchpad values that can be accessed very quickly. Registers are used, for example, when adding up strings of numbers into a total.
Single instruction, single operand
In early computers, the instruction set architecture (ISA) often used a single register, in which case it was known as the accumulator. Instructions included an address for the operand. For instance, an ADD address instruction would cause the CPU to retrieve the number in memory found at that address and then add it to the value already in the accumulator. This very simple example ISA has a "one-address format" because each instruction includes the address of the data.
One-address machines have the disadvantage that even simple actions like an addition require multiple instructions, each of which takes up scarce memory, and requires time to be read. Consider the simple task of adding two numbers, 5 + 4. In this case, the program would have to load the value 5 into the accumulator with the LOAD address instruction, use the ADD address instruction pointing to the address for the 4, and finally SAVE address to store the result, 9, back to another memory location.
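The three-instruction sequence above can be sketched as a toy accumulator machine. The mnemonics, addresses, and Python representation below are purely illustrative and do not correspond to any real ISA:

```python
# Toy one-address (accumulator) machine: every instruction names a single
# memory address; the accumulator is the implicit second operand.
memory = {100: 5, 101: 4, 102: 0}  # hypothetical addresses for 5, 4, and the result
acc = 0

def load(addr):            # LOAD address: memory -> accumulator
    global acc
    acc = memory[addr]

def add(addr):             # ADD address: accumulator += memory[address]
    global acc
    acc += memory[addr]

def save(addr):            # SAVE address: accumulator -> memory
    memory[addr] = acc

# 5 + 4 costs three instructions (and three memory references) on a
# one-address machine:
load(100)
add(101)
save(102)
print(memory[102])  # 9
```

The point of the sketch is the instruction count: even a single addition consumes three instruction fetches, which is exactly the overhead that multi-address formats were designed to remove.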
Single instruction, multiple operands
Further improvements can be found by providing the address of both of the operands in a single instruction, for instance, ADD address 1, address 2. Such "two-address format" ISAs are very common. One can further extend the concept to a "three-address format" where the SAVE is also folded into an expanded ADD address 1, address 2, address of result.
It is often the case that the basic computer word is much larger than needed to hold just the instruction and an address, and in most systems, there are leftover bits that can be used to hold a constant instead of an address. Instructions can be further improved if they allow any one of the operands to be replaced by a constant. For instance, ADD address 1, constant 1 eliminates one memory cycle, and ADD constant 1, constant 2 another.
Multiple data
Further complexity arises when one considers common patterns in which memory is accessed. One very common pattern is that a single operation may be applied across a large amount of similar data. For instance, one might want to add up 1,000 numbers. In a simple two-address format of instructions, there is no way to change the address, so 1,000 additions have to be written in the machine language. ISAs fix this problem with the concept of indirect addressing, in which the address of the next point of data is not a constant, but itself held in memory. This means the programmer can change the address by performing addition on that memory location. ISAs also often include the ability to offset an address from an initial location, by adding a value held in one of its registers, in some cases a special index register. Others carry out this addition automatically as part of the instructions that use it.
The variety of addressing modes leads to a profusion of slightly different instructions. Considering a one-address ISA, for even a single instruction, ADD, we now have many possible "addressing modes":
Immediate (constant): ADD.C constant 1 — adds the constant value to the result in the accumulator
Direct address: ADD.A address 1 — add the value stored at address 1
Memory indirect: ADD.M address 1 — read the value in address 1, use that value as another address and add that value
Many ISAs also have registers that can be used for addressing as well as math tasks. This can be used in a one-address format if a single address register is used. In this case, a number of new modes become available:
Register direct: ADD.R register 1 — add the value stored in the address held in register one
Displacement: ADD.D constant 1 — add the constant to the address register, then add the value found in memory at that resulting location
Index: ADD.I register 1 — add the value in register 1 to the address register to make a new address and then adds the value at that location to the accumulator
Autoindex: ADD.AI register 1 — as in the Index case, but automatically increments the address
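The modes listed above differ only in how the effective operand is located. A minimal sketch of that resolution step follows; the mode names and memory contents are invented for illustration, with Python standing in for the decoder logic:

```python
# Resolve an operand under several of the addressing modes described above.
# All names and values are illustrative; real ISAs encode the mode in
# dedicated instruction bits.
memory = {10: 40, 40: 7, 50: 3}
registers = {"addr": 40, "r1": 10}

def operand(mode, value):
    if mode == "immediate":        # the value itself is the operand
        return value
    if mode == "direct":           # the value is an address
        return memory[value]
    if mode == "indirect":         # the value is the address of an address
        return memory[memory[value]]
    if mode == "register_direct":  # the address is held in a register
        return memory[registers[value]]
    if mode == "displacement":     # constant added to the address register
        return memory[registers["addr"] + value]
    raise ValueError(mode)

accumulator = 0
accumulator += operand("immediate", 5)   # 5
accumulator += operand("direct", 50)     # + memory[50] = 3
accumulator += operand("indirect", 10)   # + memory[memory[10]] = memory[40] = 7
print(accumulator)  # 15
```

Each extra branch in `operand` corresponds to extra decoder circuitry in hardware, which is the complexity cost discussed in the next section.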
Orthogonality
Orthogonality is the principle that every instruction should be able to use any supported addressing mode. In this example, if the direct addressing version of ADD is available, all the others should be as well. The reason for this design is not aesthetic; the goal is to reduce the total size of a program's object code. By providing a variety of addressing modes, the ISA allows the programmer to choose the one that precisely matches the need of their program at that point, and thereby reduce the need to use multiple instructions to achieve the same end. This means the total number of instructions is reduced, both saving memory and improving performance. Orthogonality was often described as being highly "bit efficient".
As the ultimate end of orthogonal design is simply to allow any instruction to use any type of address, implementing orthogonality is often simply a case of adding more wiring between the parts of the processor. However, it also adds to the complexity of the instruction decoder, the circuitry that reads an instruction from memory at the location pointed to by the program counter and then decides how to process it.
In the example ISA outlined above, the ADD.C instruction, with its immediate (constant) encoding, already has the data it needs to run the instruction and no further processing is needed: the decoder simply sends the value into the arithmetic logic unit (ALU). However, if the ADD.A instruction is used, the address has to be read, the value at that memory location read, and then the ALU can continue. This series of events takes much longer to complete and requires more internal steps.
As a result, the time needed to complete different variations of an instruction can vary widely, which adds complexity to the overall CPU design. Therefore, orthogonality represents a tradeoff in design; the computer designer can choose to offer more addressing modes to the programmer to improve code density at the cost of making the CPU itself more complex.
When memory was small and expensive, especially during the era of drum memory or core memory, orthogonality was highly desirable. However, the complexity was often beyond what could be achieved using current technology. For this reason, most machines from the 1960s offered only partial orthogonality, as much as the designers could afford. It was in the 1970s that the introduction of large scale integration significantly reduced the complexity of computer designs and fully orthogonal designs began to emerge. By the 1980s, such designs could be implemented on a single-chip CPU.
In the late 1970s, with the first high-powered fully orthogonal designs emerging, the goal widened to become the high-level language computer architecture, or HLLCA for short. Just as orthogonality was desired to improve the bit density of machine language, HLLCA's goal was to improve the bit density of high-level languages like ALGOL 68. These languages generally used an activation record, a type of complex stack that stored temporary values, which the ISAs generally did not directly support and had to be implemented using many individual instructions from the underlying ISA. Adding support for these structures would allow the program to be translated more directly into the ISA.
Orthogonality in practice
The PDP-11
The PDP-11 was substantially orthogonal (primarily excepting its floating point instructions). Most integer instructions could operate on either 1-byte or 2-byte values and could access data stored in registers, stored as part of the instruction, stored in memory, or stored in memory and pointed to by addresses in registers. Even the PC and the stack pointer could be affected by the ordinary instructions using all of the ordinary data modes. "Immediate" mode (hardcoded numbers within an instruction, such as ADD #4, R1 (R1 = R1 + 4)) was implemented as the mode "register indirect, autoincrement", with the program counter (R7) specified as the register for indirection and autoincrement.
The PDP-11 used 3-bit fields for addressing modes (0-7) and registers (R0–R5, SP, PC), so there were (electronically) 8 addressing modes. Immediate and absolute address operands, obtained by applying the two autoincrement modes to the program counter (R7), brought the total to 10 conceptual addressing modes.
The VAX-11
The VAX-11 extended the PDP-11's orthogonality to all data types, including floating point numbers. Instructions such as 'ADD' were divided into data-size dependent variants such as ADDB, ADDW, ADDL, ADDP, ADDF for add byte, word, longword, packed BCD and single-precision floating point, respectively. Like the PDP-11, the Stack Pointer and Program Counter were in the general register file (R14 and R15).
The general form of a VAX-11 instruction would be:
opcode [ operand ] [ operand ] ...
Each component being one byte, the opcode a value in the range 0–255, and each operand consisting of two nibbles, the upper 4 bits specifying an addressing mode, and the lower 4 bits (usually) specifying a register number (R0–R15).
In contrast to the PDP-11's 3-bit fields, the VAX-11's 4-bit sub-bytes resulted in 16 logical addressing modes (0–15). However, addressing modes 0–3 were "short immediate" for immediate data of 6 bits or less (the 2 low-order bits of the addressing mode being the 2 high-order bits of the immediate data, when prepended to the remaining 4 bits in that data-addressing byte). Since addressing modes 0-3 were identical, this made 13 (electronic) addressing modes, but as in the PDP-11, the use of the Stack Pointer (R14) and Program Counter (R15) created a total of over 15 conceptual addressing modes (with the assembler program translating the source code into the actual stack-pointer or program-counter based addressing mode needed).
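The two-nibble operand specifier described above can be pulled apart with simple shifts. The sketch below follows only the encoding stated in the text (upper 4 bits mode, lower 4 bits register, modes 0-3 a 6-bit short literal); the example byte values are hypothetical:

```python
# Split an operand specifier byte as described above:
# upper nibble = addressing mode, lower nibble = register number,
# except modes 0-3, where the low 6 bits of the byte form a short literal.
def decode_operand(byte):
    mode = (byte >> 4) & 0xF
    reg = byte & 0xF
    if mode <= 3:
        # Short immediate: the 2 low-order bits of the mode field are the
        # high-order bits of the literal, so the literal is the low 6 bits.
        return ("short_immediate", byte & 0x3F)
    return (mode, reg)

print(decode_operand(0x5A))  # (5, 10): mode 5 applied to register R10
print(decode_operand(0x2F))  # ('short_immediate', 47): a 6-bit literal
```

Because every operand carries its own mode nibble, any instruction can combine any operand forms, which is the orthogonality the VAX-11 is remembered for.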
The MC68000 and similar
Motorola's designers attempted to make the assembly language orthogonal while the underlying machine language was somewhat less so. Unlike the PDP-11, the MC68000 (68k) used separate registers to store data and the addresses of data in memory. The ISA was orthogonal to the extent that addresses could only be used in those registers, but there was no restriction on which of the registers could be used by different instructions. Likewise, the data registers were also orthogonal across instructions.
In contrast, the NS320xx series were originally designed as single-chip implementations of the VAX-11 ISA. Although this had to change due to legal issues, the resulting system retained much of the VAX-11's overall design philosophy and remained completely orthogonal. This included the elimination of the separate data and address registers found in the 68k.
The 8080 and follow on designs
The 8-bit Intel 8080 (as well as the 8085 and 8051) microprocessor was basically a slightly extended accumulator-based design and therefore not orthogonal. An assembly-language programmer or compiler writer had to be mindful of which operations were possible on each register: Most 8-bit operations could be performed only on the 8-bit accumulator (the A-register), while 16-bit operations could be performed only on the 16-bit pointer/accumulator (the HL-register pair), whereas simple operations, such as increment, were possible on all seven 8-bit registers. This was largely due to a desire to keep all opcodes one byte long.
The binary-compatible Z80 later added prefix-codes to escape from this 1-byte limit and allow for a more powerful instruction set. The same basic idea was employed for the Intel 8086, although, to allow for more radical extensions, binary-compatibility with the 8080 was not attempted here. It maintained some degree of non-orthogonality for the sake of high code density at the time. The 32-bit extension of this architecture that was introduced with the 80386, was somewhat more orthogonal despite keeping all the 8086 instructions and their extended counterparts. However, the encoding-strategy used still shows many traces from the 8008 and 8080 (and Z80). For instance, single-byte encodings remain for certain frequent operations such as push and pop of registers and constants; and the primary accumulator, the EAX register, employs shorter encodings than the other registers on certain types of operations. Observations like this are sometimes exploited for code optimization in both compilers and hand written code.
RISC
A number of studies through the 1970s demonstrated that the flexibility offered by orthogonal modes was rarely or never used in actual problems. In particular, an effort at IBM studied traces of code running on the System/370 and demonstrated that only a fraction of the available modes were being used in actual programs. Similar studies, often performed on the VAX, demonstrated the same pattern. In some cases, it was shown that the complexity of the instructions meant they took longer to perform than the sequence of smaller instructions, the canonical example being the VAX's INDEX instruction.
During this same period, semiconductor memories were rapidly increasing in size and decreasing in cost. However, they were not improving in speed at the same rate. This meant the time needed to access data from memory was growing in relative terms in comparison to the speed of the CPUs. This argued for the inclusion of more registers, giving the CPU more temporary values to work with. A larger number of registers meant more bits in the computer word would be needed to encode the register number, which suggested that the instructions themselves be reduced in number to free up room.
Finally, a paper by Andrew Tanenbaum demonstrated that 97% of all the constants in a program are between 0 and 10, with 0 representing between 20 and 30% of the total. Additionally, between 30 and 40% of all the values in a program are constants, with simple variables (as opposed to arrays or such) another 35 to 40%. If the processor uses a larger instruction word, like 32-bits, two constants and a register number can be encoded in a single instruction as long as the instruction itself does not use too many bits.
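The bit-budget argument can be made concrete by packing fields into a fixed-width word. The field widths below are invented purely for illustration and do not correspond to any real instruction format:

```python
# Pack and unpack a hypothetical 32-bit instruction word:
# 8-bit opcode | 5-bit register number | 19-bit constant.
def pack(opcode, reg, const):
    assert opcode < (1 << 8) and reg < (1 << 5) and const < (1 << 19)
    return (opcode << 24) | (reg << 19) | const

def unpack(word):
    return (word >> 24) & 0xFF, (word >> 19) & 0x1F, word & 0x7FFFF

# Since most program constants are tiny (0-10 per Tanenbaum's measurements),
# even a modest constant field captures the vast majority of them inline.
word = pack(0x12, 3, 10)
print(unpack(word))  # (18, 3, 10)
```

A constant that fits the field costs no extra memory reference; only the rare large constant needs a separate load, which is the trade RISC designs exploited.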
These observations led to the abandonment of the orthogonal design as a primary goal of processor design, and the rise of the RISC philosophy in the 1980s. RISC processors generally have only two addressing modes, direct (constant) and register. All of the other modes found in older processors are handled explicitly using load and store instructions moving data to and from the registers. Only a few addressing modes may be available, and these modes may vary depending on whether the instruction refers to data or involves a transfer of control.
Notes
References
Instruction processing
HP 300LX
The HP 300LX was one of the first handheld PCs designed to run the Windows CE 1.0 operating system from Microsoft. Unlike other HPCs of the time, the resistive touch screen had an enhanced screen resolution of 640x240 with 4 shades of grey, rather than the standard 480x240 resolution of other devices, such as the Casio Cassiopeia A-10. The device also sported a full PC card slot, a serial link cable plug, and an infrared port.
It was released with 'Pocket' versions of Microsoft applications, such as Word and Excel, and PIM applications such as Tasks, Calendar and Contacts. A very basic version of Internet Explorer was included with Windows CE version 1.0 (with an update to 1.1 released shortly after). Inbox was also included for email capability. Access to the internet required use of a third-party PCMCIA modem or network card.
The device was made of moulded grey plastic with a lighter grey plastic stylus which is stored on the right hand side of the device under the keyboard. The left hand side of the device is taken up by the PCMCIA card slot and eject button. On the right hand side towards the back is the infrared port. On the back of the device is the proprietary serial connection, which has a maximum baud rate of 115,200 bit/s, and the power adapter port. Power for the device was provided by 2 AA batteries, with a CR2032 backup battery to save user data when the 2 AA batteries ran out. A single mono speaker, ROM door and battery compartment are located on the underside of the device.
In the box
HP 300LX
User manual
Serial cable
3rd party software CD
Handheld PC Explorer software CD
HP 320LX and 360LX
The HP 320LX, released in 1997, was an improved version of the 300LX. It was largely identical to its sister unit, but included a backlit screen, an increase in RAM from 2 MB to 4 MB, and a dedicated CompactFlash slot. It was also upgradable to Windows CE 2.0 (which required a physical replacement of the ROM card containing the OS). The 320LX came boxed with an AC adapter and serial cradle in addition to the standard equipment from the 300LX.
An even later variant, HP 360LX, increased the RAM to 8 megabytes and increased the clock speed of the CPU to 60 MHz. It also shipped with Windows CE 2.0.
Ericsson MC12 and MC16
Ericsson marketed re-branded variants of the 320LX and 360LX named MC12 and MC16, respectively. They consisted of essentially the same hardware, but were shipped with a cable and software combo that allowed for using select Ericsson phones as modems.
See also
List of HP Pocket Computers
HP 620LX
HP 660LX
References
300LX
IBM Future Systems project
The Future Systems project (FS) was a research and development project undertaken in IBM in the early 1970s, aiming to develop a revolutionary line of computer products, including new software models which would simplify software development by exploiting modern powerful hardware.
Background and goals
Until the end of the 1960s, IBM had been making most of its profit on the hardware, bundling support software and services along with its systems. Only hardware carried a price tag, but those prices included an allocation for software and services.
Other manufacturers had started to market compatible hardware, mainly peripherals such as tape and disk drives, at a price significantly lower than IBM's, thus shrinking the possible base for recovering the cost of software and services. Early in 1971, after Gene Amdahl had left IBM to set up his own company offering IBM compatible mainframes, an internal IBM taskforce (project Counterpoint) concluded that the compatible mainframe business was indeed a viable business, and that the basis for charging for software and services as part of the hardware price would quickly vanish.
Another strategic issue was that the cost of computing was steadily going down while the costs of programming and operations, being made of personnel costs, were steadily going up. Therefore, the part of the customer's IT budget available for hardware vendors would be significantly reduced in the coming years, and with it the base for IBM revenue. It was imperative that IBM, by addressing the cost of application development and operations in its future products, would at the same time reduce the total cost of IT to the customers and capture a larger portion of that cost.
At the same time, IBM was under legal attack for its dominant position and its policy of bundling software and services in the hardware price, so that any attempt at "re-bundling" part of its offerings had to be firmly justified on a pure technical basis, so as to withstand any legal challenge.
In May–June 1971, an international task force convened in Armonk under John Opel, then a vice-president of IBM. Its assignment was to investigate the feasibility of a new line of computers which would take advantage of IBM's technological advantages to render obsolete all previous computers, competitors' compatible offerings and IBM's own products alike. The task force concluded that the project was worth pursuing, but that the key to acceptance in the marketplace was an order of magnitude reduction in the costs of developing, operating and maintaining application software.
The major objectives of the FS project were consequently stated as follows:
make obsolete all existing computing equipment, including IBM's, by fully exploiting the newest technologies,
diminish greatly the costs and efforts involved in application development and operations,
provide a technically sound basis for re-bundling as much as possible of IBM's offerings (hardware, software and services)
It was hoped that a new architecture making a heavier use of hardware resources, the cost of which was going down, could significantly simplify software development and reduce costs for both IBM and customers.
Technology
Data access
One design principle of FS was a "single-level store" which extended the idea of virtual memory to cover persistent data. Working memory, files, and databases were all accessed in a uniform way by an abstraction of the notion of address.
Therefore, programmers would not have to be concerned whether the object they were trying to access was in memory or on the disk.
This and other planned enhancements were expected to make programming easier and thereby reduce the cost of developing software.
Implementation of that principle required that the addressing mechanism at the heart of the machine would incorporate a complete storage hierarchy management system and major portions of a data base management system, that until then were implemented as add-on software.
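The single-level store idea can be caricatured in a few lines: one flat address space, with the system, not the programmer, deciding whether a page currently lives in working memory or on disk. The sketch below is purely illustrative and assumes nothing about the actual FS design:

```python
# Toy single-level store: the program sees one flat address space; whether
# a page is resident in RAM or sitting on "disk" is hidden from callers.
PAGE = 4096

class SingleLevelStore:
    def __init__(self):
        self.ram = {}    # page number -> bytearray (resident pages)
        self.disk = {}   # page number -> bytearray (paged-out / persistent)

    def _page(self, addr):
        n = addr // PAGE
        if n not in self.ram:                   # fault: fetch from disk or create
            self.ram[n] = self.disk.pop(n, bytearray(PAGE))
        return self.ram[n]

    def read(self, addr):
        return self._page(addr)[addr % PAGE]

    def write(self, addr, value):
        self._page(addr)[addr % PAGE] = value

    def evict(self, addr):                      # transparent to callers
        n = addr // PAGE
        if n in self.ram:
            self.disk[n] = self.ram.pop(n)

store = SingleLevelStore()
store.write(5 * PAGE + 7, 42)    # "memory" or "file": the same call either way
store.evict(5 * PAGE)            # page migrates to disk behind the scenes
print(store.read(5 * PAGE + 7))  # 42 -- faulted back in transparently
```

The programmer-visible payoff is the uniform read/write interface; the hidden cost, as FS discovered, is that the paging machinery must subsume much of what a file system and database manager would otherwise do.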
Processor
Another principle was the use of very high-level complex instructions to be implemented in microcode. As an example, one of the instructions, CreateEncapsulatedModule, was a complete linkage editor. Other instructions were designed to support the internal data structures and operations of programming languages such as FORTRAN, COBOL, and PL/I. In effect, FS was designed to be the ultimate complex instruction set computer (CISC).
Another way of presenting the same concept was that the entire collection of functions previously implemented as hardware, operating system software, data base software and more would now be considered as making up one integrated system, with each and every elementary function implemented in one of many layers including circuitry, microcode, and conventional software. More than one layer of microcode and code were contemplated, sometimes referred to as picocode or millicode.
Depending on the people one was talking to, the very notion of a "machine" therefore ranged between those functions which were implemented as circuitry (for the hardware specialists) to the complete set of functions offered to users, irrespective of their implementation (for the systems architects).
The overall design also called for a "universal controller" to handle primarily input-output operations outside of the main processor. That universal controller would have a very limited instruction set, restricted to those operations required for I/O, pioneering the concept of a reduced instruction set computer (RISC).
Meanwhile, John Cocke, one of the chief designers of early IBM computers, began a research project to design the first reduced instruction set computer (RISC). In the long run, the RISC architecture, which eventually evolved into IBM's Power and PowerPC architecture, proved to be vastly cheaper to implement and capable of achieving much higher clock rate.
History
Project start
In the late 1960s and early 1970s, IBM considered a radical redesign of their entire product line to take advantage of the much lower cost of computer circuitry expected in the 1980s.
The IBM Future Systems project (FS) was officially started in September 1971, following the recommendations of a special task force assembled in the second quarter of 1971. In the course of time, several other research projects in various IBM locations merged into the FS project or became associated with it.
Project management
During its entire life, the FS project was conducted under tight security provisions. The project was broken down into many subprojects assigned to different teams. The documentation was similarly broken down into many pieces, and access to each document was subject to verification of the need-to-know by the project office. Documents were tracked and could be called back at any time.
In Sowa's memo (see External links, below) he noted: "The avowed aim of all this red tape is to prevent anyone from understanding the whole system; this goal has certainly been achieved."
As a consequence, most people working on the project had an extremely limited view of it, restricted to what they needed to know in order to produce their expected contribution. Some teams were even working on FS without knowing. This explains why, when asked to define FS, most people give a very partial answer, limited to the intersection of FS with their field of competence.
Planned product lines
Three implementations of the FS architecture were planned: the top-of-line model was being designed in Poughkeepsie, NY, where IBM's largest and fastest computers were built; the middle model was being designed in Endicott, NY, which had responsibility for the mid-range computers; and the smallest model was being designed in Rochester, MN, which had the responsibility for IBM's small business computers.
A continuous range of performance could be offered by varying the number of processors in a system at each of the three implementation levels.
Early 1973, overall project management and the teams responsible for the more "outside" layers common to all implementations were consolidated in the Mohansic ASDD laboratory (halfway between the Armonk/White Plains headquarters and Poughkeepsie).
Project end
The FS project was killed in 1975. The reasons given for killing the project depend on the person asked, each of whom puts forward the issues related to the domain with which they were familiar. In reality, the success of the project was dependent on a large number of breakthroughs in all areas from circuit design and manufacturing to marketing and maintenance. Although each single issue, taken in isolation, might have been resolved, the probability that they could all be resolved in time and in mutually compatible ways was practically zero.
One symptom was the poor performance of its largest implementation, but the project was also marred by protracted internal arguments about various technical aspects, including internal IBM debates about the merits of RISC vs. CISC designs. The complexity of the instruction set was another obstacle; it was considered "incomprehensible" by IBM's own engineers and there were strong indications that the system wide single-level store could not be backed up in part, foretelling the IBM AS/400's partitioning of the System/38's single-level store. Moreover, simulations showed that the execution of native FS instructions on the high-end machine was slower than the System/370 emulator on the same machine.
The FS project was finally terminated when IBM realized that customer acceptance would be much more limited than originally predicted because there was no reasonable application migration path for 360 architecture customers. In order to leave maximum freedom to design a truly revolutionary system, ease of application migration was not one of the primary design goals for the FS project, but was to be addressed by software migration aids taking the new architecture as a given. In the end, it appeared that the cost of migrating the mass of user investments in COBOL and assembly language based applications to FS was in many cases likely to be greater than the cost of acquiring a new system.
Results
Although the FS project as a whole was killed, a simplified version of the architecture for the smallest of the three machines continued to be developed in Rochester. It was finally released as the IBM System/38, which proved to be a good design for ease of programming, but it was woefully underpowered. The AS/400 inherited the same architecture, but with performance improvements. In both machines, the high-level instruction set generated by compilers is not interpreted, but translated into a lower-level machine instruction set and executed; the original lower-level instruction set was a CISC instruction set with some similarities to the System/360 instruction set. In later machines the lower-level instruction set was an extended version of the PowerPC instruction set, which evolved from John Cocke's RISC machine. The dedicated hardware platform was replaced in 2008 by the IBM Power Systems platform running the IBM i operating system.
Besides System/38 and the AS/400, which inherited much of the FS architecture, bits and pieces of Future Systems technology were incorporated in the following parts of IBM's product line:
the IBM 3081 mainframe computer, which was essentially the System/370 emulator designed in Poughkeepsie, but with the FS microcode removed
the 3800 laser printer, and some machines that would lead to the IBM 3279 terminal and GDDM
the IBM 3850 automatic magnetic tape library
the IBM 8100 mid-range computer, which was based on a CPU called the Universal Controller, which had been intended for FS input/output processing
network enhancements concerning VTAM and NCP
Sources
References
External links
An internal memo by John F. Sowa. This outlines the technical and organizational problems of the FS project in late 1974.
Overview of IBM Future Systems
Computing platforms
Future Systems project
Information technology projects
New World ROM
New World ROM computers are Macintosh models that do not use a Macintosh Toolbox ROM on the logic board. Because Mac OS X does not require the Toolbox, this allowed ROM sizes to shrink dramatically, and facilitated the use of flash memory for system firmware instead of the more expensive and less flexible mask ROM that most previous Macs used. A facility for loading the Toolbox from the startup device was, however, made available, allowing the use of Mac OS 8 and Mac OS 9 on New World machines.
The New World architecture was developed for the Macintosh Network Computer, an unrealized project that eventually contributed several key technologies to the first-generation iMac.
All PowerPC Macs from the iMac, the iBook, the Blue and White Power Mac G3 and the Bronze Keyboard (Lombard) PowerBook G3 forward are New World ROM machines, while all previous models (including the Beige Power Mac G3 and all other beige and platinum Macs) are Old World ROM machines. Intel-based Macs are incapable of running Mac OS 9 (or, indeed, any version of Mac OS X prior to Tiger); these machines use EFI instead of the Open Firmware on which both New World and Old World machines are based.
New World ROM Macs are the first Macs where direct use of the Open Firmware (OF) subsystem is encouraged. Previous PCI Power Macs used Open Firmware for booting, but the implementation was not complete; in these machines OF was only expected to probe PCI devices, then immediately hand control over to the Mac OS ROM. Because of this, versions 1.0.5 and 2.x had several serious bugs, as well as missing functionality (such as the ability to load files from an HFS partition or a TFTP server). Apple also set the default input and output devices to a serial port (the modem port on beige Macs), which made it difficult for normal users to get to Open Firmware; to do so it was necessary to either hook up a terminal, or change the Open Firmware settings from inside Mac OS using a tool such as Boot Variables or Apple's System Disk.
The New World ROM introduced a much-improved version of the Open Firmware interpreter, version 3.0, which added many missing features, fixed most of the bugs from earlier versions, and had the capability to run CHRP boot scripts. The Toolbox ROM was embedded inside a CHRP script in the System Folder called "Mac OS ROM", along with a short loader stub and a copy of the Happy Mac icon suitable for display from Open Firmware. Once the ROM was loaded from disk, the Mac boot sequence continued as usual. As before, Open Firmware could also run a binary boot loader, and version 3.0 added support for ELF objects as well as the XCOFF files versions 1.0.5 and 2.0 supported. Also, version 3.0 (as well as some of the last releases of version 2.x, starting with the PowerBook 3400) officially supported direct access to the Open Firmware command prompt from the console (by setting the auto-boot? variable to false from Mac OS, or by holding down --- at boot).
One major difference between Old World ROM Macs and New World ROM Macs, at least in Classic Mac OS, is that the Gestalt selector for the machine type is no longer usable; all New World ROM Macs use the same mach ID, 406 decimal, and the actual machine ID is encoded in the "model" and "compatible" properties of the root node of the Open Firmware device tree. The New World ROM also sets the "compatible" property of the root node to "MacRISC2" (machines that can boot Classic Mac OS using "Mac OS ROM") or "MacRISC3" (machines that can only boot Mac OS X or another Unix-like system).
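The platform classification described above can be sketched as a simple lookup on the root node's "compatible" property. The property values "MacRISC2" and "MacRISC3" come from the text; the function name and the returned strings are hypothetical, for illustration only:

```python
# Hypothetical sketch: mapping the "compatible" property of a New World
# Mac's Open Firmware device-tree root node to the boot capability
# described above. Only the two property values are from the source.
BOOT_CAPABILITIES = {
    "MacRISC2": "can boot Classic Mac OS using 'Mac OS ROM'",
    "MacRISC3": "can boot only Mac OS X or another Unix-like system",
}

def classify_new_world_mac(compatible: str) -> str:
    """Return the boot capability implied by the root node's
    "compatible" property, or report an unrecognized value."""
    return BOOT_CAPABILITIES.get(compatible, "unknown platform string")

print(classify_new_world_mac("MacRISC2"))
print(classify_new_world_mac("MacRISC3"))
```

In a real system the property would be read from the Open Firmware device tree rather than passed in as a string; the dictionary lookup only mirrors the two-way distinction the article draws.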
It is somewhat easier to boot a non-Mac-OS operating system on a New World system, and indeed OpenBSD's bootloader only works on a New World system.
The simplest way to distinguish a New World ROM Mac is that it will have a factory built-in USB port. No Old World ROM Mac had a USB port as factory equipment; instead, they used ADB for keyboard and mouse, and mini-DIN-8 "modem" and "printer" serial ports for other peripherals. Also, New World ROM Macs generally do not have a built-in floppy drive.
References
External links
The Mac ROM Enters a New World Apple's original New World ROM documentation
MacOS
Macintosh operating systems
Macintosh firmware
Laptop
A laptop, laptop computer, or notebook computer is a small, portable personal computer (PC) with a screen and alphanumeric keyboard. These typically have a clam shell form factor with the screen mounted on the inside of the upper lid and the keyboard on the inside of the lower lid, although 2-in-1 PCs with a detachable keyboard are often marketed as laptops or as having a laptop mode. Laptops are folded shut for transportation, and thus are suitable for mobile use. Its name comes from lap, as it was deemed practical to be placed on a person's lap when being used. Today, laptops are used in a variety of settings, such as at work, in education, for playing games, web browsing, for personal multimedia, and general home computer use.
As of 2021, in American English, the terms 'laptop computer' and 'notebook computer' are used interchangeably; in other dialects of English one or the other may be preferred. Although the terms 'notebook computers' or 'notebooks' originally referred to a specific size of laptop (originally smaller and lighter than mainstream laptops of the time), the terms have come to mean the same thing and notebook no longer refers to any specific size.
Laptops combine all the input/output components and capabilities of a desktop computer, including the display screen, small speakers, a keyboard, data storage device, sometimes an optical disc drive, pointing devices (such as a touch pad or pointing stick), with an operating system, a processor and memory into a single unit. Most modern laptops feature integrated webcams and built-in microphones, while many also have touchscreens. Laptops can be powered either from an internal battery or by an external power supply from an AC adapter. Hardware specifications, such as the processor speed and memory capacity, significantly vary between different types, models and price points.
Design elements, form factor and construction can also vary significantly between models depending on the intended use. Examples of specialized models of laptops include rugged notebooks for use in construction or military applications, as well as low production cost laptops such as those from the One Laptop per Child (OLPC) organization, which incorporate features like solar charging and semi-flexible components not found on most laptop computers. Portable computers, which later developed into modern laptops, were originally considered to be a small niche market, mostly for specialized field applications, such as in the military, for accountants, or traveling sales representatives. As portable computers evolved into modern laptops, they became widely used for a variety of purposes.
History
As the personal computer (PC) became feasible in 1971, the idea of a portable personal computer soon followed. A "personal, portable information manipulator" was imagined by Alan Kay at Xerox PARC in 1968, and described in his 1972 paper as the "Dynabook". The IBM Special Computer APL Machine Portable (SCAMP) was demonstrated in 1973. This prototype was based on the IBM PALM processor. The IBM 5100, the first commercially available portable computer, appeared in September 1975, and was based on the SCAMP prototype.
As 8-bit CPU machines became widely accepted, the number of portables increased rapidly. The first "laptop-sized notebook computer" was the Epson HX-20, invented (patented) by Suwa Seikosha's Yukio Yokozawa in July 1980, introduced at the COMDEX computer show in Las Vegas by Japanese company Seiko Epson in 1981, and released in July 1982. It had an LCD screen, a rechargeable battery, and a calculator-size printer, in a chassis, the size of an A4 notebook. It was described as a "laptop" and "notebook" computer in its patent.
The portable microcomputer Portal of the French company R2E Micral CCMC officially appeared in September 1980 at the Sicob show in Paris. It was a portable microcomputer designed and marketed by the studies and developments department of R2E Micral at the request of the company CCMC, which specialized in payroll and accounting. It was based on an 8-bit Intel 8085 processor clocked at 2 MHz. It was equipped with 64 KB of central RAM, a keyboard with 58 alphanumeric keys and 11 numeric keys in separate blocks, a 32-character screen, a floppy disk drive with a capacity of 140,000 characters, a thermal printer with a speed of 28 characters per second, an asynchronous channel, and a 220 V power supply. It weighed 12 kg and measured 45 × 45 × 15 cm. It provided total mobility. Its operating system was aptly named Prologue.
The Osborne 1, released in 1981, was a luggable computer that used the Zilog Z80. It had no battery, a cathode ray tube (CRT) screen, and dual single-density floppy drives. Both Tandy/RadioShack and Hewlett-Packard (HP) also produced portable computers of varying designs during this period. The first laptops using the flip form factor appeared in the early 1980s. The Dulmont Magnum was released in Australia in 1981–82, but was not marketed internationally until 1984–85. The US$8,150 GRiD Compass 1101, released in 1982, was used at NASA and by the military, among others. The Sharp PC-5000, Ampere, and Gavilan SC were released in 1983. The Gavilan SC was described as a "laptop" by its manufacturer, while the Ampere had a modern clamshell design. The Toshiba T1100 won acceptance not only among PC experts but also in the mass market as a way to have PC portability.
From 1983 onward, several new input techniques were developed and included in laptops, including the touch pad (Gavilan SC, 1983), the pointing stick (IBM ThinkPad 700, 1992), and handwriting recognition (Linus Write-Top, 1987). Some CPUs, such as the 1990 Intel i386SL, were designed to use minimum power to increase battery life of portable computers and were supported by dynamic power management features such as Intel SpeedStep and AMD PowerNow! in some designs.
Displays reached 640x480 (VGA) resolution by 1988 (Compaq SLT/286), and color screens started becoming a common upgrade in 1991, with increases in resolution and screen size occurring frequently until the introduction of 17" screen laptops in 2003. Hard drives started to be used in portables, encouraged by the introduction of 3.5" drives in the late 1980s, and became common in laptops starting with the introduction of 2.5" and smaller drives around 1990; capacities have typically lagged behind physically larger desktop drives.
Common resolutions of laptop webcams are 720p (HD), and in lower-end laptops 480p. The earliest known laptops with 1080p (Full HD) webcams like the Samsung 700G7C were released in the early 2010s.
Optical disc drives became common in full-size laptops around 1997; this initially consisted of CD-ROM drives, which were supplanted by CD-R, DVD, and Blu-ray drives with writing capability over time. Starting around 2011, the trend shifted against internal optical drives, and as of 2021, they have largely disappeared; they are still readily available as external peripherals.
Etymology
While the terms laptop and notebook are used interchangeably today, there is some question as to the original etymology and specificity of either term—the term laptop appears to have been coined in the early 1980s to describe a mobile computer which could be used on one's lap, and to distinguish these devices from earlier and much heavier, portable computers (informally called "luggables"). The term "notebook" appears to have gained currency somewhat later as manufacturers started producing even smaller portable devices, further reducing their weight and size and incorporating a display roughly the size of A4 paper; these were marketed as notebooks to distinguish them from bulkier mainstream or desktop replacement laptops.
Types
Since the introduction of portable computers during the late 1970s, their form has changed significantly, spawning a variety of visually and technologically differing subclasses. Except where there is a distinct legal trademark around a term (notably, Ultrabook), there are rarely hard distinctions between these classes and their usage has varied over time and between different sources. Since the late 2010s, the use of more specific terms has become less common, with sizes distinguished largely by the size of the screen.
Smaller and larger laptops
There were in the past a number of marketing categories for smaller and larger laptop computers; these included "subnotebook" models, low-cost "netbooks", and "ultra-mobile PCs" where the size class overlapped with devices like smartphones and handheld tablets, and "desktop replacement" laptops for machines notably larger and heavier than typical, built to operate more powerful processors or graphics hardware. All of these terms have fallen out of favor as the size of mainstream laptops has gone down and their capabilities have gone up; except for niche models, laptop sizes tend to be distinguished by the size of the screen, and for more powerful models, by any specialized purpose the machine is intended for, such as a "gaming laptop" or a "mobile workstation" for professional use.
Convertible, hybrid, 2-in-1
The latest trend of technological convergence in the portable computer industry spawned a broad range of devices, which combined features of several previously separate device types. The hybrids, convertibles, and 2-in-1s emerged as crossover devices, which share traits of both tablets and laptops. All such devices have a touchscreen display designed to allow users to work in a tablet mode, using either multi-touch gestures or a stylus/digital pen.
Convertibles are devices with the ability to conceal a hardware keyboard. Keyboards on such devices can be flipped, rotated, or slid behind the back of the chassis, thus transforming from a laptop into a tablet. Hybrids have a keyboard detachment mechanism, and due to this feature, all critical components are situated in the part with the display. 2-in-1s can have a hybrid or a convertible form, often dubbed 2-in-1 detachable and 2-in-1 convertibles respectively, but are distinguished by the ability to run a desktop OS, such as Windows 10. 2-in-1s are often marketed as laptop replacement tablets.
2-in-1s are often very thin and light devices with a long battery life. 2-in-1s are distinguished from mainstream tablets as they feature an x86-architecture CPU (typically a low- or ultra-low-voltage model), such as the Intel Core i5, run a full-featured desktop OS like Windows 10, and have a number of typical laptop I/O ports, such as USB 3 and Mini DisplayPort.
2-in-1s are designed to be used not only as a media consumption device but also as valid desktop or laptop replacements, due to their ability to run desktop applications, such as Adobe Photoshop. It is possible to connect multiple peripheral devices, such as a mouse, keyboard, and several external displays to a modern 2-in-1.
Microsoft Surface Pro-series devices and Surface Book are examples of modern 2-in-1 detachable, whereas Lenovo Yoga-series computers are a variant of 2-in-1 convertibles. While the older Surface RT and Surface 2 have the same chassis design as the Surface Pro, their use of ARM processors and Windows RT do not classify them as 2-in-1s, but as hybrid tablets. Similarly, a number of hybrid laptops run a mobile operating system, such as Android. These include Asus's Transformer Pad devices, examples of hybrids with a detachable keyboard design, which do not fall in the category of 2-in-1s.
Rugged laptop
A rugged laptop is designed to reliably operate in harsh usage conditions such as strong vibrations, extreme temperatures, and wet or dusty environments. Rugged laptops are bulkier, heavier, and much more expensive than regular laptops, and thus are seldom seen in regular consumer use.
Hardware
The basic components of laptops function identically to their desktop counterparts. Traditionally they were miniaturized and adapted to mobile use, although desktop systems increasingly use the same smaller, lower-power parts which were originally developed for mobile use. The design restrictions on power, size, and cooling of laptops limit the maximum performance of laptop parts compared to that of desktop components, although that difference has increasingly narrowed.
In general, laptop components are not intended to be replaceable or upgradable by the end-user, except for components that can be detached; in the past, batteries and optical drives were commonly exchangeable. This restriction is one of the major differences between laptops and desktop computers, because the large "tower" cases used in desktop computers are designed so that new motherboards, hard disks, sound cards, RAM, and other components can be added. Memory and storage can often be upgraded with some disassembly, but with the most compact laptops, there may be no upgradeable components at all.
Intel, Asus, Compal, Quanta, and some other laptop manufacturers have created the Common Building Block standard for laptop parts to address some of the inefficiencies caused by the lack of standards and inability to upgrade components.
The following sections summarize the differences and distinguishing features of laptop components in comparison to desktop personal computer parts.
Display
Internally, a display is usually an LCD panel, although occasionally OLEDs are used. These interface to the laptop using the LVDS or embedded DisplayPort protocol, while externally, it can be a glossy screen or a matte (anti-glare) screen. As of 2021, mainstream consumer laptops tend to come with either 13" or 15"-16" screens; 14" models are more popular among business machines. Larger and smaller models are available, but less common – there is no clear dividing line in minimum or maximum size. Machines small enough to be handheld (screens in the 6–8" range) can be marketed either as very small laptops or "handheld PCs," while the distinction between the largest laptops and "All-in-One" desktops is whether they fold for travel.
Sizes
In the past, there was a broader range of marketing terms (both formal and informal) to distinguish between different sizes of laptops. These included Netbooks, subnotebooks, Ultra-mobile PC, and Desktop replacement computers; these are sometimes still used informally, although they are essentially dead in terms of manufacturer marketing.
Resolution
Having a higher resolution display allows more items to fit onscreen at a time, improving the user's ability to multitask, although at the higher resolutions on smaller screens, the resolution may only serve to display sharper graphics and text rather than increasing the usable area. Since the introduction of the MacBook Pro with Retina display in 2012, there has been an increase in the availability of "HiDPI" (or high pixel density) displays; as of 2021, this is generally considered to be anything higher than 1920 pixels wide. This has increasingly converged around 4K (3840-pixel-wide) resolutions.
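The relationship between resolution, screen size, and pixel density can be made concrete with a small calculation. This is a sketch; the 15.6-inch panel in the example is illustrative, not a figure from the text:

```python
import math

def pixel_density(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch (PPI): the diagonal pixel count of the panel
    divided by its physical diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# An illustrative 15.6" panel at Full HD versus 4K resolution:
fhd = pixel_density(1920, 1080, 15.6)
uhd = pixel_density(3840, 2160, 15.6)
print(round(fhd), round(uhd))  # 141 282
```

Doubling the resolution on the same physical panel doubles the pixel density, which is why higher resolutions on small screens mostly yield sharper rendering rather than more usable area.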
External displays can be connected to most laptops, and models with a Mini DisplayPort can handle up to three.
Refresh rates and 3D
The earliest laptops known to feature a display with doubled 120 Hz of refresh rate and active shutter 3D system were released in 2011 by Dell (M17x) and Samsung (700G7A).
Central processing unit
A laptop's central processing unit (CPU) has advanced power-saving features and produces less heat than one intended purely for desktop use. Mainstream laptop CPUs made after 2018 have four processor cores, although some inexpensive models still have 2-core CPUs, and 6-core and 8-core models are also available.
At low and mainstream price points there is no longer a significant performance difference between laptop and desktop CPUs, but at the high end, the fastest desktop CPUs still substantially outperform the fastest laptop processors, at the expense of massively higher power consumption and heat generation; the fastest laptop processors top out at 56 watts of heat, while the fastest desktop processors top out at 150 watts.
A wide range of CPUs designed for laptops has been available from Intel, AMD, and other manufacturers. On non-x86 architectures, Motorola and IBM produced the chips for the former PowerPC-based Apple laptops (iBook and PowerBook). Between around 2000 and 2014, most full-size laptops had socketed, replaceable CPUs; on thinner models, the CPU was soldered to the motherboard and was not replaceable or upgradable without replacing the motherboard. Since 2015, Intel has not offered new socketed laptop CPU models, preferring ball grid array chip packages which have to be soldered; as of 2021, only a few rare models use socketed desktop parts.
In the past, some laptops have used a desktop processor instead of the laptop version and have had high-performance gains at the cost of greater weight, heat, and limited battery life; this is not unknown as of 2021, but since around 2010, the practice has been restricted to small-volume gaming models. Laptop CPUs are rarely able to be overclocked; most use locked processors. Even on gaming models where unlocked processors are available, the cooling system in most laptops is often very close to its limits and there is rarely headroom for an overclocking–related operating temperature increase.
Graphical processing unit
On most laptops, a graphical processing unit (GPU) is integrated into the CPU to conserve power and space. This was introduced by Intel with the Core i-series of mobile processors in 2010, and similar accelerated processing unit (APU) processors by AMD later that year.
Before that, lower-end machines tended to use graphics processors integrated into the system chipset, while higher-end machines had a separate graphics processor. In the past, laptops lacking a separate graphics processor were limited in their utility for gaming and professional applications involving 3D graphics, but the capabilities of CPU-integrated graphics have converged with the low-end of dedicated graphics processors since the mid-2010s.
Higher-end laptops intended for gaming or professional 3D work still come with dedicated and in some cases even dual, graphics processors on the motherboard or as an internal expansion card. Since 2011, these almost always involve switchable graphics so that when there is no demand for the higher performance dedicated graphics processor, the more power-efficient integrated graphics processor will be used. Nvidia Optimus and AMD Hybrid Graphics are examples of this sort of system of switchable graphics.
Memory
Since around the year 2000, most laptops have used SO-DIMM RAM, although, as of 2021, an increasing number of models use memory soldered to the motherboard. Before 2000, most laptops used proprietary memory modules if their memory was upgradable.
In the early 2010s, high end laptops such as the 2011 Samsung 700G7A have passed the 10 GB RAM barrier, featuring 16 GB of RAM.
When upgradeable, memory slots are sometimes accessible from the bottom of the laptop for ease of upgrading; in other cases, accessing them requires significant disassembly. Most laptops have two memory slots, although some will have only one, either for cost savings or because some amount of memory is soldered. Some high-end models have four slots; these are usually mobile engineering workstations, although a few high-end models intended for gaming do as well.
As of 2021, 8 GB of RAM is most common, with lower-end models occasionally having 4 GB. Higher-end laptops may come with 16 GB of RAM or more.
Internal storage
The earliest laptops most often used floppy disks for storage, although a few used either RAM disks or tape; by the late 1980s, hard disk drives had become the standard form of storage.
Between 1990 and 2009, almost all laptops typically had a hard disk drive (HDD) for storage; since then, solid-state drives (SSD) have gradually come to supplant hard drives in all but some inexpensive consumer models. Solid-state drives are faster and more power-efficient, as well as eliminating the hazard of drive and data corruption caused by a laptop's physical impacts, as they use no mechanical parts such as a rotational platter. In many cases, they are more compact as well. Initially, in the late 2000s, SSDs were substantially more expensive than HDDs, but as of 2021 prices on smaller capacity (under 1 terabyte) drives have converged; larger capacity drives remain more expensive than comparable-sized HDDs.
Since around 1990, where a hard drive is present it will typically be a 2.5-inch drive; some very compact laptops support even smaller 1.8-inch HDDs, and a very small number used 1" Microdrives. Some SSDs are built to match the size/shape of a laptop hard drive, but increasingly they have been replaced with smaller mSATA or M.2 cards. SSDs using the newer and much faster NVM Express standard for connecting are only available as cards.
As of 2021, many laptops no longer contain space for a 2.5" drive, accepting only M.2 cards; a few of the smallest have storage soldered to the motherboard. Of those that do have space, most accept a single 2.5-inch drive, but a small number of laptops with a screen wider than 15 inches can house two drives.
A variety of external HDDs or NAS data storage servers with support of RAID technology can be attached to virtually any laptop over such interfaces as USB, FireWire, eSATA, or Thunderbolt, or over a wired or wireless network to further increase space for the storage of data. Many laptops also incorporate a card reader which allows for use of memory cards, such as those used for digital cameras, which are typically SD or microSD cards. This enables users to download digital pictures from an SD card onto a laptop, thus enabling them to delete the SD card's contents to free up space for taking new pictures.
Removable media drive
Optical disc drives capable of playing CD-ROMs, compact discs (CD), DVDs, and in some cases, Blu-ray discs (BD), were nearly universal on full-sized models between the mid-1990s and the early 2010s. As of 2021, drives are uncommon in compact or premium laptops; they remain available in some bulkier models, but the trend towards thinner and lighter machines is gradually eliminating these drives and players – when needed they can be connected via USB instead.
Inputs
An alphanumeric keyboard is used to enter text, data, and other commands (e.g., function keys). A touchpad (also called a trackpad), a pointing stick, or both, are used to control the position of the cursor on the screen, and an integrated keyboard is used for typing. Some touchpads have buttons separate from the touch surface, while others share the surface. A quick double-tap is typically registered as a click, and operating systems may recognize multi-finger touch gestures.
An external keyboard and mouse may be connected using a USB port or wirelessly, via Bluetooth or similar technology. Some laptops have multitouch touchscreen displays, either available as an option or standard. Most laptops have webcams and microphones, which can be used to communicate with other people with both moving images and sound, via web conferencing or video-calling software.
Laptops typically have USB ports and a combined headphone/microphone jack, for use with headphones, a combined headset, or an external mic. Many laptops have a card reader for reading digital camera SD cards.
Input/output (I/O) ports
On a typical laptop there are several USB ports. If a laptop uses only older USB Type-A connectors rather than USB-C, it will typically also have an external monitor port (VGA, DVI, HDMI, or Mini DisplayPort, occasionally more than one) and an audio in/out port, often in the form of a single socket. It is possible to connect up to three external displays to a 2014-era laptop via a single Mini DisplayPort, using multi-stream transport technology.
Apple, in a 2015 version of its MacBook, transitioned from a number of different I/O ports to a single USB-C port. This port can be used both for charging and for connecting a variety of devices through the use of aftermarket adapters. Google, with its updated version of the Chromebook Pixel, shows a similar transition toward USB-C, although it keeps older USB Type-A ports for better compatibility with older devices. Although common until the end of the 2000s, Ethernet network ports are rarely found on modern laptops, due to the widespread use of wireless networking such as Wi-Fi. Legacy ports such as a PS/2 keyboard/mouse port, serial port, parallel port, or FireWire are provided on some models, but they are increasingly rare. On Apple's systems, and on a handful of other laptops, there are also Thunderbolt ports; Thunderbolt 3 uses the USB-C connector. Laptops typically have a headphone jack, so that the user can connect external headphones or amplified speaker systems for listening to music or other audio.
Expansion cards
In the past, a PC Card (formerly PCMCIA) or ExpressCard slot for expansion was often present on laptops to allow adding and removing functionality, even when the laptop is powered on; these are becoming increasingly rare since the introduction of USB 3.0. Some internal subsystems such as Ethernet, Wi-Fi, or a wireless cellular modem can be implemented as replaceable internal expansion cards, usually accessible under an access cover on the bottom of the laptop. The standard for such cards is PCI Express, which comes in both mini and even smaller M.2 sizes. In newer laptops, it is not uncommon to also see Micro SATA (mSATA) functionality on PCI Express Mini or M.2 card slots allowing the use of those slots for SATA-based solid-state drives.
Battery and power supply
Since the late 1990s, laptops have typically used lithium-ion or lithium polymer batteries. These replaced the older nickel–metal hydride batteries typically used in the 1990s, and the nickel–cadmium batteries used in most of the earliest laptops. A few of the oldest laptops used non-rechargeable batteries, or lead–acid batteries.
Battery life is highly variable by model and workload and can range from one hour to nearly a day. A battery's performance gradually decreases over time; a substantial reduction in capacity is typically evident after one to three years of regular use, depending on the charging and discharging pattern and the design of the battery. Innovations in laptops and batteries have seen situations in which the battery can provide up to 24 hours of continued operation, assuming average power consumption levels. An example is the HP EliteBook 6930p when used with its ultra-capacity battery.
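The variability described above follows directly from energy-budget arithmetic: runtime is usable battery energy divided by average power draw. A minimal sketch (the function name and the example figures are illustrative, not from the text):

```python
def estimated_runtime_hours(capacity_wh: float, avg_draw_w: float) -> float:
    """Rough battery runtime estimate: usable energy (watt-hours)
    divided by average power draw (watts)."""
    return capacity_wh / avg_draw_w

# An illustrative 60 Wh pack lasts ten times longer at a light 6 W
# workload than under a heavy 60 W load:
print(estimated_runtime_hours(60, 6))    # 10.0
print(estimated_runtime_hours(60, 60))   # 1.0
```

Real runtimes are lower than this idealized figure, since conversion losses and capacity fade reduce the usable energy.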
Laptops with removable batteries may support larger replacement batteries with extended capacity.
A laptop's battery is charged using an external power supply, which is plugged into a wall outlet. The power supply outputs a DC voltage typically in the range of 7.2–24 volts, and is usually external, connected to the laptop through a DC connector cable. In most cases, it can charge the battery and power the laptop simultaneously. When the battery is fully charged, the laptop continues to run on power supplied by the external power supply, avoiding battery use. If the power supply is not strong enough to power the computing components and charge the battery simultaneously, the battery may charge faster when the laptop is turned off or sleeping. The charger adds to the overall carrying weight of a laptop, and some models are substantially heavier or lighter than others. Most 2016-era laptops use a smart battery, a rechargeable battery pack with a built-in battery management system (BMS). The smart battery can internally measure voltage and current, and deduce charge level and State of Health (SoH) parameters, indicating the state of the cells.
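The charge level and State of Health figures a smart battery's BMS reports can be sketched as simple ratios. This is a hedged illustration; the capacity figures below are invented for the example, and a real BMS derives its inputs from measured voltage and current:

```python
def state_of_health(full_charge_mwh: float, design_mwh: float) -> float:
    """SoH: the ratio of the pack's current full-charge capacity
    to its original design capacity (1.0 = like new)."""
    return full_charge_mwh / design_mwh

def charge_level(remaining_mwh: float, full_charge_mwh: float) -> float:
    """Relative charge level, measured against the pack's *current*
    (aged) full-charge capacity rather than its design capacity."""
    return remaining_mwh / full_charge_mwh

# An illustrative pack designed for 57,000 mWh that now tops out
# at 45,600 mWh, currently holding 22,800 mWh:
print(f"SoH: {state_of_health(45_600, 57_000):.0%}")    # SoH: 80%
print(f"Charge: {charge_level(22_800, 45_600):.0%}")    # Charge: 50%
```

Measuring charge against the aged capacity is why a "100%" reading on an old battery corresponds to less actual energy than it did when the pack was new.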
Power connectors
Historically, DC connectors, typically cylindrical/barrel-shaped coaxial power connectors have been used in laptops. Some vendors such as Lenovo made intermittent use of a rectangular connector.
Some connector heads feature a center pin to allow the end device to determine the power supply type by measuring the resistance between it and the connector's negative pole (outer surface). Vendors may block charging if a power supply is not recognized as original part, which could deny the legitimate use of universal third-party chargers.
With the advent of USB-C, portable electronics made increasing use of it for both power delivery and data transfer. Its support for 20 V (common laptop power supply voltage) and 5 A typically suffices for low to mid-end laptops, but some with higher power demands such as gaming laptops depend on dedicated DC connectors to handle currents beyond 5 A without risking overheating, some even above 10 A. Additionally, dedicated DC connectors are more durable and less prone to wear and tear from frequent reconnection, as their design is less delicate.
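The ceiling mentioned here is just voltage times current. A small illustration, with a hypothetical gaming-laptop power draw (later USB-PD revisions raise the limit, but the 20 V / 5 A profile described above caps at 100 W):

```python
# USB-C Power Delivery headroom check (illustrative figures).
# A 20 V / 5 A profile tops out at 100 W, which is why higher-draw
# gaming laptops fall back to dedicated DC connectors.

def max_power_w(volts, amps):
    return volts * amps

usb_c_limit = max_power_w(20, 5)   # 100 W
gaming_laptop_draw = 180           # hypothetical sustained draw in watts

print(usb_c_limit)                        # 100
print(gaming_laptop_draw > usb_c_limit)   # True: exceeds the profile
```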
Cooling
Waste heat from the operation is difficult to remove in the compact internal space of a laptop. The earliest laptops used passive cooling; this gave way to heat sinks placed directly on the components to be cooled, but when these hot components are deep inside the device, a large space-wasting air duct is needed to exhaust the heat. Modern laptops instead rely on heat pipes to rapidly move waste heat towards the edges of the device, to allow for a much smaller and compact fan and heat sink cooling system. Waste heat is usually exhausted away from the device operator towards the rear or sides of the device. Multiple air intake paths are used since some intakes can be blocked, such as when the device is placed on a soft conforming surface like a chair cushion. Secondary device temperature monitoring may reduce performance or trigger an emergency shutdown if it is unable to dissipate heat, such as if the laptop were to be left running and placed inside a carrying case. Aftermarket cooling pads with external fans can be used with laptops to reduce operating temperatures.
Docking station
A docking station (sometimes referred to simply as a dock) is a laptop accessory that contains multiple ports and, in some cases, expansion slots or bays for fixed or removable drives. A laptop connects to and disconnects from a docking station, typically through a single large proprietary connector. Docking stations are especially popular in corporate computing environments, because they can transform a laptop into a full-featured desktop replacement while still allowing it to be released easily. This ability can be advantageous to "road warrior" employees who travel frequently for work yet also come into the office. If more ports are needed, or their position on a laptop is inconvenient, a cheaper passive device known as a port replicator can be used instead. These devices mate to the connectors on the laptop, such as through USB or FireWire.
Charging trolleys
Laptop charging trolleys, also known as laptop trolleys or laptop carts, are mobile storage containers that charge multiple laptops, netbooks, and tablet computers at the same time. The trolleys are used in schools that have replaced their traditional static computer labs (suites of desks equipped with "tower" computers) but do not have enough plug sockets in an individual classroom to charge all of the devices. The trolleys can be wheeled between classrooms so that all students and teachers in a particular building can access fully charged IT equipment.
Laptop charging trolleys are also used to deter and protect against opportunistic and organized theft. Schools, especially those with open plan designs, are often prime targets for thieves who steal high-value items. Laptops, netbooks, and tablets are among the highest–value portable items in a school. Moreover, laptops can easily be concealed under clothing and stolen from buildings. Many types of laptop–charging trolleys are designed and constructed to protect against theft. They are generally made out of steel, and the laptops remain locked up while not in use. Although the trolleys can be moved between areas from one classroom to another, they can often be mounted or locked to the floor or walls to prevent thieves from stealing the laptops, especially overnight.
Solar panels
In some laptops, solar panels are able to generate enough solar power for the laptop to operate. The One Laptop per Child initiative released the OLPC XO-1 laptop, which was tested and successfully operated by use of solar panels, and subsequently designed the OLPC XO-3 with the same capability. The OLPC XO-3 can operate on 2 watts of electricity because its renewable energy resources generate a total of 4 watts. Samsung has also designed the NC215S solar-powered notebook for commercial sale in the U.S. market.
Accessories
A common accessory for laptops is a laptop sleeve, laptop skin, or laptop case, which provides a degree of protection from scratches. Sleeves, which are distinguished by being relatively thin and flexible, are most commonly made of neoprene, with sturdier ones made of low-resilience polyurethane. Some laptop sleeves are wrapped in ballistic nylon to provide some measure of waterproofing. Bulkier and sturdier cases can be made of metal with polyurethane padding inside and may have locks for added security. Metal, padded cases also offer protection against impacts and drops. Another common accessory is a laptop cooler, a device that helps lower the internal temperature of the laptop either actively or passively. A common active method involves using electric fans to draw heat away from the laptop, while a passive method might involve propping the laptop up on some type of pad so it can receive more airflow. Some stores sell laptop pads that enable a reclining person on a bed to use a laptop.
Modularity
In earlier laptop models, components such as the keyboard, battery, hard disk, memory modules, and CPU cooling fan could easily be replaced without completely opening the bottom of the case.
In many recent models these components reside deep inside the chassis; replacing them requires removing the top or bottom panel, and sometimes the motherboard, before reassembling the device.
In some types, solder and glue are used to mount components such as RAM, storage, and batteries, making repairs additionally difficult.
Obsolete features
Features found on certain early laptop models that are not available in most current laptops include:
Reset ("cold restart") button in a hole (needed a thin metal tool to press)
Instant power off button in a hole (needed a thin metal tool to press)
Integrated charger or power adapter inside the laptop
Floppy disk drive
Serial port
Parallel port
Modem
Shared PS/2 input device port
IrDA
S-video port
S/PDIF audio port
PC Card / PCMCIA slot
ExpressCard slot
CD/DVD drives (phased out beginning with 2013 models)
VGA port (phased out beginning with 2013 models)
Comparison with desktops
Advantages
Portability is usually the first feature mentioned in any comparison of laptops versus desktop PCs. Physical portability allows a laptop to be used in many places—not only at home and the office but also during commuting and flights, in coffee shops, in lecture halls and libraries, at clients' locations or a meeting room, etc. Within a home, portability enables laptop users to move their devices from the living room to the dining room to the family room. Portability offers several distinct advantages:
Productivity: Using a laptop in places where a desktop PC cannot be used can help employees and students increase their productivity on work or school tasks, such as an office worker reading work e-mails during an hour-long train commute, or a student doing homework at the university coffee shop during a break between lectures.
Immediacy: Carrying a laptop means having instant access to information, including personal and work files. This allows better collaboration between coworkers or students, as a laptop can be flipped open to look at a report, document, spreadsheet, or presentation anytime and anywhere.
Up-to-date information: If a person has more than one desktop PC, a problem of synchronization arises: changes made on one computer are not automatically propagated to the others. There are ways to resolve this problem, including physical transfer of updated files (using a USB flash memory stick or CD-ROMs) or using synchronization software over the Internet, such as cloud computing. However, transporting a single laptop to both locations avoids the problem entirely, as the files exist in a single location and are always up-to-date.
Connectivity: In the 2010s, a proliferation of Wi-Fi wireless networks and cellular broadband data services (HSDPA, EVDO and others) in many urban centers, combined with near-ubiquitous Wi-Fi support by modern laptops meant that a laptop could now have easy Internet and local network connectivity while remaining mobile. Wi-Fi networks and laptop programs are especially widespread at university campuses.
Other advantages of laptops:
Size: Laptops are smaller than desktop PCs. This is beneficial when space is at a premium, for example in small apartments and student dorms. When not in use, a laptop can be closed and put away in a desk drawer.
Low power consumption: Laptops are several times more power-efficient than desktops. A typical laptop uses 20–120 W, compared to 100–800 W for desktops. This could be particularly beneficial for large businesses, which run hundreds of personal computers thus multiplying the potential savings, and homes where there is a computer running 24/7 (such as a home media server, print server, etc.).
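The scale of those savings can be illustrated with rough arithmetic. All figures below (wattages, electricity price, fleet size) are assumptions chosen to match the ranges in the paragraph above, not measured data.

```python
# Rough annual energy-cost comparison for a fleet of always-on machines.
# Wattages are mid-range assumptions from the text; price is illustrative.

laptop_w, desktop_w = 60, 300   # assumed average draw in watts
hours_per_year = 24 * 365
price_per_kwh = 0.15            # illustrative electricity price, $/kWh
fleet_size = 200                # hypothetical business fleet

def annual_cost(watts):
    kwh = watts * hours_per_year / 1000   # energy in kilowatt-hours
    return kwh * price_per_kwh * fleet_size

savings = annual_cost(desktop_w) - annual_cost(laptop_w)
print(f"${savings:,.0f} saved per year")   # $63,072 for these assumptions
```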
Quiet: Laptops are typically much quieter than desktops, due both to the components (quieter, slower 2.5-inch hard drives) and to less heat production leading to the use of fewer and slower cooling fans.
Battery: a charged laptop can continue to be used in case of a power outage and is not affected by short power interruptions and blackouts. A desktop PC needs an uninterruptible power supply (UPS) to handle short interruptions, blackouts, and spikes; achieving on-battery time of more than 20–30 minutes for a desktop PC requires a large and expensive UPS.
All-in-One: designed to be portable, most 2010-era laptops have all components integrated into the chassis (however, some small laptops may not have an internal CD/CDR/DVD drive, so an external drive needs to be used). For desktops (excluding all-in-ones) this is usually divided into the desktop "tower" (the unit with the CPU, hard drive, power supply, etc.), keyboard, mouse, display screen, and optional peripherals such as speakers.
Disadvantages
Compared to desktop PCs, laptops have disadvantages in the following areas:
Performance
While the performance of mainstream desktops and laptops is comparable, and the cost of laptops has fallen less rapidly than that of desktops, laptops remain more expensive than desktop PCs at the same performance level. The upper limits of performance of laptops remain much lower than the highest-end desktops (especially "workstation class" machines with two processor sockets), and "leading-edge" features usually appear first in desktops and only then, as the underlying technology matures, are adapted to laptops.
For Internet browsing and typical office applications, where the computer spends the majority of its time waiting for the next user input, even relatively low-end laptops (such as Netbooks) can be fast enough for some users. Most higher-end laptops are sufficiently powerful for high-resolution movie playback, some 3D gaming and video editing and encoding. However, laptop processors can be disadvantaged when dealing with a higher-end database, maths, engineering, financial software, virtualization, etc. This is because laptops use the mobile versions of processors to conserve power, and these lag behind desktop chips when it comes to performance. Some manufacturers work around this performance problem by using desktop CPUs for laptops.
Upgradeability
The upgradeability of laptops is very limited compared to thoroughly standardized desktops. In general, hard drives and memory can be upgraded easily. Optical drives and internal expansion cards may be upgraded if they follow an industry standard, but all other internal components, including the motherboard, CPU, and graphics, are not always intended to be upgradeable. Intel, Asus, Compal, Quanta and some other laptop manufacturers have created the Common Building Block standard for laptop parts to address some of the inefficiencies caused by the lack of standards. The reasons for limited upgradeability are both technical and economic. There is no industry-wide standard form factor for laptops; each major laptop manufacturer pursues its own proprietary design and construction, with the result that laptops are difficult to upgrade and have high repair costs. Moreover, starting with 2013 models, laptops have become increasingly integrated (soldered) with the motherboard for most of its components (CPU, SSD, RAM, keyboard, etc.) to reduce size and upgradeability prospects. Devices such as sound cards, network adapters, hard and optical drives, and numerous other peripherals are available, but these upgrades usually impair the laptop's portability, because they add cables and boxes to the setup and often have to be disconnected and reconnected when the laptop is on the move.
Ergonomics and health effects
Wrists
Prolonged use of laptops can cause repetitive strain injury because of their small, flat keyboard and trackpad pointing devices. Usage of separate, external ergonomic keyboards and pointing devices is recommended to prevent injury when working for long periods of time; they can be connected to a laptop easily by USB, Bluetooth or via a docking station. Some health standards require ergonomic keyboards at workplaces.
Neck and spine
A laptop's integrated screen often requires users to lean over for a better view, which can cause neck or spinal injuries. A larger and higher-quality external screen can be connected to almost any laptop to alleviate this and to provide additional screen space for more productive work. Another solution is to use a computer stand.
Possible effect on fertility
A study by State University of New York researchers found that heat generated from laptops can increase the temperature of the lap of male users when balancing the computer on their lap, potentially putting sperm count at risk. The study, which included roughly two dozen men between the ages of 21 and 35, found that the sitting position required to balance a laptop can increase scrotum temperature by as much as . However, further research is needed to determine whether this directly affects male sterility. A later 2010 study of 29 males published in Fertility and Sterility found that men who kept their laptops on their laps experienced scrotal hyperthermia (overheating) in which their scrotal temperatures increased by up to . The resulting heat increase, which could not be offset by a laptop cushion, may increase male infertility.
A common practical solution to this problem is to place the laptop on a table or desk or to use a book or pillow between the body and the laptop. Another solution is to obtain a cooling unit for the laptop. These are usually USB powered and consist of a hard thin plastic case housing one, two, or three cooling fans – with the entire assembly designed to sit under the laptop in question – which results in the laptop remaining cool to the touch, and greatly reduces laptop heat buildup.
Thighs
Heat generated from using a laptop on the lap can also cause skin discoloration on the thighs known as "toasted skin syndrome".
Durability
Laptops are generally less durable than desktop PCs. However, durability also depends on the user: with proper maintenance, a laptop can remain in service considerably longer.
Equipment wear
Because of their portability, laptops are subject to more wear and physical damage than desktops. Components such as screen hinges, latches, power jacks, and power cords deteriorate gradually from ordinary use and may have to be replaced. A liquid spill onto the keyboard, a rather minor mishap with a desktop system (given that a basic keyboard costs about US$20), can damage the internals of a laptop and destroy the computer, resulting in a costly repair or the replacement of the entire laptop. One study found that a laptop is three times more likely to break during the first year of use than a desktop. To maintain a laptop, it is recommended to clean it every three months for dirt, debris, dust, and food particles. Most cleaning kits consist of a lint-free or microfiber cloth for the LCD screen and keyboard, compressed air for getting dust out of the cooling fan, and a cleaning solution. Harsh chemicals such as bleach should not be used to clean a laptop, as they can damage it.
Heating and cooling
Laptops rely on extremely compact cooling systems involving a fan and heat sink that can fail from blockage caused by accumulated airborne dust and debris. Most laptops do not have any type of removable dust collection filter over the air intake for these cooling systems, resulting in a machine that gradually runs hotter and louder as the years pass. In some cases, the laptop starts to overheat even at idle load levels. This dust is usually stuck inside where the fan and heat sink meet, where it cannot be removed by casual cleaning and vacuuming. Most of the time, compressed air can dislodge the dust and debris but may not entirely remove it. After the device is turned on, the loose debris is reaccumulated into the cooling system by the fans. Complete disassembly is usually required to clean the laptop entirely. However, preventative maintenance such as regular cleaning of the heat sink via compressed air can prevent dust build-up on the heat sink. Many laptops are difficult to disassemble by the average user and contain components that are sensitive to electrostatic discharge (ESD).
Battery life
Battery life is limited because the capacity drops with time, eventually requiring replacement after as little as a year. A new battery typically stores enough energy to run the laptop for three to five hours, depending on usage, configuration, and power management settings. Yet, as it ages, the battery's capacity declines progressively until the laptop lasts only a few minutes on a charge. The battery is often easily replaceable, and a higher-capacity model may be obtained for longer run times between charges. Some laptops (specifically ultrabooks) do not have the usual removable battery and have to be brought to the manufacturer's service center or a third-party laptop service center to have their battery replaced. Replacement batteries can also be expensive.
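The shrinking runtime of an aging battery follows directly from its fading capacity. A minimal sketch, using hypothetical watt-hour and power-draw figures:

```python
# Hypothetical sketch: how runtime shrinks as a battery's capacity fades.
# A pack that runs four hours when new lasts far less after heavy cycling.

def runtime_hours(design_wh, health_fraction, avg_draw_w):
    """Estimated runtime given current state of health (0..1)."""
    return design_wh * health_fraction / avg_draw_w

new = runtime_hours(design_wh=56, health_fraction=1.0, avg_draw_w=14)
aged = runtime_hours(design_wh=56, health_fraction=0.6, avg_draw_w=14)
print(f"new: {new:.1f} h, after fade to 60%: {aged:.1f} h")
# new: 4.0 h, after fade to 60%: 2.4 h
```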
Security and privacy
Because they are valuable, commonly used, portable, and easy to hide in a backpack or other type of travel bag, laptops are often stolen. Every day, over 1,600 laptops go missing from U.S. airports. The cost of stolen business or personal data, and of the resulting problems (identity theft, credit card fraud, breach of privacy), can be many times the value of the stolen laptop itself. Consequently, the physical protection of laptops and the safeguarding of data contained on them are both of great importance. Most laptops have a Kensington security slot, which can be used to tether them to a desk or other immovable object with a security cable and lock. In addition, modern operating systems and third-party software offer disk encryption functionality, which renders the data on the laptop's hard drive unreadable without a key or a passphrase. As of 2015, some laptops also have additional security elements added, including eye recognition software and fingerprint scanning components.
Software such as LoJack for Laptops, Laptop Cop, and GadgetTrack have been engineered to help people locate and recover their stolen laptops in the event of theft. Setting one's laptop with a password on its firmware (protection against going to firmware setup or booting), internal HDD/SSD (protection against accessing it and loading an operating system on it afterward), and every user account of the operating system are additional security measures that a user should do. Fewer than 5% of lost or stolen laptops are recovered by the companies that own them, however, that number may decrease due to a variety of companies and software solutions specializing in laptop recovery. In the 2010s, the common availability of webcams on laptops raised privacy concerns. In Robbins v. Lower Merion School District (Eastern District of Pennsylvania 2010), school-issued laptops loaded with special software enabled staff from two high schools to take secret webcam shots of students at home, via their students' laptops.
Sales
Manufacturers
There are many laptop brands and manufacturers. Several major brands that offer notebooks in various classes are listed in the adjacent box.
The major brands usually offer good service and support, including well-executed documentation and driver downloads that remain available for many years after a particular laptop model is no longer produced. Capitalizing on service, support, and brand image, laptops from major brands are more expensive than laptops by smaller brands and ODMs. Some brands specialize in a particular class of laptops, such as gaming laptops (Alienware), high-performance laptops (HP Envy), netbooks (EeePC) and laptops for children (OLPC).
Many brands, including the major ones, do not design and do not manufacture their laptops. Instead, a small number of Original Design Manufacturers (ODMs) design new models of laptops, and the brands choose the models to be included in their lineup. In 2006, 7 major ODMs manufactured 7 of every 10 laptops in the world, with the largest one (Quanta Computer) having 30% of the world market share. Therefore, identical models are available both from a major label and from a low-profile ODM in-house brand.
Market share
Battery-powered portable computers had just 2% worldwide market share in 1986. However, laptops have become increasingly popular, both for business and personal use. Around 109 million notebook PCs shipped worldwide in 2007, a growth of 33% compared to 2006. In 2008 it was estimated that 145.9 million notebooks were sold, and that the number would grow in 2009 to 177.7 million. The third quarter of 2008 was the first time when worldwide notebook PC shipments exceeded desktops, with 38.6 million units versus 38.5 million units.
May 2005 was the first time notebooks outsold desktops in the US over the course of a full month; at the time notebooks sold for an average of $1,131 while desktops sold for an average of $696. When looking at operating systems, for Microsoft Windows laptops the average selling price (ASP) showed a decline in 2008/2009, possibly due to low-cost netbooks, drawing an average US$689 at U.S. retail stores in August 2008. In 2009, ASP had further fallen to $602 by January and to $560 in February. While Windows machines ASP fell $129 in these seven months, Apple macOS laptop ASP declined just $12 from $1,524 to $1,512.
Disposal
The list of materials that go into a laptop computer is long, and many of the substances used, such as beryllium (used in beryllium-copper alloy contacts in some connectors and sockets), lead (used in lead-tin solder), chromium, and mercury (used in CCFL LCD backlights) compounds, are toxic or carcinogenic to humans. Although these toxins are relatively harmless when the laptop is in use, concerns that discarded laptops cause a serious health risk and toxic environmental damage, were so strong, that the Waste Electrical and Electronic Equipment Directive (WEEE Directive) in Europe specified that all laptop computers must be recycled by law. Similarly, the U.S. Environmental Protection Agency (EPA) has outlawed landfill dumping or the incinerating of discarded laptop computers.
Most laptop computers begin the recycling process with a method known as demanufacturing, which involves the physical separation of the components of the laptop. These components are then either grouped into materials for recycling (e.g. plastic, metal and glass) or into more complex items that require more advanced materials separation (e.g. circuit boards, hard drives and batteries).
Corporate laptop recycling can require an additional process known as data destruction. The data destruction process ensures that all information or data that has been stored on a laptop hard drive can never be retrieved again. Below is an overview of some of the data protection and environmental laws and regulations applicable for laptop recycling data destruction:
Data Protection Act 1998 (DPA)
EU Privacy Directive (Due 2016)
Financial Conduct Authority
Sarbanes-Oxley Act
PCI-DSS Data Security Standard
Waste, Electronic & Electrical Equipment Directive (WEEE)
Basel Convention
Bank Secrecy Act (BSA)
FACTA (Fair and Accurate Credit Transactions Act)
FDA Security Regulations (21 C.F.R. part 11)
Gramm-Leach-Bliley Act (GLBA)
HIPAA (Health Insurance Portability and Accountability Act)
NIST SP 800–53
NIST SP 800–171
Identity Theft and Assumption Deterrence Act
USA PATRIOT Act of 2001
US Safe Harbor Provisions
Various state laws
JAN 6/3
DCID
Extreme use
The ruggedized Grid Compass computer was used aboard the Space Shuttle from the early days of the program. The first commercial laptop used in space was a Macintosh Portable, flown in 1991 aboard Space Shuttle mission STS-43. Apple and other laptop computers continue to be flown aboard crewed spaceflights, though the only computer certified for long-duration flight aboard the International Space Station is the ThinkPad. As of 2011, over 100 ThinkPads had been aboard the ISS. Laptops used aboard the International Space Station and other spaceflights are generally the same ones that can be purchased by the general public, but modifications are made to allow them to be used safely and effectively in a weightless environment, such as updating the cooling systems to function without relying on hot air rising, and accommodating the lower cabin air pressure. By contrast, laptops operating in harsh usage environments and conditions, such as strong vibrations, extreme temperatures, and wet or dusty conditions, are custom-designed for the task and do not use commercial off-the-shelf hardware.
See also
List of computer size categories
List of laptop brands and manufacturers
Netbook
Smartbook
Chromebook
Ultrabook
Smartphone
Subscriber Identity Module
Mobile broadband
Mobile Internet device (MID)
Personal digital assistant
VIA OpenBook
Tethering
XJACK
Open-source computer hardware
Novena
Portal laptop computer
Mobile modem
Stereoscopy glasses
Notes
References
Classes of computers
Japanese inventions
Mobile computers
Office equipment
Personal computers
1980s neologisms
Ordnance Survey
Ordnance Survey (OS) is the national mapping agency for Great Britain. The agency's name indicates its original military purpose (see ordnance and surveying), which was to map Scotland in the wake of the Jacobite rising of 1745. There was also a more general and nationwide need in light of the potential threat of invasion during the Napoleonic Wars. Since 1 April 2015 Ordnance Survey has operated as Ordnance Survey Ltd, a government-owned company, 100% in public ownership. The Ordnance Survey Board remains accountable to the Secretary of State for Business, Energy and Industrial Strategy. It was also a member of the Public Data Group.
Paper maps for walkers represent only 5% of the company's annual revenue. They produce digital map data, online route planning and sharing services and mobile apps, plus many other location-based products for business, government and consumers. Ordnance Survey mapping is usually classified as either "large-scale" (in other words, more detailed) or "small-scale". The Survey's large-scale mapping comprises 1:2,500 maps for urban areas and 1:10,000 more generally. (The latter superseded the 1:10,560 "six inches to the mile" scale in the 1950s.) These large scale maps are typically used in professional land-use contexts and were available as sheets until the 1980s, when they were digitised. Small-scale mapping for leisure use includes the 1:25,000 "Explorer" series, the 1:50,000 "Landranger" series and the 1:250,000 road maps. These are still available in traditional sheet form.
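The "inches to the mile" scales quoted here convert to representative fractions by dividing the 63,360 inches in a mile by the number of map inches. A minimal sketch:

```python
# Map scale arithmetic: "n inches to the mile" as a representative fraction.
# There are 63,360 inches in a mile, so one inch to the mile is 1:63,360
# and six inches to the mile is 1:10,560, matching the scales in the text.

INCHES_PER_MILE = 63360

def representative_fraction(inches_to_the_mile):
    """Return the denominator of the 1:n map scale."""
    return INCHES_PER_MILE // inches_to_the_mile

print(representative_fraction(1))   # 63360
print(representative_fraction(6))   # 10560
```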
Ordnance Survey maps remain in copyright for fifty years after their publication. Some of the Copyright Libraries hold complete or near-complete collections of pre-digital OS mapping.
Origins
The origins of the Ordnance Survey lie in the aftermath of the Jacobite rising of 1745. Prince William, Duke of Cumberland realised that the British Army did not have a good map of the Scottish Highlands to locate Jacobite dissenters such as Simon Fraser, 11th Lord Lovat so that they could be put on trial. In 1747, Lieutenant-Colonel David Watson proposed the compilation of a map of the Highlands to help to subjugate the clans. In response, King George II charged Watson with making a military survey of the Highlands under the command of the Duke of Cumberland. Among Watson's assistants were William Roy, Paul Sandby and John Manson. The survey was produced at a scale of 1 inch to 1000 yards (1:36,000) and included "the Duke of Cumberland's Map" (primarily by Watson and Roy), now held in the British Library.
Roy later had an illustrious career in the Royal Engineers (RE), rising to the rank of General, and he was largely responsible for the British share of the work in determining the relative positions of the French and British royal observatories. This work was the starting point of the Principal Triangulation of Great Britain (1783–1853), and led to the creation of the Ordnance Survey itself. Roy's technical skills and leadership set the high standard for which Ordnance Survey became known. Work was begun in earnest in 1790 under Roy's supervision, when the Board of Ordnance (a predecessor of part of the modern Ministry of Defence) began a national military survey starting with the south coast of England. Roy's birthplace near Carluke in South Lanarkshire is today marked by a memorial in the form of a large OS trig point.
By 1791 the Board received the newer Ramsden theodolite (an improved successor to the one that Roy had used in 1784), and work began on mapping southern Great Britain using a five-mile baseline on Hounslow Heath that Roy himself had previously measured; it crosses the present Heathrow Airport. In 1991 Royal Mail marked the bicentenary by issuing a set of postage stamps featuring maps of the Kentish village of Hamstreet.
In 1801 the first one-inch-to-the-mile (1:63,360 scale) map was published, detailing the county of Kent, with Essex following shortly afterwards. The Kent map was published privately and stopped at the county border, while the Essex maps were published by Ordnance Survey and ignore the county border, setting the trend for future Ordnance Survey maps.
In the next 20 years about a third of England and Wales was mapped at the same scale (see Principal Triangulation of Great Britain) under the direction of William Mudge, as other military matters took precedence. It took until 1823 to re-establish the relationship with the French survey made by Roy in 1787. By 1810 one inch to the mile maps of most of the south of England were completed, but they were withdrawn from sale between 1811 and 1816 because of security fears. By 1840 the one-inch survey had covered all of Wales and all but the six northernmost counties of England.
Surveying was hard work. For instance, Major Thomas Colby, the longest-serving Director General of Ordnance Survey, walked for 22 days on a reconnaissance in 1819. In 1824, Colby and most of his staff moved to Ireland to work on a six-inches-to-the-mile (1:10,560) valuation survey. The survey of Ireland, county by county, was completed in 1846. The suspicions and tensions it caused in rural Ireland are the subject of Brian Friel's play Translations.
Colby was not only involved in the design of specialist measuring equipment. He also established a systematic collection of place names, and reorganised the map-making process to produce clear, accurate plans. Place names were recorded in "Name Books", a system first used in Ireland, with detailed written instructions governing how they were to be compiled.
Whilst these procedures generally produced excellent results, mistakes were made: for instance, the name Pilgrims' Way was applied to the wrong route across the North Downs, but the name stuck. Similarly, the spelling of Scafell and Scafell Pike copied an error on an earlier map, and was retained as this was the name of a corner of one of the Principal Triangles, despite "Scawfell" being the almost universal form at the time.
Colby believed in leading from the front, travelling with his men, helping to build camps and, as each survey session drew to a close, arranging mountain-top parties with enormous plum puddings.
The British Geological Survey was founded in 1835 as the Ordnance Geological Survey under Henry De la Beche, and remained a branch of the Ordnance Survey until 1965. At the same time the uneven quality of the English and Scottish maps was being improved by engravers under Benjamin Baker. By the time Colby retired in 1846, the production of six-inch maps of Ireland was complete. This had led to a demand for similar treatment in England, and work was proceeding on extending the six-inch map to northern England, but only a three-inch scale for most of Scotland.
When Colby retired he recommended William Yolland as his successor, but he was considered too young and the less experienced Lewis Alexander Hall was appointed. After a fire in the Tower of London, the headquarters of the survey was moved to Southampton, and Yolland was put in charge, but Hall sent him off to Ireland so that when Hall left in 1854 Yolland was again passed over in favour of Major Henry James. Hall was enthusiastic about extending the survey of the north of England to a scale of 1:2,500. In 1855, the Board of Ordnance was abolished and the Ordnance Survey was placed under the War Office together with the Topographical Survey and the Depot of Military Knowledge. Eventually in 1870 it was transferred to the Office of Works.
The primary triangulation of the United Kingdom, carried out by Roy, Mudge and Yolland, was completed by 1841, but it was greatly improved by Alexander Ross Clarke, who carried out a new survey based on Airy's spheroid in 1858, completing the Principal Triangulation. The following year, he completed an initial levelling of the country.
Great Britain "County Series"
After the Ordnance Survey published its first large-scale maps of Ireland in the mid-1830s, the Tithe Commutation Act 1836 led to calls for a similar six-inch to the mile survey in England and Wales. Official procrastination followed, but the development of the railways added to pressure that resulted in the Ordnance Survey Act 1841. This granted a right to enter property for the purpose of the survey. Following a fire at its headquarters at the Tower of London in 1841 the Ordnance Survey relocated to a site in Southampton and was in disarray for several years, with arguments about which scales to use. Major-General Sir Henry James was by then Director General, and he saw how photography could be used to make maps of various scales cheaply and easily. He developed and exploited photozincography, not only to reduce the costs of map production but also to publish facsimiles of nationally important manuscripts. Between 1861 and 1864, a facsimile of the Domesday Book was issued, county by county; and a facsimile of the Gough Map was issued in 1870.
From the 1840s, the Ordnance Survey concentrated on the Great Britain "County Series", modelled on the earlier Ireland survey. A start was made on mapping the whole country, county by county, at six inches to the mile (1:10,560). In 1854, "twenty-five inch" maps were introduced with a scale of 1:2500 (25.344 inches to the mile) and the six-inch maps were then based on these twenty-five-inch maps. The first edition of the two scales was completed by the 1890s, with a second edition completed in the 1890s and 1900s. From 1907 until the early 1940s, a third edition (or "second revision") was begun but never completed: only areas with significant changes on the ground were revised, some of them two or three times. Meanwhile, publication of the one-inch to the mile series for Great Britain was completed in 1891.
From the late 19th century to the early 1940s, the OS produced many "restricted" versions of the County Series maps and other War Department sheets for War Office purposes, in a variety of large scales that included details of military significance such as dockyards, naval installations, fortifications and military camps. Apart from a brief period during the disarmament talks of the 1930s, these areas were left blank or incomplete on standard maps. The War Department 1:2500s, unlike the standard issue, were contoured. The de-classified sheets have now been deposited in some of the Copyright Libraries, helping to complete the map-picture of pre-Second World War Britain.
City and town mapping, 19th and early 20th century
In 1824, the OS began a 6-inch (1:10,560) survey of Ireland for taxation purposes but found this to be inadequate for urban areas and adopted the five-foot scale (1:1056) for Irish cities and towns. From 1840, the six-inch standard was adopted in Great Britain for the un-surveyed northern counties and the 1:1056 scale also began to be adopted for urban surveys. Between 1842 and 1895, some 400 towns were mapped at 1:500 (126 inches), 1:528 (120 inches, "10 foot scale") or 1:1056 (60 inches), with the remaining towns mapped at 1:2500 (~25 inches). In 1855, the Treasury authorised funding for 1:2500 for rural areas and 1:500 for urban areas. The 1:500 scale was considered more 'rational' than 1:528 and became known as the "sanitary scale", since its primary purpose was to support the establishment of mains sewerage and water supply. However, a review of the Ordnance Survey in 1892 found that sales of the 1:500 series maps were very poor and the Treasury declined to fund their continuing maintenance, declaring that any revision or new mapping at this scale must be self-financing. Very few towns and cities saw a second edition of the town plans: by 1909 only fourteen places had paid for updates. The review determined that revision of the 1:2500 mapping should proceed apace.
The most detailed mapping of London was the OS's 1:1056 survey between 1862 and 1872, which took 326 sheets to cover the capital; a second edition (that needed 759 sheets due to urban expansion) was completed and brought out between 1891 and 1895. London was unusual in that land registration on transfer of title was made compulsory there in 1900. The 1:1056 sheets were partially revised to provide a basis for HM Land Registry index maps and the OS mapped the whole London County Council area (at 1:1056) at national expense.
From 1911 onwards, and mainly between 1911 and 1913, the Ordnance Survey photo-enlarged many 1:2500 sheets covering built-up areas to 1:1250 (50.688 inches to the mile) for Land Valuation and Inland Revenue purposes: the increased scale was to provide space for annotations. About a quarter of these 1:1250s were marked "Partially revised 1912/13". In areas where there were no further 1:2500s, these partially revised "fifty inch" sheets represent the last large-scale revision (larger than six-inch) of the County Series. The County Series mapping was superseded by the Ordnance Survey National Grid 1:1250s, 1:2500s and 1:10,560s after the Second World War.
20th century
During World War I, the Ordnance Survey was involved in preparing maps of France and Belgium. During World War II, many more maps were created, including:
1:40,000 map of Antwerp, Belgium
1:100,000 map of Brussels, Belgium
1:5,000,000 map of South Africa
1:250,000 map of Italy
1:50,000 map of north-east France
1:30,000 map of the Netherlands with manuscript outline of districts occupied by the German Army.
After the war, Colonel Charles Close, then Director General, developed a strategy using covers designed by Ellis Martin to increase sales in the leisure market. In 1920 O. G. S. Crawford was appointed Archaeology Officer and played a prominent role in developing the use of aerial photography to deepen understanding of archaeology.
In 1922, devolution to Northern Ireland led to the creation of Ordnance Survey of Northern Ireland (OSNI) and independence of the Irish Free State led to the creation of the Ordnance Survey of Ireland, so the original Ordnance Survey pulled its coverage back to Great Britain.
In 1935, the Davidson Committee was established to review the Ordnance Survey's future. The new Director General, Major-General Malcolm MacLeod, started the retriangulation of Great Britain, an immense task involving the erection of concrete triangulation pillars ("trig points") on prominent hilltops as infallible positions for theodolites. Each measurement made by theodolite during the retriangulation was repeated no fewer than 32 times.
The Davidson Committee's final report set the Ordnance Survey on course for the 20th century. The metric national grid reference system was launched and a 1:25,000-scale series of maps was introduced. The one-inch maps continued to be produced until the 1970s, when they were superseded by the 1:50,000-scale series, as proposed by William Roy more than two centuries earlier.
Ordnance Survey had outgrown its site in the centre of Southampton (made worse by the bomb damage of the Second World War). The bombing during the Blitz devastated Southampton in November 1940 and destroyed most of Ordnance Survey's city centre offices. Staff were dispersed to other buildings and to temporary accommodation at Chessington and Esher, Surrey, where they produced 1:25000 scale maps of France, Italy, Germany and most of the rest of Europe in preparation for its invasion. Ordnance Survey largely remained at its Southampton city-centre HQ and at temporary buildings in the nearby suburb of Maybush until 1969, when a new purpose-built headquarters was opened in Maybush adjacent to the wartime temporary buildings. Some of the remaining buildings of the original Southampton city-centre site are now used as part of the city's court complex.
The new head office building was designed by the Ministry of Public Buildings and Works for 4000 staff, including many new recruits who were taken on in the late 1960s and early 1970s as draughtsmen and surveyors. The buildings originally contained factory-floor space for photographic processes such as heliozincography and map printing, as well as large buildings for storing flat maps. Above the industrial areas were extensive office areas. The complex was notable for its concrete mural by sculptor Keith McCarter and the concrete elliptical paraboloid shell roof over the staff restaurant building.
In 1995, Ordnance Survey digitised the last of about 230,000 maps, making the United Kingdom the first country in the world to complete a programme of large-scale electronic mapping. By the late 1990s technological developments had eliminated the need for vast areas for storing maps and for making printing plates by hand. Although there was a small computer section at Ordnance Survey in the 1960s, the digitising programme had replaced the need for printing large-scale maps, while computer-to-plate technology (in the form of a single machine) had also rendered the photographic platemaking areas obsolete. Part of the latter was converted into a new conference centre in 2000, which was used for internal events and also made available for external organisations to hire.
The Ordnance Survey became an Executive Agency in 1990, making the organisation independent of ministerial control. In 1999 the agency was designated a trading fund, required to cover its costs by charging for its products and to remit a proportion of its profits to the Treasury.
21st century
In 2010, OS announced that printing and warehouse operations were to be outsourced, ending over 200 years of in-house printing. The Frome-based firm Butler, Tanner and Dennis (BT&D) secured its printing contract. As already stated, large-scale maps had not been printed at Ordnance Survey since the common availability of geographical information systems (GISs), but, until late 2010, the OS Explorer and OS Landranger series were printed in Maybush.
In April 2009 building began of a new head office in Adanac Park on the outskirts of Southampton.
By 10 February 2011 virtually all staff had relocated to the new "Explorer House" building and the old site had been sold off and redeveloped. Prince Philip officially opened the new headquarters building on 4 October 2011.
On 22 January 2015 plans were announced for the organisation to move from a trading fund model to a government-owned limited company, with the move completed in April 2015. The organisation remains fully owned by the UK government and retains many of the features of a public organisation.
In September 2015 the history of the Ordnance Survey was the subject of a BBC Four TV documentary entitled A Very British Map: The Ordnance Survey Story.
On 10 June 2019 the Department for Business, Energy and Industrial Strategy (BEIS) appointed Steve Blair as the Chief Executive of Ordnance Survey. Ordnance Survey supported the launch of the Slow Ways initiative, which encourages users to walk on lesser used paths between UK towns.
GB map range
Ordnance Survey produces a large range of paper maps and digital mapping products.
OS MasterMap
Ordnance Survey's flagship digital product, launched in November 2001, is OS MasterMap, a database that records, in one continuous digital map, every fixed feature of Great Britain larger than a few metres. Every feature is given a unique TOID (TOpographical IDentifier), a simple identifier that includes no semantic information. Typically, each TOID is associated with a polygon that represents the area on the ground that the feature covers, in National Grid coordinates.
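The TOID-plus-polygon structure described above can be sketched as follows. This is an illustrative model only, not the real OS MasterMap GML schema; the class name, the example TOID value and the coordinates are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    toid: str                            # opaque identifier with no semantic content
    polygon: list[tuple[float, float]]   # National Grid (easting, northing) vertices, metres

def area_m2(poly: list[tuple[float, float]]) -> float:
    """Area of the (implicitly closed) polygon ring via the shoelace formula."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

# A made-up 10 m x 8 m building footprint with National Grid coordinates
shed = Feature("osgb1000012345678",
               [(438500.0, 114200.0), (438510.0, 114200.0),
                (438510.0, 114208.0), (438500.0, 114208.0)])
print(area_m2(shed.polygon))  # → 80.0
```

Because each polygon is stored in planar National Grid metres rather than latitude/longitude, area and length computations like this reduce to simple plane geometry.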
OS MasterMap is offered in themed layers, each linked to a number of TOIDs. In September 2010, the layers were:
Topography The primary layer of OS MasterMap, consisting of vector data comprising large-scale representation of features in the real world, such as buildings and areas of vegetation. The features captured and the way they are depicted is listed in a specification available on the Ordnance Survey website.
Integrated transport network A link-and-node network of transport features such as roads and railways. This data is at the heart of many satnav systems. In an attempt to reduce the number of HGVs using unsuitable roads, a data-capture programme of "Road Routing Information" was undertaken by 2015, aiming to add information such as height restrictions and one-way streets.
Imagery Orthorectified aerial photography in raster format.
Address An overlay adding every address in the UK to other layers.
Address 2 Adds further information to the Address layer, such as addresses with multiple occupants (blocks of flats, student houses, etc.) and objects with no postal addresses, such as fields and electricity substations.
The integrated transport network layer (ITN) was withdrawn in April 2019 and replaced by OS MasterMap Highways Network.
The Address layers were withdrawn in about 2016, with the information now available in the AddressBase products; as of 2020, MasterMap therefore consists of the Topography and Imagery layers.
Pricing of licenses to OS MasterMap data depends on the total area requested, the layers licensed, the number of TOIDs in the layers, and the period in years of the data usage. OS MasterMap can be used to generate maps for a vast array of purposes and maps can be printed from OS MasterMap data with detail equivalent to a traditional 1:1250 scale paper map.
Ordnance Survey states that thanks to continuous review, OS MasterMap data is never more than six months out of date. The scale and detail of this mapping project is unique. By 2009, around 440 million TOIDs had been assigned, and the database stood at 600 gigabytes in size. As of March 2011, OS claims 450 million TOIDs. As of 2005, OS MasterMap was at version 6; 2010's version 8 includes provision for Urban Paths (an extension of the "integrated transport network" layer) and pre-build address layer. All these versions have a similar GML schema.
Business mapping
Ordnance Survey produces a wide variety of different products aimed at business users, such as utility companies and local authorities. The data is supplied by Ordnance Survey on optical media or, increasingly, via the Internet. Products can be downloaded via FTP or accessed 'on demand' via a web browser. Organisations using Ordnance Survey data have to purchase a licence to do so. Some of the main products are:
OS MasterMap Ordnance Survey's most detailed mapping showing individual buildings and other features in a vector format. Every real-world object is assigned a unique reference number (TOID) that allows customers to add this reference to their own databases. OS MasterMap consists of several so-called "layers" such as the aerial imagery, transport and postcode. The principal layer is the topographic layer.
OS VectorMap Local A customisable vector product at 1:10,000 scale.
Meridian 2, Strategi Mid-scale mapping in vector format.
Boundary-Line Mapping showing administrative boundaries such as counties, parishes and electoral wards.
Raster versions of leisure maps 1:10,000, 1:25,000, 1:50,000, 1:250,000 scale raster
Leisure maps
OS's range of leisure maps are published in a variety of scales:
Tour One-sheet maps covering a generally county-sized area, showing major and most minor roads and containing tourist information and selected footpaths. Tour maps are generally produced from enlargements of 1:250,000 mapping. Several larger-scale town maps are provided on each sheet for major settlement centres. The maps have sky-blue covers and there are eight sheets in the series. Scales vary between sheets.
OS Landranger The "general purpose" map. They have pink covers; 204 sheets cover the whole of Great Britain and the Isle of Man. The map shows all footpaths and the format is similar to the Explorer maps, but with less detail.
OS Landranger Active Select OS Landranger maps available in a plastic-laminated waterproof version, similar to the OS Explorer Active range. 25 of the 204 Landranger maps were available as OS Landranger Active maps.
OS Explorer Specifically designed for walkers and cyclists. They have orange covers, and 403 sheets cover the whole of Great Britain (the Isle of Man is excluded from this series). These are the most detailed leisure maps that Ordnance Survey publishes, covering all types of footpaths and most details of the countryside for easy navigation. The OL-branded sheets within the Explorer series show areas of greater interest (such as the Lake District, the Black Mountains, etc.) with an enlarged area coverage. They appear identical to the ordinary Explorer maps, except for the numbering and a little yellow mark on the corner (a relic of the old Outdoor Leisure series). The OS Explorer maps, together with the former Outdoor Leisure series, superseded the numerous green-covered Pathfinder maps. In May 2015 Ordnance Survey announced that new releases of OL series maps would come with a mobile download version, available through a dedicated app on Android and iOS devices. It is expected that this will be rolled out to all the Explorer and Landranger series over time.
OS Explorer Active OS Explorer and Outdoor Leisure maps in a plastic-laminated waterproof version.
Activity Maps An experimental range of maps designed to support specific activities. The four map packs currently published are Off-Road Cycling Hampshire North, South, East and West. Each map pack contains 12 cycle routes printed on individual map sheets on waterproof paper. While they are based on the 1:25,000 scale maps, the scales have been adjusted so each route fits on a single A4 sheet.
Until 2010, OS also produced the following:
Route A double-sided map designed for long-distance road users, covering the whole of Great Britain.
Road A series of eight sheets covering Great Britain, designed for road users.
These, along with fifteen Tour maps, were discontinued during January 2010 as part of a drive for cost-efficiency.
The Road series was reintroduced in September 2016.
App development
In 2013, Ordnance Survey released its first official app, OS MapFinder (still available, but no longer maintained), and has since added three more apps. In 2021, OS Maps added coverage in Australia.
OS Maps Available on iOS and Android, this free-to-download app allows users to access maps direct to their devices, plan and record routes and share routes with others. Users can subscribe to download OS Landranger and OS Explorer high-resolution maps in 660 dpi quality and use them without incurring roaming charges, as maps are stored on the device and can be used offline, without Wi-Fi or mobile signal.
OS Maps Web Available as a web page, it allows users to access maps in modern web browsers, plan custom routes and print maps, much as the mobile applications do.
OS Locate Launched in February 2014 and available on iOS and Android, this free app is a fast and highly accurate means of pinpointing a user's exact location, displaying grid reference, latitude, longitude and altitude. OS Locate does not need a mobile signal to function, as it relies on the device's inbuilt GPS.
Custom products
Ordnance Survey also offers OS Custom Made, a print-on-demand service based on digital raster data that allows a customer to specify the area of the map or maps desired. Two scales are offered, 1:50,000 (equivalent to 40 km by 40 km) or 1:25,000 (20 km by 20 km), and the maps may be produced either folded or flat for framing or wall mounting. Customers may provide their own titles and cover images for folded maps.
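The coverage figures quoted above follow directly from the scale arithmetic: ground extent equals paper extent multiplied by the scale factor. The sketch below assumes a roughly 80 cm square sheet, which is an inference from the stated coverage rather than an official Ordnance Survey figure:

```python
# Ground distance covered by a map = paper distance x scale factor.
# The ~80 cm sheet size is inferred from the quoted coverage, not an
# official Ordnance Survey specification.

def ground_extent_km(paper_cm: float, scale: int) -> float:
    # paper cm x scale factor gives ground cm; 100 000 cm per km
    return paper_cm * scale / 100_000

print(ground_extent_km(80, 50_000))  # → 40.0  (1:50,000 sheet spans 40 km)
print(ground_extent_km(80, 25_000))  # → 20.0  (1:25,000 sheet spans 20 km)
```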
Ordnance Survey also produces more detailed custom mapping to order, at 1:1,250 or 1:500 (Siteplan), from its large-scale digital data. Custom scales may also be produced from the enlargement or reduction of the existing scales.
Educational mapping
Ordnance Survey supplies reproductions of its maps from the early 1970s to the 1990s for educational use. These are widely seen in schools both in Britain and in former British colonies, either as stand-alone geographic aids or as part of geography textbooks or workbooks.
During the 2000s, in an attempt to increase schoolchildren's awareness of maps, Ordnance Survey offered a free OS Explorer Map to every 11-year-old in UK primary education. By the end of 2010, when the scheme closed, over 6 million maps had been given away. The scheme was replaced by free access to the Digimap for Schools service provided by EDINA for eligible schools.
With the trend away from paper products towards geographical information systems (GISs), Ordnance Survey has been looking into ways of ensuring schoolchildren are made aware of the benefits of GISs and has launched "MapZone", an interactive child-orientated website featuring learning resources and map-related games.
Ordnance Survey publishes a quarterly journal, principally for geography teachers, called Mapping News.
Derivative and licensed products
One series of historic maps, published by Cassini Publishing Ltd, is a reprint of the Ordnance Survey first series from the mid-19th century but using the OS Landranger projection at 1:50,000 and given 1 km gridlines. This means that features from over 150 years ago fit almost exactly over their modern equivalents and modern grid references can be given to old features.
The digitisation of the data has allowed Ordnance Survey to sell maps electronically. Several companies are now licensed to produce the popular scales (1:50,000 and 1:25,000) and their own derived datasets of the map on CD/DVD or to make them available online for download. The buyer typically has the right to view the maps on a PC, a laptop, and a pocket PC/smartphone, and to print off any number of copies. The accompanying software is GPS-aware, and the maps are ready-calibrated. Thus, the user can quickly transfer the desired area from their PC to their laptop or smartphone, and go for a drive or walk with their position continually pinpointed on the screen. The individual map is more expensive than the equivalent paper version, but the price per square km falls rapidly with the size of coverage bought.
Free access to historic mapping
The National Library of Scotland provides free access to OS mapping from 1840 to 1970, in a variety of scales from 1:1056 "five foot" maps of London to 1:625,000 "ten mile" national planning maps.
History of 1:63360 and 1:50000 map publications
Cartography
The Ordnance Survey's original maps were made by triangulation. For the second survey, in 1934, this process was used again and resulted in the building of many triangulation pillars (trig points): short (c. 4 feet/1.2 m high), usually square, concrete or stone pillars at prominent locations such as hill tops. Their precise locations were determined by triangulation, and the details in between were then filled in with less precise methods.
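The basic triangulation step described above can be illustrated numerically: once a baseline between two known points has been measured, the angles observed from each end toward a new point fix that point's position via the sine rule. This is a simplified planar sketch with illustrative function names, not the Survey's actual adjustment procedure:

```python
import math

def triangulate(ax, ay, bx, by, angle_a, angle_b):
    """Locate point P from baseline A-B and the angles (radians)
    observed at A and B between the baseline and P. P is assumed
    to lie to the left of the direction A->B. Illustrative only."""
    base = math.hypot(bx - ax, by - ay)
    angle_p = math.pi - angle_a - angle_b        # triangle angles sum to pi
    dist_ap = base * math.sin(angle_b) / math.sin(angle_p)  # sine rule
    bearing_ab = math.atan2(by - ay, bx - ax)
    theta = bearing_ab + angle_a                 # rotate left from the baseline
    return ax + dist_ap * math.cos(theta), ay + dist_ap * math.sin(theta)

# Equilateral check: baseline of length 1 with 60 degrees observed at
# each end places the apex at (0.5, sqrt(3)/2).
x, y = triangulate(0.0, 0.0, 1.0, 0.0, math.radians(60), math.radians(60))
```

Repeating such angle observations many times, as during the retriangulation, lets errors average out before the network is adjusted.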
Modern Ordnance Survey maps are largely based on orthorectified aerial photographs, but large numbers of the triangulation pillars remain, many of them adopted by private land owners. Ordnance Survey still has a team of surveyors across Great Britain who visit in person and survey areas that cannot be surveyed using photogrammetric methods (such as land obscured by vegetation) and there is an aim of ensuring that any major feature (such as a new motorway or large housing development) is surveyed within six months of being built. While original survey methods were largely manual, the current surveying task is simplified by the use of GPS technology, allowing the most precise surveying standards yet. Ordnance Survey is responsible for a UK-wide network of GPS stations known as "OS Net". These are used for surveying and other organisations can purchase the right to utilise the network for their own uses.
Ordnance Survey still maintains a set of master geodetic reference points to tie the Ordnance Survey geographic datum points to modern measurement systems such as GPS. Ordnance Survey maps of Great Britain use the Ordnance Survey National Grid rather than latitude and longitude to indicate position. The Grid is known technically as OSGB36 (Ordnance Survey Great Britain 1936) and was introduced after the 1936–1953 retriangulation.
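As a worked illustration of how National Grid references encode position, the sketch below converts a reference such as "TQ3877" into numeric easting/northing metres: the two-letter prefix names a 100 km square (letters A–Z with I omitted, arranged from a false origin south-west of Great Britain), and the remaining digits split into easting and northing halves. The function name is illustrative, not an OS-provided API:

```python
def grid_ref_to_en(ref: str) -> tuple[int, int]:
    """Convert an OSGB36 grid reference like 'TQ3877' to (easting, northing) in metres."""
    ref = ref.replace(" ", "").upper()

    def idx(c: str) -> int:
        i = ord(c) - ord("A")
        return i - 1 if i > 7 else i           # the letter I is unused

    l1, l2 = idx(ref[0]), idx(ref[1])
    e100 = ((l1 - 2) % 5) * 5 + (l2 % 5)       # 100 km squares east of origin
    n100 = (19 - (l1 // 5) * 5) - (l2 // 5)    # 100 km squares north of origin
    digits = ref[2:]
    half = len(digits) // 2
    step = 10 ** (5 - half)                    # resolution in metres
    easting = e100 * 100_000 + int(digits[:half]) * step
    northing = n100 * 100_000 + int(digits[half:]) * step
    return easting, northing

print(grid_ref_to_en("TQ3877"))  # → (538000, 177000)
```

A four-digit reference like this locates a 1 km square; adding more digit pairs refines the position tenfold per pair.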
Ordnance Survey's CartoDesign team performs a key role in the organisation, as the authority for cartographic design and development, and engages with internal and external audiences to promote and communicate the value of cartography. They work on a broad range of projects and are responsible for styling all new products and services.
Research
For several decades Ordnance Survey has had a research department that is active in several areas of geographical information science, including:
Spatial cognition
Map generalisation
Spatial data modelling
Remote sensing and analysis of remotely sensed data
Semantics and ontologies
Ordnance Survey actively supports the academic research community through its external research and university liaison team. The research department actively supports MSc and PhD students as well as engaging in collaborative research. Most Ordnance Survey products are available to UK universities that have signed up to the Digimap agreement and data is also made available for research purposes that advances Ordnance Survey's own research agenda.
More information can be found at Ordnance Survey Research.
Data access and criticisms
Ordnance Survey has been subject to criticism. Most centres on the point that Ordnance Survey possesses a virtual government monopoly on geographic data in the UK but, although a government agency, was required to act as a trading fund (i.e. a commercial entity) from 1999 to 2015. This meant that it was expected to be entirely self-funded from the commercial sale of its data and derived products while at the same time serving as the public supplier of geographical information. In 1985, the Committee of Enquiry into the Handling of Geographic Information was set up to "advise the Secretary of State for the Environment within two years on the future handling of geographic information in the UK, taking account of modern developments in information technology and market needs". The committee's final report, published in 1987 under the name of its chairman Roger Chorley, stressed the importance of accessible geographic information to the UK and recommended a loosening of policies on distribution and cost recovery.
In 2007 Ordnance Survey were criticised for contracting the public relations company Mandate Communications to understand the dynamics of the free data movement and discover which politicians and advisers continued to support their current policies.
OS OpenData
In response to the feedback from a consultation, Policy options for geographic information from Ordnance Survey, the government announced that a package of Ordnance Survey data sets would be released for free use and re-use. On 1 April 2010 Ordnance Survey released OS OpenData under an attribution-only licence compatible with CC-BY. Various groups and individuals had campaigned for this release of data, but some were disappointed that certain profitable datasets, including the leisure 1:50,000 scale and 1:25,000 scale mapping, as well as MasterMap, were not included. These were withheld with the counter-argument that if licensees did not pay for OS data collection, the government would have to be willing to foot a £30 million per annum bill to secure the future economic benefit of sharing the mapping.
In mid-2013 Ordnance Survey described an "enhanced" linked-data service with a SPARQL 1.1-compliant endpoint and bulk-download options.
In June 2018, following the recommendations of the Geospatial Commission, part of the Cabinet Office, it was announced that parts of OS Mastermap would be released under the Open Government Licence. These would include:
property extents created from OS MasterMap Topography Layer
TOIDs from OS MasterMap Topography Layer, by integration into OpenMap Local
Other data would be made available free to small businesses (under a transaction threshold):
OS MasterMap Topography Layer, including building heights and functional sites
OS MasterMap Greenspace Layer
OS MasterMap Highways Network
OS MasterMap Water Network Layer
OS Detailed Path Network
These are available through APIs on the OS Data Hub.
Historical material
Ordnance Survey historical works are generally available, as the agency is covered by Crown Copyright: works more than fifty years old, including historic surveys of Britain and Ireland and much of the New Popular Edition, are in the public domain. However, finding suitable originals remains an issue as Ordnance Survey does not provide historical mapping on 'free' terms, instead marketing commercially 'enhanced' reproductions in partnership with companies including GroundSure and Landmark.
The National Library of Scotland has been developing its archive to make Ordnance Survey maps for all of Great Britain more easily available through their website.
Wikimedia has complete sets of scans of the Old/First series one-inch maps of England and Wales; of the Old/First series one-inch maps of Scotland; of the Seventh Series One-inch maps of Great Britain (1952-1967); of the Third Edition quarter-inch maps of England and Wales; and of the Fifth Series quarter-inch maps of Great Britain. These sets are complete in the sense of including at least one copy of each of the sheets in the series, not in the sense of including all revision levels.
The (GB) Ordnance Survey's approach can be contrasted with, for example, that of Ordnance Survey Ireland. OSI holds copyright over its mapping (and over digital copies of the public domain historical mapping), but all its maps (historic and current) are available free to view on their website (but not to reuse without a license).
See also
Admiralty chart
Alastair Macdonald, Director of Surveys and Production at Ordnance Survey 1982–1992
Benchmark (surveying)
Cartography
Directors of the Ordnance Survey
Geoinformatics
Grid reference
Great Trigonometric Survey
Irish national grid reference system
Ordnance Survey National Grid
Hydrography
Hydrographic survey
United Kingdom Hydrographic Office
International Map of the World
Geographers' A-Z Map Company, principal partner of the OS
Martin Hotine, founder of the Directorate of Overseas Surveys
(List of) national mapping agencies
Ordnance datum (sea level)
Ordnance Survey International
Ordnance Survey Ireland
Ordnance Survey of Northern Ireland
Romer, a device for accurate reading of grid references from a map
References
Notes
Citations
Sources
External links
1791 establishments in Great Britain
Cartography organizations
Department for Business, Energy and Industrial Strategy
Geodesy organizations
Geographical databases in the United Kingdom
Geography of the United Kingdom
Geography organizations
Government databases in the United Kingdom
Government-owned companies of the United Kingdom
Surveying organizations
Maps of the United Kingdom
National mapping agencies
Organisations based in Southampton
Organizations established in 1791
Geographic data and information organisations in the United Kingdom
Surveying of the United Kingdom
System request
System Request (SysRq or Sys Req) is a key on personal computer keyboards that has no standard use. Introduced by IBM with the PC/AT, it was intended to be available as a special key to directly invoke low-level operating system functions with no possibility of conflicting with any existing software. A special BIOS routine – software interrupt 0x15, subfunction 0x85 – was added to signal the OS when SysRq was pushed or released. Unlike most keys, when it is pressed nothing is stored in the keyboard buffer.
History
The specific low-level function intended for the SysRq key was switching between operating systems. When the original IBM PC was created in 1980, there were three leading competing operating systems: PC DOS, CP/M-86, and UCSD p-System; Xenix followed in 1983–1984. The SysRq key was added so that multiple operating systems could be run on the same computer, using the capabilities of the 286 chip in the PC/AT.
A special key was needed because most software of the day operated at a low level, often bypassing the OS entirely, and typically made use of many hotkey combinations. The use of Terminate and Stay Resident (TSR) programs further complicated matters. To implement a task switching or multitasking environment, it was thought that a special, separate key was needed. This is similar to the way "Control-Alt-Delete" is used under Windows NT.
On 84-key keyboards (except the 84-key IBM Model M space saver keyboard), SysRq was a key of its own. On the later 101-key keyboard, it shares a physical key with the Print screen key function. The Alt key must be held down while pressing this dual-function key to invoke SysRq.
The default BIOS keyboard routines simply ignore SysRq and return without taking action. So did the MS-DOS input routines. The keyboard routines in libraries supplied with many high-level languages followed suit. Although it is still included on most PC keyboards manufactured, and though it is used by some debugging software, the key is of no use for the vast majority of users.
On the Hyundai/Hynix Super-16 computer, pressing SysRq will hard boot the system (it will reboot even when the system is unresponsive, and it will invoke the startup memory tests that are bypassed on a soft boot).
Modern uses
In Linux, the kernel can be configured to provide functions for system debugging and crash recovery. This use is known as the "magic SysRq key".
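On mainline Linux kernels that include this feature, the magic SysRq interface is also reachable without the keyboard through procfs; a minimal sketch (commands require root, and the write to sysrq-trigger immediately performs the requested action):

```shell
# Check whether the magic SysRq key is enabled (0 = off, 1 = all functions,
# other values are a bitmask of allowed functions).
cat /proc/sys/kernel/sysrq

# Enable all magic SysRq functions.
echo 1 > /proc/sys/kernel/sysrq

# Trigger a function without the keyboard, e.g. 's' to sync filesystems.
echo s > /proc/sysrq-trigger
```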
Microsoft has also used SysRq for various OS- and application-level debuggers. In the CodeView debugger, it was sometimes used to break into the debugging during program execution. For the Windows NT remote kernel debugger, it can be used to force the system into the debugger.
Similar keys
IBM 3270-type console keyboards of the IBM System/370 mainframe computer, created in 1970, had an operator interrupt key used to signal the operating system, such as VM/370 or MVS, that the console had input for it.
See also
Serial console
Break key
Scroll Lock
References
External links
Computer keys
Request
Out-of-band management
IBM personal computers
Information infrastructure
An information infrastructure is defined by Ole Hanseth (2002) as "a shared, evolving, open, standardized, and heterogeneous installed base" and by Pironti (2006) as all of the people, processes, procedures, tools, facilities, and technology that support the creation, use, transport, storage, and destruction of information.
The notion of information infrastructures, introduced in the 1990s and refined during the following decade, has proven quite fruitful to the information systems (IS) field. It changed the perspective from organizations to networks and from systems to infrastructure, allowing for a global and emergent perspective on information systems. Information infrastructure is a technical structure of an organizational form, an analytical perspective or a semantic network.
The concept of information infrastructure (II) was introduced in the early 1990s, first as a political initiative (Gore, 1993 & Bangemann, 1994), later as a more specific concept in IS research. For the IS research community, an important inspiration was Hughes' (1983) accounts of large technical systems, analyzed as socio-technical power structures (Bygstad, 2008). Information infrastructures are typically different from previous generations of large technical systems because these digital sociotechnical systems are considered generative, meaning they allow new users to connect with or even appropriate the system.
Information infrastructure, as a theory, has been used to frame a number of extensive case studies (Star and Ruhleder 1996; Ciborra 2000; Hanseth and Ciborra 2007), and in particular to develop an alternative approach to IS design: "Infrastructures should rather be built by establishing working local solutions supporting local practices which subsequently are linked together rather than by defining universal standards and subsequently implementing them" (Ciborra and Hanseth 1998). It has later been developed into a full design theory, focusing on the growth of an installed base (Hanseth and Lyytinen 2008).
Information infrastructures include the Internet, health systems and corporate systems. It is also consistent to include innovations such as Facebook, LinkedIn and MySpace as excellent examples (Bygstad, 2008).
Bowker has described several key terms and concepts that are enormously helpful for analyzing information infrastructure: imbrication, bootstrapping, figure/ground, and a short discussion of infrastructural inversion. "Imbrication" is an analytic concept that helps to ask questions about historical data. "Bootstrapping" is the idea that infrastructure must already exist in order to exist (2011).
Definitions
"Technological and non-technological elements that are linked" (Hanseth and Monteiro 1996).
"Information infrastructures can, as formative contexts, shape not only the work routines, but also the ways people look at practices, consider them 'natural' and give them their overarching character of necessity. Infrastructure becomes an essential factor shaping the taken-for-grantedness of organizational practices" (Ciborra and Hanseth 1998).
"The technological and human components, networks, systems, and processes that contribute to the functioning of the health information system" (Braa et al. 2007).
"The set of organizational practices, technical infrastructure and social norms that collectively provide for the smooth operation of scientific work at a distance" (Edwards et al. 2007).
"A shared, evolving, heterogeneous installed base of IT capabilities developed on open and standardized interfaces" (Hanseth and Lyytinen 2008).
Etymology
According to the Online Etymology Dictionary, the etymology of the words that make up the phrase "information infrastructure" is as follows:
Information late 14c., "act of informing," from O.Fr. informacion, enformacion "information, advice, instruction," from L. informationem (nom. informatio) "outline, concept, idea," noun of action from pp. stem of informare (see inform). Meaning "knowledge communicated" is from mid-15c. Information technology attested from 1958. Information revolution from 1969.
Infrastructure 1887, from Fr. infrastructure (1875); see infra- + structure. The installations that form the basis for any operation or system. Originally in a military sense.
Theories
Dimensions
According to Star and Ruhleder, there are 8 dimensions of information infrastructures.
Embeddedness
Transparency
Reach or scope
Learned as part of membership
Links with conventions of practice
Embodiment of standards
Built on an installed base
Becomes visible upon breakdown
As a public policy
Presidential Chair and Professor of Information Studies at the University of California, Los Angeles, Christine L. Borgman argues information infrastructures, like all infrastructures, are "subject to public policy". In the United States, public policy defines information infrastructures as the "physical and cyber-based systems essential to the minimum operations of the economy and government" and connected by information technologies.
Global Information Infrastructure (GII)
Borgman says governments, businesses, communities, and individuals can work together to create a global information infrastructure which links "the world's telecommunication and computer networks together" and would enable the transmission of "every conceivable information and communication application."
Currently, the Internet is the default global information infrastructure.
Regional information infrastructure
Asia
The Asia-Pacific Economic Cooperation (APEC) Telecommunications and Information Working Group (TEL) promotes information and communications infrastructure across the region.
Southeast Asia
Association of South East Asian Nations, e-ASEAN Framework Agreement of 2000.
North America
United States
National Information Infrastructure Act of 1993 National Information Infrastructure (NII)
Canada
The National Research Council established CA*net in 1989 and the network connecting "all provincial nodes" was operational in 1990.
The Canadian Network for the Advancement of Research, Industry and Education (CANARIE) was established in 1992 and CA*net was upgraded to a T1 connection in 1993 and T3 in 1995. By 2000, "the commercial basis for Canada's information infrastructure" was established, and the government ended its role in the project.
Europe
In 1994, the European Union proposed the European Information Infrastructure. It has since evolved through the Bangemann report and the eEurope 2003+, eEurope 2005 and i2010 initiatives.
Africa
In 1995, American Vice President Al Gore asked USAID to help improve Africa's connection to the global information infrastructure.
The USAID Leland Initiative (LI) was designed from June to September 1995, and implemented on 29 September 1995. The Initiative was "a five-year $15 million US Government effort to support sustainable development" by bringing "full Internet connectivity" to approximately 20 African nations.
The initiative had three strategic objectives:
Creating an Enabling Policy Environment – to "reduce barriers to open connectivity".
Creating Sustainable Supply of Internet Services – help build the hardware base and industry needed for "full Internet connectivity".
Enhancing Internet Use for Sustainable Development – improve the ability of African nations to use these infrastructures.
See also
Data infrastructure
Information science
IT infrastructure
Online Etymology Dictionary
Notes
References
External links
USAID Leland Initiative
ASEAN
APEC
Information technology
Telecommunications infrastructure
IT infrastructure
SystemVerilog
SystemVerilog, standardized as IEEE 1800, is a hardware description and hardware verification language used to model, design, simulate, test and implement electronic systems. SystemVerilog is based on Verilog with extensions, and since 2009 Verilog has been part of the same IEEE standard. It is commonly used in the semiconductor and electronic design industry as an evolution of Verilog.
History
SystemVerilog started with the donation of the Superlog language to Accellera in 2002 by the startup company Co-Design Automation. The bulk of the verification functionality is based on the OpenVera language donated by Synopsys. In 2005, SystemVerilog was adopted as IEEE Standard 1800-2005. In 2009, the standard was merged with the base Verilog (IEEE 1364-2005) standard, creating IEEE Standard 1800-2009. The current version is IEEE standard 1800-2017.
The feature-set of SystemVerilog can be divided into two distinct roles:
SystemVerilog for register-transfer level (RTL) design is an extension of Verilog-2005; all features of that language are available in SystemVerilog. Therefore, Verilog is a subset of SystemVerilog.
SystemVerilog for verification uses extensive object-oriented programming techniques and is more closely related to Java than Verilog. These constructs are generally not synthesizable.
The remainder of this article discusses the features of SystemVerilog not present in Verilog-2005.
Design features
Data lifetime
There are two types of data lifetime specified in SystemVerilog: static and automatic. Automatic variables are created the moment program execution enters their scope. Static variables are created at the start of the program's execution and keep the same value for the program's entire lifespan, unless assigned a new value during execution.
Any variable that is declared inside a task or function without specifying type will be considered automatic. To specify that a variable is static place the "static" keyword in the declaration before the type, e.g., "static int x;". The "automatic" keyword is used in the same way.
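The difference can be sketched with a small function (names are illustrative):

```systemverilog
function automatic int count_calls();
  static int calls = 0;  // initialized once; retains its value across calls
  int temp = 0;          // automatic: re-created and re-initialized each call
  calls++;
  temp++;
  return calls;          // returns 1, 2, 3, ... on successive calls
endfunction
```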
New data types
Enhanced variable types add new capability to Verilog's "reg" type:
logic [31:0] my_var;
Verilog-1995 and -2001 limit reg variables to behavioral statements such as RTL code. SystemVerilog extends the reg type so it can be driven by a single driver such as a gate or a module. SystemVerilog names this type "logic" to remind users that it has this extra capability and is not a hardware register. The names "logic" and "reg" are interchangeable. A signal with more than one driver (such as a tri-state buffer for general-purpose input/output) needs to be declared a net type such as "wire" so SystemVerilog can resolve the final value.
Multidimensional packed arrays unify and extend Verilog's notion of "registers" and "memories":
logic [1:0][2:0] my_pack[32];
Classical Verilog permitted only one dimension to be declared to the left of the variable name. SystemVerilog permits any number of such "packed" dimensions. A variable of packed array type maps 1:1 onto an integer arithmetic quantity. In the example above, each element of my_pack may be used in expressions as a six-bit integer. The dimensions to the right of the name (32 in this case) are referred to as "unpacked" dimensions. As in Verilog-2001, any number of unpacked dimensions is permitted.
Enumerated data types (enums) allow numeric quantities to be assigned meaningful names. Variables declared to be of enumerated type cannot be assigned to variables of a different enumerated type without casting. This is not true of parameters, which were the preferred implementation technique for enumerated quantities in Verilog-2005:
typedef enum logic [2:0] {
RED, GREEN, BLUE, CYAN, MAGENTA, YELLOW
} color_t;
color_t my_color = GREEN;
initial $display("The color is %s", my_color.name());
As shown above, the designer can specify an underlying arithmetic type (logic [2:0] in this case) which is used to represent the enumeration value. The meta-values X and Z can be used here, possibly to represent illegal states. The built-in function name() returns an ASCII string for the current enumerated value, which is useful in validation and testing.
New integer types: SystemVerilog defines byte, shortint, int and longint as two-state signed integral types having 8, 16, 32, and 64 bits respectively. A bit type is a variable-width two-state type that works much like logic. Two-state types lack the X and Z metavalues of classical Verilog; working with these types may result in faster simulation.
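A brief sketch of the two-state types and how four-state values convert into them:

```systemverilog
byte     b = 8'sd127;   // 8-bit signed two-state
shortint s = -16'sd1;   // 16-bit signed
int      i = 0;         // 32-bit signed
longint  l = 64'd0;     // 64-bit signed

logic [3:0] four_state = 4'b10xz;
bit   [3:0] two_state;

initial begin
  two_state = four_state; // X and Z become 0: result is 4'b1000
end
```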
Structures and unions work much like they do in the C programming language. SystemVerilog enhancements include the packed attribute and the tagged attribute. The tagged attribute allows runtime tracking of which member(s) of a union are currently in use. The packed attribute causes the structure or union to be mapped 1:1 onto a packed array of bits. The contents of struct data types occupy a continuous block of memory with no gaps, similar to bitfields in C and C++:
typedef struct packed {
bit [10:0] expo;
bit sign;
bit [51:0] mant;
} FP;
FP zero = 64'b0;
As shown in this example, SystemVerilog also supports typedefs, as in C and C++.
Procedural blocks
SystemVerilog introduces three new procedural blocks intended to model hardware: always_comb (to model combinational logic), always_ff (for flip-flops), and always_latch (for latches). Whereas Verilog used a single, general-purpose always block to model different types of hardware structures, each of SystemVerilog's new blocks is intended to model a specific type of hardware, by imposing semantic restrictions to ensure that hardware described by the blocks matches the intended usage of the model. An HDL compiler or verification program can take extra steps to ensure that only the intended type of behavior occurs.
An always_comb block models combinational logic. The simulator infers the sensitivity list to be all variables from the contained statements:
always_comb begin
tmp = b * b - 4 * a * c;
no_root = (tmp < 0);
end
An always_latch block models level-sensitive latches. Again, the sensitivity list is inferred from the code:
always_latch
if (en) q <= d;
An always_ff block models synchronous logic (especially edge-sensitive sequential logic):
always_ff @(posedge clk)
count <= count + 1;
Electronic design automation (EDA) tools can verify the design's intent by checking that the hardware model does not violate any block usage semantics. For example, the new blocks restrict assignment to a variable by allowing only one source, whereas Verilog's always block permitted assignment from multiple procedural sources.
Interfaces
For small designs, the Verilog port compactly describes a module's connectivity with the surrounding environment. But major blocks within a large design hierarchy typically possess port counts in the thousands. SystemVerilog introduces the concept of interfaces both to reduce the redundancy of port-name declarations between connected modules and to group and abstract related signals into a user-declared bundle. A related concept is the modport, which indicates the direction of signal connections.
Example:
interface intf;
logic a;
logic b;
modport in (input a, output b);
modport out (input b, output a);
endinterface
module top;
intf i ();
u_a m1 (.i1(i.in));
u_b m2 (.i2(i.out));
endmodule
module u_a (intf.in i1);
endmodule
module u_b (intf.out i2);
endmodule
Verification features
The following verification features are typically not synthesizable, meaning they cannot be implemented in hardware based on HDL code. Instead, they assist in the creation of extensible, flexible test benches.
New data types
The string data type represents a variable-length text string. For example:
string s1 = "Hello";
string s2 = "world";
string p = ".?!";
string s3 = {s1, ", ", s2, p[2]}; // string concatenation
$display("[%d] %s", s3.len(), s3); // simulation will print: "[13] Hello, world!"
In addition to the static array used in design, SystemVerilog offers dynamic arrays, associative arrays and queues:
int cmdline_elements; // # elements for dynamic array
int da[]; // dynamic array
int ai[int]; // associative array, indexed by int
int as[string]; // associative array, indexed by string
int qa[$]; // queue, indexed as an array, or by built-in methods
initial begin
cmdline_elements = 16;
da = new[ cmdline_elements ]; // Allocate array with 16 elements
end
A dynamic array works much like an unpacked array, but offers the advantage of being dynamically allocated at runtime (as shown above.) Whereas a packed array's size must be known at compile time (from a constant or expression of constants), the dynamic array size can be initialized from another runtime variable, allowing the array to be sized and resized arbitrarily as needed.
An associative array can be thought of as a binary search tree with a user-specified key type and data type. The key implies an ordering; the elements of an associative array can be read out in lexicographic order. Finally, a queue provides much of the functionality of the C++ STL deque type: elements can be added and removed from either end efficiently. These primitives allow the creation of complex data structures required for scoreboarding a large design.
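Typical built-in methods on these types can be sketched as follows (method names are those defined by the language standard):

```systemverilog
int q[$];        // queue of int
int aa[string];  // associative array keyed by string

initial begin
  q.push_back(1);          // the queue grows at the back
  q.push_front(0);         // ... or at the front
  void'(q.pop_back());     // elements are removed from either end
  aa["alice"] = 30;        // associative array insert
  if (aa.exists("alice"))  // key lookup
    aa.delete("alice");    // remove an entry
end
```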
Classes
SystemVerilog provides an object-oriented programming model.
In SystemVerilog, classes support a single-inheritance model, but may implement functionality similar to multiple-inheritance through the use of so-called "interface classes" (identical in concept to the interface feature of Java). Classes can be parameterized by type, providing the basic function of C++ templates. However, template specialization and function templates are not supported.
SystemVerilog's polymorphism features are similar to those of C++: the programmer may specifically write a virtual function to have a derived class gain control of the function. See virtual function for further info.
Encapsulation and data hiding is accomplished using the local and protected keywords, which must be applied to any item that is to be hidden. By default, all class properties are public.
Class instances are dynamically created with the new keyword. A constructor denoted by function new can be defined. SystemVerilog has automatic garbage collection, so there is no language facility to explicitly destroy instances created by the new operator.
Example:
virtual class Memory;
virtual function bit [31:0] read(bit [31:0] addr); endfunction
virtual function void write(bit [31:0] addr, bit [31:0] data); endfunction
endclass
class SRAM #(parameter AWIDTH=10) extends Memory;
bit [31:0] mem [1<<AWIDTH];
virtual function bit [31:0] read(bit [31:0] addr);
return mem[addr];
endfunction
virtual function void write(bit [31:0] addr, bit [31:0] data);
mem[addr] = data;
endfunction
endclass
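A constructor, a hidden (local) member, and dynamic creation with new can be sketched as (the class is illustrative):

```systemverilog
class Counter;
  local int count;             // hidden from code outside the class

  function new(int start = 0); // constructor, invoked by new()
    count = start;
  endfunction

  function void bump();
    count++;
  endfunction

  function int get();
    return count;
  endfunction
endclass

Counter c;
initial begin
  c = new(10);        // dynamic creation; garbage-collected when unreferenced
  c.bump();
  $display(c.get());  // prints 11
end
```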
Constrained random generation
Integer quantities, defined either in a class definition or as stand-alone variables in some lexical scope, can be assigned random values based on a set of constraints. This feature is useful for creating randomized scenarios for verification.
Within class definitions, the rand and randc modifiers signal variables that are to undergo randomization. randc specifies permutation-based randomization, where a variable will take on all possible values once before any value is repeated. Variables without modifiers are not randomized.
class eth_frame;
rand bit [47:0] dest;
rand bit [47:0] src;
rand bit [15:0] f_type;
rand byte payload[];
bit [31:0] fcs;
rand bit [31:0] fcs_corrupt;
constraint basic {
payload.size inside {[46:1500]};
}
constraint good_fr {
fcs_corrupt == 0;
}
endclass
In this example, the fcs field is not randomized; in practice it will be computed with a CRC generator, and the fcs_corrupt field used to corrupt it to inject FCS errors. The two constraints shown are applicable to conforming Ethernet frames. Constraints may be selectively enabled; this feature would be required in the example above to generate corrupt frames. Constraints may be arbitrarily complex, involving interrelationships among variables, implications, and iteration. The SystemVerilog constraint solver is required to find a solution if one exists, but makes no guarantees as to the time it will require to do so as this is in general an NP-hard problem (boolean satisfiability).
Randomization methods
In each SystemVerilog class there are 3 predefined methods for randomization: pre_randomize, randomize and post_randomize. The randomize method is called by the user for randomization of the class variables. The pre_randomize method is called by the randomize method before the randomization and the post_randomize method is called by the randomize method after randomization.
class eth_frame;
rand bit [47:0] dest;
rand bit [47:0] src;
rand bit [15:0] f_type;
rand byte payload[];
bit [31:0] fcs;
rand bit corrupted_frame;
constraint basic {
payload.size inside {[46:1500]};
}
function void post_randomize();
this.calculate_fcs(); // update the fcs field according to the randomized frame
if (corrupted_frame) // if this frame should be corrupted
this.corrupt_fcs(); // corrupt the fcs
endfunction
endclass
Controlling constraints
The constraint_mode() and random_mode() methods control randomization: constraint_mode() turns a specific constraint on or off, and random_mode() turns randomization of a specific variable on or off. The code below describes an Ethernet frame and procedurally controls its randomization:
class eth_frame;
rand bit [47:0] dest;
rand bit [47:0] src;
rand bit [15:0] f_type;
rand byte payload[];
bit [31:0] fcs;
rand bit corrupted_frame;
constraint basic {
payload.size inside {[46:1500]};
}
constraint one_src_cst {
src == 48'h1f00;
}
constraint dist_to_fcs {
fcs dist {0:/30,[1:2500]:/50}; // 30 and 50 are the weights (30/80 and 50/80, in this example)
}
endclass
.
.
.
eth_frame my_frame;
my_frame.one_src_cst.constraint_mode(0); // the constraint one_src_cst will not be taken into account
my_frame.f_type.random_mode(0); // the f_type variable will not be randomized for this frame instance.
my_frame.randomize();
Assertions
Assertions are useful for verifying properties of a design that manifest themselves after a specific condition or state is reached. SystemVerilog has its own assertion specification language, similar to Property Specification Language. The subset of SystemVerilog language constructs that serves assertion is commonly called SystemVerilog Assertion or SVA.
SystemVerilog assertions are built from sequences and properties. Properties are a superset of sequences; any sequence may be used as if it were a property, although this is not typically useful.
Sequences consist of boolean expressions augmented with temporal operators. The simplest temporal operator is the ## operator which performs a concatenation:
sequence S1;
@(posedge clk) req ##1 gnt;
endsequence
This sequence matches if the gnt signal goes high one clock cycle after req goes high. Note that all sequence operations are synchronous to a clock.
Other sequential operators include repetition operators, as well as various conjunctions. These operators allow the designer to express complex relationships among design components.
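Two of these operators can be sketched as follows (signal names are illustrative):

```systemverilog
sequence S2;
  @(posedge clk) req ##[1:3] gnt;    // gnt 1 to 3 cycles after req
endsequence

sequence S3;
  @(posedge clk) busy [*2] ##1 done; // busy on 2 consecutive cycles, then done
endsequence
```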
An assertion works by continually attempting to evaluate a sequence or property. An assertion fails if the property fails. The sequence above will fail whenever req is low. To accurately express the requirement that gnt follow req a property is required:
property req_gnt;
@(posedge clk) req |=> gnt;
endproperty
assert_req_gnt: assert property (req_gnt) else $error("req not followed by gnt.");
This example shows an implication operator |=>. The clause to the left of the implication is called the antecedent and the clause to the right is called the consequent. Evaluation of an implication starts through repeated attempts to evaluate the antecedent. When the antecedent succeeds, the consequent is attempted, and the success of the assertion depends on the success of the consequent. In this example, the consequent won't be attempted until req goes high, after which the property will fail if gnt is not high on the following clock.
In addition to assertions, SystemVerilog supports assumptions and coverage of properties. An assumption establishes a condition that a formal logic proving tool must assume to be true. An assertion specifies a property that must be proven true. In simulation, both assertions and assumptions are verified against test stimuli. Property coverage allows the verification engineer to verify that assertions are accurately monitoring the design.
Coverage
Coverage as applied to hardware verification languages refers to the collection of statistics based on sampling events within the simulation. Coverage is used to determine when the device under test (DUT) has been exposed to a sufficient variety of stimuli that there is a high confidence that the DUT is functioning correctly. Note that this differs from code coverage which instruments the design code to ensure that all lines of code in the design have been executed. Functional coverage ensures that all desired corner and edge cases in the design space have been explored.
A SystemVerilog coverage group creates a database of "bins" that store a histogram of values of an associated variable. Cross-coverage can also be defined, which creates a histogram representing the Cartesian product of multiple variables.
A sampling event controls when a sample is taken. The sampling event can be a Verilog event, the entry or exit of a block of code, or a call to the sample method of the coverage group. Care is required to ensure that data are sampled only when meaningful.
For example:
class eth_frame;
// Definitions as above
covergroup cov;
coverpoint dest {
bins bcast[1] = {48'hFFFFFFFFFFFF};
bins ucast[1] = default;
}
coverpoint f_type {
bins length[16] = { [0:1535] };
bins typed[16] = { [1536:32767] };
bins other[1] = default;
}
psize: coverpoint payload.size {
bins size[] = { 46, [47:63], 64, [65:511], [512:1023], [1024:1499], 1500 };
}
sz_x_t: cross f_type, psize;
endgroup
endclass
In this example, the verification engineer is interested in the distribution of broadcast and unicast frames, the size/f_type field and the payload size. The ranges in the payload size coverpoint reflect the interesting corner cases, including minimum and maximum size frames.
Synchronization
A complex test environment consists of reusable verification components that must communicate with one another. Verilog's 'event' primitive allowed different blocks of procedural statements to trigger each other, but enforcing thread synchronization was up to the programmer's (clever) usage. SystemVerilog offers two primitives specifically for interthread synchronization: mailbox and semaphore. The mailbox is modeled as a FIFO message queue. Optionally, the FIFO can be type-parameterized so that only objects of the specified type may be passed through it. Typically, objects are class instances representing transactions: elementary operations (for example, sending a frame) that are executed by the verification components. The semaphore is modeled as a counting semaphore.
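A hedged sketch of both primitives, reusing the eth_frame class from earlier (the producer/consumer tasks are illustrative):

```systemverilog
mailbox #(eth_frame) mbx = new(); // type-parameterized FIFO, unbounded here
semaphore sem = new(1);           // one key: acts as a mutex

task producer();
  eth_frame f = new();
  void'(f.randomize());
  mbx.put(f);        // blocks only if the mailbox is bounded and full
endtask

task consumer();
  eth_frame f;
  mbx.get(f);        // blocks until a frame is available
  sem.get(1);        // acquire the key before touching shared state
  // ... scoreboard update ...
  sem.put(1);        // release the key
endtask
```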
General improvements to classical Verilog
In addition to the new features above, SystemVerilog enhances the usability of Verilog's existing language features. The following are some of these enhancements:
The procedural assignment operators (<=, =) can now operate directly on arrays.
Port (inout, input, output) definitions are now expanded to support a wider variety of data types: struct, enum, real, and multi-dimensional types are supported.
The for loop construct now allows automatic variable declaration inside the for statement. Loop flow control is improved by the continue and break statements.
SystemVerilog adds a do/while loop to the while loop construct.
Constant variables, i.e. those designated as non-changing during runtime, can be designated by use of const.
Variable initialization can now operate on arrays.
Increment and decrement operators (x++, ++x, x--, --x) are supported in SystemVerilog, as are other compound assignment operators (x += a, x -= a, x *= a, x /= a, x %= a, x <<= a, x >>= a, x &= a, x ^= a, x |= a) as in C and descendants.
The preprocessor has improved `define macro-substitution capabilities, specifically substitution within literal-strings (""), as well as concatenation of multiple macro-tokens into a single word.
The fork/join construct has been expanded with join_none and join_any.
Additions to the `timescale directive allow simulation timescale to be controlled more predictably in a large simulation environment, with each source file using a local timescale.
Task ports can now be declared ref. A reference gives the task body direct access to the source arguments in the caller's scope, known as "pass by reference" in computer programming. Since it is operating on the original variable itself, rather than a copy of the argument's value, the task/function can modify variables (but not nets) in the caller's scope in real time. The inout/output port declarations pass variables by value, and defer updating the caller-scope variable until the moment the task exits.
Functions can now be declared void, which means it returns no value.
Parameters can be declared any type, including user-defined typedefs.
Besides this, SystemVerilog allows convenient interface to foreign languages (like C/C++), by SystemVerilog DPI (Direct Programming Interface).
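Several of the enhancements listed above can be illustrated in one short fragment (the names here are hypothetical, for illustration only):

```systemverilog
// Sketch of: const, pass-by-reference, in-loop variable declaration,
// continue, compound assignment, and do/while.
module example;
  const int MAX = 8;                          // constant variable

  function automatic void double_up(ref int v); // "ref" argument:
    v *= 2;                                     // modifies the caller's variable
  endfunction

  initial begin
    int total = 0;
    for (int i = 0; i < MAX; i++) begin  // loop variable declared inside "for"
      if (i == 3) continue;              // improved loop flow control
      total += i;                        // compound assignment operator
    end
    do
      total--;
    while (total > MAX);                 // do/while loop added by SystemVerilog
    double_up(total);                    // updates total in this scope
  end
endmodule
```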
Verification and synthesis software
In the design verification role, SystemVerilog is widely used in the chip-design industry. The three largest EDA vendors (Cadence Design Systems, Mentor Graphics, Synopsys) have incorporated SystemVerilog into their mixed-language HDL simulators. Although no simulator can yet claim support for the entire SystemVerilog standard, making testbench interoperability a challenge, efforts to promote cross-vendor compatibility are underway. In 2008, Cadence and Mentor released the Open Verification Methodology, an open-source class library and usage framework to facilitate the development of reusable testbenches and canned verification IP. Synopsys, which had been the first to publish a SystemVerilog class library (VMM), subsequently responded by opening its proprietary VMM to the general public. Many third-party providers have announced or already released SystemVerilog verification IP.
In the design synthesis role (transformation of a hardware-design description into a gate-netlist), SystemVerilog adoption has been slow. Many design teams use design flows which involve multiple tools from different vendors. Most design teams cannot migrate to SystemVerilog RTL-design until their entire front-end tool suite (linters, formal verification and automated test structure generators) support a common language subset.
See also
List of SystemVerilog Simulators (Search for SV2005)
Verilog-AMS
e (verification language)
SpecC
Accellera
SystemC
SystemRDL
References
Spear, Chris. SystemVerilog for Verification. Springer, New York, NY.
Sutherland, Stuart; Davidmann, Simon; Flake, Peter. SystemVerilog for Design, Second Edition: A Guide to Using SystemVerilog for Hardware Design and Modeling. Springer, New York, NY.
Cohen, Ben; Venkataramanan, Srinivasan; Kumari, Ajeetha; Piper, Lisa. SystemVerilog Assertions Handbook, 4th Edition, 2016. http://SystemVerilog.us
Cohen, Ben; Venkataramanan, Srinivasan; Kumari, Ajeetha. A Pragmatic Approach to VMM Adoption. http://SystemVerilog.us
Seligman, Erik; Schubert, Tom. Formal Verification: An Essential Toolkit for Modern VLSI Design. July 24, 2015.
External links
IEEE Standard Reference
The most recent SystemVerilog standard documents are available at no cost from IEEE Xplore.
1800-2017 - IEEE Standard for SystemVerilog--Unified Hardware Design, Specification, and Verification Language
Tutorials
SystemVerilog Tutorial
SystemVerilog Tutorial for Beginners
Standards Development
IEEE P1800 – Working group for SystemVerilog
Sites used before IEEE 1800-2005
SystemVerilog official website
SystemVerilog Technical Committees
Language Extensions
Verilog AUTOs – An open source meta-comment system to simplify maintaining Verilog code
Online Tools
EDA Playground – Run SystemVerilog from a web browser (free online IDE)
SVeN – A SystemVerilog BNF Navigator (current to IEEE 1800-2012)
Other Tools
SVUnit – unit test framework for developers writing code in SystemVerilog. Verify SystemVerilog modules, classes and interfaces in isolation.
sv2v - open-source converter from SystemVerilog to Verilog
Hardware description languages
Hardware verification languages
System description languages
Programming languages created in 2002
Old World ROM
Old World ROM computers are the Macintosh (Mac) models that use a Macintosh Toolbox read-only memory (ROM) chip, usually in a socket (but soldered to the motherboard in some models). All Macs prior to the iMac, the iBook, the Blue and White Power Mac G3, and the Bronze Keyboard (Lombard) PowerBook G3 use Old World ROM, while said models, as well as all subsequent models until the introduction of the Intel-based EFI models, are New World ROM machines. In particular, the Beige Power Mac G3 and all other beige and platinum-colored Power Macs are Old World ROM machines. In common use, the "Old World" designation usually applies to the early generations of PCI-based "beige" Power Macs (and sometimes the first NuBus-equipped models), but not the older Motorola 68000-based Macs; however, the Toolbox runs the same way on all three types of machines.
Details
PCI Power Macs with an Old World ROM contain an Open Firmware implementation, and a copy of the Macintosh Toolbox as an Open Firmware device. These machines are set to boot from this device by default, thus starting the normal Macintosh startup procedure. This can be changed, just as on New World ROM Macs, but with limitations placed on what devices and formats can be used; on these machines, particularly the early machines like the Power Macintosh 9500, the Open Firmware implementation was just enough to enumerate PCI devices and load the Toolbox ROM, and these Open Firmware revisions have several bugs which must be worked around by boot loaders or nvramrc patches. The Open Firmware environment can be entered by holding the Command-Option-O-F key combination while booting.
All Power Macs emulate a 68LC040 CPU inside a nanokernel; this emulator is then used to boot the predominantly 68k-based Toolbox, and is also used to support applications written for the 68k processor. Once Toolbox is running, PPC machines can boot into Mac OS directly.
On all Old World ROM machines, once Toolbox is loaded, the boot procedure is the same. Toolbox executes a memory test, enumerates Mac OS devices it knows about (this varies from model to model), and either starts the on-board video (if present) or the option ROM on a NuBus or PCI video card. Toolbox then checks for a disk in the floppy drive, and scans all SCSI buses for a disk with a valid System Folder, giving preference to whatever disk is set as the startup disk in the parameter RAM.
If a bootable disk is found, the Happy Mac logo is displayed, and control is handed over to the Mac OS. If no disk to boot from is present, an icon depicting a floppy disk with a blinking question mark in the middle will be displayed. If a hardware problem occurs during the early part of the boot process, the machine will display the Sad Mac icon with a hexadecimal error code and freeze; on Macs made after 1987, this will be accompanied by the Chimes of Death sound.
Since the Old World ROM usually boots to Toolbox, most OSs have to be installed using a boot loader from inside Mac OS (BootX is commonly used for Linux installations). 68K-based Macs and NuBus Power Macs must have Mac OS installed to load another OS (even A/UX, which was an Apple product), usually with virtual memory turned off. PCI Power Macs can be configured to boot into Open Firmware, allowing the firmware to load a boot loader directly, or they can use a specially-prepared floppy disk to trick the Toolbox into loading a kernel (this is used for Linux installation floppy images).
The simplest way to identify an Old World ROM Mac is that it will not have a factory built-in USB port. Only New World ROM Macs featured a USB port as factory equipment.
See also
BootX (Linux), the standard LinuxPPC boot loader for Old World machines
Quik (boot loader), a replacement boot loader for Old World PCI systems
External links
Macintosh operating systems
Macintosh firmware
OCZ
OCZ was a brand of Toshiba that was used for some of its solid-state drives (SSDs) before they were rebranded as Toshiba. OCZ Storage Solutions was a manufacturer of SSDs based in San Jose, California, USA, and was the new company formed after the sale of OCZ Technology Group's SSD assets to Toshiba Corporation. Since entering the memory market as OCZ Technology in 2002, the company targeted its products primarily at the computer hardware enthusiast market, producing performance DDR RAM, video cards, USB drives, power supplies, and various cooling products. OCZ-branded SSD devices using SATA III, PCI Express, Serial Attached SCSI, and USB 3.0 interfaces were produced for both client and enterprise applications. OCZ Storage Solutions was dissolved on April 1, 2016 and absorbed into Toshiba America Electronic Components, Inc., which later became Kioxia.
History
OCZ was originally called "The Overclockerz Store" when it was founded by Ryan Peterson in 2000, selling products such as overclocked Athlon processors. San Jose, California-based OCZ Technology Group, Inc. was founded in 2002.
OCZ maintained satellite offices in The Netherlands, United Kingdom, and Israel along with manufacturing and logistics facilities in Taiwan. In June 2006, OCZ went public on the London Stock Exchange Alternative Investment Market (LSE AIM), with the ticker symbol "OCZ".
On May 25, 2007, OCZ acquired PC Power & Cooling, whose products include power supplies. PC Power & Cooling is in Carlsbad, California. It operated as a separate satellite office for OCZ and maintained its own product lines, although OCZ later launched OCZ-branded power supply models as well.
In early March 2009, OCZ announced their intent to delist from the LSE to pursue a listing on an American stock exchange. On April 24, 2010, OCZ announced a listing on NASDAQ with the ticker symbol "OCZ".
In September 2010, OCZ announced the RevoDrive, a bootable PCI Express drive for the enthusiast market. It also announced an SSD interface called High Speed Data Link (HSDL), a PCIe/SAS hybrid interface, along with corresponding products to implement it.
As of 2012, OCZ's SSDs offered up to a 1 TB capacity.
In November 2010, OCZ acquired intellectual property from Solid Data Inc., for Fibre Channel, SAS, and controller assets for solid state drives. The cost was approximately $950,000, paid with restricted common stock and cash.
OCZ discontinued all RAM production, citing poor market performance and the weakening global DRAM market, by the end of their 2010 fiscal year on February 28, 2011.
In March 2011, OCZ acquired Indilinx Company, Limited, a privately held fabless provider of flash controller silicon and software for SSDs, for approximately $32 million of OCZ common stock.
On October 5, 2011, OCZ announced an intent to acquire PLX Technology's Abingdon R&D department (formerly Oxford Semiconductor), which specializes in storage SoC development.
In 2012 OCZ acquired Sanrad Inc., a privately held provider of flash caching and virtualization software and hardware from the RAD Group; Sanrad became the OCZ Israel office.
Reports in 2012 indicated that a possible acquisition of OCZ by Seagate Technology fell through.
On September 17, 2012, founder and CEO Ryan Petersen was fired by his board of directors, and chief marketing officer Alex Mei was appointed as interim CEO. Media outlets speculated that Ryan was ousted by the board of directors.
On October 10, 2012, OCZ appointed board member Ralph Schmitt as the company's president and CEO. Schmitt joined OCZ from PLX, where he served as president and CEO since 2008.
Accounting practices
Several shareholder lawsuits revolved around questionable accounting practices during 2012. In May 2013 the NASDAQ gave OCZ until September 16, 2013 to file its delayed earnings. The company was several quarters late in filing and restated earnings all the way back to 2008.
On September 12, 2013, the company disclosed it would not meet the deadline, which was then extended to October 7.
2013 saw OCZ's revenue fall steeply from $88.6 million in the second fiscal quarter of 2012 to $33.5 million in the second fiscal quarter of 2013, while financial losses increased. OCZ took a $30 million loan at a steep 15% interest rate from Hercules Technology Growth Capital. Because OCZ put in their own firm as collateral for the loan, Hercules Technology Growth Capital would gain ownership of OCZ if OCZ failed to repay the loan.
After failing to meet the term of the loan, it was extended to June 2014, with the share price dropping 40% on November 4, 2013. On November 25, 2013, Hercules took control of OCZ's bank accounts because it was not in compliance with the conditions of the loan.
The Securities and Exchange Commission charged former CEO Ryan Petersen and former CFO Arthur Knapp over accounting failures. In a complaint filed in the Northern District of California, the SEC alleged that OCZ's former CEO Ryan Petersen engaged in a scheme to materially inflate OCZ's revenues and gross margins from 2010 to 2012. It separately charged OCZ's former chief financial officer Arthur Knapp for certain accounting, disclosure, and internal accounting controls failures at OCZ. Knapp agreed to settle the SEC's charges without admitting the allegations against him. As of 2015, the SEC's litigation continued against Petersen; it finally came to a close in 2017.
Toshiba acquisition
On November 27, 2013, OCZ Technology stock was halted. OCZ then stated they expected to file a petition for bankruptcy and that Toshiba Corporation had expressed interest in purchasing its assets in a bankruptcy proceeding. On December 2, 2013, OCZ announced Toshiba had agreed to purchase nearly all of OCZ's assets for $35 million. The deal was completed on January 21, 2014 when the assets of OCZ Technology Group became a new independently operated subsidiary of Toshiba named OCZ Storage Solutions. OCZ Technology Group then changed its name to ZCO Liquidating Corporation; on August 18, 2014, ZCO Liquidating Corporation and its subsidiaries were liquidated.
In February 2014, the PC Power & Cooling subsidiary was sold to FirePower Technology.
OCZ Storage Solutions was dissolved on April 1, 2016 and absorbed into Toshiba America Electronic Components, Inc., with OCZ becoming a brand of Toshiba. Toshiba later reorganized its memory products division under a new company and brand, Kioxia.
Reliability history
OCZ-branded SSDs were notable for high failure rates. Two lines from the old OCZ Technology Group, the Petrol and the SATA II versions of the Octane, had return rates of over 40% at one anonymous French technology online retailer. The SATA II version of the 128 GB Octane had a return rate of 52.07%. No other company had any SSDs with a return rate of over 5% from this retailer in the data set that was published on March 5, 2013. In a dataset published on April 30, 2014, OCZ again had the highest return rates from the same anonymous retailer. The next dataset, published on November 6, 2014, shows only one OCZ drive, the OCZ Agility 3 480 GB, which had a much lower return rate of 1.34%.
KitGuru tested five samples of the OCZ ARC100 240GB SSD, which was released after Toshiba acquired OCZ's assets, over four months in 2014 and 2015. The drives were deliberately tested to destruction; the lowest lifetime throughput was 350TB and the highest 700TB. The drive's warranty covers 20GB of data transfer a day for three years, a total of 22TB.
References
External links
2014 establishments in California
Accounting scandals
Companies based in Sunnyvale, California
Companies formerly listed on the Nasdaq
Companies that filed for Chapter 11 bankruptcy in 2013
Computer memory companies
Computer power supply unit manufacturers
Manufacturing companies disestablished in 2014
Manufacturing companies disestablished in 2016
Manufacturing companies established in 2002
Scandals in California
Technology companies disestablished in 2014
Technology companies disestablished in 2016
Technology companies established in 2002
Electronics companies established in 2014
Toshiba brands
Windows Subsystem for Linux
Windows Subsystem for Linux (WSL) is a compatibility layer for running Linux binary executables (in ELF format) natively on Windows 10, Windows 11, and Windows Server 2019.
In May 2019, WSL 2 was announced, introducing important changes such as a real Linux kernel, through a subset of Hyper-V features. Since June 2019, WSL 2 has been available to Windows 10 customers through the Windows Insider program, including the Home edition. WSL is not enabled for all Windows 10 users by default; it can be installed either by joining the Windows Insider program or through a manual installation.
History
Microsoft's first foray into achieving Unix-like compatibility on Windows began with the Microsoft POSIX Subsystem, superseded by Windows Services for UNIX via MKS/Interix, which was eventually deprecated with the release of Windows 8.1. The technology behind Windows Subsystem for Linux originated in the unreleased Project Astoria, which enabled some Android applications to run on Windows 10 Mobile. It was first made available in Windows 10 Insider Preview build 14316.
Whereas Microsoft's previous projects and the third-party Cygwin had focused on creating their own unique Unix-like environments based on the POSIX standard, WSL aims for native Linux compatibility. Instead of wrapping non-native functionality into Win32 system calls as Cygwin did, WSL's initial design (WSL 1) leveraged the NT kernel executive to serve Linux programs as special, isolated minimal processes (known as "pico processes") attached to kernel mode "pico providers" as dedicated system call and exception handlers distinct from that of a vanilla NT process, opting to reutilize existing NT implementations wherever possible.
WSL beta was introduced in Windows 10 version 1607 (Anniversary Update) on August 2, 2016. Only Ubuntu (with Bash as the default shell) was supported. WSL beta was also called "Bash on Ubuntu on Windows" or "Bash on Windows". WSL was no longer beta in Windows 10 version 1709 (Fall Creators Update), released on October 17, 2017. Multiple Linux distributions could be installed and were available for install in the Windows Store.
In 2017 Richard Stallman expressed fears that integrating Linux functionality into Windows will only hinder the development of free software, calling efforts like WSL "a step backward in the campaign for freedom."
Though WSL (via this initial design) was much faster and arguably much more popular than its brethren UNIX-on-Windows projects, Windows kernel engineers found difficulty in trying to increase WSL's performance and syscall compatibility by trying to reshape the existing NT kernel to recognize and operate correctly on Linux's API. At a Microsoft Ignite conference in 2018, Microsoft engineers gave a high-level overview of a new "lightweight" Hyper-V VM technology for containerization where a virtualized kernel could make direct use of NT primitives on the host. In 2019, Microsoft announced a completely redesigned WSL architecture (WSL 2) using this lightweight VM technology hosting actual (customized) Linux kernel images, claiming full syscall compatibility. Microsoft announced WSL 2 on May 6, 2019, and it was shipped with Windows 10 version 2004. It was also backported to Windows 10 version 1903 and 1909.
GPU support for WSL 2 was introduced in Windows build 20150. GUI support for WSL 2 was introduced in Windows build 21364. Both of them are shipped in Windows 11.
In April 2021, Microsoft released a Windows 10 test build that also includes the ability to run Linux graphical user interface (GUI) apps using WSL 2 and CBL-Mariner. The Windows Subsystem for Linux GUI (WSLg) was officially released at the Microsoft Build 2021 conference. It is included in Windows 10 Insider build 21364 or later.
Microsoft introduced a Windows Store version of WSL on October 11, 2021 for Windows 11.
Features
WSL is available in Windows Server 2019 and in versions of Windows 10 from version 1607, though only in 64-bit versions.
Microsoft envisages WSL as "primarily a tool for developers – especially web developers and those who work on or with open source projects". In September 2018, Microsoft said that "WSL requires fewer resources (CPU, memory, and storage) than a full virtual machine" (which prior to WSL was the most direct way to run Linux software in a Windows environment), while also allowing users to use Windows apps and Linux tools on the same set of files.
The first release of WSL provides a Linux-compatible kernel interface developed by Microsoft, containing no Linux kernel code, which can then run the user space of a Linux distribution on top of it, such as Ubuntu, openSUSE, SUSE Linux Enterprise Server, Debian and Kali Linux. Such a user space might contain a GNU Bash shell and command language, with native GNU command-line tools (sed, awk, etc.), programming-language interpreters (Ruby, Python, etc.), and even graphical applications (using an X11 server at the host side).
The architecture was redesigned in WSL 2, with a Linux kernel running in a lightweight virtual machine environment.
wsl.exe
The wsl.exe command is used to manage distributions in the Windows Subsystem for Linux on the command line. It can list available distributions, set a default distribution, and uninstall distributions. The command can also be used to run Linux binaries from the Windows Command Prompt or Windows PowerShell. wsl.exe replaces lxrun.exe, which is deprecated as of Windows 10 1803 and later.
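For example, typical invocations from a Windows command prompt look like the following (the distribution names are illustrative):

```
wsl --list --verbose        Lists installed distributions, their state, and WSL version
wsl --set-default Ubuntu    Makes Ubuntu the default distribution
wsl --unregister Debian     Uninstalls (unregisters) a distribution
wsl ls -la                  Runs a Linux command in the default distribution
```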
WSLg
WSLg is short for Windows Subsystem for Linux GUI, built with the purpose of enabling support for running Linux GUI applications (X11 and Wayland) on Windows in a fully integrated desktop experience. WSLg was officially released at the Microsoft Build 2021 conference. It is included in Windows 10 Insider build 21364 or later. With the introduction of Windows 11, WSLg finally ships with a production build of Windows, bringing support for both graphics and audio in WSL apps.
Prerequisites for running WSLg include:
Windows 11 (build 22000.*) or Windows 11 Insider Preview (builds 21362+)
A system with virtual GPU (vGPU) enabled for WSL is recommended, as it allows hardware-accelerated OpenGL rendering
Design
WSL 1
LXSS Manager Service is the service in charge of interacting with the subsystem (through the kernel-mode drivers lxss.sys and lxcore.sys), and the way that Bash.exe (not to be confused with the shells provided by the Linux distributions) launches the Linux processes, as well as handling the Linux system calls and the binary locks during their execution. All Linux processes invoked by a particular user go into a "Linux Instance" (usually, the first invoked process is init). Once all the applications are closed, the instance is closed.
WSL 1's design featured no hardware emulation / virtualization (unlike other projects such as coLinux) and makes direct use of the host file system (through VolFs and DrvFs) and some parts of the hardware, such as the network, which guarantees interoperability. Web servers, for example, can be accessed through the same interfaces and IP addresses configured on the host, and share the same restrictions on the use of ports that require administrative permissions, or ports already occupied by other applications. There are certain locations (such as system folders) and configurations whose access/modification is restricted, even when running as root, with sudo from the shell. An instance with elevated privileges must be launched in order to get "sudo" to give real root privileges, and allow such access.
WSL 1 is not capable of running all Linux software, such as 32-bit binaries, or those that require specific Linux kernel services not implemented in WSL. Due to a lack of any "real" Linux kernel in WSL 1, kernel modules, such as device drivers, cannot be run. WSL 2, however, makes use of live virtualized Linux kernel instances. It is possible to run some graphical (GUI) applications (such as Mozilla Firefox) by installing an X11 server within the Windows (host) environment (such as VcXsrv or Xming), although not without caveats, such as the lack of audio support (though this can be remedied by installing PulseAudio in Windows in a similar manner to X11) or hardware acceleration (resulting in poor graphics performance). Support for OpenCL and CUDA is also not being implemented currently, although planned for future releases. Microsoft stated WSL was designed for the development of applications, and not for desktop computers or production servers, recommending the use of virtual machines (Hyper-V), Kubernetes, and Azure for those purposes.
In benchmarks WSL 1's performance is often near native Linux Ubuntu, Debian, Intel Clear Linux or other Linux distributions. I/O is in some tests a bottleneck for WSL. The redesigned WSL 2 backend is claimed by Microsoft to offer twenty-fold increases in speed on certain operations compared to that of WSL 1. In June 2020, a benchmark with 173 tests with an AMD Threadripper 3970x shows good performance with WSL 2 (20H2) with 87% of the performance of native Ubuntu 20.04.0 LTS. This is an improvement over WSL 1, which has only 70% of the performance of native Ubuntu in this comparison. WSL 2 improves I/O performance, providing a near-native level.
A comparison of 69 tests with an Intel i9 10900K in May 2020 shows nearly the same relative performance. In December 2020, a benchmark with 43 tests with an AMD Ryzen 5900X shows good performance with WSL 2 (20H2), at 93% of the performance of native Ubuntu 20.04.1 LTS. This is an improvement over WSL 1, which reached only 73% in this comparison.
WSL 2
Version 2 introduces changes in the architecture. Microsoft has opted for virtualization through a highly optimized subset of Hyper-V features, in order to run the kernel and distributions (based upon the kernel), promising performance equivalent to WSL 1. For backward compatibility, developers don't need to change anything in their published distributions. WSL 2 settings can be tweaked by the WSL global configuration, contained in an INI file named .wslconfig in the User Profile folder.
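A minimal .wslconfig sketch might look like the following (the values shown are illustrative, not defaults):

```ini
# %UserProfile%\.wslconfig - global settings for all WSL 2 distributions
[wsl2]
memory=4GB               # cap the memory available to the lightweight VM
processors=2             # number of virtual processors exposed to the VM
swap=1GB                 # size of the VM's swap file
localhostForwarding=true # forward ports bound in Linux to localhost on Windows
```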
The distribution installation resides inside an ext4-formatted filesystem inside a virtual disk, and the host file system is transparently accessible through the 9P protocol, similarly to other virtual machine technologies like QEMU. For the users, Microsoft promised up to 20 times the read/write performance of WSL 1. From Windows an IFS network redirector is provided for Linux guest file access using the UNC path prefix of \\wsl$.
WSL 2 requires Windows 10 version 1903 or higher, with Build 18362 or higher, for x64 systems, and Version 2004 or higher, with Build 19041 or higher, for ARM64 systems.
See also
Azure Sphere
FreeBSD#OS compatibility layers
SmartOS#SmartOS types of zones
Terminal (Windows)
Xenix
References
External links
WSL on Microsoft Docs
Compatibility layers
Windows 10
Windows components
Virtualization software
IBM System 9000
The System 9000 (S9000) is a family of microcomputers from IBM consisting of the System 9001, 9002, and 9003. The first member of the family, the System 9001 laboratory computer, was introduced in May 1982 as the IBM Instruments Computer System Model 9000. It was renamed to the System 9001 in 1984, when the System 9000 family name and the System 9002 multi-user general-purpose business computer were introduced. The last member of the family, the System 9003 industrial computer, was introduced in 1985. No member of the System 9000 family found much commercial success, and the entire family was discontinued on 2 December 1986. The System 9000 was based around the Motorola 68000 microprocessor and the Motorola VERSAbus system bus. All members had the IBM CSOS real-time operating system (OS) stored in read-only memory; the System 9002 could also run the multi-user Microsoft Xenix OS, which was suitable for business use and supported up to four users.
Features
There were three versions of the System 9000. The 9001 was the benchtop (lab) model, the 9002 was the desktop model without laboratory-specific features, and the 9003 was a manufacturing and process control version modified to be suitable for factory environments. The System 9002 and 9003 were based on the System 9001, which was built around an 8 MHz Motorola 68000 and the Motorola VERSAbus system bus (the System 9000 was one of the few computers that used the VERSAbus). Input/output ports included three RS-232C serial ports, an IEEE-488 instrument port, and a bidirectional 8-bit parallel port. For laboratory data acquisition, analog-to-digital converters that could be attached to its I/O ports were available. User input could be via a user-definable 10-key touch panel on the integrated CRT display, a 57-key user-definable keypad, or an 83-key Model F keyboard. The touch panel and keypad were designed for controlling experiments.
All System 9000 members had an IBM real-time operating system called CSOS (Computer System Operating System) on 128KB of read-only memory (ROM). This was a multi-tasking operating system that could be extended by loading components from disk. IBM also offered Microsoft Xenix on the System 9002, but this required at least 640KB of main memory and a VERSAbus card containing a memory management unit. The machines shipped with 128KB of main memory as standard, and up to 5MB could be added to the system using memory boards that plugged into the VERSAbus. Each board could contain up to 1MB, which were installed in 256KB increments.
History
The System 9000 was developed by IBM Instruments, Inc., an IBM subsidiary established in 1980 that focused on selling scientific and technical instruments as well as the computer equipment designed to control, log, or process these instruments. It was originally introduced as the IBM Instruments Computer System Model 9000 in May 1982. Its long name led to it being referred to as the Computer System 9000, CS-9000, CS/9000, or CS9000. Originally, the CS9000 was offered only to scientific instrument users; it was not offered to customers who wanted to use it for other purposes. The CS9000 was unsuccessful in this niche; the cheaper IBM Personal Computer was adequate for many instrumentation tasks, and IBM's larger general-purpose computers were used for more demanding tasks.
In 1983 IBM began encouraging value-added resellers to sell the CS9000 as an alternative to large computers like DEC Professional and Honeywell Level 6. IBM formally repositioned the CS9000 on February 21, 1984 as a family of computers, renaming it to the System 9000, which consisted of the System 9001 and 9002. The 9001 was a renamed CS9000, which retained its focus on the instrumentation market, while the 9002 was a general-purpose business computer that ran the IBM CSOS or Microsoft Xenix operating systems and supported one to four users. The 9002 was unsuccessful in the business market, due to the lack of business application software support from software developers other than IBM. IBM finally introduced a new model, the System 9003, in April 1985 as a computer-aided manufacturing computer, but it was also unsuccessful. As a result, manufacturing of the System 9000 family was stopped in January 1986, and it remained in limited availability until it was discontinued on 2 December 1986. Reasons cited for the failure of the System 9000 were its poor performance and high price, which led to the IBM PC being used where price was of concern, and to other 32-bit microcomputers being used where performance mattered. IBM closed its Instrument division in January 1987, reassigning the approximately 150 employees that had worked for it to other positions.
Reception
Noting the obscurity of its 1982 release, BYTE in January 1983 called the System 9000 "IBM's 'Secret' Computer" and stated that it was "in its quiet way, one of the most exciting new arrivals on today's microcomputer scene". The magazine speculated that with some changes it would be "a natural candidate for a business or general-purpose computer". A later review by a member of Brandeis University's chemistry department criticized several aspects of the hardware and software, but praised the sophisticated BASIC and IBM's customer service. The reviewer concluded that "the CS-9000 is a very fast and powerful laboratory computer [that is] very affordable".
IBM 9000
At least some ads by dealers in 1983 referred to "The IBM 9000: Multi-User Micro," although the name "IBM Computer System 9000" was also advertised.
IBM also sometimes referred to the System 9000 as "IBM 9000" in their own marketing, at least when referring to their C compiler for the system.
References
Further reading
(A book about the System 9000 and how to use it, written by a researcher at IBM Research.)
The CS9000 Microcomputer, a SHARE paper on the CS9000 by Marty Sandfelder (IBM)
David J. States, "Number Crunching on IBM's New S9000: IBM joins with MIT's National Magnet Lab to develop spectrometers for imaging systems", in the BYTE Guide to the IBM PC, fall 1984, pp. 218–230; contains a fairly extensive review of the S9000 used with CSOS
System 9000
System 9000
68k architecture
32-bit computers
ISO/IEC 33001
ISO/IEC 33001 Information technology -- Process assessment -- Concepts and terminology is a set of technical standards documents for the computer software development process and related business management functions.
ISO/IEC 33001:2015 is a revision of ISO/IEC 15504, also termed Software Process Improvement and Capability Determination (SPICE).
The ISO/IEC 330xx family superseded the ISO/IEC 155xx family.
Further reading
ISO/IEC 33001:2015 Information technology -- Process assessment -- Concepts and terminology
ISO/IEC 33002:2015 Information technology -- Process assessment -- Requirements for performing process assessment
ISO/IEC 33003:2015 Information technology -- Process assessment -- Requirements for process measurement frameworks
ISO/IEC 33004:2015 Information technology -- Process assessment -- Requirements for process reference, process assessment and maturity models
ISO/IEC 33014:2013 Information technology -- Process assessment -- Guide for process improvement
ISO/IEC 33020:2015 Information technology -- Process assessment -- Process measurement framework for assessment of process capability
ISO/IEC 33063:2015 Information technology -- Process assessment -- Process assessment model for software testing
ISO/IEC TR 29110-3-1:2015 Systems and software engineering -- Lifecycle profiles for Very Small Entities (VSEs) -- Part 3-1: Assessment guide
References
Software engineering standards
Software development process
33001
System migration
System migration involves moving a set of instructions or programs, e.g., PLC (programmable logic controller) programs, from one platform to another, minimizing reengineering.
Migration of systems can also involve downtime, while the old system is replaced with a new one.
Migration can be from a mainframe computer with a closed architecture to an open system employing x86 servers, or from an open system to a cloud-computing platform; the motivation is often cost savings. Migration can be simplified by tools that automatically convert data from one form to another. There are also tools to convert the code from one platform to another, to be either compiled or interpreted; vendors of such tools include Micro Focus and Metamining. An alternative to converting the code is software that runs the old system's code on the new system. Examples are Oracle Tuxedo Application Rehosting Workbench, Morphis - Transformer, and products for LINC 4GL.
Migration may also be required when the hardware is no longer available. See JOVIAL.
See also
Data conversion
Data migration
Data transformation
Software migration
Software modernization
List of Linux adopters
References
Software maintenance
Control panel (software)
Many computer user interfaces use a control panel metaphor to give the user control of software and hardware features. The control panel groups multiple settings, including display, network, user account, and hardware settings. Some control panels require the user to have administrator rights or root access.
Computer history
The term control panel was used for the plugboards in unit record equipment and in the early computers of the 1940s and '50s. In the 1980s, the Xerox Star and the Apple Lisa, which pioneered graphical user interface metaphors, controlled user settings through single-click selections and variable fields. In its initial 1984 release, the Apple Macintosh used a graphical representation of a physical control panel, imitating the operation of slider controls, on/off buttons, and radio-select buttons that corresponded to user settings.
Functionality
There are many tasks grouped in a control panel:
Hardware
Color
Color management
Computer displays
Brightness
Contrast
Color calibration
Energy saving
Gamma correction
Screen resolution and orientation
Graphics tablet
Keyboard
Shortcuts and bindings
Language and layout
Text cursor appearance
Mouse and touchpad
Power management
Energy saving
Battery usage
Display brightness
Power button actions
Power plans
Printers and scanners
Sound
Networking
Bluetooth connection and file exchange
Ethernet connection
Internet Accounts
E-mail integration
Social media integration
Wi-Fi connection
System-wide proxy
Security
Certificates and password management
Firewall
Filesystem encryption
Privacy
File indexing and event tracking
Data sharing
System
Login window
System information
Hostname
System time
Calendar system
NTP server
Time zone
Software management
Application management
System update configuration
Software sources
Different types
In Microsoft Windows operating systems, the Control Panel and Settings app are where various computer settings can be modified.
In the classic Mac OS, a control panel served a similar purpose. In macOS, the equivalent to control panels are referred to as System Preferences.
In web hosting, browser-based control panels, such as cPanel and Plesk, are used to manage servers, web services and users.
Free desktop environments such as GNOME and KDE provide their own control panels, as do web-based administration tools such as Webmin.
See also
Control panel (engineering)
Dashboard (business)
References
User interfaces
Runlevel
A runlevel is a mode of operation in computer operating systems that implement Unix System V-style initialization. Conventionally, seven runlevels exist, numbered from zero to six. S is sometimes used as a synonym for one of the levels. Only one runlevel is active at any time: the system boots directly into a single runlevel (for example runlevel 2, 3, or 4), rather than passing through the runlevels sequentially or in any other order.
A runlevel defines the state of the machine after boot. Different runlevels are typically assigned (not necessarily in any particular order) to the single-user mode, multi-user mode without network services started, multi-user mode with network services started, system shutdown, and system reboot system states. The exact setup of these configurations varies between operating systems and Linux distributions. For example, runlevel 4 might be a multi-user GUI no-server configuration on one distribution, and nothing on another. Runlevels commonly follow the general patterns described in this article; however, some distributions employ certain specific configurations.
In standard practice, when a computer enters runlevel zero, it halts, and when it enters runlevel six, it reboots. The intermediate runlevels (1–5) differ in terms of which drives are mounted and which network services are started. Default runlevels are typically 3, 4, or 5. Lower runlevels are useful for maintenance or emergency repairs, since they usually offer no network services at all. The particular details of runlevel configuration differ widely among operating systems, and also among system administrators.
In various Linux distributions, the traditional /etc/rc script used in Version 7 Unix was first replaced by runlevels, which have in turn been replaced by systemd targets on most major distributions.
Standard runlevels
Linux
Although systemd is now used by default in most major Linux distributions, runlevels can still be used through the means provided by the sysvinit project. After the Linux kernel has booted, the init program reads the /etc/inittab file to determine the behavior for each runlevel. Unless the user specifies another value as a kernel boot parameter, the system will attempt to enter (start) the default runlevel.
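The default-runlevel lookup can be sketched in a few lines. The inittab excerpt below is a typical example for illustration, not taken from any particular distribution; real init parses the same colon-separated `id:runlevels:action:process` fields.

```python
# Sketch: how a SysV-style init determines the default runlevel from
# /etc/inittab. The file contents here are illustrative only.
inittab = """\
# /etc/inittab excerpt
id:3:initdefault:
l0:0:wait:/etc/rc.d/rc 0
l3:3:wait:/etc/rc.d/rc 3
l6:6:wait:/etc/rc.d/rc 6
"""

def default_runlevel(text):
    """Return the runlevel named by the initdefault entry, if any."""
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip comments and blank lines
        fields = line.split(":")
        # Field layout is id:runlevels:action:process
        if len(fields) >= 3 and fields[2] == "initdefault":
            return fields[1]
    return None

print(default_runlevel(inittab))  # -> 3
```

A kernel boot parameter (e.g. appending a bare runlevel number) overrides whatever this entry says.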
Linux Standard Base specification
Systems conforming to the Linux Standard Base (LSB) need not provide the exact run levels given here or give them the meanings described here, and may map any level described here to a different level which provides the equivalent functionality.
Slackware Linux
Slackware Linux uses runlevel 1 for maintenance, as on other Linux distributions; runlevels 2, 3 and 5 identically configured for a console (with all services active); and runlevel 4 adds the X Window System.
Gentoo Linux
Debian GNU/Linux
Unix
System V Releases 3 and 4
Solaris
Starting from Solaris 10, SMF (Service Management Facility) is used instead of SVR4 run levels. The latter are emulated to preserve compatibility with legacy startup scripts.
HP-UX
AIX
AIX does not follow the System V R4 (SVR4) runlevel specification, with runlevels from 0 to 9 available, as well as from a to c (or h). 0 and 1 are reserved, 2 is the default normal multi-user mode and runlevels from 3 to 9 are free to be defined by the administrator. Runlevels from a to c (or h) allow the execution of processes in that runlevel without killing processes started in another.
The S, s, M and m runlevels are not true runlevels, but are used to tell the init command to enter maintenance mode. When the system enters maintenance mode from another runlevel, only the system console is used as the terminal.
See also
Init
Operating system service management
systemd
Upstart
Notes
References
External links
Runlevel Definition - by The Linux Information Project (LINFO)
What are run levels? - LinuxQuestions.org
FreeBSD system startup
chkconfig, a utility for querying and updating runlevel-controlled services
Unix
Unix process- and task-management-related software
Barebook
A barebook computer (or barebone laptop) is an incomplete notebook PC, the laptop counterpart of a barebone desktop computer.
As it leaves the factory, it contains only elements strictly tied to the computer's design (case, motherboard, display, keyboard, pointing device, etc.), and the consumer or reseller has to add standardized off-the-shelf components such as CPU and GPU (when not integrated on the motherboard), memory, mass storage, WiFi card, etc. separately.
Because it is not manufactured with storage media such as hard disks or SSDs, a barebook does not typically include an operating system, which may make barebooks appealing to opponents of the bundling of Microsoft Windows.
References
See also
Barebone computer
Homebuilt computer
Do it yourself
Laptops
Openware
Openware, Inc. (previously known as France-based Openware S.A.S.) is a United States multinational open-source Blockchain software engineering company headquartered in South San Francisco, California, that designs, develops and sells computer software, and online services.
It developed a complete digital asset and cryptocurrency exchange framework based on its expertise in financial DevOps, network security, open-source integration, and assistance to international companies, banks, and service providers.
Openware's Blockchain software application products are:
OpenDAX, a trading platform to facilitate the exchange of stocks, digital assets, cryptocurrencies, and security tokens.
OpenFinex, a market platform to facilitate order-matching, liquid trading, and market-making on an enterprise scale.
ArkeBot, an automated trading system to facilitate automation of trading, market making, and market liquidity provision.
See also
Distributed ledger
Information Security
External links
Official website
Public codebase repositories on GitHub
Software companies based in the San Francisco Bay Area
American companies established in 2006
Software companies established in 2006
Industry Standard Architecture
Industry Standard Architecture (ISA) is the 16-bit internal bus of IBM PC/AT and similar computers based on the Intel 80286 and its immediate successors during the 1980s. The bus was (largely) backward compatible with the 8-bit bus of the 8088-based IBM PC, including the IBM PC/XT as well as IBM PC compatibles.
Originally referred to as the PC bus (8-bit) or AT bus (16-bit), it was also termed I/O Channel by IBM. The ISA term was coined as a retronym by competing PC-clone manufacturers in the late 1980s or early 1990s in reaction to IBM's attempts to replace the AT bus with its new and incompatible Micro Channel architecture.
The 16-bit ISA bus was also used with 32-bit processors for several years. An attempt to extend it to 32 bits, called Extended Industry Standard Architecture (EISA), was not very successful, however. Later buses such as VESA Local Bus and PCI were used instead, often along with ISA slots on the same mainboard. Derivatives of the AT bus structure were and still are used in ATA/IDE, the PCMCIA standard, Compact Flash, the PC/104 bus, and internally within Super I/O chips.
Even though ISA disappeared from consumer desktops many years ago, it is still used in industrial PCs, where certain specialized expansion cards that never transitioned to PCI and PCI Express are used.
History
The original PC bus was developed by a team led by Mark Dean at IBM as part of the IBM PC project in 1981. It was an 8-bit bus based on the I/O bus of the IBM System/23 Datamaster system: it used the same physical connector, and a similar signal protocol and pinout. A 16-bit version, the IBM AT bus, was introduced with the release of the IBM PC/AT in 1984. The AT bus was a mostly backward compatible extension of the PC bus—the AT bus connector was a superset of the PC bus connector. In 1988, the 32-bit EISA standard was proposed by the "Gang of Nine" group of PC-compatible manufacturers that included Compaq. Compaq created the term "Industry Standard Architecture" (ISA) to replace "PC compatible". In the process, they retroactively renamed the AT bus to "ISA" to avoid infringing IBM's trademark on its PC and PC/AT systems (and to avoid giving their major competitor, IBM, free advertisement).
IBM designed the 8-bit version as a buffered interface to the motherboard buses of the Intel 8088 (16/8 bit) CPU in the IBM PC and PC/XT, augmented with prioritized interrupts and DMA channels. The 16-bit version was an upgrade for the motherboard buses of the Intel 80286 CPU (and expanded interrupt and DMA facilities) used in the IBM AT, with improved support for bus mastering. The ISA bus was therefore synchronous with the CPU clock, until sophisticated buffering methods were implemented by chipsets to interface ISA to much faster CPUs.
ISA was designed to connect peripheral cards to the motherboard and allows for bus mastering. Only the first 16 MB of main memory is addressable. The original 8-bit bus ran from the 4.77 MHz clock of the 8088 CPU in the IBM PC and PC/XT. The original 16-bit bus ran from the CPU clock of the 80286 in IBM PC/AT computers, which was 6 MHz in the first models and 8 MHz in later models. The IBM RT PC also used the 16-bit bus. ISA was also used in some non-IBM compatible machines such as Motorola 68k-based Apollo (68020) and Amiga 3000 (68030) workstations, the short-lived AT&T Hobbit and the later PowerPC-based BeBox.
Companies like Dell improved the AT bus's performance but in 1987, IBM replaced the AT bus with its proprietary Micro Channel Architecture (MCA). MCA overcame many of the limitations then apparent in ISA but was also an effort by IBM to regain control of the PC architecture and the PC market. MCA was far more advanced than ISA and had many features that would later appear in PCI. However, MCA was also a closed standard whereas IBM had released full specifications and circuit schematics for ISA. Computer manufacturers responded to MCA by developing the Extended Industry Standard Architecture (EISA) and the later VESA Local Bus (VLB). VLB used some electronic parts originally intended for MCA because component manufacturers already were equipped to manufacture them. Both EISA and VLB were backwards-compatible expansions of the AT (ISA) bus.
Users of ISA-based machines had to know special information about the hardware they were adding to the system. While a handful of devices were essentially "plug-n-play", this was rare. Users frequently had to configure parameters when adding a new device, such as the IRQ line, I/O address, or DMA channel. MCA had done away with this complication and PCI actually incorporated many of the ideas first explored with MCA, though it was more directly descended from EISA.
This trouble with configuration eventually led to the creation of ISA PnP, a plug-n-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. In reality, ISA PnP could be troublesome and did not become well-supported until the architecture was in its final days.
PCI slots were the first physically-incompatible expansion ports to directly squeeze ISA off the motherboard. At first, motherboards were largely ISA, including a few PCI slots. By the mid-1990s, the two slot types were roughly balanced, and ISA slots soon were in the minority of consumer systems. Microsoft's PC 99 specification recommended that ISA slots be removed entirely, though the system architecture still required ISA to be present in some vestigial way internally to handle the floppy drive, serial ports, etc., which was why the software compatible LPC bus was created. ISA slots remained for a few more years, and towards the turn of the century it was common to see systems with an Accelerated Graphics Port (AGP) sitting near the central processing unit, an array of PCI slots, and one or two ISA slots near the end. In late 2008, even floppy disk drives and serial ports were disappearing, and the extinction of vestigial ISA (by then the LPC bus) from chipsets was on the horizon.
PCI slots are "rotated" compared to their ISA counterparts—PCI cards were essentially inserted "upside-down," allowing ISA and PCI connectors to squeeze together on the motherboard. Only one of the two connectors can be used in each slot at a time, but this allowed for greater flexibility.
The AT Attachment (ATA) hard disk interface is directly descended from the 16-bit ISA of the PC/AT. ATA has its origins in the IBM Personal Computer Fixed Disk and Diskette Adapter, the standard dual-function floppy disk controller and hard disk controller card for the IBM PC AT; the fixed disk controller on this card implemented the register set and the basic command set which became the basis of the ATA interface (and which differed greatly from the interface of IBM's fixed disk controller card for the PC XT). Direct precursors to ATA were third-party ISA hardcards that integrated a hard disk drive (HDD) and a hard disk controller (HDC) onto one card. This was at best awkward and at worst damaging to the motherboard, as ISA slots were not designed to support such heavy devices as HDDs. The next generation of Integrated Drive Electronics drives moved both the drive and controller to a drive bay and used a ribbon cable and a very simple interface board to connect it to an ISA slot. ATA is basically a standardization of this arrangement plus a uniform command structure for software to interface with the HDC within the drive. ATA has since been separated from the ISA bus and connected directly to the local bus, usually by integration into the chipset, for much higher clock rates and data throughput than ISA could support. ATA has clear characteristics of 16-bit ISA, such as a 16-bit transfer size, signal timing in the PIO modes and the interrupt and DMA mechanisms.
ISA bus architecture
The PC/XT-bus is an eight-bit ISA bus used by Intel 8086 and Intel 8088 systems in the IBM PC and IBM PC XT in the 1980s. Among its 62 pins were demultiplexed and electrically buffered versions of the 8 data and 20 address lines of the 8088 processor, along with power lines, clocks, read/write strobes, interrupt lines, etc. Power lines included −5 V and ±12 V in order to directly support pMOS and enhancement mode nMOS circuits such as dynamic RAMs among other things. The XT bus architecture uses a single Intel 8259 PIC, giving eight vectorized and prioritized interrupt lines. It has four DMA channels originally provided by the Intel 8237. Three of the DMA channels are brought out to the XT bus expansion slots; of these, two are normally already allocated to machine functions (the diskette drive and the hard disk controller).
The PC/AT-bus, a 16-bit (or 80286-) version of the PC/XT bus, was introduced with the IBM PC/AT. This bus was officially termed I/O Channel by IBM. It extends the XT-bus by adding a second shorter edge connector in-line with the eight-bit XT-bus connector, which is unchanged, retaining compatibility with most 8-bit cards. The second connector adds four additional address lines for a total of 24, and 8 additional data lines for a total of 16. It also adds new interrupt lines connected to a second 8259 PIC (connected to one of the lines of the first) and 4 × 16-bit DMA channels, as well as control lines to select 8- or 16-bit transfers.
The 16-bit AT bus slot originally used two standard edge connector sockets in early IBM PC/AT machines. However, with the popularity of the AT-architecture and the 16-bit ISA bus, manufacturers introduced specialized 98-pin connectors that integrated the two sockets into one unit. These can be found in almost every AT-class PC manufactured after the mid-1980s. The ISA slot connector is typically black (distinguishing it from the brown EISA connectors and white PCI connectors).
Number of devices
Motherboard devices have dedicated IRQs (not present in the slots). 16-bit devices can use either PC-bus or PC/AT-bus IRQs. It is therefore possible to connect up to 6 devices that use one 8-bit IRQ each and up to 5 devices that use one 16-bit IRQ each. At the same time, up to 4 devices may use one 8-bit DMA channel each, while up to 3 devices can use one 16-bit DMA channel each.
Varying bus speeds
Originally, the bus clock was synchronous with the CPU clock, resulting in varying bus clock frequencies among the many different IBM "clones" on the market (sometimes as high as 16 or 20 MHz), leading to software or electrical timing problems for certain ISA cards at bus speeds they were not designed for. Later motherboards or integrated chipsets used a separate clock generator, or a clock divider which either fixed the ISA bus frequency at 4, 6, or 8 MHz or allowed the user to adjust the frequency via the BIOS setup. When used at a higher bus frequency, some ISA cards (certain Hercules-compatible video cards, for instance), could show significant performance improvements.
8/16-bit incompatibilities
Memory address decoding for the selection of 8 or 16-bit transfer mode was limited to 128 KiB sections, leading to problems when mixing 8- and 16-bit cards as they could not co-exist in the same 128 KiB area. This is because the MEMCS16 line is required to be set based on the value of LA17-23 only.
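The 128 KiB figure follows directly from which address lines feed the decode. The sketch below (plain arithmetic for exposition, not real bus logic) shows why two cards whose memory windows fall inside the same 128 KiB region share a single 8-/16-bit decision:

```python
# MEMCS16# is generated from the unlatched address lines LA17-LA23 only,
# so the low 17 address bits (A0-A16) cannot influence the 8-/16-bit
# transfer-width decision. That makes the decode granularity 2**17 bytes.
REGION_BITS = 17                 # address bits invisible to the decode
REGION_SIZE = 1 << REGION_BITS   # 131072 bytes = 128 KiB

def decode_region(addr):
    """Index of the 128 KiB region a 24-bit memory address falls in."""
    return addr >> REGION_BITS

print(REGION_SIZE // 1024, "KiB")                        # -> 128 KiB
# Two windows 32 KiB apart land in the same region, hence the conflict:
print(decode_region(0xD0000) == decode_region(0xD8000))  # -> True
print(decode_region(0xD0000) == decode_region(0xE0000))  # -> False
```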
Past and current use
ISA is still used today for specialized industrial purposes. In 2008 IEI Technologies released a modern motherboard for Intel Core 2 Duo processors which, in addition to other special I/O features, is equipped with two ISA slots. It is marketed to industrial and military users who have invested in expensive specialized ISA bus adaptors, which are not available in PCI bus versions.
Similarly, ADEK Industrial Computers is releasing a motherboard in early 2013 for Intel Core i3/i5/i7 processors, which contains one (non-DMA) ISA slot.
The PC/104 bus, used in industrial and embedded applications, is a derivative of the ISA bus, utilizing the same signal lines with different connectors. The LPC bus has replaced the ISA bus as the connection to the legacy I/O devices on recent motherboards; while physically quite different, LPC looks just like ISA to software, so that the peculiarities of ISA such as the 16 MiB DMA limit (which corresponds to the full address space of the Intel 80286 CPU used in the original IBM AT) are likely to stick around for a while.
ATA
As explained in the History section, ISA was the basis for development of the ATA interface, used for ATA (a.k.a. IDE) hard disks. Physically, ATA is essentially a simple subset of ISA, with 16 data bits, support for exactly one IRQ and one DMA channel, and 3 address bits. To this ISA subset, ATA adds two IDE address select ("chip select") lines (i.e. address decodes, effectively equivalent to address bits) and a few unique signal lines specific to ATA/IDE hard disks (such as the Cable Select/Spindle Sync. line.) In addition to the physical interface channel, ATA goes beyond and far outside the scope of ISA by also specifying a set of physical device registers to be implemented on every ATA (IDE) drive and a full set of protocols and device commands for controlling fixed disk drives using these registers. The ATA device registers are accessed using the address bits and address select signals in the ATA physical interface channel, and all operations of ATA hard disks are performed using the ATA-specified protocols through the ATA command set. The earliest versions of the ATA standard featured a few simple protocols and a basic command set comparable to the command sets of MFM and RLL controllers (which preceded ATA controllers), but the latest ATA standards have much more complex protocols and instruction sets that include optional commands and protocols providing such advanced optional-use features as sizable hidden system storage areas, password security locking, and programmable geometry translation.
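The three ATA address lines (DA0-DA2), qualified by the CS0 chip select, are enough to address the eight command-block ("task file") registers. The sketch below lays this out; the base port 0x1F0 is the conventional primary-channel assignment on PCs, used here for illustration:

```python
# Classic ATA command-block (task file) registers, selected by the three
# address lines DA2..DA0 while the CS0 chip select is asserted. Port
# numbers assume the conventional PC primary-channel base of 0x1F0.
TASK_FILE = {
    0: "data",
    1: "error (read) / features (write)",
    2: "sector count",
    3: "LBA low",
    4: "LBA mid",
    5: "LBA high",
    6: "device/head",
    7: "status (read) / command (write)",
}

BASE = 0x1F0  # primary channel command block (conventional)
for da in range(8):  # eight offsets from three address bits
    print(f"0x{BASE + da:03X}: {TASK_FILE[da]}")
```

The second chip select (CS1) addresses the smaller control block in the same way, which is how ATA gets by with so few address lines.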
A further deviation between ISA and ATA is that while the ISA bus remained locked into a single standard clock rate (for backward hardware compatibility), the ATA interface offered many different speed modes, could select among them to match the maximum speed supported by the attached drives, and kept adding faster speeds with later versions of the ATA standard (up to 133 MB/s for ATA-6, the latest.) In most forms, ATA ran much faster than ISA, provided it was connected directly to a local bus (e.g. southbridge-integrated IDE interfaces) faster than the ISA bus.
XT-IDE
Before the 16-bit ATA/IDE interface, there was an 8-bit XT-IDE (also known as XTA) interface for hard disks. It was not nearly as popular as ATA has become, and XT-IDE hardware is now fairly hard to find. Some XT-IDE adapters were available as 8-bit ISA cards, and XTA sockets were also present on the motherboards of Amstrad's later XT clones as well as a short-lived line of Philips units. The XTA pinout was very similar to ATA, but only eight data lines and two address lines were used, and the physical device registers had completely different meanings. A few hard drives (such as the Seagate ST351A/X) could support either type of interface, selected with a jumper.
Many later AT (and AT successor) motherboards had no integrated hard drive interface but relied on a separate hard drive interface plugged into an ISA/EISA/VLB slot. There were even a few 80486 based units shipped with MFM/RLL interfaces and drives instead of the increasingly common AT-IDE.
Commodore built the XT-IDE based peripheral hard drive / memory expansion unit A590 for their Amiga 500 and 500+ computers that also supported a SCSI drive. Later models – the A600, A1200, and the Amiga 4000 series – use AT-IDE drives.
PCMCIA
The PCMCIA specification can be seen as a superset of ATA. The standard for PCMCIA hard disk interfaces, which included PCMCIA flash drives, allows for the mutual configuration of the port and the drive in an ATA mode. As a de facto extension, most PCMCIA flash drives additionally allow for a simple ATA mode that is enabled by pulling a single pin low, so that PCMCIA hardware and firmware are unnecessary to use them as an ATA drive connected to an ATA port. PCMCIA flash drive to ATA adapters are thus simple and inexpensive, but are not guaranteed to work with any and every standard PCMCIA flash drive. Further, such adapters cannot be used as generic PCMCIA ports, as the PCMCIA interface is much more complex than ATA.
Emulation by embedded chips
Although most modern computers do not have a physical ISA bus, almost all PCs (x86-32 and x86-64) still have ISA buses allocated in physical address space. Some southbridges and some CPUs themselves provide services such as temperature monitoring and voltage readings as ISA devices through these buses.
Standardization
IEEE started a standardization of the ISA bus in 1985, called the P996 specification. However, despite there even having been books published on the P996 specification, it never officially progressed past draft status.
Modern ISA cards
There is still a user base with old computers, so some ISA cards are still manufactured, for example cards adding USB ports, as well as complete single-board computers based on modern processors with USB 3.0 and SATA.
See also
PC/104 - Embedded variant of ISA
Low Pin Count (LPC)
Extended Industry Standard Architecture (EISA)
Micro Channel architecture (MCA)
VESA Local Bus (VLB)
Peripheral Component Interconnect (PCI)
Accelerated Graphics Port (AGP)
PCI-X
PCI Express (PCI-E or PCIe)
List of computer bus interfaces
Amiga Zorro II
NuBus
Switched fabric
List of device bandwidths
CompactPCI
PC card
Universal Serial Bus (USB)
Legacy port
Backplane
References
Further reading
Intel ISA Bus Specification and Application Notes - Rev 2.01; Intel; 73 pages; 1989.
External links
Computer buses
Motherboard expansion slot
X86 IBM personal computers
IBM PC compatibles
Legacy hardware
Computer hardware standards
Kernel preemption
In computer operating system design, kernel preemption is a property possessed by some kernels (the cores of operating systems), in which the CPU can be interrupted in the middle of executing kernel code and assigned other tasks (from which it later returns to finish its kernel tasks).
That is to say, the scheduler is permitted to forcibly perform a context switch (on behalf of a runnable and higher-priority process) on a driver or other part of the kernel during its execution, rather than co-operatively waiting for the driver or kernel function (such as a system call) to complete its execution and return control of the processor to the scheduler when done. It is used mainly in monolithic and hybrid kernels, where all or most device drivers are run in kernel space. Linux is an example of a monolithic-kernel operating system with kernel preemption.
The main benefit of kernel preemption is that it solves two issues that would otherwise be problematic for monolithic kernels, in which the kernel consists of one large binary (rather than multiple "services", as in a microkernel-based OS like Windows NT/Vista/7/10 or macOS). Without kernel preemption, two major issues exist for monolithic and hybrid kernels:
A device driver can enter an infinite loop or other unrecoverable state, crashing the whole system.
Some drivers and system calls on monolithic kernels can be slow to execute, and cannot return control of the processor to the scheduler or other program until they complete execution.
See also
Linux kernel scheduling and preemption
References
Scheduling (computing)
Operating system kernels
File Explorer
File Explorer, previously known as Windows Explorer, is a file manager application that is included with releases of the Microsoft Windows operating system from Windows 95 onwards. It provides a graphical user interface for accessing the file systems. It is also the component of the operating system that presents many user interface items on the screen such as the taskbar and desktop. Controlling the computer is possible without Windows Explorer running (for example, the command in Task Manager on NT-derived versions of Windows will function without it, as will commands typed in a command prompt window).
Overview
Windows Explorer was first included with Windows 95 as a replacement for File Manager, which came with all versions of Windows 3.x operating systems. Explorer could be accessed by double-clicking the new My Computer desktop icon or launched from the new Start Menu that replaced the earlier Program Manager. There is also a shortcut key combination: . Successive versions of Windows (and in some cases, Internet Explorer) introduced new features and capabilities, removed other features, and generally progressed from being a simple file system navigation tool into a task-based file management system.
While "Windows Explorer" or "File Explorer" is a term most commonly used to describe the file management aspect of the operating system, the Explorer also houses the operating system's search functionality and File Type associations (based on filename extensions), and is responsible for displaying the desktop icons, the Start Menu, the Taskbar, and the Control Panel. Collectively, these features are known as the Windows shell.
After a user logs in, the explorer process is created by the userinit process. Userinit performs some initialization of the user environment (such as running the login script and applying group policies) and then looks in the registry at the Shell value and creates a process to run the system-defined shell – by default, Explorer.exe. Then Userinit exits. This is why Explorer.exe is shown by various process explorers with no parent – its parent has exited.
History
In 1995, Microsoft first released test versions of a shell refresh, named the Shell Technology Preview, and often referred to informally as "NewShell". The update was designed to replace the Windows 3.x Program Manager/File Manager based shell with Windows Explorer. The release provided capabilities quite similar to those of the Windows "Chicago" (codename for Windows 95) shell during its late beta phases; however, it was intended to be nothing more than a test release. There were two public releases of the Shell Technology Preview, made available to MSDN and CompuServe users: May 26, 1995 and August 8, 1995. Both held Windows Explorer builds of 3.51.1053.1. The Shell Technology Preview program never saw a final release under NT 3.51. The entire program was moved across to the Cairo development group, which finally integrated the new shell design into the NT code with the release of NT 4.0 in July 1996.
Windows 98 and Windows Desktop Update
With the release of the Windows Desktop Update (packaged with Internet Explorer 4 as an optional component, and included in Windows 98), Windows Explorer became "integrated" with Internet Explorer, most notably with the addition of navigation arrows (back and forward) for moving between recently visited directories, as well as Internet Explorer's Favorites menu.
An address bar was also added to Windows Explorer, into which a user could type a directory path directly and be taken to that folder.
Another feature that was based on Internet Explorer technology was customized folders. Such folders contained a hidden web page that controlled the way the Windows Explorer displayed the contents of the folder.
Windows ME and Windows 2000
The "Web-style" folders view, with the left Explorer pane displaying details for the object currently selected, is turned on by default. For certain file types, such as pictures and media files, a preview is also displayed in the left pane. The Windows 2000 Explorer featured an interactive media player as the previewer for sound and video files. However, such a previewer can be enabled in Windows ME through the use of folder customization templates. Windows Explorer in Windows 2000 and Windows ME allows for custom thumbnail previewers and tooltip handlers. The default file tooltip displays file title, author, subject and comments; this metadata may be read from a special NTFS stream, if the file is on an NTFS volume, or from a COM Structured Storage stream, if the file is a structured storage document. All Microsoft Office documents since Office 95 make use of structured storage, so their metadata is displayable in the Windows 2000 Explorer default tooltip. File shortcuts can also store comments which are displayed as a tooltip when the mouse hovers over the shortcut.
The right-hand pane, which usually just lists files and folders, can also be customized. For example, the contents of the system folders are not displayed by default, instead showing in the right pane a warning to the user that modifying the contents of the system folders could harm their computer. It is possible to define additional Explorer panes by using DIV elements in folder template files. This feature was abused by computer viruses that employed malicious scripts, Java applets, or ActiveX controls in folder template files as their infection vector. Two such viruses are VBS/Roor-C and VBS.Redlof.a.
Other Explorer UI elements that can be customized include columns in "Details" view, icon overlays, and search providers: the new DHTML-based search pane is integrated into Windows 2000 Explorer, unlike the separate search dialog found in all previous Explorer versions.
Search capabilities were added, offering full-text searches of documents, with options to filter by date (including arbitrary ranges like "modified within the last week"), size, and file type. The Indexing Service has also been integrated into the operating system and the search pane built into Explorer allows searching files indexed by its database. The ability to customize the standard buttons was also added.
Windows XP and Windows Server 2003
There were significant changes made to Windows Explorer in Windows XP, both visually and functionally. Microsoft focused especially on making Explorer more discoverable and task-based, as well as adding several new features to reflect the growing use of a computer as a digital hub.
Windows Explorer in Windows Server 2003 contains all the same features as Windows XP, but the task panes and search companion are disabled by default.
Task pane
The task pane is displayed on the left-hand side of the window instead of the traditional folder tree view. It presents the user with a list of common actions and destinations that are relevant to the current directory or file(s) selected. For instance, when in a directory containing mostly pictures, a set of "Picture tasks" is shown, offering the options to display these pictures as a slide show, to print them out, or to go online to order prints. Conversely, a folder containing music files would offer options to play those files in a media player or to go online to purchase music. Windows XP also had a Media bar, but it was only available in the RTM release and was removed with SP1.
Every folder also has "File and Folder Tasks", offering options to create new folders, share a folder on the local network, publish files or folders to a website, and other common tasks like copying, renaming, moving, and deleting files or folders. File types that have identified themselves as being printable also have an option listed to print the file.
Underneath "Other Places" is a "Details" pane which gives additional information – typically file size and date, but depending on the file type, a thumbnail preview, author, image dimensions, or other details.
The "Folders" button on the Windows Explorer toolbar toggles between the traditional tree view of folders, and the task pane. Users can get rid of the task pane or restore it using the sequence: Tools – Folder Options – General – Show Common Tasks/Use Windows Classic Folders.
Search companion
Microsoft introduced animated "Search Companions" in an attempt to make searching more engaging and friendly; the default character is a puppy named Rover (previously used in Microsoft Bob), with three other characters (Merlin the magician, Earl the surfer, and Courtney) also available. These search companions use the same technology as Microsoft Office's Office Assistants, even incorporating "tricks" and sound effects, and they can be used as Office Assistants if their files are copied into the C:\Windows\msagent\chars folder.
The search capability itself is fairly similar to Windows ME and Windows 2000, with one major addition: Search can also be instructed to search only files that are categorized as "Documents" or "Pictures, music and video"; this feature is noteworthy largely because of how Windows determines what types of files can be classified under these categories. In order to maintain a relevant list of file types, Windows Explorer connects to Microsoft and downloads a set of XML files that define what these file types are. The Search Companion can be disabled in favor of the classic search pane used in Windows 2000 by using the Tweak UI applet from Microsoft's PowerToys for Windows XP, or by manually editing the registry.
Image handling
Windows XP improves image preview in Explorer by offering a Filmstrip view. "Back" and "Previous" buttons facilitate navigation through the pictures, and a pair of "Rotate" buttons offer 90-degree clockwise and counter-clockwise (lossy) rotation of images. Aside from the Filmstrip view mode, there is a 'Thumbnails' mode, which displays thumbnail-sized images in the folder. A Folder containing images will also show thumbnails of four of the images from that folder overlaid on top of a large folder icon.
Web publishing
Web sites that offer image hosting services can be plugged into Windows Explorer, through which the user can select images on their computer and have them uploaded correctly without dealing with comparatively complex solutions involving FTP or web interfaces.
Other changes
Explorer gained the ability to understand the metadata of a number of types of files. For example, with images from a digital camera, the Exif information can be viewed, both in the Properties pages for the photo itself, as well as via optional additional Details View columns.
A Tile view mode was added, which displays the file's icon in a larger size (48 × 48), and places the file name, descriptive type, and additional information (typically the file size for data files, and the publisher name for applications) to the right.
The Details view also presented an additional option called "Show in Groups" which allows the Explorer to separate its contents by headings based on the field which is used to sort the items.
The taskbar can be locked to prevent it from accidentally being moved.
Windows Explorer also gained the ability to burn CDs and DVD-RAM discs in Windows XP.
Ability to create and open ZIP files called "compressed folders".
Ability to open Cabinet (.cab) files.
If a .htm or .html file is copied or moved, the accompanying _files suffix folder is copied or moved along with it automatically.
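The "compressed folders" in the list above are ordinary ZIP archives, so the same files Explorer presents as folders can be created and read with any ZIP library. A minimal sketch using Python's standard zipfile module (the archive path and contents here are made up for illustration):

```python
import os
import tempfile
import zipfile

# "Compressed folders" are plain ZIP archives; zipfile can create and read
# the same files that Explorer displays as folders.
tmp = tempfile.mkdtemp()
archive = os.path.join(tmp, "compressed_folder.zip")

# Create the archive and add one file inside a subdirectory entry.
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("notes/readme.txt", "hello from inside the zip")

# Open it again and list/extract the contents, as Explorer would.
with zipfile.ZipFile(archive) as zf:
    print(zf.namelist())                         # ['notes/readme.txt']
    print(zf.read("notes/readme.txt").decode())  # hello from inside the zip
```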
Removed and changed features
The sort order has changed compared to the one in Windows 2000. For file names containing numbers, Windows Explorer now tries to sort based on numerical value rather than just comparing each number digit by digit.
Windows Vista and Windows Server 2008
Search, organizing and metadata
Windows Explorer includes significant changes from previous versions of Windows such as improved filtering, sorting, grouping and stacking. Combined with integrated desktop search, Windows Explorer allows users to find and organize their files in new ways, such as stacks. The new Stacks viewing mode groups files according to the criterion specified by the user. Stacks can be clicked to filter the files shown in Windows Explorer. There is also the ability to save searches as virtual folders or search folders. A search folder is simply an XML file, which stores the query in a form that can be used by the Windows search subsystem. When accessed, the search is executed and the results are aggregated and presented as a virtual folder. Windows Vista includes six search folders by default: recent documents, recent e-mail, recent music, recent pictures and videos, recently changed, and "Shared by Me". Additionally, search operators for properties were introduced, such as kind:music. Since at least Windows 7, the comparison operators "greater than" and "less than" are supported for any searchable attribute, such as date ranges and file sizes, like size:>100MB to search for all files that are greater than 100MB. Attributes sortable and searchable in Windows Explorer include pictures' dimensions, Exif data such as aperture and exposure, and videos' duration, frame rate, and width.
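The property-based operators described above (kind:music, size:>100MB) can be sketched as a tiny query matcher. This is a toy re-implementation of the idea behind such queries, not the real Advanced Query Syntax parser; the file records below are made up:

```python
import operator

OPS = {">": operator.gt, "<": operator.lt, "": operator.eq}

def matches(item, query):
    # A query like "kind:music size:>100" is a space-separated list of
    # property:value clauses; every clause must hold for the item.
    for clause in query.split():
        prop, _, value = clause.partition(":")
        op = OPS[value[0]] if value[:1] in ("<", ">") else OPS[""]
        value = value.lstrip("<>")
        target = item[prop]
        if isinstance(target, int):     # numeric property: compare as int
            value = int(value)
        if not op(target, value):
            return False
    return True

files = [
    {"name": "song.mp3",  "kind": "music", "size": 8},
    {"name": "movie.mkv", "kind": "video", "size": 700},
]
print([f["name"] for f in files if matches(f, "kind:music")])   # ['song.mp3']
print([f["name"] for f in files if matches(f, "size:>100")])    # ['movie.mkv']
```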
When sorting items, the sort order no longer remains consistently Ascending or Descending. Each property has a preferred sort direction. For example, sort by date defaults to descending order, as does size. But name and type default to ascending order.
Searching for files containing a given text string became problematic with Vista unless the files had been indexed. An alternative is to use the findstr command-line function. After right-clicking on a folder one can open a command-line prompt in that folder.
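The findstr-style fallback amounts to walking a directory tree and scanning each file's text directly, without an index. A hedged sketch of that behaviour (the file names and contents are invented for the demo):

```python
import os
import tempfile

def findstr(root, needle):
    # Walk the tree under `root` and report files whose text contains
    # `needle` — roughly what `findstr /s` does from a command prompt,
    # and why it works on unindexed files (it reads each one).
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for fn in filenames:
            path = os.path.join(dirpath, fn)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    if needle in fh.read():
                        hits.append(path)
            except OSError:
                pass  # unreadable file: skip, like findstr would
    return hits

root = tempfile.mkdtemp()
with open(os.path.join(root, "a.txt"), "w") as f:
    f.write("hello world")
with open(os.path.join(root, "b.txt"), "w") as f:
    f.write("nothing here")
print(findstr(root, "world"))  # only the path ending in a.txt
```

The trade-off is the same one the paragraph describes: this always works, but it reads every file on every search, whereas the indexed search only works for indexed locations.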
Windows Explorer also contains modifications in the visualization of files on a computer. A new addition to Windows Explorer in Vista and Server 2008 is the details pane, which displays metadata and information relating to the currently selected file or folder. The details pane will also display a thumbnail of the file or an icon of the filetype if the file does not contain visual information. Furthermore, different imagery is overlaid on thumbnails to give more information about the file, such as a picture frame around the thumbnail of an image file, or a filmstrip on a video file.
The details pane also allows for the change of some textual metadata such as author and title in files that support them within Windows Explorer. A new type of metadata called tags allows users to add descriptive terms to documents for easier categorization and retrieval. Some files support open metadata, allowing users to define new types of metadata for their files. Out of the box, Windows Vista and Windows Server 2008 support Microsoft Office documents and most audio and video files. Support for other file types can, however, be added by writing specialized software to retrieve the metadata at the shell's request. Metadata stored in a file's alternate data stream only on NTFS volumes cannot be viewed and edited through the summary tab of the file's properties anymore. Instead, all metadata is stored inside the file, so that it will always travel with the file and not be dependent on the file system.
Layout and icons
Windows Explorer in Windows Vista and Windows Server 2008 also introduces a new layout. The task panes from Windows XP are replaced with a toolbar on top and a navigation pane on the left. The navigation pane contains commonly accessed folders and preconfigured search folders. Eight different views are available to view files and folders, including extra large, large, medium, small, list, details, tiles, and content. In addition, column headers now appear in all icon viewing modes, unlike Windows XP where they only appear in the details icon viewing mode. File and folder actions such as cut, copy, paste, undo, redo, delete, rename and properties are built into a dropdown menu which appears when the Organize button is clicked. It is also possible to change the layout of the Explorer window by using the Organize button. Users can select whether to display classic menus, a search pane, a preview pane, a reading pane, and the navigation pane. The preview pane enables users to preview files (e.g., documents or media files) without opening them. If an application, such as Office 2007, installs preview handlers for file types, then these files can also be edited within the preview pane itself.
Windows Vista saw the introduction of the breadcrumb bar for easier navigation. As opposed to the prior address bar which displayed the current folder in a simple editable combobox, this new style structures the path into clickable levels of folder hierarchy (though falls back to the classic edit mode when a blank area is clicked), enabling the user to skip as many levels as desired in one click rather than repeatedly clicking "Up". It is also possible to navigate to any subfolder of the current folder using the arrow to the right of the last item. The menu bar is now hidden by default but reappears temporarily when the user presses Alt.
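The breadcrumb bar's clickable levels correspond to the ancestors of the current path. A small sketch of how a path decomposes into such crumbs, using Python's pathlib (the example path is made up):

```python
from pathlib import PureWindowsPath

def breadcrumbs(path):
    # Each crumb is an ancestor of the current folder, ordered from the
    # root down; clicking one navigates straight to that level instead of
    # pressing "Up" repeatedly.
    p = PureWindowsPath(path)
    return [str(a) for a in list(p.parents)[::-1]] + [str(p)]

for crumb in breadcrumbs(r"C:\Users\Ada\Documents\Reports"):
    print(crumb)
# C:\
# C:\Users
# C:\Users\Ada
# C:\Users\Ada\Documents
# C:\Users\Ada\Documents\Reports
```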
Check boxes in Windows Explorer allow the selection of multiple files. Free and used space on all drives is shown in horizontal indicator bars. Icons of various sizes are supported: 16 × 16, 24 × 24, 32 × 32, 48 × 48, 64 × 64, 96 × 96, 128 × 128 and 256 × 256. Windows Explorer can zoom the icons in and out using a slider or by holding down the Ctrl key and using the mouse scrollwheel. Live icons can display the content of folders and files themselves rather than generic icons.
Other changes
With the release of Windows Vista and Server 2008 and Windows Internet Explorer 7 for Windows XP, Internet Explorer is no longer integrated with Windows Explorer. In Windows Vista and Server 2008 (and in Windows XP as well if IE7 or 8 is installed), Windows Explorer no longer displays web pages, and IE7 does not support use as a file manager, although one will separately launch the other as necessary.
When moving or copying files from one folder to another, if two files have the same name, an option is now available to rename the file; in previous versions of Windows, the user was prompted to choose either a replacement or cancel moving the file. Also, when renaming a file, Explorer only highlights the filename without selecting the extension. Renaming multiple files is quicker as pressing Tab automatically renames the existing file or folder and opens the file name text field for the next file for renaming. Shift+Tab allows renaming in the same manner upwards.
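The rename behaviour described above, highlighting only the name and leaving the extension untouched, boils down to splitting the filename at its last extension. A sketch of that split (the sample filenames are invented):

```python
import os

def selection_for_rename(filename):
    # Explorer pre-selects only the stem, so typing replaces the name but
    # keeps the extension; os.path.splitext performs the same split.
    stem, ext = os.path.splitext(filename)
    return stem, ext

print(selection_for_rename("holiday photo.jpg"))  # ('holiday photo', '.jpg')
print(selection_for_rename("archive.tar.gz"))     # ('archive.tar', '.gz')
```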
Support for burning data on DVDs (DVD±R, DVD±R DL, DVD±R RW) in addition to CDs and DVD-RAM using version 2.0 of the Image Mastering API, as well as Live File System support was added.
If a file is in use by another application, Windows Explorer tells users to close the application and retry the file operation. Also, a new interface IFileIsInUse is introduced into the API which developers can use to let other applications switch to the main window of the application that has the file open or simply close the file from the "File in Use" dialog. If the running application exposes these operations by means of the IFileIsInUse interface, Windows Explorer, upon encountering a locked file, allows the user to close the file or switch to the application from the dialog box itself.
Windows Vista introduced support for the Media Transfer Protocol.
Removed and changed features
The ability to customize the layout and buttons on the toolbars has been removed in Windows Vista's Explorer, as has the ability to add a password to a zip file (compressed folder). The Toolbar button in Explorer to go up one folder from the current folder has been removed (the function still exists; one can move up a folder by pressing + ). Although still fully available from the menus and keyboard shortcuts, toolbar buttons for Cut, Copy, Paste, Undo, Delete, Properties and some others are no longer available. The Menu Bar is also hidden by default but is still available by pressing the Alt key or changing its visibility in the layout options. Several other features are removed, such as showing the size on the status bar without selecting items, storing metadata in NTFS alternate data streams, the IColumnProvider interface which allowed addition of custom columns to Explorer, and folder background customization using desktop.ini.
The option "Managing pairs of Web pages and folders" is also removed, and the user has no way of telling Vista that a .html file and the folder with the same name that was created when saving a complete web page from IE should be treated separately, that is, they cannot delete the folder without deleting the html file as well.
The ability to right-click a folder and hit "Search" was removed in Windows Vista Service Pack 1. Users must open the folder they wish to search in and enter their keywords in the search field located on the top right corner of the window. Alternatively, users can specify other search parameters through the "Advanced Search" UI, which can be accessed by clicking on the Organize Bar and selecting Search Pane under the Layout submenu. Pressing F3 also opens the "Advanced Search" interface.
Windows 7 and Windows Server 2008 R2
Libraries
Windows Explorer in Windows 7 and Windows Server 2008 R2 supports libraries, virtual folders described in a .library-ms file that aggregates content from various locations – including shared folders on networked systems if the shared folder has been indexed by the host system – and present them in a unified view. Searching in a library automatically federates the query to the remote systems, in addition to searching on the local system, so that files on the remote systems are also searched. Unlike search folders, Libraries are backed by a physical location which allows files to be saved in the libraries. Such files are transparently saved in the backing physical folder. The default save location for a library may be configured by the user, as can the default view layout for each library. Libraries are generally stored in the libraries special folder, which allows them to be displayed on the navigation pane.
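At its core, the unified view a library presents is a merge of the listings of several backing folders, with each item remembering where it came from. A simplified sketch of that aggregation (the folder names and files below are made up; real libraries also federate indexed searches, which this toy omits):

```python
import os
import tempfile

def library_view(locations):
    # Merge the contents of several folders into one listing, keeping
    # track of each file's physical origin — a crude model of how a
    # library presents multiple locations as a single folder.
    view = []
    for loc in locations:
        for name in sorted(os.listdir(loc)):
            view.append((name, loc))
    return view

docs = tempfile.mkdtemp()     # stand-in for the user's Documents folder
public = tempfile.mkdtemp()   # stand-in for the Public Documents folder
open(os.path.join(docs, "report.doc"), "w").close()
open(os.path.join(public, "shared.doc"), "w").close()

for name, origin in library_view([docs, public]):
    print(name, "->", origin)  # both files appear in one unified listing
```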
By default, a new user account in Windows 7 contains four libraries, for different file types: Documents, Music, Pictures, and Videos. They are configured to include the user's profile folders for these respective file types, as well as the computer's corresponding Public folders.
In addition to aggregating multiple storage locations, Libraries enable Arrangement Views and Search Filter Suggestions. Arrangement Views allow users to pivot their views of the library's contents based on metadata. For example, selecting the "By Month" view in the Pictures library will display photos in stacks, where each stack represents a month of photos based on the date they were taken. In the Music library, the "By Artist" view will display stacks of albums from the artists in their collections, and browsing into an artist stack will then display the relevant albums.
Search Filter Suggestions are a new feature of the Windows 7 and Windows Server 2008 R2 Explorer's search box. When the user clicks in the search box, a menu shows up below it showing recent searches as well as suggested Advanced Query Syntax filters that the user can type. When one is selected (or typed in manually), the menu will update to show the possible values to filter by for that property, and this list is based on the current location and other parts of the query already typed. For example, selecting the "tags" filter or typing "tags:" into the search box will display the list of possible tag values which will return search results.
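The value-suggestion step, offering only values that actually occur in the current location once a property is chosen, can be sketched as a scan over the visible items' metadata. A toy model with invented photo records:

```python
def filter_suggestions(items, prop):
    # After the user picks a property (e.g. typing "tags:"), suggest the
    # values that actually occur among the current items, as Explorer's
    # search-box menu does for the current location.
    values = set()
    for item in items:
        v = item.get(prop)
        if isinstance(v, (list, set)):   # multi-valued property like tags
            values.update(v)
        elif v is not None:
            values.add(v)
    return sorted(values)

photos = [{"tags": ["beach", "2009"]}, {"tags": ["beach", "family"]}]
print(filter_suggestions(photos, "tags"))  # ['2009', 'beach', 'family']
```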
The metadata written within the file, implemented in Vista, is also utilized in Windows 7. This can sometimes lead to long wait times displaying the contents of a folder. For example, if a folder contains many large video files totaling hundreds of gigabytes, and the Windows Explorer pane is in Details view mode showing a property contained within the metadata (for example Date, Length, Frame Height), Windows Explorer might have to search the contents of the whole file for the metadata. Some damaged files can cause a prolonged delay as well. This is because metadata may be located anywhere within the file (beginning, middle, or end), necessitating a search of the whole file. Lengthy delays also occur when displaying the contents of a folder with many different types of program icons. The icon is contained in the metadata. Some programs cause the activation of a virus scan when retrieving the icon information from the metadata, hence producing a lengthy delay.
Arrangement Views and Search Filter Suggestions are database-backed features that require that all locations in the Library be indexed by the Windows Search service. Local disk locations must be indexed by the local indexer, and Windows Explorer will automatically add locations to the indexing scope when they are included in a library. Remote locations can be indexed by the indexer on another Windows 7 and Windows Server 2008 R2 machine, on a Windows machine running Windows Search 4 (such as Windows Vista or Windows Home Server), or on another device that implements the MS-WSP remote query protocol.
Federated search
Windows Explorer also supports federating search to external data sources, such as custom databases or web services, that are exposed over the web and described via an OpenSearch definition. The federated location description (called a Search Connector) is provided as a .osdx file. Once installed, the data source becomes queryable directly from Windows Explorer. Windows Explorer features, such as previews and thumbnails, work with the results of a federated search as well.
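An OpenSearch description essentially supplies a URL template; the client substitutes the user's query into its placeholders before fetching results. A minimal sketch of that substitution step (the example.com endpoint is hypothetical, and real OpenSearch descriptions carry more placeholders and an XML wrapper):

```python
from urllib.parse import quote

def opensearch_url(template, terms, page=1):
    # Substitute the query into an OpenSearch-style URL template.
    # {searchTerms} is required; {startPage?} is an optional parameter.
    return (template
            .replace("{searchTerms}", quote(terms))
            .replace("{startPage?}", str(page)))

tpl = "https://example.com/search?q={searchTerms}&page={startPage?}"
print(opensearch_url(tpl, "file explorer"))
# https://example.com/search?q=file%20explorer&page=1
```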
Other changes
Windows 7 and Windows Server 2008 R2 support showing icons in the context menu and creating cascaded context menus with static verbs in submenus using the Registry instead of a shell extension.
The search box in the Explorer window and the address bar can be resized.
Certain folders in the navigation pane can be hidden to reduce clutter.
Progress bars and overlay icons on an application's button on the taskbar.
Content view which shows thumbnails and metadata.
Buttons to toggle the preview pane and create a new folder.
Removed or changed features
In Windows 7, several features have been removed from Windows Explorer, including the collapsible folder pane, overlay icon for shared items, remembering individual folder window sizes and positions, free disk space on the status bar, icons on the command bar, ability to disable Auto Arrange and Align to Grid, sortable column headings in other views except details view, ability to disable full row selection in details view, automatic horizontal scrolling and scrollbar in the navigation pane and maintaining selection when sorting from the Edit menu.
Windows 8 and Windows Server 2012
The file manager on Windows 8 and Windows Server 2012 is renamed File Explorer and introduces new features such as a redesigned interface incorporating a ribbon toolbar, and a redesigned file operation dialog that displays more detailed progress and allows for file operations to be paused and resumed. The details pane from Windows Vista and 7 was removed and replaced with a narrower pane with no icons and fewer detail columns. But other details are displayed by hovering over the file's name.
Windows 10 and Windows Server 2016
The icons in File Explorer have been redesigned. They are flatter and simpler in design. The window border padding is thinner than previous versions. Windows 10 Creators Update and later versions come with a new Universal File Explorer (also known as the UWP File Explorer). Although hidden, it can be opened by creating a shortcut pointing to "explorer shell:AppsFolder\c5e2524a-ea46-4f67-841f-6a9465d9d515_cw5n1h2txyewy!App"
Windows 10, version 1809 and Windows Server 2019
A "dark mode" has been added to File Explorer in Windows 10, version 1809 and Windows Server 2019. The Universal File Explorer also includes new features.
Windows 10, version 1909
Windows Search and OneDrive have been integrated into File Explorer's search feature in Windows 10, version 1909.
Windows 11
In Windows 11, the File Explorer has undergone significant design revisions, with the ribbon interface introduced with Windows 8 being replaced with a new command bar. Translucency, shadows, and rounded geometry have also been added, following the Fluent Design System.
Extensibility
File Explorer can be extended to support non-default functionality by means of Windows shell extensions, which are COM objects that plug the extended functionality into Windows Explorer. Shell extensions can be in the form of shell extension handlers, toolbars or even namespace extensions that allow certain folders (or even non-filesystem objects such as the images scanned by a scanner) to be presented as a special folder. File Explorer also allows metadata for files to be added as NTFS alternate data streams, separate from the data stream for the file.
Shell extension handlers are queried by the shell beforehand for modifying the action the shell takes. They can be associated on a per file type – where they will show up only when a particular action takes place on a particular file type – or on a global basis – which are always available. The shell supports the following extension handlers:
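The per-file-type versus global association described above can be sketched as a toy dispatcher. This is plain Python standing in for the COM machinery, purely to show the lookup order (handlers registered under one extension run only for that type; handlers registered under "*" run for everything):

```python
class MiniShell:
    # Toy model of shell-extension dispatch, not actual COM: the shell
    # queries handlers registered for the file's type, then the global ones.
    def __init__(self):
        self.handlers = {}

    def register(self, ext, handler):
        # ext is an extension like ".txt", or "*" for a global handler.
        self.handlers.setdefault(ext, []).append(handler)

    def context_menu(self, filename):
        ext = "." + filename.rsplit(".", 1)[-1]
        per_type = self.handlers.get(ext, [])
        global_ = self.handlers.get("*", [])
        return [h(filename) for h in per_type + global_]

shell = MiniShell()
shell.register(".txt", lambda f: f"Edit {f}")          # only for .txt files
shell.register("*", lambda f: f"Properties of {f}")    # for every file
print(shell.context_menu("notes.txt"))
# ['Edit notes.txt', 'Properties of notes.txt']
print(shell.context_menu("pic.jpg"))
# ['Properties of pic.jpg']
```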
Namespace extensions are used by Explorer and Common Dialogs to either display some data – which are not necessarily persisted as files – in a folder-like view or to present data in a way that is different from their organization on the file system. This feature can be exploited by any hierarchical data source that can be represented as a file system like the Windows one, including cloud-based implementations. Special folders, such as My Computer and Network Places in Windows Explorer are implemented this way, as are Explorer views that let items in a mobile phone or digital camera be explored. Source-control systems that use Explorer to browse source repositories also use namespace extensions to allow Explorer to browse the revisions. To implement a namespace extension, the IPersistFolder, IShellView, IShellFolder, IShellBrowser and IOleWindow interfaces need to be implemented and registered. The implementation needs to provide the logic for navigating the data store as well as describing the presentation. Windows Explorer will instantiate the COM objects as required.
While Windows Explorer natively exposes the extensibility points as COM interfaces, .NET Framework can also be used to write some types of extensions, using the COM Interop functionality of .NET Framework. While Microsoft itself makes available extensions – such as the photo info tool – which are authored using .NET Framework, it currently recommends against writing managed shell extensions, as only one instance of the CLR (prior to version 4.0) can be loaded per process. This behavior will cause conflicts if multiple managed add-ins targeting different versions of the CLR are run simultaneously.
See also
Comparison of file managers
List of alternative shells for Windows
Notes and references
External links
Sullivan, Kent. "The Windows 95 User Interface: A Case Study in Usability Engineering" (1996) for Association for Computing Machinery. (Sullivan was a developer on the Windows 95 UI team)
How To Customize the Windows Explorer Views in Windows XP
MSDN: Creating Shell Extension Handlers, Windows Dev Center, May 31, 2018
The Complete Idiot's Guide to Writing Shell Extensions, by Michael Dunn, March 15, 2006
Namespace extensions – the undocumented Windows Shell, by Henk Devos, November 30, 1999
Disk image emulators
File managers for Microsoft Windows
File managers
FTP clients
Explorer
Windowing systems
Window managers
IBM 5520
The IBM 5520 Administrative System was a text-processing, electronic document-distribution and data-processing system, announced by IBM General Systems Division (GSD) in 1979.
Configuration
The system offered linked text-editing work stations that shared a storage unit and a central processor unit (IBM 5525), CRT-based display stations (IBM 5253 and 5254), a daisy-wheel printer (IBM 5257) and an ink-jet printer (IBM 5258). Depending on the model, from one to 18 display stations and from three to 12 printers could be attached. The processor unit used the same case as the IBM System/34 midrange computer.
Other systems, such as the 6670 Information Distributor, Office System/6, 6240 Mag Card Typewriter-Communicating and System/370, could be connected for electronic document distribution.
Market share
The New York Times quoted a technology analyst's view of the 5520 as "a competitive product in(to) a rapidly growing field."
While the Office System/6, introduced two years earlier by IBM Office Products Division (OPD), was focused on word processing, the new 5520 was intended to complement existing product lines with text editing and data processing power. The need for action became urgent because IBM was losing market share to companies such as Wang, which exploited the display (size), storage (magnetic cards) and computing (performance) weaknesses of IBM's established product lines through aggressive marketing, including direct comparisons.
See also
Wang WPS and OIS
References
IBM minicomputers
Computer-related introductions in 1979
System Deployment Image
A System Deployment Image (SDI) is a file format used primarily with Microsoft products to contain an arbitrary disk image, including boot sector information.
Description
The System Deployment Image (SDI) file format is often used to allow the use of a virtual disk for startup or booting. Some versions of Microsoft Windows allow for "RAM booting", which is essentially the ability to load an SDI file into memory and then boot from it. The SDI file format also lends itself to network booting using the Preboot Execution Environment (PXE). Another usage is hard disk imaging.
The SDI file itself is partitioned into the following sections:
Boot BLOB This contains the actual boot program, STARTROM.COM. This is analogous to the boot sector of a hard disk.
Load BLOB This typically contains NTLDR and is launched by the boot BLOB.
Part BLOB This contains the actual boot runtime (i.e. the contents of the disk image including any Operating System [OS] files) and also includes the boot.ini (used by NTLDR) and ntdetect.com files which should be located within the root directory of the runtime. The size of the runtime cannot exceed 500 MB. In addition to this requirement the runtime must also be capable of dealing with the fact that it is booting from a ramdisk. This implies that the runtime must include the "Windows RAM Disk Driver" component (specified within the boot.ini).
Disk BLOB This is a flat hard-disk image starting with an MBR. It is used for hard-drive imaging instead of booting. Only Disk BLOBs can be mounted with Microsoft's utilities.
An SDI file usually contains either a Disk BLOB alone (for hard-disk cloning or as a temporary SDI) or the other three BLOBs together (for a bootable SDI).
The Windows Vista and Windows PE 2.0 boot sequence includes a boot.sdi file, which contains a Part BLOB for an empty NTFS volume and a table-of-contents slot for the WIM image, which is stored in a separate on-disk file.
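The composition rules above can be summarized in a short sketch. This is not a real SDI parser: the BLOB names and the two valid layouts come from the description above, while the function name is illustrative.

```python
# Toy model of the SDI layouts described above (not a real SDI parser).
BOOT, LOAD, PART, DISK = "Boot", "Load", "Part", "Disk"

def classify_sdi(blobs):
    """Classify an SDI file by the set of BLOBs it contains."""
    kinds = set(blobs)
    if kinds == {DISK}:
        return "disk image (HD cloning or temporary SDI)"
    if kinds == {BOOT, LOAD, PART}:
        return "bootable SDI"
    return "unusual layout"

print(classify_sdi([DISK]))              # prints: disk image (HD cloning or temporary SDI)
print(classify_sdi([BOOT, LOAD, PART]))  # prints: bootable SDI
```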
SDI features
SDI driver
SDI files can be mounted as virtual disk drives and assigned a drive letter if an SDI driver is available. An SDI driver is a type of storage driver and is commonly used with Windows XP Embedded.
SDI management
Microsoft provides a tool called the "SDI File Manager" (sdimgr.exe) for manipulating SDI files. Tasks this tool facilitates include:
The creation of an SDI image file.
The creation of an SDI image file from an existing hard disk partition.
The verification of an existing SDI image.
SDI loader
The mechanism that allows for the creation, addition and removal of virtual disk drives. The SDI loader and driver work with the Disk BLOB.
See also
Windows Imaging Format
References
Computer file formats
Opteron
Opteron is AMD's former line of x86 server and workstation processors, and was the first processor family to support the AMD64 instruction set architecture (known generically as x86-64 or AMD64). It was released on April 22, 2003, with the SledgeHammer core (K8) and was intended to compete in the server and workstation markets, particularly in the same segment as the Intel Xeon processor. Processors based on the AMD K10 microarchitecture (codenamed Barcelona) were announced on September 10, 2007, featuring a new quad-core configuration. The most recently released Opteron CPUs are the Piledriver-based Opteron 4300 and 6300 series processors, codenamed "Seoul" and "Abu Dhabi" respectively.
In January 2016, the first ARMv8-A based Opteron-branded SoC was released, though it is unclear what, if any, heritage this Opteron-branded product line shares with the original Opteron technology other than intended use in the server space.
Technical description
Two key capabilities
Opteron combines two important capabilities in a single processor:
native execution of legacy x86 32-bit applications without speed penalties
native execution of x86-64 64-bit applications
The first capability is notable because at the time of Opteron's introduction, the only other 64-bit architecture marketed with 32-bit x86 compatibility (Intel's Itanium) ran x86 legacy-applications only with significant speed degradation. The second capability, by itself, is less noteworthy, as major RISC architectures (such as SPARC, Alpha, PA-RISC, PowerPC, MIPS) have been 64-bit for many years. In combining these two capabilities, however, the Opteron earned recognition for its ability to run the vast installed base of x86 applications economically, while simultaneously offering an upgrade-path to 64-bit computing.
The Opteron processor possesses an integrated memory controller supporting DDR SDRAM, DDR2 SDRAM or DDR3 SDRAM (depending on processor generation). This both reduces the latency penalty for accessing the main RAM and eliminates the need for a separate northbridge chip.
Multi-processor features
In multi-processor systems (more than one Opteron on a single motherboard), the CPUs communicate using the Direct Connect Architecture over high-speed HyperTransport links. Each CPU can access the main memory of another processor, transparent to the programmer. The Opteron approach to multi-processing is not the same as standard symmetric multiprocessing; instead of having one bank of memory for all CPUs, each CPU has its own memory. Thus the Opteron is a Non-Uniform Memory Access (NUMA) architecture. The Opteron CPU directly supports up to an 8-way configuration, which can be found in mid-level servers. Enterprise-level servers use additional (and expensive) routing chips to support more than 8 CPUs per box.
In a variety of computing benchmarks, the Opteron architecture demonstrated better multi-processor scaling than the Intel Xeon, which did not gain a point-to-point interconnect or integrated memory controllers until QPI and the Nehalem design. This is primarily because adding another Opteron processor increases memory bandwidth – not always the case for Xeon systems – and because the Opterons use a switched fabric rather than a shared bus. In particular, the Opteron's integrated memory controller allows the CPU to access local RAM very quickly. In contrast, multiprocessor Xeon system CPUs share only two common buses for both processor-processor and processor-memory communication. As the number of CPUs increases in a typical Xeon system, contention for the shared bus causes computing efficiency to drop. Intel migrated to a memory architecture similar to the Opteron's for the Intel Core i7 family of processors and their Xeon derivatives.
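The scaling argument can be illustrated with a toy model: with an integrated memory controller per socket, aggregate bandwidth grows with the socket count, while a shared front-side bus caps it. The numbers below are arbitrary illustrative units, not measured figures, and the function name is ours.

```python
def aggregate_bandwidth(sockets, per_controller_bw, shared_bus_bw=None):
    """Total memory bandwidth: per-socket controllers add up (NUMA-style
    Opteron), while a shared bus stays flat (classic shared-bus Xeon)."""
    if shared_bus_bw is not None:        # shared-bus model
        return shared_bus_bw
    return sockets * per_controller_bw   # integrated-controller model

# NUMA: bandwidth grows with socket count; shared bus: it is flat.
numa = [aggregate_bandwidth(s, per_controller_bw=10) for s in (1, 2, 4, 8)]
bus = [aggregate_bandwidth(s, per_controller_bw=10, shared_bus_bw=10) for s in (1, 2, 4, 8)]
print(numa)  # [10, 20, 40, 80]
print(bus)   # [10, 10, 10, 10]
```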
Multi-core Opterons
In April 2005, AMD introduced its first multi-core Opterons. At the time, AMD's use of the term multi-core in practice meant dual-core; each physical Opteron chip contained two processor cores. This effectively doubled the computing performance available to each motherboard processor socket. One socket could then deliver the performance of two processors, two sockets could deliver the performance of four processors, and so on. Because motherboard costs increase dramatically as the number of CPU sockets increase, multicore CPUs enable a multiprocessing system to be built at lower cost.
AMD's model number scheme has changed somewhat in light of its new multicore lineup. At the time of its introduction, AMD's fastest multicore Opteron was the model 875, with two cores running at 2.2 GHz each. AMD's fastest single-core Opteron at this time was the model 252, with one core running at 2.6 GHz. For multithreaded applications, or many single threaded applications, the model 875 would be much faster than the model 252.
Second-generation Opterons are offered in three series: the 1000 Series (single socket only), the 2000 Series (dual socket-capable), and the 8000 Series (quad or octo socket-capable). The 1000 Series uses the AM2 socket. The 2000 Series and 8000 Series use Socket F.
AMD announced its third-generation quad-core Opteron chips on September 10, 2007
with hardware vendors announcing servers in the following month. Based on a core design codenamed Barcelona, new power and thermal management techniques were planned for the chips. Earlier dual core DDR2 based platforms were upgradeable to quad core chips.
The fourth generation was announced in June 2009 with the Istanbul hexa-cores. It introduced HT Assist, an additional directory for data location, reducing the overhead for probing and broadcasts. HT Assist uses 1 MB L3 cache per CPU when activated.
In March 2010 AMD released the Magny-Cours Opteron 6100 series CPUs for Socket G34. These are 8- and 12-core multi-chip module CPUs consisting of two four or six-core dies with a HyperTransport 3.1 link connecting the two dies. These CPUs updated the multi-socket Opteron platform to use DDR3 memory and increased the maximum HyperTransport link speed from 2.40 GHz (4.80 GT/s) for the Istanbul CPUs to 3.20 GHz (6.40 GT/s).
AMD changed the naming scheme for its Opteron models. Opteron 4000 series CPUs on Socket C32 (released July 2010) are dual-socket capable and are targeted at uniprocessor and dual-processor uses. The Opteron 6000 series CPUs on Socket G34 are quad-socket capable and are targeted at high-end dual-processor and quad-processor applications.
Socket 939
AMD released Socket 939 Opterons, reducing the cost of motherboards for low-end servers and workstations. Except that they have 1 MB of L2 cache (versus 512 KB for the Athlon 64), the Socket 939 Opterons are identical to the San Diego- and Toledo-core Athlon 64s, but run at lower clock speeds than the cores are capable of, making them more stable.
Socket AM2
Socket AM2 Opterons are available for servers that only have a single-chip setup. Codenamed Santa Ana, rev. F dual core AM2 Opterons feature 2 × 1 MB L2 cache, unlike the majority of their Athlon 64 X2 cousins which feature 2 × 512 KB L2 cache. These CPUs are given model numbers ranging from 1210 to 1224.
Socket AM2+
AMD introduced three quad-core Opterons on Socket AM2+ for single-CPU servers in 2007. These CPUs are produced on a 65 nm manufacturing process and are similar to the Agena Phenom X4 CPUs. The Socket AM2+ quad-core Opterons are code-named "Budapest." The Socket AM2+ Opterons carry model numbers of 1352 (2.10 GHz), 1354 (2.20 GHz), and 1356 (2.30 GHz.)
Socket AM3
AMD introduced three quad-core Opterons on Socket AM3 for single-CPU servers in 2009. These CPUs are produced on a 45 nm manufacturing process and are similar to the Deneb-based Phenom II X4 CPUs. The Socket AM3 quad-core Opterons are code-named "Suzuka." These CPUs carry model numbers of 1381 (2.50 GHz), 1385 (2.70 GHz), and 1389 (2.90 GHz.)
Socket AM3+
Socket AM3+ was introduced in 2011 and is a modification of AM3 for the Bulldozer microarchitecture. Opteron CPUs in the AM3+ package are named Opteron 3xxx.
Socket F
Socket F (LGA 1207 contacts) is AMD’s second generation of Opteron socket. This socket supports processors such as the Santa Rosa, Barcelona, Shanghai, and Istanbul codenamed processors. The “Lidded land grid array” socket adds support for DDR2 SDRAM and improved HyperTransport version 3 connectivity. Physically the socket and processor package are nearly identical, although not generally compatible with socket 1207 FX.
Socket G34
Socket G34 (LGA 1944 contacts) is one of the third generation of Opteron sockets, along with Socket C32. This socket supports Magny-Cours Opteron 6100, Bulldozer-based Interlagos Opteron 6200, and Piledriver-based "Abu Dhabi" Opteron 6300 series processors. This socket supports four channels of DDR3 SDRAM (two per CPU die). Unlike previous multi-CPU Opteron sockets, Socket G34 CPUs will function with unbuffered ECC or non-ECC RAM in addition to the traditional registered ECC RAM.
Socket C32
Socket C32 (LGA 1207 contacts) is the other member of the third generation of Opteron sockets. This socket is physically similar to Socket F but is not compatible with Socket F CPUs. Socket C32 uses DDR3 SDRAM and is keyed differently so as to prevent the insertion of Socket F CPUs that can use only DDR2 SDRAM. Like Socket G34, Socket C32 CPUs will be able to use unbuffered ECC or non-ECC RAM in addition to registered ECC SDRAM.
Micro-architecture update
The Opteron line saw an update with the implementation of the AMD K10 microarchitecture. New processors, launched in the third quarter of 2007 (codename Barcelona), incorporate a variety of improvements, particularly in memory prefetching, speculative loads, SIMD execution and branch prediction, yielding an appreciable performance improvement over K8-based Opterons, within the same power envelope.
In 2007 AMD introduced a scheme to characterize the power consumption of new processors under "average" daily usage, named average CPU power (ACP).
Socket FT3
The Opteron X1150 and Opteron X2150 APU are used with the BGA-769 or Socket FT3.
Features
CPUs
x86 CPU features table
APUs
APU features table
Models
For Socket 940 and Socket 939 Opterons, each chip has a three-digit model number, in the form Opteron XYY. For Socket F and Socket AM2 Opterons, each chip has a four-digit model number, in the form Opteron XZYY. For all first, second, and third-generation Opterons, the first digit (the X) specifies the number of CPUs on the target machine:
1 – Designed for uniprocessor systems
2 – Designed for dual-processor systems
8 – Designed for systems with 4 or 8 processors
For Socket F and Socket AM2 Opterons, the second digit (the Z) represents the processor generation. Presently, only 2 (dual-core, DDR2), 3 (quad-core, DDR2) and 4 (six-core, DDR2) are used.
Socket C32 and G34 Opterons use a new four-digit numbering scheme. The first digit refers to the number of CPUs in the target machine:
4 – Designed for uniprocessor and dual-processor systems.
6 – Designed for dual-processor and four-processor systems.
Like the previous second and third generation Opterons, the second number refers to the processor generation. "1" refers to AMD K10-based units (Magny-Cours and Lisbon), "2" refers to the Bulldozer-based Interlagos, Valencia, and Zurich-based units, and "3" refers to the Piledriver-based Abu Dhabi, Seoul, and Delhi-based units.
For all Opterons, the last two digits in the model number (the YY) indicate the clock frequency of a CPU, with a higher number indicating a higher clock frequency. This speed indication is comparable between processors of the same generation only if they have the same number of cores; single-core and dual-core processors carry different indications despite sometimes having the same clock frequency.
The suffix HE or EE indicates a high-efficiency/energy-efficiency model having a lower TDP than a standard Opteron. The suffix SE indicates a top-of-the-line model having a higher TDP than a standard Opteron.
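The numbering rules above lend themselves to a small decoder. The sketch below encodes only the mappings stated in this section; the function and dictionary names are illustrative.

```python
# Illustrative decoder for the Opteron model-number schemes described above.
TARGET = {"1": "uniprocessor systems", "2": "dual-processor systems",
          "8": "4- or 8-processor systems",
          "4": "uni/dual-processor systems (Socket C32)",
          "6": "dual/quad-processor systems (Socket G34)"}
GEN_DDR2 = {"2": "dual-core, DDR2", "3": "quad-core, DDR2", "4": "six-core, DDR2"}
GEN_NEW = {"1": "AMD K10 (Magny-Cours/Lisbon)", "2": "Bulldozer", "3": "Piledriver"}
SUFFIX = {"HE": "high-efficiency (lower TDP)", "EE": "energy-efficient (lower TDP)",
          "SE": "top-of-the-line (higher TDP)"}

def decode_opteron(model):
    """Decode a model string such as '252', '2218' or '6176 SE'."""
    number, _, suffix = model.partition(" ")
    info = {"target": TARGET[number[0]],
            "speed_grade": number[-2:]}  # higher YY = higher clock in a generation
    if len(number) == 4:  # four-digit schemes carry a generation digit
        gen_map = GEN_NEW if number[0] in "46" else GEN_DDR2
        info["generation"] = gen_map[number[1]]
    if suffix:
        info["power_band"] = SUFFIX[suffix]
    return info

print(decode_opteron("252"))      # single-core era, dual-processor systems
print(decode_opteron("6176 SE"))  # Socket G34, K10 generation, higher-TDP part
```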
Starting with the 65 nm fabrication process, Opteron codenames have been based on Formula 1 host cities; AMD had a long-term sponsorship with Ferrari, F1's most successful team.
Opteron (130 nm SOI)
Single-core – SledgeHammer (1yy, 2yy, 8yy)
CPU-Steppings: B3, C0, CG
L1-Cache: 64 + 64 KB (Data + Instructions)
L2-Cache: 1024 KB, full speed
MMX, Extended 3DNow!, SSE, SSE2, AMD64
Socket 940, 800 MHz HyperTransport
Registered DDR SDRAM required, ECC possible
VCore: 1.50 V – 1.55 V
Max Power (TDP): 89 W
First Release: April 22, 2003
Clockrate: 1.4–2.4 GHz (x40 – x50)
Opteron (90 nm SOI, DDR)
Single-core – Venus (1yy), Troy (2yy), Athens (8yy)
CPU-Steppings: E4
L1-Cache: 64 + 64 KB (Data + Instructions)
L2-Cache: 1024 KB, full speed
MMX, Extended 3DNow!, SSE, SSE2, SSE3, AMD64
Socket 940, 800 MHz HyperTransport
Socket 939/Socket 940, 1000 MHz HyperTransport
Registered DDR SDRAM required for socket 940, ECC possible
VCore: 1.35 V – 1.4 V
Max power (TDP): 95 W
NX Bit
64-bit segment limit checks for VMware-style binary-translation virtualization.
Optimized Power Management (OPM)
First Release: December 2004
Clockrate: 1.6 – 3.0 GHz (x42 – x56)
Dual-core – Denmark (1yy), Italy (2yy), Egypt (8yy)
CPU-Steppings: E1, E6
First Release: April 2005
Clockrate: 1.6–2.8 GHz (x60, x65, x70, x75, x80, x85, x90)
Socket 939/Socket 940, 1000 MHz HyperTransport
NX bit
Opteron (90 nm SOI, DDR2)
Dual-core – Santa Ana (12yy), Santa Rosa (22yy, 82yy)
CPU steppings: F2, F3
L1-Cache: 64 + 64 KB (Data + Instructions)
L2-Cache: 2 × 1024 KB, full speed
MMX, Extended 3DNow!, SSE, SSE2, SSE3, AMD64
Socket F, 1000 MHz HyperTransport – Opteron 22yy, 82yy
Socket AM2, 1000 MHz HyperTransport – Opteron 12yy
VCore: 1.35 V
Max power (TDP): 95 W
NX bit
AMD-V Virtualization
Optimized Power Management (OPM)
First release: ?????? 2006
Clockrate: 1.8–3.2 GHz (xx10, xx12, xx14, xx16, xx18, xx20, xx22, xx24)
Opteron (65 nm SOI)
Quad-core – Barcelona (23xx, 83xx) 2360/8360 and below, Budapest (13yy) 1356 and below
CPU steppings: BA, B3
L1-Cache: 64 + 64 KB (Data + Instructions) per core
L2-Cache: 512 KB, full speed per core
L3-Cache: 2048 KB, shared
MMX, Extended 3DNow!, SSE, SSE2, SSE3, AMD64, SSE4a, ABM
Socket F, Socket AM2+, HyperTransport 3.0 (1.6 GHz-2 GHz)
Registered DDR2 SDRAM required, ECC possible
VCore: 1.2 V
Max power (TDP): 95 Watts
NX bit
2nd-generation AMD-V Virtualization with Rapid Virtualization Indexing (RVI)
Split power plane dynamic power management
First release: September 10, 2007
Clockrate: 1.7–2.5 GHz
Opteron (45 nm SOI)
Quad-core – Shanghai (23xx, 83xx) 2370/8370 and above, Suzuka (13yy) 1381 and above
CPU-Steppings: C2
L3-Cache: 6 MB, shared
Clockrate: 2.3–2.9 GHz
HyperTransport 1.0, 3.0
20% reduction in idle power consumption
support for DDR2 800 MHz memory (Socket F)
support for DDR3 1333 MHz memory (Socket AM3)
6-core – Istanbul (24xx, 84xx)
Released June 1, 2009.
CPU-Steppings: D0
L3-Cache: 6 MB, shared
Clockrate: 2.2–2.8 GHz
HyperTransport 3.0
HT Assist
support for DDR2 800 MHz memory
8-core – Magny-Cours MCM (6124–6140)
Released March 29, 2010.
CPU-Steppings: D1
Multi-chip module consisting of two quad-core dies
L2-Cache, 8 × 512 KB
L3-Cache: 2 × 6 MB, shared
Clockrate: 2.0–2.6 GHz
Four HyperTransport 3.1 at 3.2 GHz (6.40 GT/s)
HT Assist
support for DDR3 1333 MHz memory
Socket G34
12-core – Magny-Cours MCM (6164-6180SE)
Released March 29, 2010
CPU-Steppings: D1
Multi-chip module consisting of two hex-core dies
L2-Cache, 12 × 512 KB
L3-Cache: 2 × 6 MB, shared
Clockrate: 1.7–2.5 GHz
Four HyperTransport 3.1 links at 3.2 GHz (6.40 GT/s)
HT Assist
support for DDR3 1333 MHz memory
Socket G34
Quad-core – Lisbon (4122, 4130)
Released June 23, 2010
CPU-Steppings: D0
L3-Cache: 6 MB
Clockrate: 2.2 GHz (4122), 2.6 GHz (4130)
Two HyperTransport links at 3.2 GHz (6.40 GT/s)
HT Assist
Support for DDR3-1333 memory
Socket C32
Hex-core – Lisbon (4162-4184)
Released June 23, 2010
CPU-Steppings: D1
L3-Cache: 6 MB
Clockrate: 1.7-2.8 GHz
Two HyperTransport links at 3.2 GHz (6.40 GT/s)
HT Assist
Support for DDR3-1333 memory
Socket C32
Opteron (32 nm SOI) – First Generation Bulldozer Microarchitecture
Quad-core – Zurich (3250-3260)
Released March 20, 2012.
CPU-Steppings: B2
Single processor Bulldozer module
L2-Cache: 2 × 2 MB
L3-Cache: 4 MB
Clockrate: 2.5 GHz (3250) – 2.7 GHz (3260)
HyperTransport 3 (5.2 GT/s)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, up to 3.5 GHz (3250), up to 3.7 GHz (3260)
Supports uniprocessor configurations only
Socket AM3+
Eight-core – Zurich (3280)
Released March 20, 2012.
CPU-Steppings: B2
Single processor Bulldozer module
L2-Cache: 4 × 2 MB
L3-Cache: 8MB
Clockrate: 2.4 GHz
HyperTransport 3 (5.2 GT/s)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, up to 3.5 GHz
Supports uniprocessor configurations only
Socket AM3+
6-core – Valencia (4226-4238)
Released November 14, 2011.
CPU-Steppings: B2
Single die consisting of three dual-core Bulldozer modules
L2-Cache: 6 MB
L3-Cache: 8 MB, shared
Clockrate: 2.7-3.3 GHz (up to 3.1-3.7 GHz with Turbo CORE)
Two HyperTransport 3.1 at 3.2 GHz (6.40 GT/s)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support
Supports up to dual-processor configurations
Socket C32
8-core – Valencia (4256 HE-4284)
Released November 14, 2011.
CPU-Steppings: B2
Single die consisting of four dual-core Bulldozer modules
L2-Cache: 8 MB
L3-Cache: 8 MB, shared
Clockrate: 1.6-3.0 GHz (up to 3.0-3.7 GHz with Turbo CORE)
Two HyperTransport 3.1 at 3.2 GHz (6.40 GT/s)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support
Supports up to dual-processor configurations
Socket C32
Quad-core – Interlagos MCM (6204)
Released November 14, 2011.
CPU-Steppings: B2
Multi-chip module consisting of two dies, each with one dual-core Bulldozer module
L2-Cache: 2 × 2 MB
L3-Cache: 2 × 8 MB, shared
Clockrate: 3.3 GHz
HyperTransport 3 at 3.2 GHz (6.40 GT/s)
HT Assist
support for DDR3 1866 MHz memory
Does not support Turbo CORE
Supports up to quad-processor configurations
Socket G34
8-core – Interlagos (6212, 6220)
Released November 14, 2011.
CPU-Steppings: B2
Multi-chip module consisting of two dies, each with two dual-core Bulldozer modules
L2-Cache: 2 × 4 MB
L3-Cache: 2 × 8 MB, shared
Clockrate: 2.6, 3.0 GHz (up to 3.2 and 3.6 GHz with Turbo CORE)
Four HyperTransport 3.1 at 3.2 GHz (6.40 GT/s)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support
Supports up to quad-processor configurations
Socket G34
12-core – Interlagos (6234, 6238)
Released November 14, 2011.
CPU-Steppings: B2
Multi-chip module consisting of two dies, each with three dual-core Bulldozer modules
L2-Cache: 2 × 6 MB
L3-Cache: 2 × 8 MB, shared
Clockrate: 2.4, 2.6 GHz (up to 3.1 and 3.3 GHz with Turbo CORE)
Four HyperTransport 3.1 at 3.2 GHz (6.40 GT/s)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support
Supports up to quad-processor configurations
Socket G34
16-core – Interlagos (6262 HE-6284 SE)
Released November 14, 2011.
CPU-Steppings: B2
Multi-chip module consisting of two dies, each with four dual-core Bulldozer modules
L2-Cache: 2 × 8 MB
L3-Cache: 2 × 8 MB, shared
Clockrate: 1.6-2.7 GHz (up to 2.9-3.5 GHz with Turbo CORE)
Four HyperTransport 3.1 at 3.2 GHz (6.40 GT/s)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support
Supports up to quad-processor configurations
Socket G34
Opteron (32 nm SOI) – Piledriver microarchitecture
Quad-core – Delhi (3320 EE, 3350 HE)
Released December 4, 2012.
CPU-Steppings: C0
Single die consisting of two Piledriver modules
L2-Cache: 2 × 2 MB
L3-Cache: 8 MB, shared
Clockrate: 1.9 GHz (3320 EE) – 2.8 GHz (3350 HE)
1 × HyperTransport 3 (5.2 GT/s per link)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, up to 2.5 GHz (3320 EE), up to 3.8 GHz (3350 HE)
Supports uniprocessor configurations only
Socket AM3+
Eight-core – Delhi (3380)
Released December 4, 2012.
CPU-Steppings: C0
Single die consisting of four Piledriver modules
L2-Cache: 4 × 2 MB
L3-Cache: 8 MB, shared
Clockrate: 2.6 GHz
1 × HyperTransport 3 (5.2 GT/s per link)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, up to 3.6 GHz
Supports uniprocessor configurations only
Socket AM3+
4-core – Seoul (4310 EE)
Released December 4, 2012
CPU-Steppings: C0
Single die consisting of two Piledriver modules
L2-Cache: 2 × 2 MB
L3-Cache: 8 MB, shared
Clockrate: 2.2 GHz
2 × HyperTransport 3.1 at 3.2 GHz (6.40 GT/s per link)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, up to 3.0 GHz
Supports up to dual-processor configurations
Socket C32
6-core – Seoul (4332 HE – 4340)
Released December 4, 2012
CPU-Steppings: C0
Single die consisting of three Piledriver modules
L2-Cache: 3 × 2 MB
L3-Cache: 8 MB, shared
Clockrate: 3.0 GHz (4332 HE) – 3.5 GHz (4340)
2 × HyperTransport 3.1 at 3.2 GHz (6.40 GT/s per link)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, from 3.5 GHz (4334) to 3.8 GHz (4340)
Supports up to dual-processor configurations
Socket C32
8-core – Seoul (4376 HE and above)
Released December 4, 2012
CPU-Steppings: C0
Single die consisting of four Piledriver modules
L2-Cache: 4 × 2 MB
L3-Cache: 8 MB, shared
Clockrate: 2.6 GHz (4376 HE) – 3.1 GHz (4386)
2 × HyperTransport 3.1 at 3.2 GHz (6.40 GT/s per link)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, from 3.6 GHz (4376 HE) to 3.8 GHz (4386)
Supports up to dual-processor configurations
Socket C32
Quad-core – Abu Dhabi MCM (6308)
Released November 5, 2012.
CPU-Steppings: C0
Multi-chip module consisting of two dies, each with one Piledriver module
L2-Cache: 2 MB per die (4 MB total)
L3-Cache: 2 × 8 MB, shared within each die
Clockrate: 3.5 GHz
4 × HyperTransport 3.1 at 3.2 GHz (6.40 GT/s per link)
HT Assist
support for DDR3 1866 MHz memory
Does not support Turbo CORE
Supports up to quad-processor configurations
Socket G34
Eight-core – Abu Dhabi MCM (6320, 6328)
Released November 5, 2012.
CPU-Steppings: C0
Multi-chip module consisting of two dies, each with two Piledriver modules
L2-Cache: 2 × 2 MB per die (8 MB total)
L3-Cache: 2 × 8 MB, shared within each die
Clockrate: 2.8 GHz (6320) – 3.2 GHz (6328)
4 × HyperTransport 3.1 at 3.2 GHz (6.40 GT/s per link)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, from 3.3 GHz (6320) to 3.8 GHz (6328)
Supports up to quad-processor configurations
Socket G34
12-core – Abu Dhabi MCM (6344, 6348)
Released November 5, 2012.
CPU-Steppings: C0
Multi-chip module consisting of two dies, each with three Piledriver modules
L2-Cache: 3 × 2 MB per die (12 MB total)
L3-Cache: 2 × 8 MB, shared within each die
Clockrate: 2.6 GHz (6344) – 2.8 GHz (6348)
4 × HyperTransport 3.1 at 3.2 GHz (6.40 GT/s per link)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, from 3.2 GHz (6344) to 3.4 GHz (6348)
Supports up to quad-processor configurations
Socket G34
16-core – Abu Dhabi MCM (6366 HE and above)
Released November 5, 2012.
CPU-Steppings: C0
Multi-chip module consisting of two dies, each with four Piledriver modules
L2-Cache: 4 × 2 MB per die (16 MB total)
L3-Cache: 2 × 8 MB, shared within each die
Clockrate: 1.8 GHz (6366 HE) – 2.8 GHz (6386 SE)
4 × HyperTransport 3.1 at 3.2 GHz (6.40 GT/s per link)
HT Assist
support for DDR3 1866 MHz memory
Turbo CORE support, from 3.1 GHz (6366 HE) to 3.5 GHz (6386 SE)
Supports up to quad-processor configurations
Socket G34
Opteron X (28 nm bulk) – Jaguar microarchitecture
Quad-core – Kyoto (X1150)
Released May 29, 2013
Single SoC with one Jaguar module and integrated I/O
Configurable CPU frequency and TDP
L2 Cache: 2MB shared
CPU frequency: 1.0–2.0 GHz
Max. TDP: 9–17W
Support for DDR3-1600 memory
Socket FT3
Quad-core APU – Kyoto (X2150)
Released May 29, 2013
Single SoC with one Jaguar module, integrated GCN GPU and I/O
Configurable CPU/GPU frequency and TDP
L2 Cache: 2MB shared
CPU frequency: 1.1–1.9 GHz
GPU frequency: 266–600 MHz
GPU cores: 128
Max. TDP: 11–22W
Support for DDR3-1600 memory
Socket FT3
Opteron A (28 nm) – ARM Cortex-A57 ARM microarchitecture
A1100-series
The Opteron A1100-series "Seattle" (28 nm) are SoCs based on ARM Cortex-A57 cores that use the ARMv8-A instruction set. They were first released in January 2016.
Cores: 4–8
Frequency: 1.7–2.0 GHz
L2 Cache: 2 MB (4 core) or 4 MB (8 core)
L3 Cache: 8 MB
Thermal Design Power: 25 W (4 core) or 32 W (8 core)
Up to 64 GB DDR3L-1600 and up to 128GB DDR4-1866 with ECC
SoC peripherals include 14 × SATA 3, 2 × integrated 10 GbE LAN, and eight PCI Express lanes in ×8, ×4, and ×2 configurations
Opteron X (28 nm bulk) – Excavator microarchitecture
Released June, 2017
Dual-core – Toronto (X3216)
L2-Cache: 1 MB
CPU frequency: 1.6 GHz
Turbo CORE support, 3.0 GHz
GPU frequency: 800 MHz
TDP: 12-15W
support for DDR4 1600 MHz memory
Quad-core – Toronto (X3418 & X3421)
L2-Cache: 2 × 1 MB
CPU frequency: 1.8 GHz - 2.1 GHz
Turbo CORE support, 3.2 GHz - 3.4GHz
GPU frequency: 800 MHz
TDP: 12-35W
support for DDR4 2400 MHz memory
Supercomputers
Opteron processors first appeared in the top 100 systems of the fastest supercomputers in the world list in the early 2000s. By the summer of 2006, 21 of the top 100 systems used Opteron processors, and in the November 2010 and June 2011 lists the Opteron reached its maximum representation of 33 of the top 100 systems. The number of Opteron-based systems decreased fairly rapidly after this peak, falling to 3 of the top 100 systems by November 2016, and in November 2017 only one Opteron-based system remained.
Several supercomputers using only Opteron processors were ranked in the top 10 systems between 2003 and 2015, notably:
Red Storm – Sandia National Laboratories – system in November 2006.
Jaguar – Oak Ridge National Laboratory – various configurations held top 10 positions between 2005 and 2011, including in November 2009 and June 2010.
Ranger – Texas Advanced Computing Center – system in June 2008.
Kraken – National Institute for Computational Sciences – system in November 2009.
Hopper – National Energy Research Scientific Computing Center – system in November 2010.
Other top 10 systems using a combination of Opteron processors and compute accelerators have included:
IBM Roadrunner – Los Alamos National Laboratory – system in 2008. Composed of Opteron processors with IBM PowerXCell 8i co-processors.
The only system remaining on the list (as of November 2017), also using Opteron processors combined with compute accelerators:
Titan – Oak Ridge National Laboratory – system in 2012, as of November 2017. Composed of Opteron processors with Nvidia Kepler-based GPU accelerators.
Issues
Opteron without Optimized Power Management
AMD released some Opteron processors without Optimized Power Management (OPM) support, which use DDR memory. The following table describes those processors without OPM.
Opteron recall (2006)
AMD recalled some E4 stepping-revision single-core Opteron processors, including ×52 (2.6 GHz) and ×54 (2.8 GHz) models which use DDR memory. The following table describes affected processors, as listed in AMD Opteron ×52 and ×54 Production Notice of 2006.
The affected processors may produce inconsistent results if three specific conditions occur simultaneously:
The execution of floating point-intensive code sequences
Elevated processor temperatures
Elevated ambient temperatures
A software verification tool for identifying the AMD Opteron processors listed in the above table that may be affected under these specific conditions is available only to AMD OEM partners. AMD will replace affected processors at no charge.
Recognition
In the February 2010 issue of Custom PC (a UK-based computing magazine focused on PC hardware), the AMD Opteron 144 (released in Summer 2005) appeared in the "Hardware Hall of Fame". It was described as "The best overclocker's CPU ever made" due to its low cost and ability to run at speeds far beyond its stock speed. (According to Custom PC, it could run at "close to 3 GHz on air".)
See also
List of AMD Opteron microprocessors
TDP power cap
References
External links
Official Opteron homepage
AMD Technical Docs
AMD K8 Opteron technical specifications
AMD K8 Dual Core Opteron technical specifications
Interactive AMD Opteron rating and product ID guide
Understanding the Detailed Architecture of AMD's 64 bit Core
Comparison between Xeon and Opteron processor performance
AMD: dual-core Opteron to 3 GHz
Advanced Micro Devices x86 microprocessors
64-bit microprocessors
PC Open Architecture Developers' Group
PC Open Architecture Developers' Group (OADG, Japanese: ) is a consortium of the major Japanese personal computer manufacturers. Sponsored by IBM during the 1990s, it successfully guided Japan's personal computer manufacturing companies at that time into standardising to an IBM PC-compatible and open architecture.
History
Before the advent of the IBM PC in 1981 in the United States, there were many different varieties and designs of personal computer. Examples from that era include the Tandy/RadioShack and Commodore machines. These machines were each based upon a different computer architecture, and the software programs that ran on them were compatible only with the machine they had been designed for. In Japan, except for the MSX, this situation continued well into the early 1990s, because three of Japan's major electronics manufacturers (NEC, Sharp and Fujitsu) had also designed their own unique personal computers; NEC, with its PC-9801, was at that time the most successful.
The American computer manufacturer IBM had entered the Japanese market with its own IBM 5550 computer. Japanese-language-capable computers at the time, however, had special requirements in terms of processor capability and screen size, and IBM's JX project, emphasizing compatibility with the IBM PC, enjoyed limited success. The whole situation was felt by many to be hindering the healthy growth of the Japanese computer industry, particularly since domestic and overseas software vendors had to develop, test and support many different software programs to run on the many different kinds of personal computers sold in Japan.
IBM developed the operating software DOS/V in Japan, and licensed it to other Japanese PC manufacturers. To promote the IBM PC architecture on which DOS/V worked, IBM sponsored a consortium which was named the PC Open Architecture Developers' Group (OADG) in 1991 and made public its internal architecture and interfaces. At the height of this enterprise, the consortium included amongst its members the major Japanese PC manufacturers, such as Toshiba and Hitachi, and overseas manufacturers such as Acer of Taiwan and Dell of the United States. Together, they not only strove to develop a unified architecture, but also produced a number of DOS/V-compatible application software programs and participated in the major computer shows. By the time Microsoft's computer operating system Windows 95 had arrived in 1995, the IBM PC architecture, using DOS/V, was already a predominant force in Japan.
Members
In 2003, membership included the following companies:
Sharp Corporation
Sony Corporation
Toshiba Corporation
IBM Japan
Hitachi
Fujitsu
Panasonic Corporation
See also
AX consortium
OS/2
NEC PC-98
FM Towns
Toshiba J-3100
MSX
References
External links
PC Open Architecture Developers' Group (former official web site)
Free Standards Group OADG is a member of the Free Standards Group.
Personal computers
IBM PC compatibles
Format (command)
In computing, format is a command-line utility that carries out disk formatting. It is a component of various operating systems, including 86-DOS, MS-DOS, IBM PC DOS and OS/2, Microsoft Windows and ReactOS.
Overview
The command performs the following actions by default on a floppy disk, hard disk drive, solid-state (USB) drive, or other writable medium (it will not perform these actions on optical media):
clearing the FAT entries by changing them to
clearing the FAT root directory by changing any values found to
checking each cluster to see if it is good or bad and marking it as good or bad in the FAT
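The three default actions can be sketched in a simplified model. The in-memory layout and marker values below are illustrative assumptions for the sketch, not the real on-disk FAT structures:

```python
# Simplified, illustrative model of FORMAT's three default actions.
# FREE and BAD are hypothetical marker values standing in for the
# real FAT entry encodings.

FREE = 0x000   # entry value for a free cluster
BAD = 0xFF7    # entry value for a cluster that failed the surface check

def full_format(fat, root_dir, bad_clusters):
    # Step 1: clear the FAT entries
    for i in range(len(fat)):
        fat[i] = FREE
    # Step 2: clear the root directory entries
    for i in range(len(root_dir)):
        root_dir[i] = 0
    # Step 3: mark any clusters found bad during the media check
    for c in bad_clusters:
        fat[c] = BAD

fat = [0x123] * 16    # pretend the FAT still holds old cluster chains
root = [0xE5] * 32    # pretend the root directory holds old entries
full_format(fat, root, bad_clusters=[5])
print(fat[5] == BAD, all(v == 0 for v in root))  # True True
```

A "Quick Format" in this model would simply skip steps 2 and 3, which is why previously written data survives it.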
Any storage device must have its medium structured to be useful. This process is referred to as "creating a filesystem" in Unix, Linux, or BSD. Under these systems different commands are used. The commands can create many kinds of file systems, including those used by DOS, Windows, and OS/2.
Implementations
The command is also available in Intel ISIS-II, iRMX 86, MetaComCo TRIPOS, AmigaDOS, Zilog Z80-RIO, Microware OS-9, DR FlexOS, TSL PC-MOS, SpartaDOS X, Datalight ROM-DOS, IBM/Toshiba 4690 OS, PTS-DOS, SISNE plus, and in the DEC RT-11 operating system.
Microsoft DOS and Windows
On MS-DOS, the command is available in versions 1 and later.
Optionally (by adding the /S, for "system" switch), format can also install a Volume Boot Record. With this option, Format writes bootstrap code to the first sector of the volume (and possibly elsewhere as well). Format always writes a BIOS Parameter Block to the first sector, with or without the /S option.
Another option (/Q) allows for what Microsoft calls "Quick Format". With this option the command will not perform steps 2 and 3 above. Format /Q does not alter data previously written to the media.
Typing "format" with no parameters in MS-DOS 3.2 or earlier would automatically, without prompting the user, format the current drive; however in MS-DOS 3.3 and later it would simply produce the error: "required parameter missing".
DR/Novell DOS
DR DOS 6.0 includes an implementation of the command.
FreeDOS
The FreeDOS version was developed by Brian E. Reifsnyder and is licensed under the GPL.
ReactOS
The ReactOS implementation is based on a free clone developed by Mark Russinovich for Sysinternals in 1998. It is licensed under the GPL.
It was adapted to ReactOS by Emanuele Aliberti in 1999 and supports FAT, FAT32, FATX, EXT2, and BtrFS filesystems.
See also
Disk formatting
Data recovery
convert
File Allocation Table
Design of the FAT file system
fdisk
PC DOS 7.10 Format32
Notes
References
Further reading
External links
Microsoft Windows XP Professional Product Documentation: "format"
Open source FORMAT implementation that comes with MS-DOS v2.0
MSKB255867: How to Use the Fdisk Tool and the Format Tool to Partition or Repartition a Hard Disk
Microsoft DOS format command
Recovery Console format command
External DOS commands
Hard disk software
Microsoft free software
MSX-DOS commands
OS/2 commands
Windows commands
UserLAnd Technologies
UserLAnd Technologies is a free and open-source ad-free compatibility layer mobile app that allows Linux distributions, computer programs, computer games and numerical computing programs to run on mobile devices without requiring a root account. UserLAnd also provides a program library of popular free and open-source Linux-based programs to which additional programs and different versions of programs can be added.
Overview
Unlike other Linux compatibility layer mobile apps, UserLAnd does not require a root account. Because it runs without root access (obtained through a process known as "rooting"), UserLAnd avoids the risk of "bricking" the device, that is, rendering it non-functional, a condition that may also void the device's warranty. Furthermore, the requirement that other programs "root" the mobile device has proven a formidable challenge for inexperienced Linux users. A prior application, GNURoot Debian, attempted to similarly run Linux programs on mobile devices, but it is no longer maintained and, therefore, no longer operational.
UserLAnd allows those with a mobile device to run Linux programs, many of which aren't available as mobile apps. Even for those Linux applications, e.g. Firefox, which have mobile versions available, people often find that their user experience with these mobile versions pales in comparison with the desktop versions. UserLAnd allows its users to recreate that desktop experience on their mobile device.
UserLAnd currently only operates on Android mobile devices. UserLAnd is available for download on Google Play and F-Droid.
Operation
To use UserLAnd, one must first download – typically from F-Droid or the Google Play Store – the application and then install it. Once installed, a user selects an app to open. When a program is selected, the user is prompted to enter login information and select a connection type. Following this, the user gains access to their selected program.
Program library
UserLAnd is pre-loaded with the distributions Alpine, Arch, Debian, Kali, and Ubuntu; the web browser Firefox; the desktop environments LXDE and Xfce; the deployment environments Git and IDLE; the text-based games Colossal Cave Adventure and Zork; the numerical computing programs gnuplot, GNU Octave and R; the office suite LibreOffice; and the graphics editors GIMP and Inkscape. Further Linux programs and different versions of programs may be added to this program library.
Reception
A review on Slant.co listed UserLAnd's "Pros": support for VNC X sessions, no "rooting" required, easy setup, and that it's free and open-source; and "Cons": its lack of support for Lollipop and the difficulty of use for non-technical users. In contrast, OS Journal found that the lack of a need to "root" the mobile device made using UserLAnd considerably easier than other Linux compatibility layer applications, a position shared with SlashGear's review of UserLAnd. OS Journal went on to state that with UserLAnd one could do "almost anything" and "you're (only) limited by your insanity" with respect to what you can do with the application. Linux Journal stated that "UserLAnd offers a quick and easy way to run an entire Linux distribution, or even just a Linux application or game, from your pocket." SlashGear stated that UserLAnd is "absolutely super simple to use and requires little to no technical knowledge to get off the ground running."
See also
OS virtualization and emulation on Android
References
2018 software
Android emulation software
Compatibility layers
Computing platforms
Cross-platform software
Free software programmed in Java (programming language)
Free system software
Linux APIs
Linux emulation software
Wine (software)
Splashtop OS
Splashtop OS (previously known as SplashTop) is a discontinued Linux distribution intended to serve as an instant-on environment for personal computers. It is open source software with some closed source components. The original concept of Splashtop was that it would be integrated on a read-only device and shipped with the hardware, rather than installed by the user. It did not prevent the installation of another operating system for dual booting. It was an instant-on commercial Linux distribution targeting PC motherboard vendors and other device manufacturers. The first OEM partner for the original Splashtop was ASUS, and their first joint product was called Express Gate. Later, other computer manufacturers also built Splashtop into certain models and re-branded it under different names.
It boots in about 5 seconds and was thus marketed as "instant-on". It uses Bootsplash, SquashFS, Blackbox, SCIM, and the Linux kernel 2.6.
Support for Splashtop OS has been withdrawn and downloads of Splashtop OS have been disabled on the Splashtop website. Its popularity quickly declined after the announcement of an agreement with Microsoft; most vendors who included it eventually switched to a version that required a Windows installation and later dropped it altogether. Splashtop Inc. then focused on a remote desktop solution.
Features
Splashtop features a graphical user interface, a web browser based on Mozilla Firefox 2.0 (later updated to Firefox 3.0), a Skype VoIP client, a chat client based on Pidgin, and a stripped-down file manager based on PCManFM. It also includes Adobe Flash Player 10.
Splashtop OS shipping in HP, Dell, Lenovo, Acer, and other OEM machines was based on a Mozilla-based web browser. Google declined to be the search engine, as it did not want to share search-traffic revenue with DeviceVM.
Although Splashtop OS is Linux-based, Splashtop closed partnerships with Yahoo! and Microsoft's Bing as search engines. After assessing Splashtop OS technology, Google decided to launch its own ChromeOS and Chromebook.
The online downloadable version of Splashtop OS (beta) version 0.9.8.1 uses Microsoft Bing as search engine, a Chromium-based web browser with Adobe Flash Player plug-in preinstalled. Existing Windows bookmarks and Wi-Fi settings can be imported from Windows.
Most versions of Asus motherboards no longer come with Splashtop preinstalled, as the manufacturer now limits the inclusion of its built-in Express Gate flash drive to "Premium" motherboards such as the P6T Deluxe and P7P55D-E Premium. Other Asus motherboards allow installation of the compact OS via a Windows-only based installer on its support CD. Installation from CD requires a Windows partition to store 500 MB of files, which has to be a SATA drive defined as IDE (no support for AHCI).
If one doesn't have a Windows-based machine, it is possible to install Splashtop on a USB hard drive from the sources.
As of June 2010, Splashtop, as supplied for Asus motherboards, had no support for add-on wireless hardware such as PCI cards.
Internals
Splashtop can work with a 512 MB flash memory embedded on the PC motherboard. The flash memory can be also emulated on the Windows drive (see below). A proprietary core engine starts at the BIOS boot and loads a specialized Linux distribution called a Virtual Appliance Environment (VAE). While running this VAE, the user can launch Virtual Appliances (VA) or container. Skype is a VA or container, for instance.
The Sony VAIO versions such as 1.3.4.3 are installed as VAIO Quick Web Access. The installer and the resulting SquashFS files occupy roughly 2×250 MB. The SquashFS files consist of a hidden and two hidden folders and in the Windows -partition, where corresponds to for a DOS file system emulation of a USB flash drive. The MD5 checksums of the various bootsplash xxxx and Virtual Appliance xxxx files (including a special Firefox configuration) are noted in for a simple integrity check at the Splashtop start. VAIO laptops offer special buttons ASSIST, WEB, or VAIO depending on the model. The power button on these laptops triggers an ordinary PC boot process, the WEB button starts Splashtop. If a Windows-version configured for VAIO is already running the WEB button only starts the default browser.
The open sources used for major parts of different Splashtop versions can be downloaded. Parts of Splashtop are subject to patents.
DeviceVM owns various patents around instant-on techniques, including being the first OS to leverage on-board flash for enhanced performance, and intelligently cache hardware probing info so next boot will be faster. Many techniques are now incorporated by Microsoft and other modern OS for fast startup.
Products using Splashtop
Asus distributed Splashtop in various motherboards and laptops, including select products from Eee family, under name "Express Gate". Splashtop was also available in netbooks and laptops from various vendors under names "Acer InstantView", "HP QuickWeb", "Dell Latitude On", "Lenovo Quick Start", "LG Smart On", "VAIO Quick Web Access" and "Voodoo IOS".
Total shipment achieved over 100 million computers annually by 2009.
See also
HyperSpace
Latitude ON
coreboot
Extensible Firmware Interface
Open Firmware
OpenBIOS
References
External links
BIOS
Embedded Linux distributions
Linux distributions used in appliances
Linux distributions
Atari SIO
The Serial Input/Output system, universally known as SIO, was a proprietary peripheral bus and related software protocol stacks used on the Atari 8-bit family to provide most input/output duties for those computers. Unlike most I/O systems of the era, such as RS-232, SIO included a lightweight protocol that allowed multiple devices to be attached to a single daisy-chained port that supported dozens of devices. It also supported plug-and-play operations. SIO's designer, Joe Decuir, credits his work on the system as the basis of USB.
SIO was developed in order to allow expansion without using internal card slots as in the Apple II, due to problems with the FCC over radio interference. This required it to be fairly flexible in terms of device support. Devices that used the SIO interface included printers, floppy disk drives, cassette decks, modems and expansion boxes. Some devices had ROM based drivers that were copied to the host computer when booted allowing new devices to be supported without native support built into the computer itself.
SIO required logic in the peripherals to support the protocols, and in some cases a significant amount of processing power was required; the Atari 810 floppy disk drive included a MOS Technology 6507, for instance. Additionally, the large custom connector was expensive. These factors drove up the cost of the SIO system, and Decuir blames this for "sinking the system". There were unsuccessful efforts to lower the cost of the system during the 8-bit line's history.
The name "SIO" properly refers only to the sections of the operating system that handled the data exchange, in Atari documentation the bus itself is simply the "serial bus" or "interface bus", although this is also sometimes referred to as SIO. In common usage, SIO refers to the entire system from the operating system to the bus and even the physical connectors.
History
FCC problem
The SIO system ultimately owes its existence to the FCC's rules on the allowable amount of RF interference that could leak from any device that directly generated analog television signals. These rules demanded very low amounts of leakage and had to pass an extensive testing suite. These rules were undergoing revisions during the period when Atari's Grass Valley group was designing the Colleen machine that would become the Atari 800.
The Apple II, one of the few pre-built machines that connected to a television in that era, had avoided this problem by not including the RF modulator in the computer. Instead, Apple arranged a deal with a local electronics company, M&R Enterprises, to sell plug-in modulators under the name Sup'R'Mod. This meant the Apple did not, technically, generate television signals and did not have to undergo FCC testing. One of Atari's major vendors, Sears, felt this was not a suitable solution for their off-the-shelf sales, so to meet the interference requirements they encased the entire system in a cast-aluminum block 2 mm thick.
Colleen was originally intended to be a game console, the successor to the Atari 2600. The success of the Apple II led to the system being repositioned as a home computer, and this market required peripheral devices. On machines like the Apple II, peripherals were supported by placing an adapter card in one of the machine's internal card slots, running a cable through a hole in the case, and connecting the device to that cable. A hole large enough for such a cable would mean Colleen would fail the RF tests, which presented a serious problem. Additionally, convection cooling the cards would be very difficult.
TI diversion
During a visit in early 1978, a Texas Instruments (TI) salesman demonstrated a system consisting of a fibre optic cable with transceivers molded into both ends. Joe Decuir suggested they could use this to send the video signal to an external RF modulator, which would be as simple to use as the coaxial cable one needed to run the signal to the television anyway. Now the computer could have normal slots; like the Apple II, the RF portion would be entirely external and could be tested on its own separately from the computer.
When Decuir explained his concept, the salesman's "eyes almost popped out." Unknown to the Grass Valley team, TI was at that time in the midst of developing the TI-99/4 and was facing the same problem with RF output. When Decuir later explained the idea to his boss, Wade Tuma, Tuma replied that "No, the FCC would never let us get away with that stunt." This proved to be true; TI used Decuir's idea, and when they took it to the FCC in 1979, the agency rejected it out of hand. TI had to redesign their system, and the resulting delay meant the Ataris reached the market first.
SIO
With this path to allowing card slots stymied, Decuir returned to the problem of providing expansion through an external system of some sort.
By this time, considerable work had been carried out on using the Atari's POKEY chip to run a cassette deck by directly outputting sounds that would be recorded to the tape. It was realized that, with suitable modifications, the POKEY could bypass digital-to-analog conversion hardware and drive TTL output directly. To produce a TTL digital bus, the SIO system used two of the POKEY's four sound channels to produce steady tones that represented clock signals of a given frequency. A single-byte buffer was used to send and receive data; every time the clock signal toggled, one bit from the buffer would be read or written. When all eight bits were read or written, the system generated an interrupt that triggered the operating system to read or write more data.
Unlike a cassette interface, where only a single device would normally be used, an external expansion port would need to be able to support more than one device. To support this, a simple protocol was developed and several new pins added to the original simple cassette port. Most important among these was the COMMAND pin, which triggered the devices to listen for a 5-byte message that activated one of the devices on the bus and asked it for data (or send it commands). They also added the PROCEED and INTERRUPT pins which could be used by the devices to set bits in control registers in the host, but these were not used in the deployed system. Likewise, the timing signals generated by the POKEY were sent on the CLOCKOUT and CLOCKIN pins, although the asynchronous protocol did not use these.
Description
Hardware
The SIO bus was implemented using a custom 13-pin D-connector arrangement (although not D-subminiature) with the male connectors on the devices and the female connectors on either end of the cables. The connectors were physically robust to allow repeated use, with very strong pins in the device socket and sprung connectors in the cables, as opposed to friction fit as in a typical D-connector. Most devices had in and out ports to allow daisy chaining peripherals, although the Atari 410 Program Recorder had to be placed at the end of the chain and thus did not include an out port.
Communications
SIO was controlled by the Atari's POKEY chip, which included a number of general purpose timers. Four of these allowed fine control over the timing rates, and were intended to be used for sound output by connecting them to a digital-to-analog converter (D-to-A) and then mixing them into the television signal before entering the RF modulator. These were re-purposed as the basis of the SIO system, used as clocks in some modes, or to produce the output signals directly in others.
The system included a single "shift register" that was used to semi-automate most data transfers. This consisted of a single 8-bit value, LSB first, that was used to buffer reads and writes. The user accesses these through two memory locations known as SEROUT for writing and SERIN for reading. These were "shadow registers", locations in the RAM that mirrored registers in the various support chips like POKEY. The data bits were framed with a single zero start bit and a single one stop bit, and no parity was used.
To write data in synchronous mode, the POKEY's main timer channels were set to an appropriate clock rate, say 9600 bit/s. Any data written to the SEROUT register was then sent one bit at a time every time the signal went high. It was timed so the signal returned low in the middle of the bit. When all 10 bits (including the start and stop) had been sent, POKEY sent a maskable interrupt to the CPU to indicate it was ready for another byte. On reading, if another byte of data was received before the SERIN was read, the 3rd bit of the SKSTAT was set to true to indicate the overflow. Individual bits being read were also sent to the 4th bit of SKSTAT as they arrived, allowing direct reading of the data without waiting for the framing to complete.
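The framing described above, one zero start bit, eight data bits shifted out LSB first, and one stop bit set to one, can be illustrated with a short simulation (the function names are illustrative, not part of any Atari API):

```python
def frame_byte(value):
    """Return the 10 bits shifted out for one byte:
    start bit (0), 8 data bits LSB first, stop bit (1)."""
    bits = [0]                                    # start bit
    bits += [(value >> i) & 1 for i in range(8)]  # data, LSB first
    bits.append(1)                                # stop bit
    return bits

def deframe(bits):
    """Recover the byte from a 10-bit frame, checking the framing."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))

frame = frame_byte(0x41)    # ASCII 'A'
print(frame)                # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(hex(deframe(frame)))  # 0x41
```

With no parity bit, each byte occupies exactly ten bit times on the wire, which is why the interrupt fires once per ten clock transitions.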
The system officially supported speeds up to 19,200 bit/s, but this rate was chosen only because the Atari engineer's protocol analyzer topped out at that speed. The system was actually capable of much higher performance. A number of 3rd party devices, especially floppy drives, used custom hardware and drivers to greatly increase the transmission speeds to as much as 72,000 bit/s.
Although the system had CLOCKOUT and CLOCKIN pins that could, in theory, be used for synchronous communications, in practice only the asynchronous system was used. In this case, a base speed was set in the POKEY as above, which would then follow changes of up to 5% from this base rate. This made it much easier to work with real devices, where mechanical or electrical issues caused slight variations in the rate over time. One example was the cassette deck, where tape stretch could alter the speed; another was a modem, where the remote system might not be clocked at exactly a given speed.
Device control
The SIO system allowed devices to be daisy chained, and thus required some way of identifying that information on the various data pins was intended for a specific device on the chain. This was accomplished with the COMMAND pin.
The COMMAND pin was normally held high, and when it was pulled low, devices on the bus were required to listen for a "command frame". This consisted of a 5-byte packet; the first byte was the device ID, the second was a device-specific command number, and then two auxiliary bytes of data that could be used by the driver for any purpose. These four were followed by a checksum byte. The COMMAND pin went high again when the frame was complete.
On reception of the packet, the device specified in the first byte was expected to reply. This consisted of a single byte containing an ASCII character, "A" for Acknowledge if the packet was properly decoded and the checksum matched, "N" otherwise. For commands that exchanged data, the command frame would be followed by a "data frame" from or to the selected device. This frame would then be acknowledged by the receiver with a "C" for Complete or "E" for error. Since every packet of 128 data bytes required another command frame before the next could be sent, throughput was affected by latency issues; the Atari 810 disk drive normally used a 19,200 bit/s speed, but was limited to about 6,000 bit/s as a result of the overhead.
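The 5-byte command frame can be sketched as follows. The article does not spell out the checksum algorithm or specific command codes; SIO's checksum is commonly documented as an 8-bit sum with the carry folded back in, and $31 (drive 1) with $52 ("R", read) are the conventional disk values, so treat both as assumptions here:

```python
def sio_checksum(data):
    """8-bit additive checksum with end-around carry (assumed SIO algorithm)."""
    total = 0
    for b in data:
        total += b
        total = (total & 0xFF) + (total >> 8)  # fold the carry back in
    return total

def command_frame(device, command, aux1, aux2):
    """Build the 5-byte frame: device ID, command byte, two auxiliary
    bytes, then the checksum over the first four bytes."""
    frame = bytes([device, command, aux1, aux2])
    return frame + bytes([sio_checksum(frame)])

# Hypothetical example: ask disk drive 1 ($31) to read ($52) sector 1,
# with the 16-bit sector number split across the two aux bytes.
sector = 1
pkt = command_frame(0x31, 0x52, sector & 0xFF, sector >> 8)
print(pkt.hex())  # 3152010084
```

The device whose ID matches the first byte answers with "A" or "N" as described above; all other devices on the chain stay silent.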
Devices were enumerated mechanically, typically using small DIP switches. Each class of device was given a different set of 16 potential numbers based on hexadecimal numbers, the $30 range for disk drives and $40 for printers, for instance. However, each driver could support as many or as few devices as it wanted; the Atari 820 printer driver supported only a single printer numbered $40, while the disk drivers could support four drives numbered $31 to $34.
Cassette use
Design of what became the SIO had started as a system for interfacing to cassette recorders using the sound hardware to generate the appropriate tones. This capability was retained in the production versions, allowing the Atari 410 and its successors to be relatively simple devices.
When set to operate the cassette, the outputs from channel 1 and 2 of the POKEY were sent to the DATAOUT rather than the clock pins. The two channels were set to produce tones that were safe to record on the tape, 3995 Hz for a zero was in POKEY channel 2 and 5326 Hz for a one was in channel 1. In this mode, when the POKEY read bits from the SERIN, any 1's resulted in channel 1 playing into the data pin, and 0's played channel 2. In this fashion, a byte of data was converted into tones on the tape. Reading, however, used a different system, as there was no A-to-D converter in the computer. Instead, the cassette decks included two narrow-band filters tuned to the two frequencies. During a read, the output of one or the other of these filters would be asserted as the bits were read off the tape. These were sent as digital data back to the host computer.
Because the tape was subject to stretching and other mechanical problems that could speed or slow transport across the heads, the system used asynchronous reads and writes. Data was written in blocks of 132 bytes per record, with the first two bytes being the bit pattern "01010101 01010101". An inter-record gap between the blocks with no tones allowed the operating system to know when a new record was starting by looking for the leading zero. It then rapidly read the port and timed the transitions of the timing bits from 0 to 1 and back to determine the precise data rate. The next byte was a control byte specifying if this was a normal record of 128 data bytes, a short block, or an end-of-file. Up to 128 bytes of data followed, itself followed by a checksum byte, including everything before the checksum.
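The 132-byte record layout can be sketched in Python. The control-byte value for a full record (0xFC here) and the checksum algorithm are assumptions for the sketch, since the article names neither:

```python
def checksum(data):
    """8-bit sum with end-around carry over everything before the checksum
    (assumed algorithm; the article only says a checksum byte follows)."""
    total = 0
    for b in data:
        total += b
        total = (total & 0xFF) + (total >> 8)
    return total

def cassette_record(payload):
    """Build one 132-byte record: two sync bytes (the 01010101 pattern,
    i.e. 0x55, used to measure the data rate), a control byte, 128 data
    bytes, and a trailing checksum."""
    assert len(payload) <= 128
    data = payload + bytes(128 - len(payload))   # pad to a full record
    body = bytes([0x55, 0x55, 0xFC]) + data      # 0xFC: assumed "full record"
    return body + bytes([checksum(body)])

rec = cassette_record(b"HELLO")
print(len(rec))  # 132
```

Timing the transitions of the two leading 0x55 bytes is what lets the OS re-derive the actual baud rate for each record despite tape stretch.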
The operation was further controlled by the MOTOR pin in the SIO port, dedicated to this purpose. When this pin was low, the motor in the deck was turned off. This allowed the user to press play or play and record without the tape beginning to move. When the appropriate command was entered on the computer, MOTOR would be asserted and the cassette would begin to turn.
Another dedicated pin was the AUDIOIN, which was connected directly to the sound output circuits between the POKEY's D-to-A converters and the final output so that any signal on the pin mixed with the sound from the POKEY (if any) and was then sent to the television speaker. This was connected to the left sound channel in the cassette, while the right channel was connected to the data pins. This allowed users to record normal sounds on the left channel and then have them play through the television. This was often combined with direct motor control to produce interactive language learning tapes and similar programs. Some software companies would record sounds or music on this channel to make the loading process more enjoyable.
See also
Special input/output
Notes
References
Bibliography
Computer buses
Serial buses
Atari 8-bit family
BootSkin
BootSkin is a computer program for Microsoft Windows 2000, Windows XP and Windows Vista that allows users to change the screen displayed while the operating system is booting. It is made by Stardock, and distributed for free under the WinCustomize brand.
BootSkin uses a boot-time device driver (vidstub.sys) to access the display directly using VESA BIOS Extensions (VBE), unlike other bootscreen changers which alter the boot screen image inside the kernel. This has the advantage of not modifying system files, and makes higher-resolution boot screens possible; standard boot screens are limited to 640x480 with 16 colors. Some graphics cards and chipsets do not support VBE well, preventing their use with BootSkin.
Due to severe restrictions on color depth, many images are not suitable for use as boot skins. Successful skins tend to take advantage of the limitations through the use of a limited palette and dithering.
Installing BootSkin unattended is a simple matter of using the /silent switch, but there does not appear to be any way of applying a skin without clicking the Apply button in the program.
References
External links
BootSkin home page
BootSkin XP section on WinCustomize
BootSkin section on DeviantArt
Windows-only freeware
Stardock software
Mainsoft
Mainsoft is a software company, founded in 1993, that develops interoperability software products for Microsoft Windows and Linux/Unix platforms.
History
Founding
Mainsoft was founded in 1993, mainly to propose integration products between Windows and other systems.
Mainsoft was one of the main providers for the Microsoft Windows Interface Source Environment (WISE) program, a licensing program from Microsoft which allowed developers to recompile and run Windows-based applications on UNIX and Macintosh platforms.
WISE software development kits (SDKs) were not provided directly by Microsoft. Instead, Microsoft established partnerships with several software providers, which in turn sold WISE SDKs to end users.
After the WISE program, Microsoft extended its agreements with Mainsoft to port Windows Media Player 6.3 and Internet Explorer to Unix.
Microsoft integration
Since then, Mainsoft's activity has shifted to integrating Microsoft SharePoint with IBM products (IBM Lotus Notes, IBM WebSphere, Rational Jazz) and to products focusing on the .NET Framework and Java EE.
To be able to develop WISE SDKs, software providers needed access to Windows internals source code. In 2004, more than 30,000 source files from Windows 2000 and Windows NT 4.0 were leaked to the internet. It was later discovered that the leak originated from Mainsoft.
See also
Internet Explorer for UNIX
Grasshopper, a Mainsoft framework that allows Visual Basic and C# applications to run on a Java application server
References
External links
Company website
MainWin, a Mainsoft platform for porting Windows applications to Unix and Linux
Companies established in 1993
Cloud computing providers
Software companies of Israel
Decimal computer
Decimal computers are computers that can represent numbers and addresses in decimal, and that provide instructions to operate on those numbers and addresses directly in decimal, without conversion to a pure binary representation. Some also had a variable word length, which enabled operations on numbers with a large number of digits.
Early computers
Early computers that were exclusively decimal include the ENIAC, IBM NORC, IBM 650, IBM 1620, IBM 7070, and UNIVAC Solid State 80. In these machines, the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal (BCD), bi-quinary and two-out-of-five code. Except for the IBM 1620 and 1710, these machines used word addressing. When non-numeric characters were used in these machines, they were encoded as two decimal digits.
Other early computers were character oriented, providing instructions for performing arithmetic on character strings of decimal numerals, using BCD or excess-3 (XS-3) for decimal digits. On these machines, the basic data element was an alphanumeric character, typically encoded in six bits. UNIVAC I and UNIVAC II used word addressing, with 12-character words. IBM examples include IBM 702, IBM 705, the IBM 1400 series, IBM 7010, and the IBM 7080.
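The digit encodings mentioned in this section can be illustrated in a few lines of Python (an illustrative sketch, not drawn from any particular machine; the two-out-of-five table uses the common 0-1-2-3-6 weight assignment, with zero encoded specially):

```python
def bcd(digit):
    """Binary-coded decimal: the digit's plain 4-bit binary value."""
    return format(digit, "04b")

def excess_3(digit):
    """Excess-3 (XS-3): the 4-bit binary value of digit + 3."""
    return format(digit + 3, "04b")

# Two-out-of-five code: every digit uses exactly two 1-bits out of five,
# giving simple single-bit error detection. Bit weights left to right
# are 0, 1, 2, 3, 6; zero cannot sum from two weights, so it is special.
TWO_OUT_OF_FIVE = {
    0: "01100", 1: "11000", 2: "10100", 3: "10010", 4: "01010",
    5: "00110", 6: "10001", 7: "01001", 8: "00101", 9: "00011",
}

for d in (0, 5, 9):
    print(d, bcd(d), excess_3(d), TWO_OUT_OF_FIVE[d])
```

The two-out-of-five property (always exactly two set bits) is what lets hardware detect any single-bit error in a stored digit.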
Later computers
The IBM System/360, introduced in 1964 to unify IBM's product lines, used per character binary addressing, and also included instructions for packed decimal arithmetic as well as binary integer arithmetic, and binary floating point. It used 8-bit characters and introduced EBCDIC encoding, though ASCII was also supported.
The Burroughs B2500 introduced in 1966 also used 8-bit EBCDIC or ASCII characters and could pack two decimal digits per byte, but it did not provide binary arithmetic, making it a decimal architecture.
More modern computers
Several microprocessor families offer limited decimal support. For example, the 80x86 family of microprocessors provides instructions to convert one-byte BCD numbers (packed and unpacked) to binary format before or after arithmetic operations. These operations were not extended to wider formats and hence are now slower than using 32-bit or wider BCD 'tricks' to compute in BCD. The x87 FPU has instructions to convert 10-byte (18 decimal digits) packed decimal data, although it then operates on them as floating-point numbers.
The Motorola 68000 and the MOS Technology 6502 provided instructions for BCD addition and subtraction; in the much later 68000-family-derived processors, these instructions were removed when the ColdFire instruction set was defined. All IBM mainframes provide BCD integer arithmetic in hardware. The Zilog Z80, the Motorola 6800 and its derivatives, other 8-bit processors, and the Intel x86 family have special instructions that support conversion to and from BCD. The Psion Organiser I handheld computer's manufacturer-supplied software implemented its floating-point operations entirely in software using BCD; all later Psion models used binary only, rather than BCD.
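As an illustration of what these BCD instructions do, here is a minimal Python sketch of adding two packed-BCD bytes, including the decimal-adjust step that instructions such as x86 DAA perform after a binary add (the function name is invented for this sketch):

```python
def bcd_add_byte(a, b):
    """Add two packed-BCD bytes (two digits each); return (result, carry).

    Mirrors a decimal-adjust instruction: add in binary, then add 6 to
    any nibble that exceeded 9, skipping the six unused nibble values.
    """
    s = a + b
    # Low-nibble adjust: digit exceeded 9, or a binary carry crossed
    # into the high nibble during the add.
    if (s & 0x0F) > 9 or ((a & 0x0F) + (b & 0x0F)) > 0x0F:
        s += 0x06
    # High-nibble adjust, producing a decimal carry out of the byte.
    if (s >> 4) > 9:
        s += 0x60
    return s & 0xFF, s > 0xFF

print(hex(bcd_add_byte(0x45, 0x38)[0]))  # 0x83  (45 + 38 = 83 in decimal)
print(bcd_add_byte(0x99, 0x01))          # (0, True): 99 + 1 = 100, carry out
```

Multi-digit decimal arithmetic on these machines chains such byte additions, propagating the decimal carry, which is why dedicated hardware support matters for performance.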
Decimal arithmetic is now becoming more common; for instance, three decimal types with two binary encodings were added to the 2008 revision of the IEEE 754 standard, with 7-, 16-, and 34-digit decimal significands.
The IBM Power6 processor and the IBM System z9 have implemented these types using the Densely Packed Decimal binary encoding, the first in hardware and the second in microcode.
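The decimal arithmetic these formats standardize can be tried in software: Python's decimal module implements conformant decimal arithmetic, and a context can be configured to mirror the 34-digit significand of the widest format (a sketch, not tied to any hardware implementation):

```python
from decimal import Decimal, Context

# A context mirroring the 34-digit significand of the widest IEEE 754-2008
# decimal format; the 7- and 16-digit formats would use prec=7 and prec=16.
d128 = Context(prec=34)

# Decimal arithmetic avoids binary representation error:
print(Decimal("0.1") + Decimal("0.2"))   # exactly 0.3
print(0.1 + 0.2)                         # binary float: 0.30000000000000004

# Rounding happens at a decimal digit boundary (34 significant digits):
print(d128.divide(Decimal(1), Decimal(3)))
```

The difference between the first two prints is exactly the problem decimal hardware addresses: 0.1 has no finite binary representation, but a one-digit decimal one.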
See also
Ternary computer
References
Further reading
(NB. This title provides detailed description of decimal calculations, including explanation of binary-coded decimals and algorithms.)
(NB. At least some batches of this reprint edition were misprints with defective pages 115–146.)
Classes of computers
Early computers
Decimal computers
Compaq Presario R3000
The Compaq Presario R3000 line is a series of laptops designed and built by Hewlett-Packard. They originally shipped with Microsoft Windows XP but could be configured with Windows 98, 2000, or ME. The series used Intel or AMD processors, could be ordered with 128 MB up to 1 GB of RAM (with some reserved for graphics memory), and could include an ATI Mobility Radeon 9000/9100 or Nvidia GeForce 4 integrated graphics chip. The integrated sound card was made by Analog Devices and outputs to JBL Pro speakers that sit above the keyboard. Certain configurations included an integrated Broadcom 54G wireless networking card. Connection ports include USB, FireWire, 3.5 mm audio output, 3.5 mm audio input, S-Video output, VGA output, and parallel. One port special to this series is an expansion port for HP's Expansion Dock, which provides an extra array of ports when the laptop is docked. Several optical media options were available, ranging from a standard DVD read-only drive up to a DVD+RW/CD-RW drive at varying speeds. The computer is encased in a black and silver plastic shell, weighs about ten pounds, and has two cooling fans, both mounted under the keyboard.
Presario R3000
References
Microcontroller
A microcontroller (MCU for microcontroller unit) is a small computer on a single metal-oxide-semiconductor (MOS) integrated circuit (IC) chip. A microcontroller contains one or more CPUs (processor cores) along with memory and programmable input/output peripherals. Program memory in the form of ferroelectric RAM, NOR flash or OTP ROM is also often included on chip, as well as a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general purpose applications consisting of various discrete chips.
In modern terminology, a microcontroller is similar to, but less sophisticated than, a system on a chip (SoC). An SoC may include a microcontroller as one of its components, but usually integrates it with advanced peripherals like a graphics processing unit (GPU), a Wi-Fi module, or one or more coprocessors.
Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control even more devices and processes. Mixed signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems. In the context of the internet of things, microcontrollers are an economical and popular means of data collection, sensing and actuating the physical world as edge devices.
Some microcontrollers may use four-bit words and operate at frequencies as low as for low power consumption (single-digit milliwatts or microwatts). They generally have the ability to retain functionality while waiting for an event such as a button press or other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just nanowatts, making many of them well suited for long lasting battery applications. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor (DSP), with higher clock speeds and power consumption.
History
Background
The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, released on a single MOS LSI chip in 1971. It was developed by Federico Faggin, using his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. It was followed by the 4-bit Intel 4040, the 8-bit Intel 8008, and the 8-bit Intel 8080. All of these processors required several external chips to implement a working system, including memory and peripheral interface chips. As a result, the total system cost was several hundred (1970s US) dollars, making it impossible to economically computerize small appliances.
MOS Technology introduced its sub-$100 microprocessors in 1975, the 6501 and 6502. Their chief aim was to reduce this cost barrier but these microprocessors still required external support, memory, and peripheral chips which kept the total system cost in the hundreds of dollars.
Development
One book credits TI engineers Gary Boone and Michael Cochran with the successful creation of the first microcontroller in 1971. The result of their work was the TMS 1000, which became commercially available in 1974. It combined read-only memory, read/write memory, processor and clock on one chip and was targeted at embedded systems.
During the early-to-mid-1970s, Japanese electronics manufacturers began producing microcontrollers for automobiles, including 4-bit MCUs for in-car entertainment, automatic wipers, electronic locks, and dashboard, and 8-bit MCUs for engine control.
Partly in response to the existence of the single-chip TMS 1000, Intel developed a computer system on a chip optimized for control applications, the Intel 8048, with commercial parts first shipping in 1977. It combined RAM and ROM on the same chip with a microprocessor. Among numerous applications, this chip would eventually find its way into over one billion PC keyboards. At that time Intel's President, Luke J. Valenter, stated that the microcontroller was one of the most successful products in the company's history, and he expanded the microcontroller division's budget by over 25%.
Most microcontrollers at this time had concurrent variants. One had EPROM program memory, with a transparent quartz window in the lid of the package to allow it to be erased by exposure to ultraviolet light. These erasable chips were often used for prototyping. The other variant was either a mask programmed ROM or a PROM variant which was only programmable once. For the latter, sometimes the designation OTP was used, standing for "one-time programmable". In an OTP microcontroller, the PROM was usually of identical type as the EPROM, but the chip package had no quartz window; because there was no way to expose the EPROM to ultraviolet light, it could not be erased. Because the erasable versions required ceramic packages with quartz windows, they were significantly more expensive than the OTP versions, which could be made in lower-cost opaque plastic packages. For the erasable variants, quartz was required, instead of less expensive glass, for its transparency to ultraviolet light—to which glass is largely opaque—but the main cost differentiator was the ceramic package itself.
In 1993, the introduction of EEPROM memory allowed microcontrollers (beginning with the Microchip PIC16C84) to be electrically erased quickly without an expensive package as required for EPROM, allowing both rapid prototyping, and in-system programming. (EEPROM technology had been available prior to this time, but the earlier EEPROM was more expensive and less durable, making it unsuitable for low-cost mass-produced microcontrollers.) The same year, Atmel introduced the first microcontroller using Flash memory, a special type of EEPROM. Other companies rapidly followed suit, with both memory types.
Nowadays microcontrollers are cheap and readily available for hobbyists, with large online communities around certain processors.
Volume and cost
In 2002, about 55% of all CPUs sold in the world were 8-bit microcontrollers and microprocessors.
Over two billion 8-bit microcontrollers were sold in 1997, and according to Semico, over four billion 8-bit microcontrollers were sold in 2006. More recently, Semico has claimed the MCU market grew 36.5% in 2010 and 12% in 2011.
A typical home in a developed country is likely to have only four general-purpose microprocessors but around three dozen microcontrollers. A typical mid-range automobile has about 30 microcontrollers. They can also be found in many electrical devices such as washing machines, microwave ovens, and telephones.
Cost to manufacture can be under $0.10 per unit.
Cost has plummeted over time, with the cheapest 8-bit microcontrollers being available for under in 2018, and some 32-bit microcontrollers around US$1 for similar quantities.
In 2012, following a global crisis that saw the worst-ever annual sales decline and recovery, with the average sales price falling 17% year over year (the biggest reduction since the 1980s), the average price for a microcontroller was US$0.88 ($0.69 for 4-/8-bit, $0.59 for 16-bit, $1.76 for 32-bit).
In 2012, worldwide sales of 8-bit microcontrollers were around $4 billion, while 4-bit microcontrollers also saw significant sales.
In 2015, 8-bit microcontrollers could be bought for $0.311 (1,000 units), 16-bit for $0.385 (1,000 units), and 32-bit for $0.378 (1,000 units, but at $0.35 for 5,000).
In 2018, 8-bit microcontrollers can be bought for $0.03, 16-bit for $0.393 (1,000 units, but at $0.563 for 100 or $0.349 for full reel of 2,000), and 32-bit for $0.503 (1,000 units, but at $0.466 for 5,000). A lower-priced 32-bit microcontroller, in units of one, can be had for $0.891.
In 2018, the low-priced microcontrollers above from 2015 are all more expensive (with inflation calculated between 2018 and 2015 prices for those specific units) at: the 8-bit microcontroller can be bought for $0.319 (1,000 units) or 2.6% higher, the 16-bit one for $0.464 (1,000 units) or 21% higher, and the 32-bit one for $0.503 (1,000 units, but at $0.466 for 5,000) or 33% higher.
Smallest computer
On 21 June 2018, the "world's smallest computer" was announced by the University of Michigan. The device is a "0.04mm3 16nW wireless and batteryless sensor system with integrated Cortex-M0+ processor and optical communication for cellular temperature measurement." It "measures just 0.3 mm to a side—dwarfed by a grain of rice. [...] In addition to the RAM and photovoltaics, the new computing devices have processors and wireless transmitters and receivers. Because they are too small to have conventional radio antennae, they receive and transmit data with visible light. A base station provides light for power and programming, and it receives the data." The device is one-tenth the size of IBM's previously claimed world-record computer, announced months earlier in March 2018, which is "smaller than a grain of salt", has a million transistors, costs less than $0.10 to manufacture, and, combined with blockchain technology, is intended for logistics and "crypto-anchor" digital-fingerprint applications.
Embedded design
A microcontroller can be considered a self-contained system with a processor, memory and peripherals and can be used as an embedded system. The majority of microcontrollers in use today are embedded in other machinery, such as automobiles, telephones, appliances, and peripherals for computer systems.
While some embedded systems are very sophisticated, many have minimal requirements for memory and program length, with no operating system, and low software complexity. Typical input and output devices include switches, relays, solenoids, LEDs, small or custom liquid-crystal displays, radio frequency devices, and sensors for data such as temperature, humidity, and light level. Embedded systems usually have no keyboard, screen, disks, printers, or other recognizable I/O devices of a personal computer, and may lack human interaction devices of any kind.
Interrupts
Microcontrollers must provide real-time (predictable, though not necessarily fast) response to events in the embedded system they are controlling. When certain events occur, an interrupt system can signal the processor to suspend processing the current instruction sequence and to begin an interrupt service routine (ISR, or "interrupt handler") which will perform any processing required based on the source of the interrupt, before returning to the original instruction sequence. Possible interrupt sources are device dependent, and often include events such as an internal timer overflow, completing an analog to digital conversion, a logic level change on an input such as from a button being pressed, and data received on a communication link. Where power consumption is important as in battery devices, interrupts may also wake a microcontroller from a low-power sleep state where the processor is halted until required to do something by a peripheral event.
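The sequence described above (save context, run the handler for the interrupt source, restore context, resume) can be sketched as a toy model; the class, vector, and register names here are invented for illustration:

```python
class CPU:
    """Toy model of interrupt dispatch, not any real core."""

    def __init__(self):
        self.registers = {"r0": 0, "pc": 0}
        self.vectors = {}                # interrupt source -> ISR

    def attach_isr(self, source, handler):
        self.vectors[source] = handler

    def interrupt(self, source):
        saved = dict(self.registers)     # push context
        self.vectors[source](self)       # run the ISR (may clobber registers)
        self.registers = saved           # pop context, resume where we left off

cpu = CPU()
log = []
cpu.attach_isr("timer_overflow", lambda c: log.append("tick"))
cpu.registers["r0"] = 42
cpu.interrupt("timer_overflow")
print(cpu.registers["r0"], log)          # 42 ['tick'] : state restored
```

The save/restore pair around the handler is exactly the context-switch cost discussed under interrupt latency later in this article: the more registers there are to save, the longer the dispatch takes.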
Programs
Typically micro-controller programs must fit in the available on-chip memory, since it would be costly to provide a system with external, expandable memory. Compilers and assemblers are used to convert both high-level and assembly language codes into a compact machine code for storage in the micro-controller's memory. Depending on the device, the program memory may be permanent, read-only memory that can only be programmed at the factory, or it may be field-alterable flash or erasable read-only memory.
Manufacturers have often produced special versions of their micro-controllers in order to help the hardware and software development of the target system. Originally these included EPROM versions that have a "window" on the top of the device through which program memory can be erased by ultraviolet light, ready for reprogramming after a programming ("burn") and test cycle. Since 1998, EPROM versions have been rare; they have been replaced by EEPROM and flash, which are easier to use (they can be erased electronically) and cheaper to manufacture.
Other versions may be available where the ROM is accessed as an external device rather than as internal memory, however these are becoming rare due to the widespread availability of cheap microcontroller programmers.
The use of field-programmable devices on a micro controller may allow field update of the firmware or permit late factory revisions to products that have been assembled but not yet shipped. Programmable memory also reduces the lead time required for deployment of a new product.
Where hundreds of thousands of identical devices are required, using parts programmed at the time of manufacture can be economical. These "mask programmed" parts have the program laid down in the same way as the logic of the chip, at the same time.
A customized micro-controller incorporates a block of digital logic that can be personalized for additional processing capability, peripherals and interfaces that are adapted to the requirements of the application. One example is the AT91CAP from Atmel.
Other microcontroller features
Microcontrollers usually contain from several to dozens of general purpose input/output pins (GPIO). GPIO pins are software configurable to either an input or an output state. When GPIO pins are configured to an input state, they are often used to read sensors or external signals. Configured to the output state, GPIO pins can drive external devices such as LEDs or motors, often indirectly, through external power electronics.
Many embedded systems need to read sensors that produce analog signals. This is the purpose of the analog-to-digital converter (ADC). Since processors are built to interpret and process digital data, i.e. 1s and 0s, they cannot do anything with the analog signals that may be sent to them by a device, so the analog-to-digital converter is used to convert the incoming data into a form that the processor can recognize. A less common feature on some microcontrollers is a digital-to-analog converter (DAC) that allows the processor to output analog signals or voltage levels.
In addition to the converters, many embedded microprocessors include a variety of timers as well. One of the most common types of timers is the programmable interval timer (PIT). A PIT may either count down from some value to zero, or up to the capacity of the count register, overflowing to zero. Once it reaches zero, it sends an interrupt to the processor indicating that it has finished counting. This is useful for devices such as thermostats, which periodically test the temperature around them to see if they need to turn the air conditioner on, the heater on, etc.
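A count-down PIT of the kind described can be modeled in a few lines; the class and attribute names, and the periodic auto-reload behavior, are illustrative assumptions rather than a specific device:

```python
class IntervalTimer:
    """Toy model of a count-down programmable interval timer."""

    def __init__(self, reload_value, on_interrupt):
        self.reload_value = reload_value
        self.count = reload_value
        self.on_interrupt = on_interrupt     # the "ISR" to signal

    def tick(self):
        self.count -= 1
        if self.count == 0:
            self.on_interrupt()              # raise the interrupt line
            self.count = self.reload_value   # periodic mode: auto-reload

fires = []
timer = IntervalTimer(4, lambda: fires.append("sample temperature"))
for _ in range(12):                          # 12 clock ticks -> 3 interrupts
    timer.tick()
print(len(fires))                            # 3
```

A thermostat of the kind mentioned above would set the reload value so the interrupt fires at its sampling interval, then read the temperature sensor inside the handler.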
A dedicated pulse-width modulation (PWM) block makes it possible for the CPU to control power converters, resistive loads, motors, etc., without using many CPU resources in tight timer loops.
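The arithmetic behind a timer-based PWM block is simple: the output is high while the counter is below a compare value, so the average delivered voltage is the supply voltage times the duty cycle. A sketch with invented function names and example numbers:

```python
def duty_from_compare(compare, period):
    """Typical timer-based PWM: the output is high while the counter is
    below the compare register, so duty cycle = compare / period."""
    return compare / period

def pwm_average(v_supply, duty_cycle):
    """Average voltage delivered by an ideal PWM output."""
    return v_supply * duty_cycle

duty = duty_from_compare(compare=192, period=256)  # 75% duty cycle
print(pwm_average(5.0, duty))                      # 3.75 V average
```

This is why a PWM block offloads the CPU: the counter and compare happen in hardware every timer tick, with no software loop involved.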
A universal asynchronous receiver/transmitter (UART) block makes it possible to receive and transmit data over a serial line with very little load on the CPU. Dedicated on-chip hardware also often includes capabilities to communicate with other devices (chips) in digital formats such as Inter-Integrated Circuit (I²C), Serial Peripheral Interface (SPI), Universal Serial Bus (USB), and Ethernet.
Higher integration
Micro-controllers may not implement an external address or data bus as they integrate RAM and non-volatile memory on the same chip as the CPU. Using fewer pins, the chip can be placed in a much smaller, cheaper package.
Integrating the memory and other peripherals on a single chip and testing them as a unit increases the cost of that chip, but often results in decreased net cost of the embedded system as a whole. Even if the cost of a CPU that has integrated peripherals is slightly more than the cost of a CPU and external peripherals, having fewer chips typically allows a smaller and cheaper circuit board, and reduces the labor required to assemble and test the circuit board, in addition to tending to decrease the defect rate for the finished assembly.
A micro-controller is a single integrated circuit, commonly with the following features:
central processing unit ranging from small and simple 4-bit processors to complex 32-bit or 64-bit processors
volatile memory (RAM) for data storage
ROM, EPROM, EEPROM or Flash memory for program and operating parameter storage
discrete input and output bits, allowing control or detection of the logic state of an individual package pin
serial input/output such as serial ports (UARTs)
other serial communications interfaces like I²C, Serial Peripheral Interface and Controller Area Network for system interconnect
peripherals such as timers, event counters, PWM generators, and watchdog
clock generator often an oscillator for a quartz timing crystal, resonator or RC circuit
many include analog-to-digital converters, some include digital-to-analog converters
in-circuit programming and in-circuit debugging support
This integration drastically reduces the number of chips and the amount of wiring and circuit board space that would be needed to produce equivalent systems using separate chips. Furthermore, on low pin count devices in particular, each pin may interface to several internal peripherals, with the pin function selected by software. This allows a part to be used in a wider variety of applications than if pins had dedicated functions.
Micro-controllers have proved to be highly popular in embedded systems since their introduction in the 1970s.
Some microcontrollers use a Harvard architecture: separate memory buses for instructions and data, allowing accesses to take place concurrently. Where a Harvard architecture is used, instruction words for the processor may be a different bit size than the length of internal memory and registers; for example: 12-bit instructions used with 8-bit data registers.
The decision of which peripheral to integrate is often difficult. The microcontroller vendors often trade operating frequencies and system design flexibility against time-to-market requirements from their customers and overall lower system cost. Manufacturers have to balance the need to minimize the chip size against additional functionality.
Microcontroller architectures vary widely. Some designs include general-purpose microprocessor cores, with one or more ROM, RAM, or I/O functions integrated onto the package. Other designs are purpose built for control applications. A micro-controller instruction set usually has many instructions intended for bit manipulation (bit-wise operations) to make control programs more compact. For example, a general purpose processor might require several instructions to test a bit in a register and branch if the bit is set, where a micro-controller could have a single instruction to provide that commonly required function.
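The bit-test-and-branch idiom mentioned above reduces to a mask, a compare, and a branch; on many microcontrollers that is one instruction, while a general-purpose processor needs several. A Python sketch of the equivalent polling loop (the register layout and names are invented):

```python
STATUS_READY = 1 << 3          # bit 3 of a hypothetical status register

def wait_until_ready(read_status):
    """Poll a status register until the ready bit is set; returns the
    number of polls. On an MCU the loop body could be a single
    bit-test-and-branch instruction."""
    polls = 0
    while True:
        polls += 1
        if read_status() & STATUS_READY:
            return polls

values = iter([0x00, 0x02, 0x0A])  # bit 3 is set only in the last value
print(wait_until_ready(lambda: next(values)))  # 3
```

Because control code is dominated by such single-bit reads and writes of peripheral registers, folding them into one instruction meaningfully shrinks program size on small parts.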
Microcontrollers traditionally do not have a math coprocessor, so floating point arithmetic is performed by software. However, some recent designs do include an FPU and DSP optimized features. An example would be Microchip's PIC32 MIPS based line.
Programming environments
Microcontrollers were originally programmed only in assembly language, but various high-level programming languages, such as C, Python and JavaScript, are now also in common use to target microcontrollers and embedded systems. Compilers for general purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.
Microcontrollers with specialty hardware may require their own non-standard dialects of C, such as SDCC for the 8051, which prevent using standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters may also contain nonstandard features, such as MicroPython, although a fork, CircuitPython, has looked to move hardware dependencies to libraries and have the language adhere to a more CPython standard.
Interpreter firmware is also available for some microcontrollers. For example, BASIC on the early microcontrollers Intel 8052; BASIC and FORTH on the Zilog Z8 as well as some modern devices. Typically these interpreters support interactive programming.
Simulators are available for some microcontrollers. These allow a developer to analyze what the behavior of the microcontroller and their program should be if they were using the actual part. A simulator will show the internal processor state and also that of the outputs, as well as allowing input signals to be generated. While on the one hand most simulators will be limited from being unable to simulate much other hardware in a system, they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyze problems.
Recent microcontrollers are often integrated with on-chip debug circuitry that when accessed by an in-circuit emulator (ICE) via JTAG, allow debugging of the firmware with a debugger. A real-time ICE may allow viewing and/or manipulating of internal states while running. A tracing ICE can record executed program and MCU states before/after a trigger point.
Types
There are several dozen microcontroller architectures and vendors, including:
ARM core processors (many vendors)
ARM Cortex-M cores are specifically targeted toward microcontroller applications
Microchip Technology Atmel AVR (8-bit), AVR32 (32-bit), and AT91SAM (32-bit)
Cypress Semiconductor's M8C core used in their PSoC (Programmable System-on-Chip)
Freescale ColdFire (32-bit) and S08 (8-bit)
Freescale 68HC11 (8-bit), and others based on the Motorola 6800 family
Intel 8051, also manufactured by NXP Semiconductors, Infineon and many others
Infineon: 8-bit XC800, 16-bit XE166, 32-bit XMC4000 (ARM-based Cortex-M4F), 32-bit TriCore, and 32-bit AURIX TriCore microcontrollers
Maxim Integrated MAX32600, MAX32620, MAX32625, MAX32630, MAX32650, MAX32640
MIPS
Microchip Technology PIC, (8-bit PIC16, PIC18, 16-bit dsPIC33 / PIC24), (32-bit PIC32)
NXP Semiconductors LPC1000, LPC2000, LPC3000, LPC4000 (32-bit), LPC900, LPC700 (8-bit)
Parallax Propeller
PowerPC ISE
Rabbit 2000 (8-bit)
Renesas Electronics: RL78 16-bit MCU; RX 32-bit MCU; SuperH; V850 32-bit MCU; H8; R8C 16-bit MCU
Silicon Laboratories Pipelined 8-bit 8051 microcontrollers and mixed-signal ARM-based 32-bit microcontrollers
STMicroelectronics STM8 (8-bit), ST10 (16-bit), STM32 (32-bit), SPC5 (automotive 32-bit)
Texas Instruments TI MSP430 (16-bit), MSP432 (32-bit), C2000 (32-bit)
Toshiba TLCS-870 (8-bit/16-bit)
Many others exist, some of which are used in very narrow range of applications or are more like applications processors than microcontrollers. The microcontroller market is extremely fragmented, with numerous vendors, technologies, and markets. Note that many vendors sell or have sold multiple architectures.
Interrupt latency
In contrast to general-purpose computers, microcontrollers used in embedded systems often seek to optimize interrupt latency over instruction throughput. Issues include both reducing the latency and making it more predictable (to support real-time control).
When an electronic device causes an interrupt, during the context switch the intermediate results (registers) have to be saved before the software responsible for handling the interrupt can run. They must also be restored after that interrupt handler is finished. If there are more processor registers, this saving and restoring process may take more time, increasing the latency. (If an ISR does not require the use of some registers, it may simply leave them alone rather than saving and restoring them, so in that case those registers are not involved with the latency.) Ways to reduce such context/restore latency include having relatively few registers in their central processing units (undesirable because it slows down most non-interrupt processing substantially), or at least having the hardware not save them all (this fails if the software then needs to compensate by saving the rest "manually"). Another technique involves spending silicon gates on "shadow registers": One or more duplicate registers used only by the interrupt software, perhaps supporting a dedicated stack.
Other factors affecting interrupt latency include:
Cycles needed to complete current CPU activities. To minimize those costs, microcontrollers tend to have short pipelines (often three instructions or less), small write buffers, and ensure that longer instructions are continuable or restartable. RISC design principles ensure that most instructions take the same number of cycles, helping avoid the need for most such continuation/restart logic.
The length of any critical section that blocks an interrupt. Entry to a critical section restricts concurrent data structure access. When a data structure must be accessed by an interrupt handler, the critical section must block that interrupt; accordingly, interrupt latency is increased by however long that interrupt is blocked. When there are hard external constraints on system latency, developers often need tools to measure interrupt latencies and track down which critical sections cause slowdowns.
One common technique just blocks all interrupts for the duration of the critical section. This is easy to implement, but sometimes critical sections get uncomfortably long.
A more complex technique just blocks the interrupts that may trigger access to that data structure. This is often based on interrupt priorities, which tend to not correspond well to the relevant system data structures. Accordingly, this technique is used mostly in very constrained environments.
Processors may have hardware support for some critical sections. Examples include supporting atomic access to bits or bytes within a word, or other atomic access primitives like the LDREX/STREX exclusive access primitives introduced in the ARMv6 architecture.
Interrupt nesting. Some microcontrollers allow higher priority interrupts to interrupt lower priority ones. This allows software to manage latency by giving time-critical interrupts higher priority (and thus lower and more predictable latency) than less-critical ones.
Trigger rate. When interrupts occur back-to-back, microcontrollers may avoid an extra context save/restore cycle by a form of tail call optimization.
Lower end microcontrollers tend to support fewer interrupt latency controls than higher end ones.
Memory technology
Two different kinds of memory are commonly used with microcontrollers: a non-volatile memory for storing firmware and a read-write memory for temporary data.
Data
From the earliest microcontrollers to today, six-transistor SRAM is almost always used as the read/write working memory, with a few more transistors per bit used in the register file.
In addition to the SRAM, some microcontrollers also have internal EEPROM for data storage; and even ones that do not have any (or not enough) are often connected to an external serial EEPROM chip (such as the BASIC Stamp) or an external serial flash memory chip.
A few microcontrollers beginning in 2003 have "self-programmable" flash memory.
Firmware
The earliest microcontrollers used mask ROM to store firmware. Later microcontrollers (such as the early versions of the Freescale 68HC11 and early PIC microcontrollers) had EPROM memory, which used a translucent window to allow erasure via UV light, while production versions had no such window, being OTP (one-time-programmable). Firmware updates were equivalent to replacing the microcontroller itself, thus many products were not upgradeable.
The Motorola MC68HC805 was the first microcontroller to use EEPROM to store the firmware. EEPROM microcontrollers became more popular in 1993 when Microchip introduced the PIC16C84 and Atmel introduced an 8051-core microcontroller that was the first to use NOR flash memory to store the firmware. Today's microcontrollers almost exclusively use flash memory, with a few models using FRAM, and some ultra-low-cost parts still use OTP or mask ROM.
See also
List of common microcontrollers
List of Wi-Fi microcontrollers
List of open-source hardware projects
Microbotics
Programmable logic controller
Single-board microcontroller
References
External links
MagiC
MagiC is a third-party, now open-sourced, multitasking-capable, TOS-compatible operating system for Atari computers, including some newer clone systems manufactured later. There are also variants that run as part of Mac and PC emulation environments, as well as on macOS Intel-Mac computers.
Features
The kernel of MagiC is largely written in hand-coded assembly language for Motorola 68000, and offers:
Extensive Atari TOS compatibility (the developer also created an improved TOS variant, KAOS)
Restricted MiNT/MultiTOS compatibility
Preemptive multitasking
Loadable file systems and long file names
Significant performance advantages over both the original TOS and MiNT/MultiTOS platform on the same hardware
Disadvantages
MagiC was originally a commercial product and, unlike MiNT, not freely available
MagiC is not 100% compatible with the original TOS
Drivers and file systems from MiNT are not compatible with MagiC
Magic-Mac and Magic-PC variants only run under Mac OS and Microsoft Windows respectively, not e.g. Linux distributions
Some Atari ST programs assume they alone control the machine and are troublesome when multitasked (mostly graphics glitches)
History and variants
Atari platform
MagiC was originally released as Mag!X (or MagiX) in 1992. At that time, TOS featured only limited multitasking in the form of desk accessory programs — simple programs accessed from the "Desk" menu that multitasked using cooperative task switching. In contrast, MagiC offered preemptive multitasking, giving the ability to run multiple (well-behaved) GEM applications as well as other non-graphical software on the Atari ST series, the Atari STE, and the Atari TT.
The name changed from Mag!X to MagiC with the release of version 3.0, which added many improvements and a significant amount of MiNT compatibility. Version 4.0 added support for the Atari Falcon, and finally in 1995 version 5.0 brought the significant addition of loadable file system support, along with an implementation of VFAT with long file names, and a number of other improvements to the GEMDOS layer including threads and signals.
Clone machines
MagiC versions 6.0 through 6.2 were released also for use with Atari clone machines of the late 1990s (e.g. Milan manufactured by MILAN Computersystems, Hades by Medusa Computer Systems). They include significant enhancements, such as support for FAT32, increased MiNT compatibility, and support for newer processors and hardware found in the clone systems. Version 6.2 is the latest for Atari machines.
Apple Macintosh
Atari was slow to improve the hardware of its systems, and in the mid- to late 1990s it was apparent that the Apple Macintosh systems, and some clones by other manufacturers, were a superior hardware platform. Given that Ataris and Macs shared a very similar user interface, the latter were a logical upgrade path for many Atari users. So in 1994 a variant of MagiC known as MagiCMac was released, allowing Atari ST users to run their software on modern Mac hardware.
At first MagiCMac was offered for Macs with Motorola 680x0 CPUs; a version for PowerPC CPUs followed. Later releases offered improved integration with the classic Mac OS, and allowed well-behaved Atari software to access the native graphics modes offered by the host machine, in addition to emulations of the standard Atari screen modes. Version 6.2 is the latest for machines with classic Mac OS (up to version 9.2).
PowerPC and Mac OS X
With the introduction of Mac OS X on newer Power Macs, the original MagiC-Mac would no longer run, as it operated at a low level within the classic Mac OS in order to function. Newer OS X versions no longer include a system-wide emulation layer for Motorola 680x0 code, as was the case before. So in 2002 a reworked variant, MagiC-Mac X, was released for OS X.
The program itself is a "Carbon" program; it ran only under Mac OS X, not under Mac OS 9.x or in the "Classic Environment". To maximise effectiveness it contained improved code, and integrated parts of the Asgard68k emulator written in hand-optimised PPC assembler (also used in the MESS and MAME projects), to reach high emulation speeds on machines with PowerPC processors (typically PowerPC G4 and G5 Macs). MagiC-Mac X was updated in 2004 and 2009, becoming a "Universal Binary" and running natively on both older PowerPC Macs and newer Macs with Intel processors under Mac OS X (version 10.4 "Tiger" to 10.6 "Snow Leopard"). Version 2.0 is the latest for PowerPC machines.
IBM PC and older Windows
In the summer of 1996 MagiC-PC was released, allowing Atari ST users to run their software on top of MS-DOS-based Windows 9x to ME, as well as under the more modern Windows NT 4 to XP. Atari files and directories were organised in drive containers, which appeared to Windows as large archive files. Windows' own directories were mapped as partitions to access them. Networking access and printing via Windows and Novell NetWare was provided for the Atari environment.
System requirements for emulating an Atari ST or STE system were:
A PC with a minimum of 16 megabytes of RAM
An Intel 80486 processor, or a processor of comparable performance from another manufacturer
For speed similar to an Atari Falcon system (with Motorola 68030):
An Intel Pentium (P5/80586) at 100 MHz or higher, or a comparable processor from another manufacturer
To achieve faster program execution than on original Atari hardware, higher-clocked CPUs and more usable system memory were worthwhile upgrades for PCs.
Modern Windows
MagiC-PC is fast but unsupported on newer versions of Windows. It does still work, but may cause problems (hangs) when trying to shut down the Atari session itself; pausing the emulation and then closing it is a possible workaround. It can also help to replace the original "Shutdown" program that comes with MagiC (and ends an Atari session) with a different one. Restarting a session is then done using the "MagiC" menu bar under Windows. Installing MagiC-PC on a USB flash drive is also possible, so the emulation environment can be used on computers running Windows 7 and higher.
An alternative to MagiC-PC is Hatari, especially under other free operating systems such as Linux. Because the program is written in plain C, using the SDL libraries and in part UAE (emulator) code for multimedia and hardware, it requires fairly fast processors (over 1 GHz for Atari ST/STE emulation, over 2 GHz for Atari Falcon emulation). For faster program execution the machine should be at least of the Pentium 4 or Athlon XP class, respectively.
AtariX for macOS Intel-Macs
The successor to MagiC-Mac X on the Apple platform is AtariX, also coded by Andreas Kromke; it has recently been released under GPL v3. The software integrates in part the Musashi 68k emulator, written in plain C. AtariX is not as optimised as its predecessor once was, but the code written in C makes it more portable. Thus it will not reach the emulation speeds of the former software, but AtariX aims to run under more modern macOS versions (up to at least version 10.13 "High Sierra") on Intel-only Mac systems with faster processors.
NVDI for MagiC
Another third-party system enhancement for the Atari platform was NVDI, originally developed by Sven and Wilfried Behne. It implemented advanced and accelerated graphics functions, improved driver functionality, and productivity utilities for Atari programs. The last stand-alone version 5.02/5.03 of NVDI, released in the early 2000s, worked with standard Atari TOS, MagiC for Atari, MagiC-PC, MagiC-Mac, and extra graphics cards for Ataris (ET 4000, Matrix MatGraph, Computerinsel NOVA). When bundled with MagiC it was renamed MVDI.
NVDI offered highly optimised graphics routines in the Atari environment (TOS or MagiC); under Windows and Mac OS, emulation speed is raised with Magic-PC and Magic-Mac by mapping most of the Atari VDI calls to those of the host operating system. In Windows this is done using GDI calls, using native PC code for these functions. Similar functionality and higher graphics speed was provided with MagiC-Mac, using QuickDraw calls in the classic Mac OS environment.
NVDI allows the use of up to millions of colours; for on-screen text it supports Bitstream Speedo, TrueType and PostScript fonts installed on Windows and classic Mac OS, and it features modernised printing capabilities via GDOS for programs, whether run natively on the Atari or in emulation on PC and Mac.
MagiC Desk
MagiC's implementation of the GEM Desktop was greatly enhanced over the version included in the original TOS systems. Initially named Mag!X Desk, but changing to MagiC Desk with the release of MagiC 3.0, it offered features missing from the original Desktop, including:
Parallel (i.e. in the background) copy/move/delete/format operations
Long file names
Aliases (symbolic links)
Colour icon support
Unlike the GEM Desktop, MagiC Desk was not built into MagiC but instead could be launched as an application at startup. It is possible to start MagiC with another shell if desired (popular alternative shells include Jinnee and Thing). Various software can expand the usability of MagiC; extra network support, for example, is provided by MagiC-Net.
GPL Release
In 2018 MagiC developer Andreas Kromke released the sources of MagiC variants and MagiC Desk and other software under the GPL version 3, including the extra NVDI/MVDI enhancement which came with MagiC.
Provided as open source are:
TOS, and KAOS (an improved TOS variant with many bugs removed)
MagiX / MagiC for Atari computers, MagiC-Mac for classic Mac OS (Motorola 68000 variants)
Magic-Mac X for older Mac OS X on PowerPCs, and AtariX for newer macOS on Apple–Intel architecture
NVDI/MVDI for MagiC, as enhancement to the MagiC environment
See also
emuTOS, an Atari single-tasking operating system component
MiNT, another Atari multi-tasking operating system component
Hatari (emulator), a free Atari ST/TT/Falcon emulator
ARAnyM (emulator), a free Atari ST/TT/Falcon virtual machine-emulator
Motorola 68000 series, 16- and 32-bit CPUs of the original Atari and Amiga era
References
External links
ASH distributor page, info on MagiC and variants (German)
Programmer documentation, including detailed description of MagiC APIs
The MagiC Documentation Project
Network support (MagiC-Net a.o.) for MagiC
Atari-Mac-MagiC on GitLab - Sources of MagiC a.o. components
AtariX on GitLab – Sources of the AtariX computer emulator for macOS
GEM software
Windows software
MacOS software
Atari ST software
Atari operating systems
Disk operating systems
Free software operating systems
Communications server
Communications servers are open, standards-based computing systems that operate as a carrier-grade common platform for a wide range of communications applications and allow equipment providers to add value at many levels of the system architecture.
Based on industry-managed standards such as AdvancedTCA, MicroTCA, Carrier Grade Linux and Service Availability Forum specifications, communications servers are the foundational platform upon which equipment providers build network infrastructure elements for deployments such as IP Multimedia Subsystem (IMS), IPTV and wireless broadband (e.g. WiMAX).
Support for communications servers as a category of server is developing rapidly throughout the communications industry. Standards bodies, industry associations, vendor alliance programs, hardware and software manufacturers, communications server vendors and users are all part of an increasingly robust communications server ecosystem.
Regardless of their specific, differentiated features, communications servers have the following attributes: open, flexible, carrier-grade, and communications-focused.
Attributes
Open
Based on industry-managed open standards
Broad, multi-vendor ecosystem
Industry certified interoperability
Availability of tools that facilitate development and integration of applications at the standardized interfaces
Multiple competitive options for standards-based modules
Flexible
Designed to easily incorporate application-specific added value at all levels of the solution
Can be rapidly repurposed as needs change to protect customer investment
Multi-level, scalable, bladed architecture
Meets needs of multiple industries beyond telecommunications, such as medical imaging, defense and aerospace
Carrier grade
Designed for
Longevity of supply
Extended lifecycle (>10 years) support
High availability (>5NINES)
“Non-disruptively” upgradeable and updateable
Hard real time capability to ensure quality of service for critical traffic
Meets network building regulations
Industry-managed standards
Several industry-managed standards are critical to the success of communications servers, including:
AdvancedTCA
The Advanced Telecommunications Computing Architecture (ATCA) is a series of PCI Industrial Computers Manufacturers Group (PICMG) specifications, targeted to meet the requirements for carrier grade communications equipment. This series of specifications incorporates the latest trends in high speed interconnect technologies, next generation processors and improved reliability, manageability and serviceability.
AdvancedMC
The PICMG Advanced Mezzanine Card specification defines the base-level requirements for a wide range of high-speed mezzanine cards optimized for, but not limited to, AdvancedTCA Carriers. AdvancedMC enhances AdvancedTCA's flexibility by extending its high-bandwidth, multi-protocol interface to individual hot-swappable modules.
MicroTCA
This PICMG specification provides a framework for combining AdvancedMC modules directly, without the need for an AdvancedTCA or custom carrier. MicroTCA is aimed at smaller equipment – such as wireless base stations, Wi-Fi and WiMAX radios, and VoIP access gateways – where small physical size, low entry cost, and scalability are key requirements.
Carrier Grade Linux
An enhanced version of Linux for use in a highly available, secure, scalable, and maintainable carrier grade system. The specification is managed by the CGL Working Group of the Open Source Development Labs.
HPI and AIS
These Service Availability Forum (SA Forum) specifications define standard interfaces for telecom platform management and high-availability software.
The Hardware Platform Interface (HPI) specification defines the interface between high availability middleware and the underlying hardware and operating system.
At a higher layer than HPI, the Application Interface Specification (AIS) defines the application programming interface between the high availability middleware and the application. AIS allows an application to run on multiple computing modules, and applications that support AIS can migrate more easily between computing platforms from different manufacturers that support the standard.
In addition to the standards development organizations mentioned above, four industry associations / vendor alliance programs are playing key roles in the development of the communications server ecosystem.
Industry associations
SCOPE Alliance
SCOPE Alliance is an industry alliance committed to accelerating the deployment of carrier grade base platforms for service provider applications. Its mission is to help, enable and promote the availability of open carrier grade base platforms based on Commercial-Off-The-Shelf hardware / software and Free Open Source Software building blocks, and to promote interoperability to better serve Service Providers and consumers.
Communications Platforms Trade Association
The Communications Platforms Trade Association (CP-TA) is an association of communications platforms and building block providers dedicated to accelerating the adoption of SIG-governed, open specification-based communications platforms through interoperability certification. With industry collaboration, the CP-TA plans to drive a mainstream market for open industry standards-based communications platforms by certifying interoperable products.
Vendor alliance programs
Intel Communications Alliance
The Intel Communications Alliance is a community of communications and embedded developers and solutions providers committed to the development of modular, standards-based solutions on Intel technologies.
Motorola Communications Server Alliance
The Motorola Communications Server Alliance is an ecosystem of technology, service and solution providers aligned to provide standards-based solution elements validated with Motorola's communications servers. Alliance participants receive access to Motorola embedded communications computing product roadmaps, development systems, and participate in marketing activities with Motorola.
Mobicents Open Source Communications Community
The Mobicents Open Source Communications Community is an ecosystem of technology, service and solution providers aligned to provide Open Source, Open Standards-based communication software. Community members contribute to the Mobicents product roadmaps, research, development, and marketing activities.
See also
SAForum
SCOPE Alliance
OpenHPI
OpenSAF
Telecommunications equipment
Servers (computing)
Pre-installed software
Pre-installed software (also known as bundled software) is software already installed and licensed on a computer or smartphone bought from an original equipment manufacturer (OEM). The operating system is usually factory-installed, but because it is a general requirement, the term usually refers to additional software beyond the bare necessary amount, usually from other sources (or from the operating system vendor).
Unwanted factory-installed software (also known as crapware or bloatware) can include major security vulnerabilities, like Superfish, which installs a root certificate to inject advertising into encrypted Google search pages, but leaves computers vulnerable to serious cyberattacks that breach the security used in banking and finance websites.
Some "free download" websites use unwanted software bundling that similarly installs unwanted software.
Unwanted software
Often new PCs come with factory-installed software which the manufacturer was paid to include, but is of dubious value to the purchaser. Most of these programs are included without the user's knowledge, and have no instructions on how to opt-out or remove them.
A Microsoft executive mentioned that within the company these applications were dubbed craplets (a portmanteau of crap and applet). He suggested that the experience of people buying a new Windows computer can be damaged by poorly designed, uncertified third-party applications installed by vendors. He stated that the antitrust case against Microsoft prevented the company from stopping the pre-installation of these programs by OEMs. Walt Mossberg, technology columnist for The Wall Street Journal, condemned "craplets" in two columns published in April 2007, and suggested several possible strategies for removing them.
The bundling of these unwanted applications is often performed in exchange for financial compensation, paid to the OEM by the application's publisher. At the 2007 Consumer Electronics Show, Dell defended this practice, stating that it keeps costs down, and implying that systems might cost significantly more to the end user if these programs were not factory-installed. Some system vendors and retailers will offer, for an additional charge, to remove unwanted factory-installed software from a newly purchased computer; retailers, in particular, will tout this service as a "performance improvement." In 2008, Sony Corporation announced a plan to charge end users US$50 for the service; Sony subsequently decided to drop the charge for this service and offer it for free after many users expressed outrage. Microsoft Store similarly offers a range of "Signature Edition" computers sold in a similar state, as well as extended warranty and support packages through Microsoft.
On smartphones
Mobile phones typically come with factory-installed software provided by its manufacturer or mobile network operator; similarly to their PC equivalents, they are sometimes tied to account management or other premium services offered by the provider. The practice was extended to smartphones via Android, as carriers often bundle apps provided by themselves and third-party developers with the device and, furthermore, install them into the System partition, making it so that they cannot be completely removed from the device without performing unsupported modifications to its firmware (such as rooting) first.
Some of these apps may run in the background, consuming battery life, and may also duplicate functionality already provided by the phone itself; for example, Verizon Wireless has bundled phones with a redundant text messaging app known as "Messages+" (which is set as the default text messaging program in lieu of the stock messaging app included within the OS), and VZ Navigator (a subscription service redundant to the free Google Maps service). In addition, apps bundled by OEMs may also include special system-level permissions that bypass those normally enforced by the operating system.
Android 4.0 attempted to address these issues by allowing users to "disable" apps—which hides them from application menus and prevents them from running. However, this does not remove the software from the device entirely, and they still consume storage unless they are removed via unsupported modifications. Android 5.0 began to allow carrier apps to be automatically downloaded from Google Play Store during initial device setup instead; they are installed the same way as user-downloaded apps, and can be uninstalled normally.
By contrast, Apple does not allow operators to customize iPhone in this manner. However, the company has faced criticism for including an increasing number of factory-installed apps in iOS that cannot be removed.
Legal considerations
In April 2014, South Korea implemented new regulatory guidelines for the mobile phone industry, requiring non-essential apps bundled on a smartphone to be user-removable.
In December 2019, Russia passed a law, effective 1 July 2020, which requires that specific types of consumer electronics devices be factory-installed with applications developed by Russian vendors. The goal of this law is to discourage the use of foreign competitors' software.
See also
Bundled software
Product bundling
Bundleware
Shovelware
Tying (commerce)
Application software
References
Software distribution
BBS
BBS may refer to:
Technologies
Bulletin board system, a computer that allows users to dial into the system over a phone line or telnet connection
BIOS Boot Specification, a system firmware specification related to initial program load (IPL; AKA booting)
Blum Blum Shub, a pseudorandom number generator
Organisations
Bahrain Bayan School, a school in Bahrain
Bangladesh Bureau of Statistics
Birmingham Business School, a business school in the UK
Bologna Business School, a business school in Italy
Budapest Business School, a business school in Hungary
BBS Kraftfahrzeugtechnik AG, an automobile wheel manufacturer
Bodu Bala Sena, a Sri Lankan political organization
British Blind Sport, a sporting charity for people who are visually impaired
British Boy Scouts
British Bryological Society
Badger Boys State, a youth government camp held in Wisconsin
Science
Bardet–Biedl syndrome, a genetic disorder
Behavioral and Brain Sciences, a peer-reviewed journal
Berg Balance Scale, a test of a person's static and dynamic balance abilities
Bogart–Bacall syndrome, a vocal misuse disorder named after Humphrey Bogart and Lauren Bacall
Borate buffered saline
Breeding bird survey, which monitors the status and trends of bird populations
Arts and media
Baton Broadcast System, a system of Canadian television stations
Bhutan Broadcasting Service, a radio and television service in Bhutan
BBS Productions, a film production company of early 1970s New Hollywood
Kingdom Hearts Birth by Sleep, a video game for the PlayStation Portable
Transport
Bhubaneswar railway station, Odisha, India (Indian Railways station code)
Bras Basah MRT station, Singapore (MRT station abbreviation)
Other uses
Behavior-based safety, a risk reduction method
Bachelor of Business Studies, an academic degree
BBs, another name for the metal ammunition used by a BB gun
BBs, another name for the plastic-pellet ammunition used by an airsoft gun
Bronze Bauhinia Star, in the Hong Kong honors system
List of computer size categories
This list of computer size categories attempts to list commonly used categories of computer by the physical size of the device and its chassis or case, in descending order of size. One generation's "supercomputer" is the next generation's "mainframe", and a "PDA" does not have the same set of functions as a "laptop", but the list still has value, as it provides a ranked categorization of devices. It also ranks some more obscure computer sizes; common categories range from supercomputers and mainframes down to minicomputers and microcomputers.
Large computers
Supercomputer
Minisupercomputer
Mainframe computer
Midrange computer
Superminicomputer
Minicomputer
Microcomputers
Interactive kiosk
Arcade cabinet
Personal computer (PC)
Desktop computer—see computer form factor for some standardized sizes of desktop computers
full-sized
All-in-One
compact
Home theater
Home computer
Mobile computers
Desktop replacement computer or desknote
Laptop computer
Subnotebook computer, also known as a Kneetop computer; clamshell varieties may also be known as minilaptop or ultraportable laptop computers
Tablet personal computer
Handheld computers, which include the classes:
Ultra-mobile personal computer, or UMPC
Personal digital assistant or enterprise digital assistant, which include:
HandheldPC or Palmtop computer
Pocket personal computer
Electronic organizer
Pocket computer
Calculator, which includes the class:
Graphing calculator
Scientific calculator
Programmable calculator
Accounting / Financial Calculator
Handheld game console
Portable media player
Portable data terminal
Handheld
Smartphone, a class of mobile phone
Feature phone
Wearable computer
Single board computer
Wireless sensor network components
Plug computer
Stick PC, a single-board computer in a small elongated casing resembling a stick
Microcontroller
Smartdust
Nanocomputer
Others
Rackmount computer
Blade server
Blade PC
Small form factor personal computer (SFF, ITX, DTX.etc.)
Distinctive marks
The classes above are not rigid; there are "edge devices" in most of them. For instance, the "subnotebook" category can usually be distinguished from the "PDA" category because a subnotebook has a keyboard and a PDA does not; however, tablet PCs may be larger than subnotebooks (making it seemingly correct to classify them as laptops) and also lack a keyboard, while devices such as the Handspring Treo 600 have something that might charitably be called a keyboard, but are still definitely in the "smartphone" category.
In the higher end of the spectrum, this informal and somewhat humorous rule might help:
You can throw a laptop if you wanted to
You can lift a workstation if you need to
You can tilt a minicomputer if you need to
You cannot move a mainframe, even if you tried
Categories
:Category:Supercomputers
:Category:Mainframe computers
:Category:Minicomputers
:Category:Portable computers
:Category:Mobile computers
:Category:Laptops
:Category:Notebooks
:Category:Tablet computers
:Category:Subnotebooks
:Category:Portable computers
:Category:Pocket computers
:Category:Personal digital assistants
:Category:Calculators
:Category:Handheld game consoles
:Category:Information appliances
:Category:Wearable computers
:Category:Embedded systems
:Category:Wireless sensor network
See also
Classes of computers
Computer form factor
Form factor (design)
References
List of computer size categories
Computer size categories
Microarchitecture
In computer engineering, microarchitecture, also called computer organization and sometimes abbreviated as µarch or uarch, is the way a given instruction set architecture (ISA) is implemented in a particular processor. A given ISA may be implemented with different microarchitectures; implementations may vary due to different goals of a given design or due to shifts in technology.
Computer architecture is the combination of microarchitecture and instruction set architecture.
Relation to instruction set architecture
The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the instructions, execution model, processor registers, address and data formats among other things. The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA.
The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be anything from single gates and registers, to complete arithmetic logic units (ALUs) and even larger elements. These diagrams generally separate the datapath (where data is placed) and the control path (which can be said to steer the data).
The person designing a system usually draws the specific microarchitecture as a kind of data flow diagram. Like a block diagram, the microarchitecture diagram shows microarchitectural elements such as the arithmetic and logic unit and the register file as a single schematic symbol. Typically, the diagram connects those elements with arrows, thick lines and thin lines to distinguish between three-state buses (which require a three-state buffer for each device that drives the bus), unidirectional buses (always driven by a single source, such as the way the address bus on simpler computers is always driven by the memory address register), and individual control lines. Very simple computers have a single data bus organization: a single three-state bus. The diagrams of more complex computers usually show multiple three-state buses, which help the machine do more operations simultaneously.
Each microarchitectural element is in turn represented by a schematic describing the interconnections of logic gates used to implement it. Each logic gate is in turn represented by a circuit diagram describing the connections of the transistors used to implement it in some particular logic family. Machines with different microarchitectures may have the same instruction set architecture, and thus be capable of executing the same programs. New microarchitectures and/or circuitry solutions, along with advances in semiconductor manufacturing, are what allow newer generations of processors to achieve higher performance while using the same ISA.
In principle, a single microarchitecture could execute several different ISAs with only minor changes to the microcode.
Aspects
The pipelined datapath is the most commonly used datapath design in microarchitecture today. This technique is used in most modern microprocessors, microcontrollers, and DSPs. The pipelined architecture allows multiple instructions to overlap in execution, much like an assembly line. The pipeline includes several different stages which are fundamental in microarchitecture designs. Some of these stages include instruction fetch, instruction decode, execute, and write back. Some architectures include other stages such as memory access. The design of pipelines is one of the central microarchitectural tasks.
Execution units are also essential to microarchitecture. Execution units include arithmetic logic units (ALU), floating point units (FPU), load/store units, branch prediction, and SIMD. These units perform the operations or calculations of the processor. The choice of the number of execution units, their latency and throughput is a central microarchitectural design task. The size, latency, throughput and connectivity of memories within the system are also microarchitectural decisions.
System-level design decisions such as whether or not to include peripherals, such as memory controllers, can be considered part of the microarchitectural design process. This includes decisions on the performance-level and connectivity of these peripherals.
Unlike architectural design, where achieving a specific performance level is the main goal, microarchitectural design pays closer attention to other constraints. Since microarchitecture design decisions directly affect what goes into a system, attention must be paid to issues such as chip area/cost, power consumption, logic complexity, ease of connectivity, manufacturability, ease of debugging, and testability.
Microarchitectural concepts
Instruction cycles
To run programs, all single- or multi-chip CPUs:
Read an instruction and decode it
Find any associated data that is needed to process the instruction
Process the instruction
Write the results out
The instruction cycle is repeated continuously until the power is turned off.
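As a rough illustration, the four steps can be sketched as a toy fetch-decode-execute loop. The accumulator machine, opcode names, and program below are invented for illustration and do not correspond to any real ISA:

```python
# Toy fetch-decode-execute loop for a hypothetical accumulator machine.
# Instructions are (opcode, operand) pairs; HALT stops the loop, which
# otherwise repeats just as a real CPU cycles until power-off.

def run(program, memory):
    acc = 0  # accumulator register
    pc = 0   # program counter
    while True:
        opcode, operand = program[pc]  # 1. read an instruction and decode it
        pc += 1
        if opcode == "LOAD":           # 2. find the associated data
            acc = memory[operand]
        elif opcode == "ADD":          # 3. process the instruction
            acc += memory[operand]
        elif opcode == "STORE":        # 4. write the results out
            memory[operand] = acc
        elif opcode == "HALT":
            break
    return memory

mem = {0: 5, 1: 7, 2: 0}
prog = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
print(run(prog, mem)[2])  # 12
```

A real processor does the same loop in hardware, with the decode step selecting which datapath elements act on each cycle.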
Multicycle microarchitecture
Historically, the earliest computers were multicycle designs. The smallest, least-expensive computers often still use this technique. Multicycle architectures often use the least total number of logic elements and reasonable amounts of power. They can be designed to have deterministic timing and high reliability. In particular, they have no pipeline to stall when taking conditional branches or interrupts. However, other microarchitectures often perform more instructions per unit time, using the same logic family. When discussing "improved performance," an improvement is often relative to a multicycle design.
In a multicycle computer, the computer does the four steps in sequence, over several cycles of the clock. Some designs can perform the sequence in two clock cycles by completing successive stages on alternate clock edges, possibly with longer operations occurring outside the main cycle. For example, stage one on the rising edge of the first cycle, stage two on the falling edge of the first cycle, etc.
In the control logic, the combination of cycle counter, cycle state (high or low) and the bits of the instruction decode register determine exactly what each part of the computer should be doing. To design the control logic, one can create a table of bits describing the control signals to each part of the computer in each cycle of each instruction. Then, this logic table can be tested in a software simulation running test code. If the logic table is placed in a memory and used to actually run a real computer, it is called a microprogram. In some computer designs, the logic table is optimized into the form of combinational logic made from logic gates, usually using a computer program that optimizes logic. Early computers used ad-hoc logic design for control until Maurice Wilkes invented this tabular approach and called it microprogramming.
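Wilkes's tabular approach can be pictured as a lookup table of control bits indexed by instruction and cycle. The instruction names, cycle counts, and signal names below are hypothetical, chosen only to show the shape of such a table:

```python
# A microprogram viewed as a lookup table: for each (instruction, cycle)
# pair the table gives the control signals to assert. In a microprogrammed
# machine this table lives in a control memory; in a hardwired machine it
# is optimized into combinational logic.

MICROPROGRAM = {
    ("LOAD", 0): {"mem_read": 1, "alu_op": 0, "reg_write": 0},
    ("LOAD", 1): {"mem_read": 0, "alu_op": 0, "reg_write": 1},
    ("ADD",  0): {"mem_read": 0, "alu_op": 1, "reg_write": 0},
    ("ADD",  1): {"mem_read": 0, "alu_op": 0, "reg_write": 1},
}

def control_signals(instruction, cycle):
    """The control unit simply indexes the table each clock cycle."""
    return MICROPROGRAM[(instruction, cycle)]

print(control_signals("LOAD", 0)["mem_read"])  # 1
```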
Increasing execution speed
Complicating this simple-looking series of steps is the fact that the memory hierarchy, which includes caching, main memory and non-volatile storage like hard disks (where the program instructions and data reside), has always been slower than the processor itself. Step (2) often introduces a lengthy (in CPU terms) delay while the data arrives over the computer bus. A considerable amount of research has been put into designs that avoid these delays as much as possible. Over the years, a central goal was to execute more instructions in parallel, thus increasing the effective execution speed of a program. These efforts introduced complicated logic and circuit structures. Initially, these techniques could only be implemented on expensive mainframes or supercomputers due to the amount of circuitry needed for these techniques. As semiconductor manufacturing progressed, more and more of these techniques could be implemented on a single semiconductor chip. See Moore's law.
Instruction set choice
Instruction sets have shifted over the years, from originally very simple to sometimes very complex (in various respects). In recent years, load–store architectures, VLIW and EPIC types have been in fashion. Architectures dealing with data parallelism include SIMD and vector processors. Some labels used to denote classes of CPU architectures are not particularly descriptive, especially so the CISC label; many early designs retroactively denoted "CISC" are in fact significantly simpler than modern RISC processors (in several respects).
However, the choice of instruction set architecture may greatly affect the complexity of implementing high-performance devices. The prominent strategy, used to develop the first RISC processors, was to simplify instructions to a minimum of individual semantic complexity combined with high encoding regularity and simplicity. Such uniform instructions were easily fetched, decoded and executed in a pipelined fashion, a simple strategy that reduced the number of logic levels in order to reach high operating frequencies; instruction cache memories compensated for the higher operating frequency and inherently low code density, while large register sets were used to factor out as many of the (slow) memory accesses as possible.
Instruction pipelining
One of the first, and most powerful, techniques to improve performance is the use of instruction pipelining. Early processor designs would carry out all of the steps above for one instruction before moving on to the next. Large portions of the circuitry were left idle at any one step; for instance, the instruction decoding circuitry would be idle during execution and so on.
Pipelining improves performance by allowing a number of instructions to work their way through the processor at the same time. In the same basic example, the processor would start to decode (step 1) a new instruction while the last one was waiting for results. This would allow up to four instructions to be "in flight" at one time, making the processor look four times as fast. Although any one instruction takes just as long to complete (there are still four steps) the CPU as a whole "retires" instructions much faster.
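The arithmetic behind the speedup is simple: with k stages and a new instruction entering every cycle, n instructions complete in k + n - 1 cycles instead of k * n. This is an idealized count that ignores stalls and flushes:

```python
# Idealized pipeline timing: each instruction still takes k cycles of
# latency, but throughput approaches one instruction per cycle.

def unpipelined_cycles(n, k):
    """Each of n instructions occupies all k stages before the next starts."""
    return n * k

def pipelined_cycles(n, k):
    """k cycles to fill the pipeline, then one instruction retires per cycle."""
    return k + n - 1

n, k = 100, 4
print(unpipelined_cycles(n, k))  # 400
print(pipelined_cycles(n, k))    # 103 -> nearly 4x the throughput
```

As n grows the ratio tends toward k, which is why the four-stage example above "looks four times as fast".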
RISC makes pipelines smaller and much easier to construct by cleanly separating each stage of the instruction process and making them take the same amount of time—one cycle. The processor as a whole operates in an assembly line fashion, with instructions coming in one side and results out the other. Due to the reduced complexity of the classic RISC pipeline, the pipelined core and an instruction cache could be placed on the same size die that would otherwise fit the core alone on a CISC design. This was the real reason that RISC was faster. Early designs like the SPARC and MIPS often ran over 10 times as fast as Intel and Motorola CISC solutions at the same clock speed and price.
Pipelines are by no means limited to RISC designs. By 1986 the top-of-the-line VAX implementation (VAX 8800) was a heavily pipelined design, slightly predating the first commercial MIPS and SPARC designs. Most modern CPUs (even embedded CPUs) are now pipelined, and microcoded CPUs with no pipelining are seen only in the most area-constrained embedded processors. Large CISC machines, from the VAX 8800 to the modern Pentium 4 and Athlon, are implemented with both microcode and pipelines. Improvements in pipelining and caching are the two major microarchitectural advances that have enabled processor performance to keep pace with the circuit technology on which they are based.
Cache
It was not long before improvements in chip manufacturing allowed for even more circuitry to be placed on the die, and designers started looking for ways to use it. One of the most common was to add an ever-increasing amount of cache memory on-die. Cache is very fast and expensive memory. It can be accessed in a few cycles as opposed to many needed to "talk" to main memory. The CPU includes a cache controller which automates reading and writing from the cache. If the data is already in the cache it is accessed from there – at considerable time savings, whereas if it is not the processor is "stalled" while the cache controller reads it in.
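The hit/stall behavior can be sketched in a few lines. The sketch below assumes a direct-mapped cache, and its line count and latencies are made-up round numbers, not figures from any real processor:

```python
# Direct-mapped cache sketch: a hit costs a few cycles, a miss "stalls"
# the processor for a main-memory access while the controller reads the
# line in. Each address maps to exactly one cache line (its index).

HIT_CYCLES, MISS_CYCLES, NUM_LINES = 2, 100, 8

cache = {}        # index -> tag of the line currently held there
total_cycles = 0  # accumulated access time

def access(addr):
    global total_cycles
    index, tag = addr % NUM_LINES, addr // NUM_LINES
    if cache.get(index) == tag:
        total_cycles += HIT_CYCLES   # served from cache
        return "hit"
    cache[index] = tag               # controller reads the line in (stall)
    total_cycles += MISS_CYCLES
    return "miss"

print(access(3))  # miss (cold cache)
print(access(3))  # hit  (same line reused)
```

Two addresses that share an index (here, addresses 8 lines apart) evict each other, which is the kind of stalling that larger and more associative caches reduce.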
RISC designs started adding cache in the mid-to-late 1980s, often only 4 KB in total. This number grew over time, and typical CPUs now have at least 512 KB, while more powerful CPUs come with 1 or 2 or even 4, 6, 8 or 12 MB, organized in multiple levels of a memory hierarchy. Generally speaking, more cache means more performance, due to reduced stalling.
Caches and pipelines were a perfect match for each other. Previously, it didn't make much sense to build a pipeline that could run faster than the access latency of off-chip memory. Using on-chip cache memory instead meant that a pipeline could run at the speed of the cache access latency, a much smaller length of time. This allowed the operating frequencies of processors to increase at a much faster rate than that of off-chip memory.
Branch prediction
One barrier to achieving higher performance through instruction-level parallelism stems from pipeline stalls and flushes due to branches. Normally, whether a conditional branch will be taken isn't known until late in the pipeline as conditional branches depend on results coming from a register. From the time that the processor's instruction decoder has figured out that it has encountered a conditional branch instruction to the time that the deciding register value can be read out, the pipeline needs to be stalled for several cycles, or if it's not and the branch is taken, the pipeline needs to be flushed. As clock speeds increase the depth of the pipeline increases with it, and some modern processors may have 20 stages or more. On average, every fifth instruction executed is a branch, so without any intervention, that's a high amount of stalling.
Techniques such as branch prediction and speculative execution are used to lessen these branch penalties. Branch prediction is where the hardware makes educated guesses on whether a particular branch will be taken. In reality one side or the other of the branch will be called much more often than the other. Modern designs have rather complex statistical prediction systems, which watch the results of past branches to predict the future with greater accuracy. The guess allows the hardware to prefetch instructions without waiting for the register read. Speculative execution is a further enhancement in which the code along the predicted path is not just prefetched but also executed before it is known whether the branch should be taken or not. This can yield better performance when the guess is good, with the risk of a huge penalty when the guess is bad because instructions need to be undone.
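One classic predictor of this kind is a two-bit saturating counter kept per branch. The sketch below simplifies to a single branch; states 0 and 1 predict "not taken", states 2 and 3 predict "taken", and each outcome only nudges the counter, so a single anomalous outcome does not flip the prediction:

```python
# Two-bit saturating counter, a common textbook branch-prediction scheme.
# Real predictors keep many such counters indexed by branch address and
# combine them with history, but the core state machine is this simple.

class TwoBitPredictor:
    def __init__(self):
        self.state = 2  # start weakly "taken"

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # Saturate at the ends so the counter never wraps around.
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, False, True, True]  # a mostly-taken loop branch
correct = 0
for taken in outcomes:
    correct += (p.predict() == taken)
    p.update(taken)
print(correct)  # 4 of 5 predicted correctly
```

Note that the single not-taken outcome costs only one misprediction; a one-bit predictor would have mispredicted twice (once on the anomaly and once again after it).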
Superscalar
Even with all of the added complexity and gates needed to support the concepts outlined above, improvements in semiconductor manufacturing soon allowed even more logic gates to be used.
In the outline above the processor processes parts of a single instruction at a time. Computer programs could be executed faster if multiple instructions were processed simultaneously. This is what superscalar processors achieve, by replicating functional units such as ALUs. The replication of functional units was only made possible when the die area of a single-issue processor no longer stretched the limits of what could be reliably manufactured. By the late 1980s, superscalar designs started to enter the marketplace.
In modern designs it is common to find two load units, one store (many instructions have no results to store), two or more integer math units, two or more floating point units, and often a SIMD unit of some sort. The instruction issue logic grows in complexity by reading in a huge list of instructions from memory and handing them off to the different execution units that are idle at that point. The results are then collected and re-ordered at the end.
Out-of-order execution
The addition of caches reduces the frequency or duration of stalls due to waiting for data to be fetched from the memory hierarchy, but does not get rid of these stalls entirely. In early designs a cache miss would force the cache controller to stall the processor and wait. Of course there may be some other instruction in the program whose data is available in the cache at that point. Out-of-order execution allows that ready instruction to be processed while an older instruction waits on the cache, then re-orders the results to make it appear that everything happened in the programmed order. This technique is also used to avoid other operand dependency stalls, such as an instruction awaiting a result from a long latency floating-point operation or other multi-cycle operations.
Register renaming
Register renaming refers to a technique used to avoid unnecessary serialized execution of program instructions caused by the reuse of the same registers by those instructions. Suppose we have two groups of instructions that will use the same register. One set of instructions is executed first to leave the register to the other set, but if the second set is assigned to a different, otherwise equivalent register, both sets of instructions can be executed in parallel rather than in series.
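The idea can be sketched in a few lines. The register names and the tiny instruction format here are invented for illustration; real renamers work on decoded instructions and recycle physical registers from a free list:

```python
# Register renaming sketch: every write to an architectural register is
# given a fresh physical register, and reads are redirected to the most
# recent mapping. This removes false (name-reuse) dependences.

def rename(instructions):
    mapping = {}   # architectural name -> current physical register
    next_phys = 0
    renamed = []
    for dest, srcs in instructions:              # (dest reg, [source regs])
        srcs = [mapping.get(s, s) for s in srcs]  # read current mappings
        mapping[dest] = f"p{next_phys}"           # fresh register per write
        next_phys += 1
        renamed.append((mapping[dest], srcs))
    return renamed

# Two independent groups that both reuse architectural register r1:
prog = [("r1", ["a"]), ("r2", ["r1"]),   # group 1
        ("r1", ["b"]), ("r3", ["r1"])]   # group 2
for dest, srcs in rename(prog):
    print(dest, srcs)
```

After renaming, group 2 writes p2/p3 while group 1 uses p0/p1, so the hardware is free to execute the two groups in parallel.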
Multiprocessing and multithreading
Computer architects have become stymied by the growing mismatch in CPU operating frequencies and DRAM access times. None of the techniques that exploited instruction-level parallelism (ILP) within one program could make up for the long stalls that occurred when data had to be fetched from main memory. Additionally, the large transistor counts and high operating frequencies needed for the more advanced ILP techniques required power dissipation levels that could no longer be cheaply cooled. For these reasons, newer generations of computers have started to exploit higher levels of parallelism that exist outside of a single program or program thread.
This trend is sometimes known as throughput computing. This idea originated in the mainframe market where online transaction processing emphasized not just the execution speed of one transaction, but the capacity to deal with massive numbers of transactions. With transaction-based applications such as network routing and web-site serving greatly increasing in the last decade, the computer industry has re-emphasized capacity and throughput issues.
One way this parallelism is achieved is through multiprocessing systems, computer systems with multiple CPUs. Once reserved for high-end mainframes and supercomputers, small-scale (2–8) multiprocessor servers have become commonplace for the small business market. For large corporations, large-scale (16–256) multiprocessors are common. Even personal computers with multiple CPUs have appeared since the 1990s.
With further transistor size reductions made available by semiconductor technology advances, multi-core CPUs have appeared, in which multiple CPUs are implemented on the same silicon chip. Multi-core designs were initially used in chips targeting embedded markets, where simpler and smaller CPUs allowed multiple instantiations to fit on one piece of silicon. By 2005, semiconductor technology allowed dual high-end desktop CPU CMP chips to be manufactured in volume. Some designs, such as Sun Microsystems' UltraSPARC T1, have reverted to simpler (scalar, in-order) designs in order to fit more processors on one piece of silicon.
Another technique that has become more popular recently is multithreading. In multithreading, when the processor has to fetch data from slow system memory, instead of stalling for the data to arrive, the processor switches to another program or program thread which is ready to execute. Though this does not speed up a particular program/thread, it increases the overall system throughput by reducing the time the CPU is idle.
Conceptually, multithreading is equivalent to a context switch at the operating system level. The difference is that a multithreaded CPU can do a thread switch in one CPU cycle instead of the hundreds or thousands of CPU cycles a context switch normally requires. This is achieved by replicating the state hardware (such as the register file and program counter) for each active thread.
A further enhancement is simultaneous multithreading. This technique allows superscalar CPUs to execute instructions from different programs/threads simultaneously in the same cycle.
See also
Control unit
Hardware architecture
Hardware description language (HDL)
Instruction-level parallelism (ILP)
List of AMD CPU microarchitectures
List of Intel CPU microarchitectures
Processor design
Stream processing
VHDL
Very large-scale integration (VLSI)
Verilog
References
Further reading
Central processing unit
Instruction processing
Microprocessors
IBM 3270 PC
The IBM 3270 PC (IBM System Unit 5271), released in October 1983, is an IBM PC XT containing additional hardware that, in combination with software, can emulate the behaviour of an IBM 3270 terminal. It can therefore be used both as a standalone computer, and as a terminal to a mainframe.
IBM later released the 3270 AT (IBM System Unit 5273), which is a similar design based on the IBM PC AT. They also released high-end graphics versions of the 3270 PC in both XT and AT variants. The XT-based versions are called 3270 PC/G and 3270 PC/GX and they use a different System Unit 5371, while their AT counterparts (PC AT/G and PC AT/GX) have System Unit 5373.
Technology
The additional hardware occupies nearly all the free expansion slots in the computer. It includes a video card which occupies 1-3 ISA slots (depending on what level of graphics support is required), and supports CGA and MDA video modes. The display resolution is 720×350, either on the matching 14-inch color monitor (model 5272) or in monochrome on an MDA monitor.
A further expansion card intercepts scancodes from the 122-key 3270 keyboard, translating them into XT scancodes which are then sent to the normal keyboard connector. This keyboard, officially called the 5271 Keyboard Element, weighs 9.3 pounds.
The final additional card (a 3278 emulator) provides the communication interface to the host mainframe.
Models
3270 PC (System Unit 5271) - original 3270 PC, initially offered in three different Models numbered 2, 4, and 6. Model 2 has non-expandable memory of 256 KB and a single floppy drive. Model 4 has expandable memory, a second floppy drive, and a parallel port. Model 6 replaces one of the floppy drives with a 10 MB hard disk. Model 6 had a retail price of $6,210 at its launch (with 512KB RAM), not including display, cables and software; a working configuration with an additional 192KB RAM, color display (model 5272) and the basic cabling and software (but without support for host/mainframe-side graphics) ran to $8,465. A 1985 review by PC Magazine found that fast file transfer to and from the mainframe was the strong selling point of this unit: file transfers that took hours with an Irma board took only minutes with the boards (and software) that came with the 3270 PC. The 3270 PC suffered however from incompatibility problems with other XT hardware and DOS software (for example, Microsoft QuickBasic). Its 3278 mainframe board was also considered lackluster, in comparison with an IBM 3279 graphics terminal, because it provides only a 24-line display during mainframe sessions, requiring either PC-side scrolling for the 32-line applications typically used with a 3279, or explicit 24-line application support on the mainframe side.
Later-released Models 24 (two floppy drives) and 26 (floppy plus 10 MB hard disk) supported, and were bundled with, the IBM 3295 Plasma Monitor. This monochrome display is intended to provide a high-capacity text terminal for simultaneous mainframe sessions. It has two sets of fonts: one with 6x12-pixel characters, with which it can display text in 62 rows by 160 columns, and a larger font with 9x16-pixel characters, with which it can display 46 rows by 106 columns of text.
Models 30, 50, and 70 have 640 KB RAM on the system board and follow the same disk pattern as the initial models (one floppy, two floppies, and floppy plus hard drive), but with a 20 MB hard disk. They (and all subsequent models) also revert to the 5151/5272 Display Adapter (no Plasma Monitor support).
Models 31, 51, and 71 revert to 256 KB RAM on the system board but were also shipped with an Expanded Memory Adapter (XMA) with 1 MB RAM standard. Optionally these models (released in 1986) could be equipped with up to 2 MB of XMA.
Models P30/P50/P70 and P31/P51/P71 were like Models 30/50/70 and respectively 31/51/71 but with a 101-key (AT-style) keyboard replacing the 5271 Keyboard.
Models 31/51/71 and all P-models, require version 3.0 of the Control Program.
3270 PC/G - 3270 PC with improved graphics hardware and mouse support; it was sold together with an IBM 5279 Color Display, which is powered by an IBM 5278 Display Attachment Unit, an All Points Addressable (APA) graphics card providing 720x512 resolution and CGA emulation for compatibility. At its launch, the retail price for this configuration was $11,240.
Initially offered as Models 12, 14, and 16; these have similar disk configuration options (one floppy, two floppies, and one floppy plus one hard disk) as the basic 3270 PC, but more standard memory at 384 KB, 512 KB, and 576 KB respectively. Even the basic Model 12 has a parallel port and supports a mouse. The mouse (IBM 5277) was optional, though, even for Model 16, and had a list price of $340.
3270 PC/GX - Extended APA graphics support (1024x1024) provided by the IBM 5378 Display Attachment Unit; shipped with a 19-inch color or monochrome monitor (IBM 5379). Price at launch was $18,490, although adding the basic software and cables ran close to $20,000.
The basic 3270 PC could not be upgraded to the PC/G or PC/GX. These two models use a different basic unit (System Unit 5371), itself priced at $6,580 (for Model 16) without graphics.
Later, AT-based models:
3270 AT (System Unit 5273) - corresponds to the 3270 PC, but based on an IBM AT.
3270 AT/G and GX (System Unit 5373) - corresponds to the 3270 PC/G and PC/GX, respectively, but based on an IBM AT.
Software
At its launch, the 3270 PC used the 3270 PC Control Program as its operating system. PC DOS 2.0 (and later 2.1) can run as a task under the Control Program. Only one PC DOS task can be run at any given time, but in parallel with this, the Control Program can run up to four mainframe sessions. The Control Program also provides a basic windowing environment, with up to seven windows; besides the four mainframe and one DOS session, it also provides two notepads. The notepads can be used to copy text from the PC DOS session to the mainframe sessions but not vice versa. Given the small size of the character display, a review by PC Magazine concluded that the windowing features were hardly useful, and the notepads even less so. The Control Program was also described as a "memory hog" in this review, using about 200 KB of RAM in a typical configuration. More useful were the specialized PC DOS file transfer utilities that were available, which allow files to be exchanged with the mainframe and provide ASCII/EBCDIC conversion. The list prices for the Control Program and file transfer utilities were $300 and $600, respectively. At the launch of the 3270 PC, the Control Program was the distinguishing software feature between a 3270 PC and an XT with an added 3278 board.
IBM considered the 3270 PC Control Program to be mainframe software, so it did not provide user-installable upgrades. Upgrades had to be installed by expert system programmers.
The PC/G and PC/GX models run a mainframe-graphics-capable version of the Control Program called the Graphics Control Program (GCP). On the mainframe side, the IBM Graphical Data Display Manager (GDDM) release 4 (and later) is compatible with these two workstations. The GDDM provided support for local pan and zoom (without taxing the host mainframe) on the PC/G and PC/GX.
In 1987 IBM released the IBM 3270 Workstation Program, which supports both XT and AT models of the 3270 PCs, as well as the plain XT and AT models (even with an XT or AT keyboard) with a 3278 board. It allows up to six concurrent DOS 3.3 sessions, but the number of mainframe sessions and notepads remained the same (four and two, respectively).
Reception
BYTE in 1984 praised the 3270 PC's 3278 emulation and color monitor, and concluded that the computer was "a must" for those seeking high-quality graphics or mainframe communications.
See also
IBM PC XT
IBM PC AT
Personal Computer XT/370
Professional Graphics Controller
3270 emulator
References
External links
Detailed technical information about the 3270 PC (basic model, not the PC/G or PC/GX)
Full list of 5271 models
Scanned documentation on bitsavers
3270 PC
Computer-related introductions in 1983
Carl Sassenrath
Carl Sassenrath (born 1957 in California) is an architect of operating systems and computer languages. He brought multitasking to personal computers in 1985 with the creation of the Amiga Computer operating system kernel, and he is the designer of the REBOL computer language, REBOL/IOS collaboration environment, the Safeworlds AltME private messaging system, and other products. Sassenrath is currently a Principal Engineer at Roku, Inc.
Background
Carl Sassenrath was born in 1957 to Charles and Carolyn Sassenrath in California. His father was a chemical engineer involved in research and development related to petroleum refining, paper production, and air pollution control systems.
In the late 1960s his family relocated from the San Francisco Bay Area to the small town of Eureka, California. From his early childhood Sassenrath was actively involved in electronics, amateur radio, photography, and filmmaking. When he was 13, Sassenrath began working for KEET, a PBS public broadcasting television station. A year later he became a cameraman for KVIQ (then an American Broadcasting Company affiliate) and worked his way up to being technical director and director for news, commercials, and local programming.
In 1980 Sassenrath graduated from the University of California, Davis with a B.S. in EECS (electrical engineering and computer science). During his studies he became interested in operating systems, parallel processing, programming languages, and neurophysiology. He was a teaching assistant for graduate computer language courses and a research assistant in neuroscience and behavioral biology. His uncle, Dr. Julius Sassenrath, headed the educational psychology department at UC Davis, and his aunt, Dr. Ethel Sassenrath, was one of the original researchers of THC at the California National Primate Research Center.
Career
Hewlett-Packard
During his final year at the university, Sassenrath joined Hewlett Packard's Computer Systems Division as a member of the Multi-Programming Executive (MPE) file system design group for HP3000 computers. His task was to implement a compiler for a new type of control language called Outqueue—a challenge because the language was both descriptive and procedural. A year later, Sassenrath became a member of the MPE-IV OS kernel team and later part of the HPE kernel group.
While at HP Sassenrath became interested in minimizing the high complexity found in most operating systems of that time and set out to formulate his own concepts of a microkernel-based OS. He proposed them to HP, but found the large company complacent to the "smaller OS" ideas.
In late 1981 and early 1982 Sassenrath took an academic leave to do atmospheric physics research for National Science Foundation at Amundsen–Scott South Pole Station. Upon returning, Sassenrath reached an agreement with HP to pursue independent research into new areas of computing, including graphical user interfaces and remote procedure call methods of distributed computing.
Later in 1982, impressed by the new computing ideas being published from Xerox PARC, Sassenrath formed an HP project to develop the modern style of window-based mouse-driven GUIs. The project, called Probus (for professional business workstation) was created on a prototype Sun Microsystems workstation borrowed from Andy Bechtolsheim while he was at Stanford University. Probus clearly demonstrated the power of graphical user interfaces, and the system also incorporated hyperlinks and early distributed computing concepts.
At HP, Sassenrath was involved with and influenced by a range of HP language projects including Ada, Pascal, Smalltalk, Lisp, Forth, SPL, and a variety of experimental languages.
Amiga Computer
In 1983, Carl Sassenrath joined Amiga Computer, Inc., a small startup company in Silicon Valley. As Manager of Operating Systems he was asked to design a new operating system for the Amiga, an advanced multimedia personal computer system that later became the Commodore Amiga.
As a sophisticated computer for its day (Amiga used 28 DMA channels along with multiple coprocessors), Sassenrath decided to create a preemptive multitasking operating system within a microkernel design. This was a novel approach for 1983 when other personal computer operating systems were single tasking such as MS-DOS (1981) and the Macintosh (1984).
The Amiga multitasking kernel was also one of the first to implement a microkernel OS methodology based on a real-time message passing (inter-process communication) core known as Exec (for executive) with dynamically loaded libraries and devices as optional modules around the core.
This design gave the Amiga OS a great extensibility and flexibility within the limited memory capacity of computers in the 1980s. Sassenrath later noted that the design came as a necessity of trying to integrate into ROM dozens of internal libraries and devices including graphics, sound, graphical user interface, floppy disc, file systems, and others. This dynamic modular method also allowed hundreds of additional modules to be added by external developers over the years.
After the release of the Amiga in 1985, Sassenrath left Commodore-Amiga to pursue new programming language design ideas that he had been contemplating since his university days.
Apple Computer
In 1986, Sassenrath was recruited to Apple Computer's Advanced Technology Group (ATG) to invent the next generation of operating systems. He was part of the Aquarius project, a quad-core CPU project (simulated on Apple's own Cray XMP-48) that was intended to become a 3D-based successor to the Macintosh.
During that period the C++ language had just been introduced, but Sassenrath, along with many other Apple researchers, preferred the more pure OO implementation of the Smalltalk language. Working at ATG with computing legends like Alan Kay, Larry Tessler, Dan Ingalls, Bill Atkinson and others provided Sassenrath with a wealth of resources and knowledge that helped shape his views of computing languages and systems.
Sassenrath Research
In 1988, Sassenrath left Silicon Valley for the mountains of Ukiah valley, 2 hours north of San Francisco. From there he founded multimedia technology companies such as Pantaray, American Multimedia, and VideoStream. He also implemented the Logo programming language for the Amiga, managed the software OS development for CDTV, one of the first CD-ROM TV set-top boxes, and wrote the OS for Viscorp Ed, one of the first Internet TV set-top boxes.
REBOL Technologies
In 1996, after watching the growth and development of programming languages like Java, Perl, and Python, Sassenrath decided to publish his own ideas within the world of computer languages. The result was REBOL, the relative expression-based object language. REBOL is intended to be lightweight, and specifically to support efficient distributed computing.
Sassenrath describes REBOL as a balance between the concepts of context and symbolism, allowing users to create new relationships between symbols and their meanings. By doing so, he attempts to merge the concepts of code, data, and metadata. Sassenrath considers REBOL experimental because it provides greater control over context than most other programming languages. Words can be used to form different grammars in different contexts (called dialecting). Sassenrath claims REBOL is the ultimate endpoint for the evolution of markup language methodologies, such as XML.
In 1998, Sassenrath founded REBOL Technologies, a company he still runs. The company has released several versions of REBOL and produced additional products such as REBOL/View, REBOL/Command, REBOL/SDK, and REBOL/IOS.
Sassenrath implemented REBOL V3.0 and released it to GitHub on December 12, 2012: https://github.com/rebol/r3.
Roku
Since 2010, Sassenrath has worked at Roku, Inc in product development.
Personal
Sassenrath lives in Ukiah, California, where he grows grapes and makes wine, and is interested in amateur radio, video production, quantum electrodynamics, and boating. He volunteers with the Television Improvement Association, a community organization that brings free, over-the-air television broadcasts into the Ukiah area.
Other references
Amiga ROM Kernel Reference Manual: Exec; Carl Sassenrath; Commodore; 1986
Guru's Guide to the Commodore Amiga; Carl Sassenrath; 1989
The Object Oriented Amiga Exec; Tim Holloway; Byte Magazine; 1991
REBOL Bots; Web Techniques; 9/1999
Inside the REBOL Scripting Language; Dr. Dobb's Journal; 6/2000
REBOL for Dummies; Ralph Roberts; Hungry Minds; 2000
REBOL Programming; Olivier Auverlot; Éditions Eyrolles; 2001
Computing Encyclopedia, Vol 5: People; Smart Computing; 2002
The REBOL IOS Distributed Filesystem; Dr. Dobb's Journal; 9/2002
The REBOL/Core Users Guide; Carl Sassenrath; 2000–2005
Notes
External links
Personal home page
Biographic notes at REBOL.com
Carl's Blog at REBOL.com
TIA - The TV Improvement Association
Interview Obligement, May 2007
MakeDoc - Lightweight document markup
Jeudy, Sébastien, Interview with Carl Sassenrath, Obligement, May 2007, accessed October 10, 2013
1957 births
Living people
American computer scientists
Programming language designers
Amiga people
People from Ukiah, California
University of California, Davis alumni
Engineers from California
Amateur radio people | Operating System (OS) | 1,098 |
ROM image
A ROM image, or ROM file, is a computer file which contains a copy of the data from a read-only memory chip, often from a video game cartridge, an arcade game's main board, or a computer's firmware. The term is frequently used in the context of emulation, whereby older games or firmware are copied to ROM files on modern computers and can, using software known as an emulator, be run on hardware other than that for which they were designed. ROM burners are used to copy ROM images to hardware, such as ROM cartridges or ROM chips, for debugging and QA testing.
Creation
ROMs can be copied from the read-only memory chips found in cartridge-based games and many arcade machines using a dedicated device, in a process known as dumping. For most common home video game systems these devices are widely available; examples include the Doctor V64 and the Retrode.
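Once a cartridge has been dumped, the resulting file is typically verified against a known-good checksum published in a ROM database before it is archived. A minimal sketch in Python; the database entry and filename below are hypothetical, not taken from any real database:

```python
import zlib

def crc32_of_file(path: str) -> str:
    """Compute the CRC32 checksum of a ROM dump, the checksum most
    commonly listed by ROM verification databases."""
    crc = 0
    with open(path, "rb") as f:
        # Read in chunks so large dumps need not fit in memory at once.
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return f"{crc & 0xFFFFFFFF:08X}"

# Hypothetical known-good entry, in the style of a verification database.
KNOWN_GOOD = {"example_game.bin": "9A8B7C6D"}

def verify_dump(path: str, name: str) -> bool:
    """Return True if the dump at `path` matches the recorded checksum."""
    return crc32_of_file(path) == KNOWN_GOOD.get(name)
```

A dump that fails verification usually indicates a bad read or a modified cartridge rather than a problem with the database entry.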
Dumping ROMs from arcade machines, which use highly customized PCBs, often requires an individual setup for each machine along with a large amount of expertise.
Copy protection mechanisms
While ROM images are often used as a means of preserving the history of computer games, they are also often used to facilitate the unauthorized copying and redistribution of modern games. Viewing this as potentially reducing sales of their products, many game companies have incorporated copy protection features into newer games which are designed to prevent copying while still allowing the original game to be played. For instance, the Nintendo GameCube used non-standard 8 cm DVD-like optical media, which for a long time prevented games stored on those discs from being copied. It was not until a security hole was found in Phantasy Star Online Episode I & II that GameCube games could be successfully copied, using the GameCube itself to read the discs.
SNK also employed a method of copy prevention on their Neo Geo games, starting with The King of Fighters '99 in 1999, which used an encryption algorithm on the graphics ROMs to prevent them from being played in an emulator. Many thought that this would mark the end of Neo Geo emulation. However, as early as 2000, hackers found a way to decrypt and dump the ROMs successfully, making them playable once again in a Neo Geo emulator.
Another company which used copy prevention on its arcade games was Capcom, known for its CPS-2 arcade board. This contained a heavy copy protection algorithm which was not broken until seven years after the system's 1993 release. The original crack by the CPS2Shock team was not a true emulation of the protection, because it used XOR tables to bypass the original encryption and allow the game to play in an emulator. CPS2Shock's stated intent was to release the full decryption method only once CPS-2 games were no longer profitable (three years after the last game release). The full decryption algorithm was cracked in 2007 by Nicola Salmoria, Andreas Naive, and Charles MacDonald of the MAME development team.
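The XOR-table approach can be illustrated with a toy example. The table and data below are invented purely for illustration; real CPS-2 key tables were far larger and address-dependent:

```python
# Toy illustration of table-based XOR descrambling. An emulator applies a
# precomputed table of key bytes to the encrypted ROM contents; because
# XOR is its own inverse, the same operation both encrypts and decrypts.
def apply_xor_table(data: bytes, table: bytes) -> bytes:
    return bytes(b ^ table[i % len(table)] for i, b in enumerate(data))

plain = b"EXAMPLE OPCODES"              # invented plaintext
table = bytes([0x5A, 0xC3, 0x3C, 0xA5])  # invented key table
enc = apply_xor_table(plain, table)
assert apply_xor_table(enc, table) == plain  # round-trips exactly
```

This is why the original crack worked without understanding the cipher: once a correct table is recovered, applying it restores playable code even though the underlying algorithm remains unknown.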
Another copy prevention technique used in cartridge games was to have the game attempt to write to ROM. On an authentic cartridge this would do nothing; however, emulators would often allow the write to succeed. Pirate cartridges also often used writable chips instead of ROM. By reading the value back to see whether the write succeeded, the game could tell whether it was running from an authentic cartridge. Alternatively, the game may simply attempt to overwrite critical program instructions, which, if successful, renders it unplayable.
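The write-and-read-back check can be sketched as follows. The memory map and addresses here are a deliberate simplification for illustration, not any particular console's layout:

```python
class MemoryBus:
    """Simplified bus: ROM occupies 0x0000-0x7FFF, RAM sits above it."""
    ROM_END = 0x8000

    def __init__(self, rom: bytes, writable_rom: bool):
        self.rom = bytearray(rom)
        self.ram = bytearray(0x8000)
        # True models a pirate cartridge (or a naive emulator) whose
        # "ROM" is actually writable; False models an authentic cartridge.
        self.writable_rom = writable_rom

    def read(self, addr: int) -> int:
        if addr < self.ROM_END:
            return self.rom[addr]
        return self.ram[addr - self.ROM_END]

    def write(self, addr: int, value: int) -> None:
        if addr < self.ROM_END:
            if self.writable_rom:
                self.rom[addr] = value  # write "succeeds": copy detected
        else:
            self.ram[addr - self.ROM_END] = value

def looks_authentic(bus: MemoryBus) -> bool:
    """Game-side check: try to flip a ROM byte and see if it sticks."""
    addr = 0x0100
    original = bus.read(addr)
    bus.write(addr, original ^ 0xFF)
    return bus.read(addr) == original  # unchanged means genuine mask ROM
```

An emulator defeats this check simply by discarding writes to the ROM range, which is also the hardware-accurate behavior.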
Some games, such as Game Boy games, also had other hardware such as memory bank controllers connected to the cartridge bus. The game would send data to this hardware by attempting to write it to specific areas of ROM; thus, if the ROM were writable, this process would corrupt data.
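This bank-controller behavior is why an emulator cannot simply treat every ROM-range write as a no-op. A rough sketch, loosely modeled on Game Boy MBC1-style bank selection (the register address range is simplified):

```python
class BankedROM:
    """A ROM larger than the address window, seen through a switchable bank."""
    BANK_SIZE = 0x4000  # 16 KiB banks, as on the Game Boy

    def __init__(self, rom: bytes):
        self.rom = rom
        self.bank = 1  # bank 0 is fixed; the switchable window starts at bank 1

    def write(self, addr: int, value: int) -> None:
        # Writes into 0x2000-0x3FFF are not stores at all: the cartridge's
        # bank controller latches the value as the new bank number.
        if 0x2000 <= addr <= 0x3FFF:
            self.bank = max(value & 0x1F, 1)  # MBC1 treats bank 0 as bank 1

    def read_switchable(self, offset: int) -> int:
        return self.rom[self.bank * self.BANK_SIZE + offset]
```

The write never modifies ROM contents, so a genuine cartridge with writable "ROM" would indeed be corrupted by the same bus traffic, exactly as the text describes.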
Capcom's later arcade board, the CPS-3, was resistant to emulation attempts until June 2007, when its encryption method was reverse-engineered by Andreas Naive. It is currently implemented by MAME and a variant of the CPS-2 emulator Nebula.
Uses
Emulation
Video game console emulators typically take ROM images as input files.
Software ROM
ROM images are used when developing for embedded computers. Software which is being developed for embedded computers is often written to ROM files for testing on a standard computer before it is written to a ROM chip for use in the embedded systems.
Digital preservation
Digital media have limited lifespans. While black-and-white photographs may survive for a century or more, many digital media can become unreadable after only ten years. This is becoming a problem, as early computer systems may now be fifty or sixty years old while early home video consoles may be almost thirty years old. Because of this aging, there is significant concern that many early computer and video games may not survive without being transferred to new media. Consequently, those with an interest in preservation actively seek out older arcade and video games and attempt to dump them to ROM images. Once stored on standardized media such as CD-ROMs and DVD-ROMs, they can be copied to future media with significantly reduced effort.
The trend towards mass digital distribution of ROM image files, while potentially damaging to copyright holders, may also have a positive effect on preservation. While over time many original ROM copies of older games may deteriorate, be broken or thrown away, a copy in file form may be distributed throughout the world, allowing games which would otherwise have been lost a greater chance of survival.
Hacks and fan translations
Once games have been made available in ROM format, it is possible for users to modify them. This may take the form of altering graphics, changing game levels, tweaking the difficulty, or even translating the game into a language for which it was not originally made available. Hacks can often take humorous forms, as with a hack of the NES version of Mario Bros. titled Afro Mario Brothers, which features the famous brothers wearing Afro haircuts. The Metroid Redesign mod is a hack of Super Metroid that revamps the game and adds new objectives.
A large scene has developed around translating games into other languages. Many games receive a release in one part of the world but not in another. For example, many role-playing video games released in Japan go unreleased in the West. A group of fan translators will often translate the game themselves to meet demand for such titles. For example, the 1995 game Tales of Phantasia was only officially released in Japan; DeJap Translations translated the game's on-screen text into English in 2001. Further to this, a project called Vocals of Phantasia was begun to translate the actual speech from the game. An official English version was not released until March 2006, some five years after the text translation. Another example is Mother 3, a Japan-only sequel to the cult favorite EarthBound. In spite of massive fan response and several petitions for an English translation, the only response from Nintendo was that Mother 3 would be translated and released in Europe, which it never was. Instead, the fan website Starmen.net undertook a massive translation project and released the translated version of Mother 3 in October 2008. The translation was praised by fans and even by employees of Nintendo, Square Enix, and other industry professionals.
The Japanese N64 game Dōbutsu no Mori (Animal Forest) has also been translated into English. The game was originally only released on N64 in Japan, but it was ported to GameCube and renamed Animal Crossing.
Hacks may range from simple tweaks such as graphic fixes and cheats, to full-blown redesigns of the game, in effect creating an entirely new game using the original as a base.
Similar image types
Image files derived from computer tape are known as tape images, while those derived from floppy disks and CD-ROMs (and other disk formats) are known as disk images. Images copied from optical media are also called ISO images, after one of the standard file systems for optical media, ISO 9660.
Creating images from other media is often considerably easier and can often be performed with off-the-shelf hardware. For example, the creation of tape images from games stored on magnetic tapes (from, for example, the Sinclair ZX80 computer) generally involves simply playing the magnetic tape using a standard audio tape player connected to the line-in of a PC sound card. This is then recorded to an audio file and transformed into a tape image file using another program. Likewise, many CD and DVD games may be copied using a standard PC CD/DVD drive.
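The audio-to-tape-image step amounts to recovering bits from pulse lengths in the recorded waveform. A highly simplified sketch follows; real tape formats such as the Spectrum's TAP/TZX use specific pilot tones and precisely defined timings, all of which are omitted here, and the threshold and pulse-length values are invented:

```python
def pulses_to_bits(samples, threshold=0, short_max=20):
    """Classify the lengths of runs above/below a threshold as bits:
    a short pulse is a 0, a long pulse is a 1 (timings are invented)."""
    bits = []
    run = 1
    for prev, cur in zip(samples, samples[1:]):
        if (prev > threshold) == (cur > threshold):
            run += 1  # still inside the same pulse
        else:
            # Pulse ended: classify its length as a bit.
            bits.append(0 if run <= short_max else 1)
            run = 1
    bits.append(0 if run <= short_max else 1)  # final pulse
    return bits
```

In practice a real decoder must also handle noise, DC offset, and tape speed variation, which is why dedicated conversion programs exist for each format.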
References
External links
Nintendo's Intellectual Property FAQ
GameFAQs Help: Game Piracy: ROMs and Warez Information
EmuFAQ Addendum - The Question of ROMs
Computer memory
Firmware
Video game emulation | Operating System (OS) | 1,099 |