id | text | industry_type
---|---|---|
2014-23/1194/en_head.json.gz/27692 | Arambilet: Dots on the I's, D-ART 2009 Online Digital Art Gallery, exhibited at the IV09 and CG09 computer graphics conferences at Pompeu Fabra University, Barcelona, and Tianjin University, China; permanent exhibition at London South Bank University. Computer art is any art in which computers play a role in the production or display of the artwork. Such art can be an image, sound, animation, video, CD-ROM, DVD-ROM, videogame, web site, algorithm, performance or gallery installation. Many traditional disciplines are now integrating digital technologies and, as a result, the lines between traditional works of art and new media works created using computers have been blurred. For instance, an artist may combine traditional painting with algorithm art and other digital techniques. As a result, defining computer art by its end product can be difficult. Computer art is by its nature evolutionary, since changes in technology and software directly affect what is possible. Notable artists in this vein include James Faure Walker, Manfred Mohr, Ronald Davis, Joseph Nechvatal, Matthias Groebel, George Grie, Olga Kisseleva, John Lansdown, Perry Welman, and Jean-Pierre Hébert.
Contents: 1 History, 2 Output devices, 3 Graphic software, 4 Robot Painting, 5 References, 6 See also, 7 Further reading.
History. Picture by Drawing Machine 1, Desmond Paul Henry, c. 1960s. The precursor of computer art dates back to 1956-1958, with the generation of what is probably the first image of a human being on a computer screen, a (George Petty-inspired)[1] pin-up girl at a SAGE air defense installation.[2] Desmond Paul Henry invented the Henry Drawing Machine in 1960; his work was shown at the Reid Gallery in London in 1962, after his machine-generated art won him the privilege of a one-man exhibition. In 1963 James Larsen of San Jose State University wrote a computer program based on artistic principles, resulting in an early public showing of computer art in San Jose, California on May 6, 1963.[3][4] By the mid-1960s, most individuals involved in the creation of computer art were in fact engineers and scientists, because they had access to the only computing resources available, at university scientific research labs. Many artists tentatively began to explore the emerging computing technology for use as a creative tool. In the summer of 1962, A. Michael Noll programmed a digital computer at Bell Telephone Laboratories in Murray Hill, New Jersey to generate visual patterns solely for artistic purposes.[5] His later computer-generated patterns simulated paintings by Piet Mondrian and Bridget Riley and became classics.[6] Noll also used the patterns to investigate aesthetic preferences in the mid-1960s. The two early exhibitions of computer art were both held in 1965: Generative Computergrafik, February 1965, at the Technische Hochschule in Stuttgart, Germany, and Computer-Generated Pictures, April 1965, at the Howard Wise Gallery in New York. The Stuttgart exhibit featured work by Georg Nees; the New York exhibit featured works by Bela Julesz and A. Michael Noll and was reviewed as art by The New York Times.[7] A third exhibition was put up in November 1965 at Galerie Wendelin Niedlich in Stuttgart, Germany, showing works by Frieder Nake and Georg Nees.
Analogue computer art by Maughan Mason, along with digital computer art by Noll, was exhibited at the AFIPS Fall Joint Computer Conference in Las Vegas toward the end of 1965. Joseph Nechvatal, 2004, Orgiastic abattOir. In 1968, the Institute of Contemporary Arts (ICA) in London hosted one of the most influential early exhibitions of computer art, called | 计算机
2014-23/1194/en_head.json.gz/27847 | Creative Commons and Webcomics by T Campbell June 12, 2005 - 23:00 | T Campbell Traditional copyright faces webcomics with an uncomfortable choice. Its restrictions, properly enforced, would mean a virtual end to crossovers and homages, fan art, fan fiction, and many other staples that make the webcomic a more entertaining creation and foster artistic growth.
A total lack of copyright, however, leaves unscrupulous readers free to "bootleg" subscription sites, program tools to deprive comics of advertising revenue, and even profit from others' labor without permission. The Creative Commons license presents a possible solution. It lets copyright holders grant some of their rights to the public while retaining others, through a variety of licensing and contract schemes, which may include dedication to the public domain or open content licensing terms.
Answering a few of our copyright questions are some well-known names associated with Creative Commons, and a cartoonist who’s well-versed in technological issues.
Lawrence Lessig is chairman of the Creative Commons project, a Professor of Law at Stanford Law School and founder of the school's Center for Internet and Society. He has also taught at Harvard, Chicago and Berlin and represented web site operator Eric Eldred in the ground-breaking case Eldred v. Ashcroft, a challenge to the 1998 Sonny Bono Copyright Term Extension Act.
Neeru Paharia is Assistant Director of Creative Commons and was instrumental in creating its "copyright comics" section, which explains the hows and whys of Creative Commons and is released under a Creative Commons license.
Mia Garlick is the new General Counsel for Creative Commons.
JD Frazer is the creator of UserFriendly, one of the first online strips to gather a truly massive following. His comic, about tech life and work, has featured many references to free software and the open source movement, yet he's had to defend his own IP and copyright from those who'd be a little too free with them.
Finally, Cory Doctorow, widely hailed as one of the greatest SF authors of the 21st century, has written extensively on copyright issues and released novels under different versions of the Creative Commons license. He is outreach coordinator for the Electronic Frontier Foundation.
Comixpedia: I'd like to kick things off with the question that motivated this roundtable in the first place: Which rights should online cartoonists keep for themselves, and what rights should they give back to the community?
Doctorow: Here's one: if you go into a comics shop today and buy the first perfect-bound collection of a new book, you'll get issues 1-5. Ask the shop clerk for issue 6 after you finish it, and he'll tell you that the series is up to book 8 now, and 6 and 7 are gone forever. You can wait six months and get the next collection of issues 6-10, or you can start reading the serial at book 8.
The interesting thing is that the audience for perfect-bound collections is different -- and broader -- than the (dwindling) market for individual comics. They shop in bookstores, not specialty stores, and if we could convert some fraction of them to comics readers, we could migrate these people to the specialty stores and prop up that core business. But it is prohibitively expensive to warehouse and distribute all the comics after their month on the shelf has expired.
Now, what would happen if every comic was downloadable from the web 30 days after it was published (e.g., the day of its "display until" date)?
Would it cannibalize any of the existing sales of comics?
I don't think so. The board-and-bag fans will not eschew the stores just because they can get a reading copy online. Maybe you'll lose the "reading copy" sales to readers who buy two copies and keep one pristine and untouched, but they're a tiny, tiny minority of comics buyers.
Would it lead to new sales of comics?
Yes. People who want to convert from bookstore-based collection-readers to monthly serial readers can fill in the gap between the current collection and the monthly by downloading it. This is the largest pool of untapped, ready-to-be-converted buyers for monthly books, and they represent a sizable potential infusion of new money into the field.
The rights question is, "How to best enable this?"
Baen Books publishes series science fiction, and if you buy book 13 of a Baen Book, it comes with a CD ROM containing books 1-12, and you can also download these gratis from their website, and recirculate them. There is no [digital rights management]. Universally Baen's experience is that they sell more of book 13, and MORE OF BOOKS 1-12. The word-of-mouth effect of people sharing these books around, making concordances, quoting long sections of them in online reviews, emailing them to friends, etc -- it drives WAY more sales than it displaces.
If every perfect-bound collection came with a CD with all the episodes up to then, and if the websites supplemented that with the episodes up to the current-minus-one, all licensed under a CC license allowing for noncommercial remixing and redistribution, it would solve this.
As to which rights creators should retain, I don't have any opinion. I think, though, that between creators and publishers, there should be a nonexclusive transfer of the rights to make unlimited noncommercial redistributions and remixes to the public.
Lessig: I think it is useful to be clear about the sense of "should" here. Obviously, one could have all sorts of moral intuitions about how much anyone "should" "give back to the community." But I also think it is most useful to distinguish that sense of "should" from the much more pedestrian sense of "how should I do this if I want to build the biggest, most successful community around my work." Of course we can discuss both. But it would be helpful to separate the two because the issues within each are very different. The moral-should talks about what artists owe others given what they build upon, etc. The practical-should talks about what techniques work best. I take it Cory was addressing the practical-should question -- usefully in my view.
Frazer: Speaking as both a publisher and creator here. I don't have any gatekeepers between myself and my audience/consumer base.
I think cartoonists (or any creator for that matter) should always retain the ultimate meta-right of discretion. When a fan asks for permission to do X with my content, I very rarely say no unless X involves something commercial. Simply asking shows me that the fan respects my rights as a creator and that in itself is enough most of the time for me to agree to his or her request.
That probably isn't helpful for this kind of discussion, however, as I agree there are *some* rights that we should assign to the community with a blanket contract. "Hey, Murray, mind if I stick one of your strips up on my home page" isn't the kind of question I'd like to answer fifty times a day every day.
I think the key here, for me, lies in the simple question: "If you do X with the creator's work, will it hurt the creator?"
Things that hurt can be lumped under two broad categories: actions that interfere with the creator's bread-and-butter, and actions that threaten the integrity of a creator's work. A good example of the former: some enthusiastic fan decides to mirror all 7+ years of my archives and make it available to the public, thereby offering my content in a location that I don't control and probably causing my ad revenue to plummet. A good example of the latter: a bitter ex-fan decides to take the strips and replace the writing with, say, something you might find in Penthouse Forum. Not to say that there isn't a market for that kind of thing, just that UserFriendly has always been PG-rated for very specific reasons. I also don't appreciate someone else taking years of my own sweat and tears and in minutes turning it into something of which I don't particularly approve.
If it doesn't hurt the creator, I'm all for it, really. If it wasn't for the fact that people could readily pass strips and links on to one another, I doubt very much UF (or most successful online content plays) would be enjoying the success it does. As the lion's share of my income comes from banner ad sales, I always prefer people to view the strip on the site itself. However, I can hardly begrudge anyone wanting to email a particular favorite strip or even a small series of them to friends. I very much believe it all ends up helping me in the end, and anecdotal evidence props that belief up.
Rights I think that the community should have without having to ask the creator include being | 计算机 |
2014-23/1194/en_head.json.gz/27916 | A Choreography provides the means to describe the service interactions between multiple parties from a global (or service neutral) perspective. This means that it is possible for an organisation to define how an end-to-end business process should function, regardless of whether orchestrated or peer-to-peer service collaboration will be used.
Although in simple situations, a BPEL process description can provide a description of the interactions between multiple services, this only works where a single orchestrating process is in control. The benefit of the choreography description is that it can be used to provide a global view of a process across multiple orchestrated service domains.
This document will outline how the Choreography Description is being used as part of SAVARA to provide SOA governance capabilities for each phase of the SOA lifecycle. When a validated design has been approved by the users, it can be used to generate an initial skeleton of the implementation for each service. The current version of SAVARA enables a skeleton implementation to be generated as a service implementation (e.g. a WS-BPEL process).
1.1. WS-CDL
WS-CDL, or Web Service Choreography Description Language, is a candidate recommendation from W3C. Although associated with W3C and Web Services, it is important to begin by stating that the Choreography Description Language (CDL) is not web service specific.
The purpose of CDL is to enable the interactions between a collection of peer to peer services to be described from a neutral (or global) perspective. This is different to other standards, such as WS-BPEL, that describe interactions from a service specific viewpoint.
In essence a choreography description declares roles which will pass messages between each other, called interactions. The interactions are ordered based on a number of structuring mechanisms which enable loops, conditionals, choices and parallelism to be described. In CDL, variables used for messages and for conditionals are all situated at roles. There is no shared state; rather, there is a precise description of the state at each role and a precise description of how these roles interact in order to reach some notion of common state in which information is exchanged and processed between them.
In CDL we use interactions and these structuring mechanisms to describe the observable behaviour of a system: the message exchanges, the rules for those exchanges, and any supporting observable state on which they depend.
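The abridged fragment below illustrates the general shape of a CDL description for a single request interaction between a Buyer role and a Seller role. It is an illustrative sketch only: the element names are based on the WS-CDL 1.0 candidate recommendation, but several required pieces (information types, channel and participant types, channel variables, and the cdl:getVariable() expressions normally used to reference variables) are simplified or omitted, so it should be read as an illustration of the structure rather than a schema-valid choreography; consult the W3C specification or the pi4soa editor for the exact syntax.

  <!-- Abridged, illustrative sketch only; not a schema-valid WS-CDL package. -->
  <package name="OrderingSample" xmlns="http://www.w3.org/2005/10/cdl">

    <!-- The roles whose observable behaviour is being described -->
    <roleType name="Buyer"/>
    <roleType name="Seller"/>

    <!-- The relationship over which the two roles collaborate -->
    <relationshipType name="BuyerSeller">
      <roleType typeRef="Buyer"/>
      <roleType typeRef="Seller"/>
    </relationshipType>

    <choreography name="PlaceOrder" root="true">
      <relationship type="BuyerSeller"/>

      <sequence>
        <!-- Buyer sends an order request to Seller; the 'order' variable is
             situated at each role, so there is no shared state. -->
        <interaction name="SubmitOrder" operation="placeOrder">
          <participate relationshipType="BuyerSeller"
                       fromRoleTypeRef="Buyer" toRoleTypeRef="Seller"/>
          <exchange name="orderRequest" action="request">
            <send variable="order"/>
            <receive variable="order"/>
          </exchange>
        </interaction>
      </sequence>
    </choreography>
  </package>

Because the description is global, the same definition can later be used to project or validate the behaviour expected of each individual service, which is how the skeleton generation and validation described elsewhere in this document can work from a single source.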
1.2. pi4soa
pi4soa is an open source project established to demonstrate the potential benefits that a global model (as described using CDL) can provide when building an SOA. The open source project is managed by the Pi4 Technologies Foundation, which is a collaboration between industry and academia.
Building complex distributed systems, without introducing unintended consequences, is a real challenge. Although the Choreography Description Language provides a means of describing complex systems at a higher level, and therefore helps to reduce such complexity, it does not in itself guarantee that erroneous situations cannot occur due to inappropriately specified interactions. The research being carried out by members of the Pi4 Technologies Foundation into the global model and endpoint projection is targeted at identifying such problems, to ensure that a global description of a system can be reliably executed and is free from unintended consequences. The tool suite currently offers the ability to:
Define a choreography description
Export the description to a range of other formats, such as BPMN, UML activity/state/sequence models, and HTML
Define scenarios (equivalent to sequence diagrams), with example messages, which can then be simulated against an associated choreography
1.3. SOA Lifecycle Governance
1.3.1. Design Time Governance
Design-time governance is concerned with ensuring that the resulting system correctly implements requirements (whether functional or non-functional). A choreography description can be used to ensure that the implemented system meets the behavioural requirements.
The behavioural requirements can be captured as a collection of scenarios (e.g. sequence diagrams) with associated example messages. This enables an unambiguous representation of the business requirements to be stored in a machine processable form, which can subsequently be used to validate other phases of the SOA lifecycle.
Once the choreography description for the SOA has been defined, it can be validated against the scenarios, to ensure that the choreography correctly handles all of the business requirements.
Once the service enters the implementation phase, it is important to ensure that it continues to adhere to the design and therefore meets the business requirements. Currently this is achieved through the use of techniques such as continuous testing. However this is only as reliable as the quality of the unit tests that have been written.
When a 'structured' implementation language has been used, such as WS-BPEL,
it will be possible to infer the behaviour of the service being implemented, to compare it against the choreography description. Detecting incorrectly implemented behaviour at the earliest possible time saves on downstream costs associated with finding and fixing errors. By using static validation against the original design, it ensures that the implemented service will deliver its expected behaviour first time. This is important in building large scale SOAs where different services may be implemented in different locations. | 计算机 |
2014-23/1194/en_head.json.gz/29449 | U.S. Army Says No to Windows 7, Yes to Vista Upgrade
The Army has decided to upgrade all of its computers, like those shown here (at the NCO Academy's Warrior Leaders Course) to Windows Vista. It says the adoption will increase its security and improve standardization. It also plans to upgrade from Office 2003 to Office 2007. As many soldiers have never used Vista or Office '07, it will be providing special training to bring them up to speed. (Source: U.S. Army)
Army will upgrade all its computers to Vista by December
For those critics who bill Microsoft's Windows Vista as a commercial failure for failing to surpass Windows XP in sales and for its inability to capitalize on the netbook market, perhaps they should reserve judgment a bit longer. Just as Windows 7 hype is reaching full swing in preparation for an October release, the U.S. Army announced that, like many large organizations, it will wait on upgrading to Windows 7. However, unlike some, it is planning a major upgrade -- to Windows Vista.
The U.S. Army currently has 744,000 desktop computers, most of which run Windows XP. Only 13 percent of the computers have been upgraded to Windows Vista, according to Dr. Army Harding, director of Enterprise Information Technology Services. The Army announced in a press release that it will be upgrading all of the remaining systems to Windows Vista by December 31st. The upgrade was mandated by a Fragmentary Order published Nov. 22, 2008.
In addition to Windows Vista, the Army's version of Microsoft Office will also be upgraded. As with Windows, the Army is forgoing the upcoming new version -- Office 2010 -- in favor of an upgrade to Office 2007. Currently about half of the Army's computers run Office 2003 and half run Office 2007.
The upgrade will affect both classified and unclassified networks. Only standalone weapons systems (such as those used by nuclear depots) will remain unchanged. Dr. Harding states, "It's for all desktop computers on the SIPR and NIPRNET."
Army officials cite the need to bolster Internet security and standardize the Army's information systems as key factors in selecting a Windows Vista upgrade. Likewise, they believe that an upgrade to Office 2007 will bring better document security and easier interfacing to other programs, despite the steeper learning curve associated with the program (which is partially due to the new interface, according to reviewers).
Sharon Reed, chief of IT at the Soldier Support Institute, says the Army will provide resources to help soldiers learn the ropes of Windows Vista. She states, "During this process, we are offering several in-house training sessions, helpful quick-tip handouts and free Army online training."
The U.S. Army's rollout will perhaps be the largest deployment of Windows Vista in the U.S. Most large corporations keep quiet about how many Windows Vista systems versus Windows XP systems they've deployed. However, past surveys and reports indicate that most major businesses have declined to fully adopt Windows Vista. Likewise, U.S. public schools and other large government organizations have, at best, only partially adopted Vista.
Windows 7 to Offer Better Hyper-Threading Support
Study: 83 % of Businesses Won't Deploy Windows 7 Next Year
Companies Adopt "Just Say No" Policy On Vista, Wait For Windows 7 | 计算机 |
2014-23/1194/en_head.json.gz/29605 | Faster Company
The leaders of IBM's 100,000-person IT staff knew that their team had many strengths. But the team also had one big weakness: It was too slow. Thus was born a group of change agents dedicated to speeding up Big Blue.
By the time you read this, the Speed Team at IBM will have just about raced right out of existence. It has been a brief -- but intense -- journey. Last November, during dinner with some members of his nearly 200-person leadership council, Steve Ward, VP of business transformation and chief information officer at IBM, decided that saving time -- making decisions faster, writing software faster, completing projects faster -- needed "to be much higher up on the agenda." To the hungry startups that were gnawing at the edges of IBM's businesses, working in Internet time seemed as natural as, well, sitting through review meetings did to veteran IBMers. Ward was worried that if IBM didn't reset its clock, those startups would clean its clock. "One of the things that frustrates me the most," says Ward, 45, "is the length of time between the 'aha' moment and the moment when you actually start changing the organization's direction, getting it to where it needs to be." The length of time between Ward's "aha" moment and his first action was less than eight hours. The morning after the leadership-council dinner, the Speed Team was born. Ward summoned 21 IBMers and gave them a simple assignment: Get the IT group -- a staggering 100,000 people worldwide -- moving faster than ever, with a focus on the fast development of Web-oriented applications. It was a huge responsibility. Unlike many companies, in which the IT group is considered a cost center and reports to the CFO, IBM's IT team is considered so crucial to the company's fortunes that Ward reports to J. Bruce Harreld, IBM's senior vice president of strategy and one of CEO Lou Gerstner's closest confidants. Translation: The Speed Team was going to be operating in the corporate fast lane. The team's coleaders -- Jane Harper, 45, director of Internet technology and operations, and Ray Blair, 45, director of e-procurement -- had strong reputations for pushing projects forward at a blazing pace. After talking to Ward about their mandate, the two leaders decided that the team should have a finite life span -- roughly six months. "I think that we will have failed if the Speed Team is still together three years from now," explains Harper. "Our plan, when we started this, was to come together, look at what works, look at why projects get bogged down, create some great recommendations about how to achieve speed, get executive buy-in, and try to make those recommendations part of the fabric of the business."
Watch Out for Speed Bumps Steve Ward built the Speed Team with IBM employees who had led groundbreaking projects that were completed in an unusually short amount of time. Karen Ughetta, 43, an IBMer based in Raleigh, North Carolina, earned a spot on the team because of her work on e-community, a Web-based platform that encourages collaboration among various teams. Gina Poole, 39, founder of developerWorks, a Web site launched last September to help IBM forge stronger relationships with software companies, was drafted for how quickly she'd gotten that site up and running. Since the group was made up of people who had been associated with success stories, it decided to use those stories to begin its efforts: What were the shared characteristics of fast-moving projects within IBM? "Every day, we do really good things in this company," says Harper. "We wanted to know why some projects happened more quickly than others. Then we wanted to look at those slower projects and ask, 'What were the barriers to speed?' We started calling those barriers 'speed bumps.' " Members of the Speed Team talked about how they had managed to avoid speed bumps in their own projects. Blair, for example, had devised a set of Web-based systems known within IBM as e-procurement. Starting from ground zero in early 1998, he created processes that enabled IBM employees to use the Web to search for lower airfares and to find ways for IBM to work with its partners to ensure that the company was getting the lowest possible price on parts. As Blair was building those applications, he found that many of the rules governing application development at IBM "were bound to slow things down." Those rules had been created with perfectly good intentions -- mainly, to keep the quality of the company's software high -- but few of them distinguished between different kinds of software projects. "One size just doesn't fit all," Blair says. "But we had a single process that all applications went through, whether they were simple or complex. We had to tailor the process to each project." Blair worked with his supervisors to leapfrog several steps of the official application-development process, and by the time his e-procurement project was one year old, he had moved a remarkable $1.8 billion worth of IBM's purchasing to the Web -- and had saved the company $80 million in the process. This year, he predicts that the company will save about $250 million. The speedy creation of e-procurement has accelerated life in other parts of the company as well. Thanks to e-procurement, the turnaround time for approving purchase requisitions is down from 2 weeks to 24 hours. Harper was no stranger to fast-track projects, either. Back in mid-1994, Harper had helped John Patrick, IBM's VP of Internet technology, to build IBM.com, the company's first Web site. She had also been a key player in developing IBM's Web sites for high-profile events, such as Wimbledon, the U.S. Open, the 1996 Olympic Games in Atlanta, and the chess matches between Deep Blue and Gary Kasparov. Now, among her other responsibilities, Harper runs the 20-person WebAhead lab, in Southbury, Connecticut, which prototypes new technologies. The WebAhead lab is itself a prototype for how IBM teams can work together efficiently, seamlessly, and quickly. The Speed Team mined the lab's culture for a number of its major insights. WebAhead employees work in a single shared-office setup, not unlike that of a high-school computer lab -- long tables of several employees arranged in rows. 
The overall atmosphere is informal, even messy. A sign on the door reads, "This is not your father's IBM." The lab's purpose is simple and liberating: "Our team is funded to do cool stuff for IBM," says Bill Sweeney, 42, a WebAhead manager. "We don't have to think about increasing sales of a product line. We just have to think about the next important thing that might hit us." In December 1997, the WebAhead group was first to explore how IBM could benefit from instant messaging, a technology more commonly associated with teenagers zapping one another from all over the world than with business. Last year, WebAhead began distributing an instant-messaging client to IBM employees, and before long, 200,000 employees -- including CIO Ward -- had adopted it as a way to get quick answers from colleagues. "People think nothing of messaging me," says Ward. "That technology lets me reach down several levels in the company, and it lets others reach up. It's an important tool that allows me to sniff out a problem quickly." All members of WebAhead are passionate about their pet projects, whether it's IBM systems infrastructure specialist Judy Warren talking about a new wireless-networking setup for the lab or Konrad Lagarde explaining how he and some colleagues are devising a way for employees to use their cell-phones to access the IBM intranet. Lagarde shows off a Sprint PCS phone that can find an employee's phone number, determine that person's location, and then send an instant message, electronically or by phone. He says that voice recognition using IBM's own ViaVoice technology is the next step. How long did it take Lagarde and his coworkers to put together the system that translated Web data into phone data? "Maybe a day or two," he replies. "And we went to lunch both days." Other group members have created animated characters that walk around a Web page, directing users to relevant content (total development time for each character: three hours) and a project called the Video Watercooler, which, when completed, will link WebAhead staffers in Southbury, Connecticut to colleagues in Cambridge, Massachusetts; Somers, New York; and Evanston, Illinois. Instead of forcing users to schedule times for formal videoconferences, the Watercooler will encourage more frequent, informal interactions among employees -- brainstorming or simply talking about what they're working on. But even people who always drive in the passing lane occasionally hit a speed bump. In late January, that's what happened to the Video Watercooler project: The servers required to process the hefty stream of incoming and outgoing data hadn't arrived yet. "It's embarrassing, but hardware is sometimes a constraint," Harper says. "We're waiting for boxes." Customer needs take precedence over internal requests. After a moment, Harper adds, "We're making lots of phone calls." But she doesn't sound happy about the situation. Through her work with the WebAhead group, Harper had already unlocked several of the secrets to speed. One such secret: Speed is its own reward. Employees were encouraged and energized when they saw their pet projects being deployed in a matter of weeks, rather than months or quarters. Harper made sure that those pet projects were pertinent to the company by issuing a clear mission that itself focused on speed: "We're interested in things that help people collaborate faster, message faster, and make decisions faster." Harper learned something else from the WebAhead experience: Companies don't move faster; people do. 
And not all people, whatever their technical credentials, are cut out to move at the speed that's necessary to keep pace with Internet time. Which is why Harper is serious about turbocharging the recruiting process for all of IBM's leading-edge Internet groups, which include alphaWorks, Next Generation Internet, and WebAhead. Last summer, she helped start IBM's Extreme Blue internship program for software engineers in Cambridge, Massachusetts. This year, Harper will expand that program to a second location: Stanford University's campus, in Palo Alto, California. There, interns will live together MTV Real World-style in a fraternity house while working on high-profile projects. Extreme Blue helps IBM reach the most fleet-footed engineers before Internet startups do. After last summer's pilot in Cambridge, the program is already starting to feed the ranks. By late January, WebAhead had offered jobs to five Extreme Blue participants; several other promising interns had decided to stay in college for an extra year to get their master's degree. "We get them during holidays and vacations, though," Harper says.
Fast Talk: It's about Time
After examining many fast-moving projects, including e-procurement, the Speed Team began outlining what those projects had in common. It then created the "Success Factors for Speed," six attributes that all successful projects had in common: strong leaders, team members who were speed demons, clear objectives, a strong communication system, a carefully tailored process (rather than a one-size-fits-all approach), and
2014-23/1194/en_head.json.gz/29664 | Tower of the Ancients (c) Fiendish Games
Windows 95/98, Pentium 166 MMX, 3D accelerator, 32 MB RAM
Monday, February 7th, 2000 at 12:42 AM
By: FitFortDanga
Tower of the Ancients review
I'm not sure which day it was that God created Tetris, but the very next day all the clones started crawling out of the woodwork. From the irreverent creators of Natural Fawn Killers and Hot Chix 'n' Gear Stix comes this Biblical-themed puzzle game. When the people of Babylon began building the Tower of Babel to climb to Heaven, God punished their arrogance by giving them all different languages. But what if God wasn't paying attention? That's where you come in: it's your job to stop them. I'll leave it to the theologians to decide whether or not this is blasphemous.
How do you accomplish your task? Surprise, surprise... you rotate and place falling blocks. Each block has three symbols on it (crucifix, star of David, that little Jesus fish, etc). Line up three or more of the same symbol in a line and they get zapped out of the structure. The more you knock out with one move, the more points you get. And of course, eliminating blocks shifts the other blocks down, which can trigger a chain reaction knocking out more portions of the tower. You can shift the order of the symbols within the blocks before they hit the tower, and rotate the tower. You won't be needing a reference card for these controls.
After a while, you get a wider variety of symbols, making the situation more difficult. Each stage requires that you remove a certain number of blocks to progress to the next level. If the tower hits the top of the screen before you complete the objective, the hand of God sweeps down and it's game over. Some levels only count matches in a particular direction (horizontal, vertical, or diagonal). Later on in the game you get special blocks to help you -- a "Wild Card" block and a "Spite Block" (which eliminates all of whichever symbol it lands on).
At first, I had a pretty good time playing Tower of the Ancients. It's a decent twist on the Tetris theme. But it doesn't take long to tire of it. Once you've done the first 10 levels or so, you've seen all the game has to offer. After that, it's just more of the same, except the levels require more blocks to complete. There didn't seem to be any end to it, it just gets harder and harder until you lose. This works for Tetris because the speed of the game ramps up with each level until it reaches a frenzied pace. In Tower, the speed remains constant while subsequent levels will take longer and longer to complete. You can't save the game at all, so it's either continue after you lose, or start all over again next time you play. The whole game is essentially frustration or boredom without reward.
The graphics are what you'd expect from a budget developer: crude. Lighting "effects" consist of pure white hyper-pixelated blobs randomly floating around the screen. The backgrounds are somewhat pleasing, but nothing that couldn't have been done five years ago. The hand of God strangely resembles the hand of a malnourished Ethiopian.
On the audio front, things aren't much better. The music is appropriately epic-sounding, with ominous choirs and Gregorian chants. However, it tends to be choppy and to get buried in the cluttered ambient sounds. The blast of "Hallelujah" you get after pulling off a nice combo is satisfying, but the "laser" sound of a row being wiped out will get under your skin after one or two levels.
There's no reason for this game not to have multiplayer. I'm reasonably sure that with three days training in whatever language this is coded in (I'm guessing Fortran) I could stick two towers next to each other and map a secondary group of controls for side-by-side play.
So is it worth your $15? I'm going to have to say no. Unless you enjoy endlessly repetitive tasks, this game won't hold the interest of even the most diehard puzzle fan for more than 10 minutes. Run far away from this one, and don't look back or you might turn into a pillar of salt.
Written By: FitFortDanga
Graphics [8/20]
Sound [6/15]
Gameplay [9/30]
Funfactor [6/20]
Multiplayer [0/5]
Overall Impression [2/10]
2014-23/1194/en_head.json.gz/30175 |
Power to the Wiki-People
The conflict that erupted between French intelligence authorities and the Wikimedia Foundation over its refusal to remove a "sensitive" Wikipedia page goes right to the heart of free expression, says Geoffrey Brigham, the foundation's general counsel. Decisions on takedown requests are based less on filing the right paperwork or adhering to a particular country's laws than on the Wikipedia community's principles.
Earlier this month, agents for France's top intelligence agency, the Direction Central du Renseignement Interieur, or DCRI, were accused of trying to force a Wikipedia volunteer to remove a Wikipedia page describing a French military radio relay station. The volunteer, a library curator, reportedly was threatened with jail unless he complied.
Before any of the bullying took place, the DCRI had gone the conventional route, contacting the Wikimedia Foundation, which is Wikipedia's parent organization. The Wikimedia Foundation declined to remove the material but said it would be happy to comply upon the receipt of proper legal documentation.
The saga had a few unintended consequences. For one thing, the French authority hoping to stymie the spread of information about the station ended up increasing traffic to the page 1,000-fold. The events also raised interesting questions about what, exactly, constitutes a legitimate Wikipedia takedown request. If a national security-related request from France -- which is not some backwater, totalitarian regime -- is not heeded, then what is? What makes a takedown request legitimate?
In this TechNewsWorld podcast, we talk with Geoffrey Brigham, the general counsel and board secretary for the Wikimedia Foundation, which hosts a number of projects, including Wikipedia. Brigham explains the logistics of a Wikipedia takedown request, what the criteria for legitimacy are, and how Wikipedia's linguistic expansion -- which invariably means a geographic expansion -- affects this process.
Download the podcast (14:41) or use the player:
TechNewsWorld: One of the things that kind of caught people off guard or that attracted attention with this instance I mentioned in France is that the story came from France, which is thought of as a very liberal, very open-minded, very free speech-conducive country. And it makes you wonder -- if these sorts of problems are happening in France, they must be happening quite a bit elsewhere, especially as Wikipedia expands and incorporates new languages and new countries into its network.
Tell me, first off, how you begin to sift through and begin to determine what is legitimate and what is not from what must be a whole lot of requests that you receive?
Geoffrey Brigham: Well, David, one of the first things I think you need to keep in mind is that we do not actually write Wikipedia. It's our community. It's tens of thousands, hundreds of thousands, millions of contributors who are putting together the biggest encyclopedia in the world. And that community exercises its editorial responsibilities and discretions as you would expect with anything of this size and nature. So 99 percent of requests to remove content for various reasons usually go through the community. And the community is able to work through those requests based on the Wikipedia policies that are in place.
Now, a very small percentage of those requests are sometimes referred to us. And we look at those requests on a case-by-case basis. We first ask ourselves, "Are the requests consistent with the policies of the Wikipedia community -- the policies of the community itself as written?" We ask ourselves, "Is there an immediate threat of harm?" in which case we would act responsibly. But it's a case-by-case analysis based on values that our community holds dear as it's writing a project that is based on free information, publicly available, reliable sources.
TNW: Are the criteria that the community uses to evaluate these requests -- are those static across the multitude of countries and nations that Wikipedia is available in? Is it one singular set of rules? Or are these things kind of fluid according to where the information is being seen or where it originated?
Brigham: Well, our projects of course are not based on countries; they're based on languages. And each language project will have its own set of rules. But many of them are consistent at the highest level. For example, we don't tolerate plagiarism. Another example is, we don't allow copyrighted materials, the use of copyrighted materials, except for certain exceptions like fair use. And our community is quite active in ensuring that there is no plagiarism, that there is no misuse of copyrighted materials.
So the good-news story about this is, actually, we get involved with very few takedown requests. In the realm of copyrighted materials, for example, we only receive 15 requests when other major websites receive tens of thousands, if not hundreds of thousands. And that's because our community is vigilant about writing a credible encyclopedia. They are, as I say, constantly reviewing, and are actually listening to others who are suggesting that there may be content that needs to be taken down. They do the evaluation, and they take it down. ...
TNW: It seems like perhaps with copyright violation or especially plagiarism, there would be a pretty objective way to determine whether or not the rule was broken. I mean, if something is plagiarized, then it will probably be Googleable, or if not on Google, then it will be written somewhere else and you can say, "Okay, this is plagiarism and we can take it down." But in instances where it is perhaps subjective -- like you talked about if it's a threat -- how does something that's a little less objective, less clear-cut -- how do you view those? And how are those kind of worked through when you come to those requests?
Brigham: Well, our community is very smart. They know how to handle nuances and subtleties. So they will take those requests and actually have discussions, and very informed discussions, before making decisions. Like any decision in life that deals with a subtle issue, it requires reflection, and our community is very good at that. We do the same thing at the Foundation: We will evaluate the requests against the many values that we have, including values of free expression that are extremely important to us. So like any subtle request, we do a case-by-case analysis. | 计算机 |
2014-23/1194/en_head.json.gz/32357 | About. Written by Administrator, Saturday, 13 March 2010 15:34
A Very Brief Synopsis of our History
To read a more in-depth history of Pie in the Sky Software, see this page.
Pie in the Sky Software started in the late 1980s. The first product was a 3D screensaver for DOS called Innermission. It was a 'TSR' ('Terminate and Stay Resident') program. It was shareware and cost $5.00.
The next title was a 3D flight simulator for DOS. This was released as shareware, and eventually became quite successful. It was a real thrill to go to a local KMart and buy a copy.
After Corncob there were some commercial retail products. But the most successful product was the Game Creation System, which was a set of tools which allowed people to make their own 3D games and distribute them without learning to program.
However, when the 3D graphics market exploded, a small company with just a few people could no longer expect to keep up with the fast changes in hardware. We fell behind and could not keep our game engine competitive. And so not long into the 21st century we gave up, and everybody involved went back to having normal jobs.
Now after nearly a decade, Pie in the Sky Software is opening its virtual doors again. The focus will be more about making fun stuff than being a money-producing business.
Many years ago, Pie in the Sky Software created 3D entertainment software for DOS and later for Windows. Products included
Corncob 3D - a 3D flight simulator for DOS written mostly in 16bit 8086 assembler.
Lethal Tender - a 3D First Person Shooter for DOS sold in retail stores.
The Game Creation System - A set of software tools for creating first person 3D games with no programming.
Last Updated on Sunday, 14 March 2010 04:43 | 计算机
2014-23/1194/en_head.json.gz/32775 | One Thing At A Time
Smallwheels (Senior Member). Joined: 06 Sep 2008. Posts: 266
Posted: Sat Feb 20, 2010 1:59 pm Post subject: One Thing At A Time
Since I don't own a smart phone, I don't know what they can do or how they operate. I've read that the iPhone OS can only run one application at a time. Some people have speculated that the next version of the OS will allow multitasking. This would carry over to the iPad (the product that interests me). I thought of a feature that would be beneficial to iPad users if there will not be a way to multitask on it. Maybe this feature already exists. I don't know. If the iPad is fast and switches between apps quickly, will a multitasking capability be necessary in all situations? I sent a message to Apple about a feature request. It was for them to make all applications have an additional button beside the close program button. That new button would be the remember button. It would cause browsers to close the current page and reopen the same page the next time the program was opened. I proposed that such a button be made for all applications. If that were the case, users could switch between apps and go back and forth without needing to reconfigure their apps or retype internet addresses. Essentially it would be a slightly slower version of using Expose.
Wouldn't that alleviate most of the needs for multitasking? Surely it wouldn't allow streaming radio to work constantly while doing other things. It wouldn't fix the problem of not being able to do something else while downloading a video.
There are probably plenty of workarounds to make the iPhone more capable right now. Developers will take the iPad even further.
There is a report at one of the Apple reporting sites (I can't find the story quickly) that says the SDK allows ways to get more applications running at the same time. After watching the video on this page I learned that I'm using the wrong terminology about multitasking. http://www.appleinsider.com/articles/10/02/18/inside_apples_ipad_multitasking.html
The man in the video also said that the applications opened and closed quickly and he didn't feel the need to have multiple applications running to do simple copying and pasting. His explanation of how the security works on the iPhone OS makes me think having multiple applications open at once isn't necessarily a good thing. Smallwheels
www.MrMoneyHelper.com
http://DoNotDieYet.blogspot.com
www.MySpace.com/Beninate
Posted: Fri Apr 23, 2010 7:25 pm Post subject: iPhone OS 4.0 Fixed Some Shortcomings
Now that iPhone OS 4.0 has been announced with all of its new features, it seems the iPad is closer than ever to taking the spot of home computer away from laptops and desktops. We'll see how it goes this fall when it is released. I'm really glad I didn't order one at the release date. It has given me time to learn of its capabilities. I won't be waiting until the next version comes out but I will now wait for the fall edition that ships with the 4.0 software.
Smallwheels www.MrMoneyHelper.com http://DoNotDieYet.blogspot.com www.MySpace.com/Beninate | 计算机 |
2014-23/1194/en_head.json.gz/33107 | [Unpaid] Requesting a Sketched Map
Thread: Requesting a Sketched Map
iamversatility
Posts 6 Requesting a Sketched Map
Hello, I am a writer who is working on several different projects right now, and I was hoping to get a project's world sketched out to help me with my writing and get a better idea of my world. Like I said, I am working on several different things, so I am going to post information on each one and let you decide which you think will be the most fun for you.
If your interested please note me on Deviant Art since I think I can't get messages here yet iamversatility (Ray Denef Rose) on deviantART
The first world is the world for a comic called Mechanized. It's planned as a series of at least 5 books, with each book having its own genre / theme.
Mechanized: Gauntlet
Steampunk Western
So the story is about a society living in extremely drastic environments, ranging from the bitter cold of the tundra to the dehydrating heat of the desert. Many different environments occur and can change at any moment due to extreme global warming. Food and other important resources are scarce, so the rulers of this world force people to compete in gladiatorial arena matches to win these resources.
In less developed areas the rulers are doing the same thing, only it is to the death, and the participants in their death games are kidnapped from various regions and forced to fight.
The way they fight is with devices called Gauntlets; they also use other equipment as well, but the most powerful of the devices is the gauntlet. The gauntlets have unlimited possibilities.
Theme Song:
Steam punkish western metal feel
__________________________________________________ _____________________
Mechanized: Boot camp
Modern Day Military Base
A place where heroes are born from mere children, they will rise up and take the world by storm.
Sequel to Mechanized: The Gauntlet, Prequel to Mechanized: The World War
The younger lives of the war heroes, as cadets in boot camp. This is when the military is using standard mechanized equipment, nothing personalized. This is where the next generation of mechanized devices is born, from personal mechanized devices.
Modern upbeat metal romance feel
Mechanized: World War
Modern Day Battle on the Field
The cadets have been called to active duty and entered into different divisions. The first world war is about to begin, with every other nation sending waves to fight against them.
Wartime awry dark with heroes light feel so like Metal Gospel
Mechanized: Colosseum
Twenty-First Century
The War has finally ended with the Colosseum Peace Treaty. Now, every year, a tournament is held around the world, open to anyone with the means to enter. From each nation a select few are chosen to make up a team of professional pilots to represent their nation and bring glory to them.
Modern upbeat sporty feel
Mechanized: Universe
Future In Space
Forced to abandon Earth, only the chosen masses can evacuate. Ten massive ships launch into outer space in all different directions. Their destinations are unknown, as well as their survival.
Futuristic universe fantasy feel
The second world is the world for a story called The Crimson Sea. It's a world where Harpies and Mermaids are locked in a war for survival; they are the last beings in the world that they know of, until a Siren and a Human appear from an oasis cavern after being secluded from the outside world for a long time.
In a world with one half consumed by water and the other half consumed by land, a war between two races breaks out. The Harpy is half human, an ancient race lost millennia ago in a war for survival. The Harpy is also half avian, or bird; the Harpy has a Human body with wings and a tail, along with razor sharp talons and teeth. The Mermaid is also half human, but unlike the Harpy its other half is Pisces, or fish; the Mermaid has a Human body from the waist up, along with a fish's tail and razor sharp claws and teeth. These two races are in a war for survival. They kill each other and consume their flesh to keep themselves from starving. There are only two choices of food for these carnivorous predators: the other race, or becoming cannibals and feasting on their own race.
Mermaids reign over the half of the world covered in water; they are the rulers of its domain. In the shallow water there lie spikes of coral just beneath the surface, unable to be seen from above. The Mermaids use these to lure their prey near them, either making them impale themselves or impaling their prey through combat. There are also deep abysses that Mermaids use to drown their prey, and in these abysses there are caverns where the Mermaids live with their pods. The Mermaids control the element of water, and they have put this to a variety of uses to help their survival.
Harpies rule over the other half of the world, covered in land; they are the masters of the sky and owners of the land. Mountains of spikes in various sizes make up the land of this world. Spikes that tower over the surface of the water are used as Harpy nests and watch towers. Other spikes rest just feet above the surface of the water, and the Harpies use these to impale their prey on. The Harpies control the element of air, and they have put this to a variety of uses to aid them in the war.
After each skirmish between the two races, the blood spilt would drain out of the injured and the killed, down into the sea. The once tranquil blue sea, the silky white feathers of the Harpies, and the white hair of the Mermaids have since turned crimson. The crimson blood would continue to be spilt with no end in sight. All hope ceased to exist; the two races gave up what little humanity they had long ago, and all that was left was the hollow shell of savages. They had no hopes, no dreams; all they cared about was surviving as long as possible. How long until they decide to resort to cannibalism to survive? The races have grown more savage with each passing day, and more desperate with each passing night. The light of hope was dwindling in the darkness, and pure desperation filled this world.
In an isolated location, a place thought destroyed long ago, were a young Harpy man and a young Mermaid woman. Through complete isolation they managed to find each other and fall in love. If it had been anywhere else in the world the circumstances would have been different, and they would have been mortal enemies, fated to fight and consume each other in order to survive.
This isolated location was a cavern in the earth that was swallowed up and cut off from the rest of the world for hundreds of years. It was a tropical oasis in the middle of a barren post-apocalyptic world; it had water, soil, air, plants, and even small animals of many varieties. Everything contained within the confines of the oasis walls adapted in order to survive, until it finally became an oasis.
As the two grew older they had twins: one child was half Harpy and half Mermaid; the other child was completely Human. The Human child was born small and fragile, requiring constant attention and affection, while the Siren child was contained within an egg, where it would continue to grow with all of its needs inside the egg. The only things the egg needed were warmth and time to hatch. As time passed the parents grew old, and along with them so did the human child and his Siren twin brother. The older his parents got, the more he had to take care of his still unhatched brother. The twins were 15 years old when his parents finally passed away, and the human child was left all alone to take care of his still unhatched brother.
He cared for his brother and kept his egg safe and warm, just waiting for his brother to break out of the imprisonment that is the egg. The two brothers grew up together, and though they were so different, they were also very similar. Even though the child hadn't hatched yet, ever since his brother was born he has been able to sense everything going on around him. So even though they can't talk to each other, they still have an unbreakable bond already formed; all that is left is waiting for when he hatches.
Twenty years have passed since they first arrived inside the oasis, and finally the Siren child hatches. Both being 20 years old now, they stand before each other, looking into each other's eyes; not a single word is exchanged as they stand there silently. In that moment they remembered a conversation their parents had while they were still inside their mother's womb. It was then that they understood their purpose; they knew what it was they had to do. The Human child opened a hole straight up to the surface using his earth magic, and climbed onto his Siren brother's back. Using his air magic, the Siren took off straight up to the surface; the Human child used his fire magic to melt the earth to cover the hole, and the Siren child used his water magic to quickly cool it. They silently told each other, "That should contain the life inside the oasis until it's needed. Now to finish what our parents couldn't."
Scribblenauts Remix (iPhone Application)
Scribblenauts for the iPhone
Introduction
Along with "Peggle", another game that I have recently reviewed, I bought and downloaded Scribblenauts Remix to my iPhone last weekend after noticing that it was on offer on the iTunes App Store for just 69p. I've owned and played the original Scribblenauts for the Nintendo DS for quite a while now, so I was already aware of the game and its objectives and was eager to see how it transferred to the iPhone. So what do I think of it?
The Premise
Scribblenauts Remix takes 40 levels from both the original DS version of the game and its sequel, Super Scribblenauts, and includes a further 10 brand new levels exclusive to this download. You control the character of Maxwell; he's a funny-looking chap whose job it is to collect stars from each of the levels he enters by creating objects and animals to help him achieve his task, and as the makers of the game state, you are only limited by your imagination as to what you can create.
An example of a level could be that the star you need to collect is situated higher than Maxwell can reach, so by creating a ladder and placing it under the star you can make him climb up to retrieve it. Alternatively, you might be faced with a mob of murderous zombies between Maxwell and the star, so you will need to create something that will take on the crowd and clear a path for him. Every level is different although the objective remains the same: collect the stars. As the game progresses the levels get more complicated and you really do have to think about what you need to create to be able to win.
Controlling Maxwell and Playing the Game
The touchscreen of the iPhone is used for the game, and at the start of each level you are given a Hint which is designed to help you complete the stage. Pressing either the right or left hand side of the screen controls the direction Maxwell walks, and located at the top right hand corner of the screen is a notepad icon which enables you to type the name of the object or animal you want to create. Once you have decided what you want to create, it then appears on the screen and you can interact with it or put it to use, although do bear in mind that if you create an animal or monster it could turn on Maxwell and end up killing him, so be careful what you choose to make. This version of the game allows you to use adjectives (these weren't recognised in the original DS game) so you can give your creations specific emotions or behaviours; for example, an "evil teapot" will have wings and fangs rather than "teapot", which would just be a flat inanimate object. It's things like this that make the game fun to play, and you really can let your imagination run wild.
I have to admit that it's not an easy game at times, and some of the levels did leave me scratching my head as to what it was I had to actually do. The game recognises if you have been stuck on a level for a period of time and does offer additional hints, although I found them very hit and miss in how useful they actually were. I do think that a few of the levels bordered on "impossible" rather than "challenging", and so I did begin to lose patience on the odd occasion and stopped the game, only to return to it later to retry.
Saying that, though, the game is rather addictive, and once you start a level you do want the sense of satisfaction you get from completing it, so I have persevered with it, as there are multiple solutions to every level and you can replay them to try something new if you want to.
It's not my favourite game on my iPhone, in all honesty. Although it is limitless in what you can create, I find that I end up repeating myself in some of the levels and tend to create the items that I know will actually help the character of Maxwell rather than hindering him. The good thing about what you create is that it can easily be deleted, so if you make something that is useless you can get rid of it and try something new. You don't have to be a brilliant speller either, as the game will suggest alternatives if you spell the name of the item incorrectly and does seem to know what you want to make. This is handy, I must admit, and even though I do think I can spell quite well, I think this function makes the game accessible to younger players whose spelling might not be perfect, and there are educational aspects to the game which would undoubtedly help people improve their vocabulary and spelling overall. I wouldn't say the game was exciting though, and I do think that some may be put off by its intricacies; the fact that there are times when some levels do seem impossible is off-putting, and it's not a game that I can play for hours on end as it can be very frustrating. But I can see its appeal, and it is a handy game to dip in and out of for short periods of time.
Graphics/Sound etc
If you have played either of the DS games then you will recognise the graphics in the iPhone version, as they are identical: cartoon-y in appearance, bright and clear to see, and sharing the same characteristics as the other formats. My supplied picture shows how the game is presented, and there are no flaws with this aspect of the game as far as I'm concerned. The accompanying music isn't anything to get overly excited about, as it's just generic backing tracks which don't cause too much of a distraction; I always mute any games that I play, as I find I get annoyed after being continually bombarded with music.
Overall thoughts and Value for Money
If you have ever played Scribblenauts or its follow-up on the DS then you're going to know how the game is played on the iPhone. The fact that there are some duplicated levels on this version that appear on the original releases shouldn't be a reason not to buy the game, as there are multiple solutions and every level is different. I don't find it to be an all-encompassing game and it's not one that I can play for hours on end, but I do like it and I do think it was well worth the 69p I paid for it. Having checked the iTunes store for the purpose of this review, I have noticed that the game has gone back up to its original price of £2.99, which I do think is on the overly expensive side. I wouldn't have bought it for a shade under three quid, and I do think that you would need to be an 'uber fan' to fork that sort of money out for it, especially as it isn't brand new and the majority of the levels already appear on the DS releases. It is a keeper though; I'm not going to delete it from my phone, and I will dip in and out of it if I have a spare 20 minutes to pass and want to see how creative I can be.
Should you buy it? At 69p it's a no-brainer for the size of the game, the levels, graphics etc., but at £2.99 I do think that it's expensive.
My advice would be to wait until it's back on offer and snap it up then; it will hold a lot of appeal for all ages, and it could be educational for anyone looking to improve their vocabulary and spelling.
My version of the game is the 1.1 release, which was updated on the 20th of October this year; it is 136MB in size and is released by Warner Bros Games. It does have an age rating of 9+, which I do feel is justified: there are sequences of cartoon violence in some of the levels, and I do think that you need a good grasp of written English to be able to play the game to its full potential, so younger players might not be able to fully understand the game or be able to take on the levels they are presented with.
Recommended? Yes, but not at £2.99 - I don't think it's worth that sort of money, and my 3 star rating here represents an average score for a just-above-average game in my opinion. Thanks for reading my review.
A critical vulnerability in Parallels' Plesk Panel Web hosting administration …
A critical vulnerability in some versions of Parallels' Plesk Panel control panel software appears to have been key to the recent penetration of two servers hosting websites for the Federal Trade Commission. The vulnerability in the software, which is used for remote administration of hosted servers at a large number of Internet hosting companies, could spell bad news for hosting providers who haven't applied the latest updates, as well as their customers. Because the vulnerability allows someone to make significant changes to the user accounts, files, and security of a targeted site, hackers who took advantage of the Plesk vulnerability may still have access to sites they have breached even after patches are applied. If your site is hosted with a provider that uses Plesk for site administration, it's worth taking a good look at the content on your server, and the accounts configured to access it—and resetting all your accounts' passwords.
Originally developed by Virginia-based Plesk Inc., and acquired by Parallels (then SWSoft) in 2003, Plesk allows an administrator to create FTP and e-mail accounts, as well as manage other aspects of the associated hosting account. And as with other control panel applications for hosted sites, such as CPanel, it can also draw on an "Application Vault" to install common software packages (i.e., Drupal CMS and WordPress blog software) that are preconfigured for the hosting environment. Plesk is widely used in the hosting industry. Rackspace offers Plesk-based control of some hosting accounts, as does Media Temple—the hosting provider whose servers housed the FTC sites business.ftc.gov and OnGuardOnline.gov, among others. The software is also used by government and educational institutions; the Department of Energy's Lawrence Berkeley National Laboratory uses it as part of its web self-service. That sort of footprint makes Plesk a prime target for hackers looking to take control of websites. Keys to the kingdom
On some hosting platforms, it's also possible to create an FTP account that can gain access through a secure shell (SSH) terminal session, as documented by Media Temple's knowledge base. In the case of the FTC hacks, it appears that just such an account was used to gain access to Media Temple's servers, pull data from the MySQL databases powering the Drupal and WordPress sites, and then delete the contents of the server and post new content—going well beyond the usual sort of web defacement.
There's reason for concern that the breaches may well extend well beyond the FTC servers, and beyond Media Temple. Members of the Antisec group have claimed they have a substantial number of other government sites already compromised and ready for defacement.
The critical vulnerability in Plesk as described by Parallels in a knowledge-base entry is in the API of a number of versions of the software. Other applications can drive Plesk through a PHP interface called agent.php; the vulnerability allows hackers to use a SQL injection attack—sending SQL queries to the interface as part of a post—and thereby gaining access to the Plesk server software with full administrative access. They could then create accounts that give them the ability to log into the server remotely with administrative rights. And in cases where administrators could create accounts with SSH access, they could create new user accounts with full access to the file system that could then be used to further exploit the host itself.
Parallels product manager Blake Tyra said in a Plesk forum post that patches that fix the agent.php vulnerability have been available since September. But according to some customers, the e-mail alerting them to the critical nature of the vulnerability in unpatched versions of Plesk was not sent until February 10. That e-mail advised customers to apply updates immediately to versions 8, 9, and 10 of the software if they had not already been patched. "Parallels has been informed of a SQL injection security vulnerability in some older versions of Plesk," the message read. "Parallels takes the security of our customers very seriously and urges you to act quickly by applying these patches." Beyond applying the patch, Tyra said that fixing the vulnerability would also require the resetting of all customer's passwords. "If they were already at the identified update levels, you should be OK," he wrote. "If not, and you see POST requests to agent.php that are not from you (or any components you have that may be integrating with Plesk), prior to applying the updates, this could be cause for concern. Any requests to agent.php after applying the updates should be harmless. Because of the nature of the vulnerability (i.e. SQL injection), there is the potential for the attacker to maintain access to the server even after the original entry point was closed if they gained access to any user accounts. Especially because of the last point, this is why we recommend that any compromised server have its passwords reset as soon as possible."
But even then, the damage may already have been done, through installation of other back-doors on the system. And some Plesk customers have expressed concern over whether the patches have been effective in shutting down the exploit, especially if they've already been hacked. Parallels did not respond to inquiries from Ars about the exploit.
Media Temple, for its part, runs multiple versions of Plesk; depending on when customers acquired a server and whether they have upgraded service, their server may be running Plesk 8, 9, or 10 based on a random sampling of sites checked by Ars. According to sources familiar with the hack of the FTC sites on Media Temple's servers, Plesk was at least part of the route members of Anonymous' Antisec collective used to gain access to the sites. And it's not clear whether Media Temple was aware of the critical nature of the Plesk vulnerability at the time the sites were hacked—the first site was defaced on January 24, and the second server may have already been compromised by the time Parallels' alert email was sent out.
In a follow-up interview with Ars, Media Temple chief marketing officer Kim Brubeck said that Media Temple was "dealing with an additional problem with Plesk," but would not say if it was directly connected to the FTC site breaches. There's reason for concern that the breaches may well extend well beyond the FTC servers, and beyond Media Temple. Members of the Antisec group have claimed they have a substantial number of other government sites already compromised and ready for defacement. That doesn't begin to include the potential number of sites powered by Plesk-connected servers that have been compromised for other purposes, including the infection of sites with malware, creation of malicious pages within sites for use in clickjacking attacks, or other more covert hacking and fraud schemes.
Not ready for .gov
When Media Temple knew about the Plesk problem isn't the only thing in doubt in the FTC site hacks. There's also the question of whether the sites should have been on Media Temple in the first place, and whether they were prepared for the security implications of hosting even a relatively harmless federal government "microsite." Media Temple and Fleishman-Hilliard (the global public relations firm that developed the FTC sites that were hacked) have given conflicting accounts regarding whether Media Temple knew it was hosting federal government websites, and whether it presented itself as ready to handle the potential elevated security issues that came with them.
Brubeck had previously hinted that the blame for the hack belonged with Fleishman-Hilliard, because they had declined to update application software on the site. And she had said that Media Temple, as a policy, did not pursue government customers, because the company's data center is not certified as compliant with Federal Information Security Management Act (FISMA) regulations.
But since the sites built by Fleishman-Hilliard were "microsites," containing mostly consumer-facing information and not handling personal data, they didn't fall under the government's FISMA security regulations. According to Fleishman-Hilliard's Washington, DC web services team leader Dave Gardner, the FTC's use of Media Temple dates back to 2010, when he and his team introduced the agency to the hosting provider as a low-cost option for hosting Drupal-based websites.
[Image: An e-mail from Media Temple to Fleishman-Hilliard, boosting their .gov cred. Credit: Sean Gallagher]
In June of 2010, Gardner called Media Temple's support center to ask if the company was hosting any other government customers. In an e-mail Gardner shared with Ars, a support representative for Media Temple told Gardner, "we do host government sites, and the servers are in compliance with your needs because of previous inquiries and customers."
That exchange apparently came as news to Brubeck, who told Ars in an initial interview that Media Temple wasn't interested in hosting .gov sites, because hosting government domains "paints a big bulls-eye" on servers for attention from hackers such as Anonymous. According to Brubeck, Media Temple's executives were unaware that the FTC was hosting .gov sites on the company's virtual dedicated host service until the FTC hack occurred. That lack of awareness was apparently in spite of the fact that it was the FTC that actually paid for the service, receiving invoices directly from the hosting company. When asked about why Media Temple didn't know that servers that were being paid for directly by the government were running government sites, Brubeck said that the account was flagged by Media Temple's strategic accounts group as a "creative" account because Fleishman-Hilliard made the initial contact with them. The invoicing could be set up however the customer wanted, she said, and "unfortunately, the system doesn't flag us for things like that."
The FTC isn't the only federal agency that used Media Temple. For example, the Department of Health and Human Services uses the provider to host another Drupal-based site for the Presidential Commission for the Study of Bioethical Issues, Bioethics.gov. A number of state and local governments also operate .gov domains on Media Temple. "For Media Temple to claim ignorance of hosting the FTC—or other government—sites is completely false," said Bill Pendergast, the general manager of Fleishman-Hilliard's DC operations. "In their own words, Media Temple is deep in this area, with what they claim to be the appropriate level of compliance. It's hard to see how their fiction helps anyone get to a constructive outcome."
Media Temple is now eager to get the .gov bullseye off its back. The company contacted Fleishman-Hilliard and the FTC after the second server breach, asking them to move any additional sites off the hosting company's servers within 48 hours.
Further technical details on the hack may be a while in coming from Media Temple or the FTC. The FBI is now investigating the case, and both the hosting company and the FTC have declined to comment further.
The Wizard and the Princess
King's Quest 2
Wizard and the Princess Prologue
The King's Appeal
This is the legend of King's Quest...
Kolyma
Llewdor
Harlin
Manannan
Lolotte
Ancient Ones
Watchlist Random page Recent changes KQ6 development
3,298pages on this wiki This article is a stub. You can help King's Quest Omnipedia by expanding it. This article concerns the development of King's Quest VI: Heir Today, Gone Tomorrow. It is a repository of details concerning early prototype ideas for the game up to its final finished version. The game was written and designed by Roberta Williams and Jane Jensen (text and dialogue was done by Jane Jensen). They were also the game directors along with William D. Skirvin.
BackgroundEdit
From the opening sequence of the game, there could be no doubt that if King’s Quest V redefined what computer gaming actually was, King’s Quest VI provided the quality standard for the next generation. The state-of-the-art “floating camera” sequence that opened the game, featuring young Prince Alexander as he sets out to find his “girl in the tower,” gave computer gamers the world over a real view of what the new age of multimedia computers could bring to classic storytelling. The character graphics were based on motion captures of real actors, giving the game an unprecedented ‘feel’ of reality. The King’s Quest VI love song “Girl In the Tower,” a soulful duet featuring the voices of Bob Bergthold and Debbie Seibert, rivaled the best motion picture anthems of the year. Continuing in a long tradition, Jane Jensen, who would go on to design the industry bestselling Gabriel Knight: Sins of the Fathers, assisted Roberta Williams in game design of this epic.[1]
Title ChangesEdit
At one point, KQ6 had a different title, King's Quest VI: Genie, Meenie, Minie, Moe.[2]
Welcome in the King's Quest 6 HintbookEdit
Welcome to the world of King's Quest VI! This astonishing journey through the imagination was 14 months in the making, and by far the most ambitious project Sierra has ever attempted. The King's Quest VI team is not ashamed to admit they're delighted with the results. We hope you will be, too. Come on, let's take a look... May 1991. King's Quest VI begins. Series creator Roberta Williams and co-designer Jane Jensen meet for the first time to discuss the design. Jane and Roberta worked together for the whole month of July and part of August to come up with the design ideas.
After five months of hard work and long hours, the documentation for the design was complete. The rest of the thirteen person development team begins work. Project manager and co-director Bil Skirvin and the King's Quest IV artists begin the storyboard and character sketches. Shortly thereafter, the background painter, John Shroades, begins the pencil sketches for the game's 80 background paintings.
The video-capture animation process begins. Roberta and Bil have carefully chosen the actors and costuming for the entire game. The 2000 plus character actions in King's Quest VI will be produced by capturing the movement of the live actors on video, then on the computer. In the end, the animation and backgrounds must match up believably. Michael Hutchison leads the efforts of the animators as each cell of the video-captured actors is artistically enhanced on the computer to more closely fit the hand-painted backgrounds.
While animation and backgrounds are in progress, Jane writes the scripts. The scripts define for the programmers what the game response will be for any player action, including the timing and placement of the animation. The scripts also provide the more than 6,000 lines of written messages that will appear in the final game.
As the art progresses, the team's programmers, lead by Robert Lindsley, begin the intricate process of weaving the game elements together with code.
Meanwhile, team composer, Chris Braymen leads music in writing original themes for each of the game's major characters and locations, and producing innumerable sound effects that take place during game play.
Robin Bradley is the game's quality assurance tester. He will play each scene in the game over and over again, making sure that the programming, art, text, and design are running smoothly.
Incredible as it may seem, all these elements begin nearly simultaneously. The different team members coordinate their efforts, and slowly, the game takes an amazing and complex shape.
July, 1992. The last few months of the project are critical. Every aspect of the game is tested again and again, day after day, and polished to perfection. The days grow long and the team grows tired, but a special kind of excitement is in the air. The game will ship in less than two months!
September, 1992. More than a year after it was began, King's Quest VI is finished. The team turns its baby over to marketing and distribution, who have been working alongside them to promote the product since it's early stages. King's Quest VI is ready to ship--But work on the game is hardly finished. After initial distribution, it will be translated into five languages, as well as being converted to a CD-ROM full voice version.
From there, King's Quest VI will go on to astound and delight an audience of a least half a million game players. We hope you're one of them!
Excerpts from the Game Design Document for "King's Quest VI" by Roberta Williams and Jane JensenEdit
This article is a stub. You can help King's Quest Omnipedia by expanding it. Game Bible - Easiest Path OutlineEdit
The Easiest Path is the linear path through the game to the marriage of Cassima and Alexander, doing absolutely none of the optional sub-plots and puzzles. There are five major 'milestones' the player must complete to finish the game. The Easiest Path has been organized as five acts, corresponding to those five milestones. These acts are internal and invisible to the player. They are:
I) Finding a way to travel
II) Getting past the Gnome Guards
III) Finishing the labyrinth
IV) Beauty & the Beast
V) Castle of the Crown infiltration and subsequent victory over the Genie & the Vizier
The last portion of the game, from the castle infiltration through the end-game cartoon differs quite a bit depending on whether the player's only done the required puzzles (Easiest Path), has done optional puzzles, and which optional puzzles the player has done.
Global Views /Actions:Edit
1) Magic Map. Alexander travels via the use o | 计算机 |
2014-23/1194/en_head.json.gz/38125 | Commentary, Twitter Etiquette
When Is A Re-Tweet Not A Re-Tweet? When It’s Something I Never Actually Said
By Shea Bennett on June 18, 2009 2:26 PM
The re-tweet is one of the backbones of the Twitter system and it plays a significant part in making links, and the sites and articles that they lead to, go ‘viral’. The ripple effect of a message getting re-tweeted throughout the network is a beautiful thing to see, and if you’re the recipient of all that resulting traffic, a reason for some celebration.
However, you have to be careful. I’m not a subscriber to the notion that suggests it’s poor etiquette to alter the existing prose when doing a re-tweet, but I do think you have to make distinctions between what the original poster (OP) said, and anything you have added yourself.
On several occasions I’ve seen things that I’ve never actually said ‘re-tweeted’ in my name, simply because the re-tweeter changed all the words but left the RT @Sheamus part alone. Often this is an accident on their part, and it can end up with amusing consequences.
Or far more severe ones; like the @reply, you could do a lot of damage to a person’s reputation with a series of re-tweets if you intentionally set out to make an individual ‘say’ things that they never did. Not only does this bad information go out to everybody in your network but, perhaps ironically, thanks to further re-tweets, it has the potential to quickly spread to millions of people.
RT @KarlRove I was rooting for Obama all the way!
This is why I use and recommended the via tag over the RT. For me – and I accept this might be a personal view – the RT should, for the most part, be a literal re-posting of the original message. If you tamper with it, I think you need to do everything you can to ensure that your words are clearly separate from the OP’s. More often than not the RT @Username part comes first, right at the beginning of the message, and I think that the words that follow are seen by the majority as coming from that user.
Via, meantime, because it comes after the message, is less dependent on literal representation. Via implies that you’re simply sharing information passed on to you by another in your network, and that you’re not necessarily replicating their prose.
There are exceptions to both of these ‘rules’, of course. Applying your own text before the RT @Username part is perfectly acceptable, i.e.:
I really like this! RT @Username The original text goes here. http://original-link.com
And you can easily use parentheses to separate any comments you have added from the OP. In these instances I favour adding my username, too, for clarity. For example:
RT @Username The original text goes here. http://original-link.com [I really like this - Sheamus]
As said, I almost exclusively use the via tag. It’s available as an automatic option in Seesmic Desktop, which is great, and because there’s less emphasis on me maintaining the text of the OP, I can re-write things to my own satisfaction. I’m still giving credit to the OP for the link – something I fully believe in – and if it was principally their text I was focused on re-tweeting (or something like a quote), I’d leave things alone. But otherwise, it’s via pretty much all the time for me.
Most people still favour RT, and that’s fine, but I do think you need to be mindful about exactly what you’re re-tweeting. Most importantly, make sure the person actually said it. Nobody likes to have words put in their mouth, and while most of the time an accidental re-tweet is harmless, you could very easily – by intent or otherwise – do a lot of damage to an individual by presenting your own views and opinions as belonging to them.
Twitter's Big (And Untapped) Opportunity With B2B MarketersTwitter's Most Powerful Advertising Feature (That You're Not Using)Why Thanking Someone For A Retweet Might Actually Be A Good Idea After AllThree Brand Fails That Prove Auto-Replies On Twitter Are A Bad Idea
Tags: Re-Tweet, Retweet, RT, Twitter Etiquette, Twittercism, via Comments
<< PREVIOUS10 Things That T-W-I-T-T-E-R Might Stand For NEXT >>POLL: Why Do YOU Block Somebody On Twitter? Mediabistro Course | 计算机 |
2014-23/1194/en_head.json.gz/38486 | Original URL: http://www.psxextreme.com/ps2-reviews/478.html
Outrun 2: Coast to Coast
Gameplay: 7
Control: 8
Replay Value: 5
I totally see the merit of the PSP version of OutRun 2006: Coast 2 Coast, because it offers the one coveted feature that Ridge Racer and Burnout Legends do not... online play.
The PlayStation 2 version also offers online play, but I just can't bring myself to wholeheartedly recommend that particular iteration of the game.
It's a matter of perspective. The PSP catalog isn't saturated with racing games, and, as of this writing, OutRun 2006 is the only "traditional" racer available for the system that includes an online multiplayer mode.
Contrast that to the PS2's library, which is overflowing with racers of all sorts, many having online modes.
OutRun 2006 shares a few aspects in common with Burnout Revenge, and therein lies the problem. OutRun 2006 and Burnout Revenge are both simple "gun the gas and go" arcade-style racers. Both games offer high-speed racing on multiple tracks. Both sell for $29.99 right now. With regards to OutRun 2006, that thirty bucks gets you a simple arcade-style racer with 12 Ferrari cars and 32 single-road track segments. Depending on the mode you play, a course will consist of anywhere between two and five of those track segments. Now, compare that to Burnout Revenge, where thirty bucks gets you fleshed out car handling, an awesome crash mode, more than 60 unlicensed vehicles, and 20 full-length courses filled to bursting with alternate paths and shortcuts. The math just doesn't add up in OutRun's favor.
Not that OutRun 2006 isn't fun to play. It is, if you like easygoing racers. This modern re-tooling of OutRun brings the classic series into the modern age with updated graphics, improved car handling, and a bunch of brand-new tracks, but otherwise remains faithful to the simplistic nature of the original 1986 arcade game.
It's you in the driver's seat and your buxom blonde girlfriend in the passenger seat... of an expensive Ferrari.
"Simplistic" is the key word. Each track segment is a single four-lane road without any shortcuts. Press the gas button and go. Depending on the mode you choose, you'll need to take first place, get the best time, or satisfy your girlfriend's commands. To accomplish those goals, you'll have to do your best to stay on the road and navigate the many curves up ahead. The game introduces a few sim-like aspects to the series, like drifting and slipstream tailgating, but is otherwise still rooted in the trademark OutRun rule (try not to run into things). Smacking into the back of another car or sideswiping a guard rail will slow your car down and cost you precious milliseconds. Unlike the original OutRun, however, end-over-end crashes only happen in OutRun 2006 if you slam hood-first into a railing at high speed. Also, the time limits aren't as Draconian. Whereas it was nearly impossible to reach the end of the Goal D and Goal E paths in the original OutRun without loads of practice, most players should be able to do so in OutRun 2006 after an hour or so.
There are traffic and rival cars to deal with, but traffic only appears in pre-set spots and rival cars follow pre-set lines. They'll keep the same speed and won't try to block you if you try to pass, so taking first place is entirely a matter of driving fast and not slapping into the rail too frequently. Rubber-band A.I. is in full-effect though, so, once you do take the lead, they'll catch-up and stay right on your tail for the rest of the race.
Generally speaking, OutRun 2006 has all of the trappings of a Sega-produced arcade racer. Cars handle more like toys than heavy vehicles and the physics are such that if you smack into a rail or the back of another car, you'll only slow down a little bit (though the other car may flip end-over end, land right-side-up, and continue as if nothing happened). Drifting is exaggerated, such that you can skid through a turn with the car at a 180-degree angle for a solid five-seconds without losing much in the way of speed. Similar to recent Ridge Racer games, some corners have pre-ordained drift areas, where if you perform a drift going into the turn, the game will pull you through it automatically. Thankfully, OutRun 2006 doesn't hold the player's hand nearly as much as Ridge Racer 6 does.
As for play modes and extras, the menu includes about what you'd expect from a game that was originally produced as a sit-down arcade machine. OutRun 2006 includes all 15 tracks from OutRun 2, which was released in the arcades and for the Xbox platform, as well as all 15 tracks from the arcade-only follow-up, OutRun 2 SP. Two bonus tracks are also included, but they're not the Daytona or Sega GT tracks that were included in the Xbox version of OutRun 2. Again, keep in mind, the tracks in OutRun 2006 are shorter than the typical courses featured in other racing games. Each takes roughly fifty-seconds to just over a minute to complete. Play modes include arcade, outrun, time trial, heart attack, coast 2 coast, and multiplayer. In the arcade, outrun, and time trial modes, players tackle the courses in a "choose your own adventure" style format, just like in the original OutRun, where a branching path at the end of each track gives you two choices as to what the following track will be. Heart attack mode is an inspired mission mode, where your character's girlfriend calls out brief missions on the fly, which you then have to satisfy. Some of the missions are downright fun, such as "hit the cars" and "avoid the UFOs." The coast 2 coast mode brings it all together, packaging up time trials, drift missions, standard races, and heart attack missions into a lengthy career. Each event earns you OutRun miles, which are points that you can use to buy the unlockable items in the "showroom" menu. Some people won't appreciate the manner in which OutRun's unlockables must be bought. Everything is unlocked in the arcade mode, but the only items available from the get-go in other modes are a couple cars and a small sampling of tracks, music selections, and body colors. Additional tracks, music, and paint jobs must be purchased from the showroom menu using the OutRun miles that you earn. Problem is, many items cost anywhere between 5,000 and 30,000 miles, and the average event only gives up approximately 1,000 miles. You'll get more for lengthier races or for getting AAA ranks on heart attack missions, and less for short races or events that you've already completed.
Multiplayer is supported, via the Internet on the PSP and PS2, and across local-area networks on the PS2. Internet races allow as many as six players to participate, whereas LAN games allow up to eight. Few people will probably ever make use of the PS2's LAN option, though, since eight individual PS2 systems and TVs are required to make full use of it. The multiplayer setup is serviceable, but woefully barebones. A friend list is supported, but there's no method for chat, either through a text-based room or via a voice headset. A modest selection of options lets players specify car types, courses, and whether or not to enable "catch up" boosts. In general, online games are as smooth as offline races.
Although the PSP and PS2 versions of the game can't play against one-another online, the two versions can be linked together using a USB cable to transfer progress and unlockables back and forth. This is nice, because you can have one overall career file shared between the two games. Not so nice, though, is that certain unlockables must be purchased on the PSP, while others are exclusive to the PS2. This means that anyone that buys just a single version of the game won't be able to unlock everything in the showroom (unless they use a cheat code). It wouldn't be so bad if the exclusives were limited only to cars, because many car types handle the same, but certain courses and music selections that are also exclusive to either version of the game.
Personally, I don't mind that the game is simplistic. What irks me is that there isn't much to the overall experience. Thirty-two tracks may sound like plenty, but that's nothing when you consider that each track can be run in less than a minute. Also, despite all of the various missions and events in the different modes, it actually doesn't take very long at all to complete the game. After four hours, my own progress indicator had already reached 90% and I had already unlocked half of the showroom. The remaining 10% took a couple more hours, and I'm still working on the showroom (raises fist at OutRun miles!). I'm not sure why a PS2-owner would pick up this game, when there are other, more fleshed-out games available for the same cost. Unless you're really into arcade style racers, or specifically adore OutRun, you're certainly better off picking up a game like Burnout Revenge, which offers a similar experience and online play, but with larger courses and a greater variety of modes. The same can't be said of the PSP, however, since there are still but a few racers available for that platform, and OutRun 2006 is the only one that offers online play.
In terms of aesthetics, OutRun 2006 can best be described as sunny. Most races take place during the daytime, with the sun blazing above and only a few clouds marring an otherwise deep-blue sky. Except for an occasional quick jaunt through a city neighborhood or tunnel, tracks typically offer an open view of the surrounding environment, affording players the opportunity to gawk at beautifully sweeping farmlands, mountains, and beach-side areas. Car models are always glossy and don't incur damage. There isn't a preponderance of visual effects, but if you pay attention you'll notice car exhaust trails, tire marks, dirt clouds, and grass clippings at the appropriate times.
Objectively speaking, the graphics in the PS2 version are a couple years behind the curve, at least in comparison to games like Gran Turismo 4 and Burnout Revenge. The PSP version doesn't seem so outdated, mainly because racers on that hardware tend to yield weaker visuals than comparable PS2 games. The game has no trouble pushing polygons, in the form of dozens of cars and hundreds of buildings and objects, but the textures used to fill-in the scenery are rather lo-rez and muddy (especially when you focus on the roadway or the mountains). The sense of speed is good, but the simple toy-like car models and choppy crash animations are less-than impressive. There's a bit of commuter traffic here and there, but, again, the "wow" factor disappears once you realize you're seeing the same dozen cars rubberstamped over and over again. Of course, one could make the point that Sega intentionally kept those aspects simple as a nod to the original OutRun arcade game. Certain visual aspects are top-notch, particularly the window reflections, spark effects, and shadows. The draw distance is great, even in the PSP version, and water surfaces are suitably reflective and shimmery. For better or worse, OutRun 2006 looks like a typical Sega arcade game.
My main gripe against the visuals is that no anti-aliasing or filtering was applied to the graphics. As a result, horizontal objects, such as guard rails and fences, appear to mesh together or distort. If you played Ridge Racer V when the PS2 was first introduced, you have some idea of the phenomenon I'm describing (the black lines outlining the metal fences on either side of the track would crack and jumble together when they weren't viewed head-on). The problem isn't as prevalent in OutRun 2006, but it's still pervasive enough to be bothersome.
I also suspect that the video output is being rendered internally at a lower resolution than the final output. Obviously, I can't prove that, but it would explain why the "jaggies" described above actually get worse when you set the game to output a 480-progressive video signal. Typically, when I toggle a game to send a 480p signal to my widescreen HDTV, the graphics become sharper and you can make out fine details that you otherwise wouldn't see in the standard interlaced image. In the case of OutRun 2006, the image does become cleaner on the 480p setting, but edges look "blockier" and the distortion of horizontal lines becomes worse.
The lack of anti-aliasing, and the appearance of jaggies, isn't as big of an issue with the PSP version of the game. The handheld's smaller screen and tinier pixels do a lot to minimize jaggies, and also help hide the other blemishes that are obvious in the PS2 version. The textures don't look as muddy and lines don't distort as much. There's some "slowdown" in spots, and the framerate isn't always steady, but, overall, the graphics in the PSP version meet or exceed those of its console cousin. As for the audio, the music and sound effects are also decidedly retro, but in a pleasant way. Multiple remixes of original OutRun tunes, like "Magical Sound Shower" and "Splash Wave" are available, along with a dozen or so other equally-sunny compositions. Sound effects consist primarily of generic engine revs and tire squeals, but they're enhanced by the game's immaculate use of surround-sound, which literally makes it so that you can hear cars whoosh-by as you pass them.
On the whole, OutRun 2006: Coast 2 Coast is good for what it is... a simple, arcade style racer that retains the core concepts of the original OutRun arcade game while incorporating many necessary gameplay and audio-visual enhancements. Anyone that owns a PSP system should definitely check the game out, while PS2 owners should approach with caution. There are just too many other, better racers on the PS2 that you should play even before OutRun 2006 comes to mind. Despite its shortcomings though, OutRun 2006 still manages to be fun, so you probably won't hate yourself too much if you bring it home.
5/5/2006 Frank Provo | 计算机 |
2014-23/1194/en_head.json.gz/39572 | A-B Testing
Jan 17, 2014 General Nonsense | Notify
Most of you are familiar with A-B testing for websites. You randomly display one of two website designs and track which design gets the most clicks. People do A-B testing because it works. But where else does it work?When I asked for opinions about why anyone would NOT buy my new book, How to Fail..., the most common opinion I got (mostly via email) is that the title and the cover are the "obvious" problem. Folks tell me that a book with "fail" in the title isn't a good gift item, and no one wants it seen on their own shelf for vanity reasons.To me, the interesting thing about this common observation is the certainty of the folks who make it. For them, it just seems totally obvious that the title and cover are the problem. And when you add the "memoire" confusion, they say the cover is killing the book.Does that sound right to you? This is one of those interesting cases of common sense versus experience.Here's the problem with the theory that the title and cover are prohibiting sales: As far as I know, no one with actual experience in publishing would agree with it.Publishers will tell you -- as they have told me on several occasions -- that no one can predict which books will do well, with the obvious exception of some big-name celebrity books. No one with publishing experience can accurately predict sales based on the book's title, cover, or even the content. Success comes from some unpredictable mix of the zeitgeist, timing, and pure luck.That's why a jillion books are published every year and probably 99% are not successful. If publishers had the power to turn dogs into hits by tweaking the titles and the covers, wouldn't they be doing it?Have you ever heard of books being retitled and republished with a new cover and going from ignored to huge? Me neither. Maybe it happened once, somewhere. But in general, it isn't a thing.Would you have predicted that there would be a hugely successful series of how-to books that call their buyers dummies and idiots? And how the hell did Who Moved My Cheese sell more than three copies worldwide? None of this stuff is predictable.Or is it?I try to stay open-minded about this sort of thing. And I wondered if there was an easy way to do A-B testing without actually retooling the hard cover. (That would be a huge hassle for a variety of boring reasons.) I could do Google Adwords testing to see which titles drive more traffic to Amazon and Barnes & Noble. But people would still see the real title when they arrived.I could look into issuing a new Kindle version with a friendlier title. That's probably a bigger hassle than you think, even though one imagines it shouldn't be. And for best seller tracking, it would look like two books each selling half as much as a single book might have.So I have two questions.1. Do you believe publishers are wrong about the importance of the title/cover2. Is there a practical way to do A-B testing for books already published? If it turns out that some sort of rebranding of books does increase sales, you could start a company that does nothing but buy poorly-selling but well-written books from publishers who have given up on them. Then apply A-B testing to create a title and cover that will perform better. It's like free money.The absence of such a company, or such a practice within an existing publishing house, makes me think this approach is unlikely to work. But it doesn't seem impossible that it could work either. Votes: +15
Drowlord
I just looked it up in IMDB -- it has an "also known as" section for movies.
Looks like "Captain Phillips (2013)" was also shopped around as "A Captain's Duty" and "A Captain's Story" before they settled on "Captain Phillips". This is the new Tom Hanks story about Somali pirates.
Actually, there are a ton of amusing articles that cover this topic with examples of pre-and-post changes. "3000" was the original name of "Pretty Woman", "Shoeless Joe" was changed to "Field of Dreams". "Do Androids dream of Electric Sheep?" was changed to "Blade Runner."
OrganicDevices
BTW just republish it as an e-book with two different titles and covers if you really want to do the experiment and have no profit motive. See which one does best.
whtllnew
@danbert8
[I think it's a case of armchair quarterbacking. Everyone has their own opinions and explanations after something happens, but they aren't the ones making the calls in the planning phase. Everyone can see failure and come up with reasons it could have been prevented, but it takes real talent to stop failure before it occurs. ]
Something in that, but Scott DID ask us why its failing, so all this is exactly what he wants. And this may not be the reason its failing, but I thought it was a bad cover the moment I saw it.
And while were still on the subject, Scott, this is something else which may or may not have a bearing on why it isnt doing as well as you like:
I read your book. Im not surprised by the five star reviews; it is indeed, as you suggest, enjoyable and seemingly useful. But I myself would only give it three, maybe four stars. Why? Because when I went back to reread parts of it I didnt have the same reaction to it. It wasnt nearly as enjoyable or readable. You might say thats because I wasnt reading it the whole way through like the first time and the word magic you used doesnt work when you do that. I beleive that. But it doesnt alter the fact that I have lots of books that I can pick up, read some part of it that I like, put it down satisfied and I cant do that with your book. So how do I give it five stars?
What does this have to do with how successful your book is, you ask? Maybe Im not the only one who had that reaction. Maybe other folks who had that reaction figured a book they couldnt enjoy rereading wasnt worth a buy.
This phenomenon has been thoroughly investigated in a controlled setting, see for example Science Vol 311, p.854 (2006) by Matthew J. Salganik, et al. "Unpredictability in an Artificial Cultural Market." Yes, there seems to be no rationality about it. If you are interested, I have a pdf that I can email.
Basically, success is impossible to predict and relies on a phenomenon Salganik et al. call "cumulative advantage" where a song, novel etc. becomes more successful simply because it is successful. Publishers themselves, in moments of honesty, admit they cannot predict this. Apparently, for example, the first Harry Potter book was rejected by 8 publishers before finding a home. The rejection of The Beatles as "just another guitar band" is another famous example.
In an example one of the co-authors of the above article, Duncan Watts, quotes the publisher of Lynne Truss’s surprise best seller, “Eats, Shoots & Leaves,†who, when asked to explain its success, replied that “it sold well because lots of people bought it.†That said, your "systems" idea should, if valid, still be able to position the book for the best chance of success. Otherwise you can boil it down to the #1 piece of advice I always give aspiring young scientists who ask me for it: "Be in the right place at the right time."
I'm pretty sure they rebrand movies in the manner you're suggesting. And I'm pretty sure they do the kind of A/B testing that you're talking about, too. Maybe it's a question of how much is invested in a piece of media that determines whether rebranding makes sense.
i.e. if we can turn this pig's ear into a purse, can we recoup millions of invested dollars? In your case, time might be valuable like that. However, I would guess that book publishers usually see intellectual property is a more hit-and-miss thing, and aren't as inclined to push the same book twice. Pushing the book is their main cost, and the sucker author's time is meaningless to then.
danbert8
I think it's a case of armchair quarterbacking. Everyone has their own opinions and explanations after something happens, but they aren't the ones making the calls in the planning phase. Everyone can see failure and come up with reasons it could have been prevented, but it takes real talent to stop failure before it occurs.
1) If the book were wrapped in brown paper and only the title (handwritten) was on it, along with many other books on the shelf done the same way, how many people would pick it up? You could do the same thing with just "A Scott Adams Memoir" with no title or cover art and see if it's your name that carries weight. Finally you could do just the cover art, no title or name, and see how people react. Personally, I think your name on the book carries more weight than the title or the artwork.
2) If you want to do an A-B test for your book, instead of referring to it as "How To Fail..." start referring to it as "... And Still Win Big" or "... Kind of Story of My Life". See how many more click-throughs you get with those hyperlinks.
If you really want to increase the unit sales (and help others) just lower the price of the Kindle edition to 2.99 and then pay Amazon to feature the book on the Kindles that have the "Special offers" enabled that display your book cover (or improved book cover) when powered off. There have been numerous times when an advertisement on my wife's Kindle caught my eye. That should drive the book higher on the best seller lists which should also help sell the paper editions of the book.
Kingdinosaur
Scott, for question one think about the first 5 minutes when you meet someone. That's the time your brain is hardwiring our opinion of them. The same holds true for books.
I read a book about story structure (Story Engineering) and it said that you've basically got a few pages to get people interested in the book. A book title and cover is something that needs to appeal to people when compared to the ones next to it on a shelf or a website.
Let's say you go to a bookstore or browse for one on the net. You aren't looking for a specific title or author. Most people don't methodically make comparisons by reviewing each one thoroughly. They make A-B comparisons until something interests them. Then they investigate further. So I think you need to ask: which groups do you want reading this book and what are the things that get their attention?
BTW, companies don't always know what's in their best interest:
http://www.youtube.com/watch?v=Mhfd_DOJIMI
I bought and enjoyed your book, but I was already familiar with the "Scott Adams Brand" through Dilbert and the blog. I think any book with a title that begins "How To Fail..." from an unknown source, would not be immediately appealing. What to do about it now? ...I have no idea
Inmare
1. No. I am a layman in a room full of experts who know how to track down elephants in the room.
2. Covers and titles are frequently changed for different markets and especially for translations. This is not a "true" A/B testing of course, but there may be markets that are similar enough in relevant regards (level of income, distribution of industrial sectors, level of education). Plus this stretches your condition of "already on the market" a little bit.
Stovetop159
I don't see what the difficult part is about A-B testing as long as you don't do it in the field...
Pay to have a run the test with a test audience. Movie studios do it all the time. I'm sure that there are marketing companies that will gather a pool of volunteers and figure out a way to show them different cover possibilities and poll them on which they are more likely to buy or give as a gift.
Then when you find the best cover title combination, you rebrand the book. That happens all the time.
Melvin1
I mostly agree with your hypothesis. And I never minded the title; sometimes being a little different makes people notice. But I do find the color hideous. And the graphic looks cartoon-y without the Dilbert-style appeal. It doesn't look serious -- even in a fun way. Worst of all, it's a giant foot stepping on a little guy. That screams "how 'the man' is !$%*!$%* you." That is not something I'd care to look into further.
smays
I'm a daily reader of your blog so I think of myself as something of an expert on what you are thinking and feeling. Not as much as your wife, but probably more so than an aunt or uncle. And you seem more concerned about the sales of the new book than I would have expected. You don't need the money. And you've learned to manage your expectations about your fellow man. You've led us to the water; do you really care if we drink it? (Lots of us did.) I'd be suspicious if lots of people bought and loved your book. It would make me question my own judgement in some fundamental way.
My favorite SA book is God's Debris. I don't know what your expectation for that book was, but I can't imagine you thought there would be long lines. My own take on GD is it will be the Dead Sea Scrolls of some future era. What I'm saying here is maybe it's about timing. Perhaps there's nothing you can do (or could have done) besides wait. Let me know if that strategy flies with your publisher.
webar
You seem to be struggling with the goal of increasing sales of your book. A wise person told me these important things recently: • Goals are for losers. Systems are for winners.
• "Passion" is bull. What you need is personal energy.
• A combination of mediocre skills can make you surprisingly valuable.
• You can manage your odds in a way that makes you look lucky to others.
Maybe you should read this book? http://www.amazon.com/How-Fail-Almost-Everything-Still/dp/1591846919/
deldran
[1. Do you believe publishers are wrong about the importance of the title/cover]
Yes. The phrase "don't judge a book by its cover" exists because our default reaction is to do just that. I suspect the reason rebranding isn't done for books is due to cost. Plus a publisher/author/agent would be admitting that they made a mistake by not suggesting something better the first time. It could end up with a lot of finger pointing and sour a relationship.
Who does come up with the title? It seems that the publisher would leave final say in the author's hands so as to avoid blame if the sales aren't as good as expected. But that means their "expert" opinions could get overridden.
[2. Is there a practical way to do A-B testing for books already published?]
I don't think so, until you are ready to stop making money off it. At that point, you offer it for free under several titles and ask people that have read the book to choose one of those titles when recommending to their friends that they download it. Other opinions based on not having read the book:
1) The title makes it a difficult gift to give. It sounds like an accusation that the receiver is a serial failure. I'd be hesitant to give or recommend a book with that title without having the time to caveat the gift and the relationship with the receiver to know they'll take it well. It's kinda like the opposite of getting someone "Oh, the Places You'll Go!" upon graduation (which has a much more positive title).
2) I only buy books I expect to reread (usually sci-fi series, more books in the arc is better, and longer books are better). This is the type of book I'd get from the library.
3) I don't expect the book to make me more successful. Success still takes work and most people don't like to work hard enough to achieve big success. I'm comfortable where I am now given the amount of work I put in.
4) I suspect that your book may also suffer from "what kind of book is this?" confusion. You being who you are will cause a lot of people to expect it to be a humor book. The "kind of the story of my life" part of the title makes it look like an autobiography. You describe the content as a self-help style book (I haven't read it yet; I'm still waiting for my copy from the library, which is where I tend to get books I expect to be able to read in a few hours). Which genre is it being compared against?
5) (More related to the success pie blog) I think the backlash against the 1% is more related to inflated C-level salaries. Your pay should be commensurate with effort and talent. I don't think CEOs are working 273 times harder than the average worker. And I don't think they are that much more talented than those below them who are making much less. That makes their work overvalued, and since CEOs sit on the boards that set CEO pay, it makes them look like d-bags and easy to hate. The exception is for those who built their companies from the ground up, like Elon Musk.
AtlantaDude
It is interesting that you mention the "For Dummies" series. I can honestly say that I resist buying them simply because of the title, and have given some (unserious) thought to starting a competing series of books "For the As Yet Uninformed"
anichini
1. As someone who pointed to the title as a reason I wouldn't give it as a gift to certain people, I certainly think the title/cover is important. HOWEVER, that doesn't mean I believe publishers could predict which combination will work any better than you or I could. Perhaps your title/cover, as is, might still sell more copies than something more amenable to my gift-buying sensibility.
2. I agree; I cannot think of an obvious way to split-test with books.
benoitpablo
Scott, it's probably too late to fix any problems, for reasons stated already.
Nothing about Dilbert suggests "self-help success", and your name only indicates that to people who already know who you are and (like me) would have bought it anyways.
I like the cover, but it looks like a funny memoir as opposed to a personal growth/success book.
That said, you have more money than you'll ever use so I guess we can just write this off then.
group29
The exception that proves the rule?
Dr. Agus was one of the cancer doctors to Steve Jobs, who retitled Agus’ book, The End of Illness.
http://www.readyformedia.com/the-end-of-illness/
I can't find a better source for this story, heard on the radio the other day. | 计算机 |
2014-23/1194/en_head.json.gz/39753 | Chris J. Lee
Dallas Drupal Developer Chris Lee is a graduate of Michigan State University. In 2007, he earned a degree in Telecommunication, Information Systems, and New Media. Originally from Michigan, Chris Lee grew up in West Bloomfield, Michigan, and while living most of his life there, he has been steadily building a career in the interactive industry.
He was fortunate to have an early start. A precocious web developer and originally a web designer, he began building websites at the early age of 13. In seventh grade, he designed and developed his first website for his computer class: a hand-coded virtual tour built with Windows 95's Notepad and Adobe Photoshop 5.
Today, Chris is an active member of the North Texas Drupal Community. He has attended and spoken at various web conferences across Texas, including talks at Dallas Drupal Days 2011 & 2012, a co-led BoF at Drupalcon Portland, and presentations at Refresh Detroit events and Arbcamp 2007. His presentation topics range from front-end performance to Sass to JavaScript libraries.
His first fifteen seconds of fame included a mention in Smashing Magazine, an e-zine dedicated to spotlighting design and development trends, regarding top portfolio sites.
In his spare time, Chris likes to participate in weight lifting, photography, table tennis, cooking, yelping, watching sports and traveling with his girlfriend.
Chris currently resides in Plano, Texas.
© 2014 Chris J. Lee | 计算机 |
2014-23/1194/en_head.json.gz/41299 | Saga: Rage of the Vikings (c) Cryo Interactive
Pentium 166, 32Mb Ram, 4x CD-ROM 51%
Wednesday, May 19th, 1999 at 04:38 PM
By: Lobo
Saga: Rage of the Vikings review
Although it has not been a greatly anticipated game (in fact, not at all), Cryo Interactive has put in "their best efforts" to produce Saga: Rage of the Vikings. This game has been thrown into the masses of past and present real time strategy games to fend for itself and fight its way to the top. Does it face up to games like Brood Wars or will it perish like so many others?
I guess all I can say about the storyline is that it is quite humorous. The funny part about it is that there is none. I searched long and hard to try and find it yet all I was left with was the fact that it involved vikings and a briefing of your mission objectives at the beginning of each level. However, I didn’t let that put me down and I strode forward irrespective of the lack of storyline.
With great excitement I slammed my Enter key and eagerly waited for the game to load. As soon as the menu popped up, my smile fell to a frown. Staring at me was a muddy brown menu with little variety, and its dullness reflected badly on the game itself. The one thing I found enjoyable was the music playing in the background; it had a great tempo and made you eager to get stuck into the game. I moved on quickly and clicked on New Game. Once again I was horrified to see a list of 25 map files, 8 of which were tutorial maps with names like BASIC_01.sna and ICE.sna. I also noticed a multiplayer directory with a handful of multiplayer maps. Once I was in the game, considering I have a Celeron 300 (overclocked to 450, of course), a Matrox Mystique, and 64 megs of RAM, the graphics were not CPU intensive at all. As a result, the game ran quite smoothly. The graphics were pretty dull and "cheap" though. Obviously there is no 3dfx support in this game. The graphics department definitely does not fare well against other games in this genre.
Saga is similar to Settlers 3, where your objective is to build up a settlement, harvest resources, trade, gain wealth and destroy anyone who opposes you. I didn't like the idea of having ALL your units, including warriors and other fighting units, joining the peasants in harvesting and the construction of buildings. The thought of your best fighting unit, which you paid good money to train, on his knees in the dirt harvesting resources just didn't seem right to me. Another thing I noticed was the blistering pace at which your units move and conduct their duties. My first thought was that the game speed was too high, so I toned it right down and returned to the action, but now my units were moving far too slowly and small tasks like picking berries took ages. So I had to make do with my men sprinting across the screen like flashes of light. Soon I had reached the point in the game where you face your enemy. Rushing towards my base came a vicious warthog. Why he wanted to flatten all my men and my settlement by himself I don't know, but my men swiftly moved in, literally made mince-meat of it and sent it to the warehouse. A bit disappointing for a first battle. Besides that, there were two things that baffled and irritated me to no end. The first was the attempt at making the game seem more realistic by having clouds pass by every 2 or 3 seconds. This was a miserable failure. Instead you had large blotches of white zooming by like fighter jets and blocking your view. The second thing that I found incredibly baffling was the fact that you could control your cattle! In one of the tutorial missions, you began with one cow. To my amazement I could click on my cow and send him off exploring or even get him to harvest some resources. Funnily enough, the objective of the mission was to increase the cattle from 1 to 2. My question is: how is it possible to reproduce with only 1 cow? But that is better left for Cryo Interactive to answer. One good thing I found about the game was that there was a lot of attention to detail on the buildings and the units. It's not often you see a software title with detail like this. Also, the movement and smoothness of the units was really good and deserves at least a "thumbs up".
When considering the sound in the game, there is a mix of good and bad. Some of the bad things were the unrealistic, high-pitched sound effects. Chopping at trees sounded like techno music on a PC speaker and became very irritating. The game does not support 3D sound or any other special sound engines. I also found that there was a lack of sound effects, and I spent most of my time in silence. The best part of the sound is the unit speech, which is very crisp and clear and was certainly given a lot of attention by Cryo Interactive.
The controls for Saga are your basic real time strategy keys, with the mouse doing most of the work moving units, commanding units, conducting trade etc. It also has a range of shortcut keys for those who want to get things done quickly. Obviously there is no support for gamepads otherwise it would make the game feel a bit awkward.
Saga most certainly supports multiplayer by offering IPX, TCP/IP, modem or serial cable connections. You can also choose to take on the computer in a scenario map. This proves to be quite fun, and the battles are grand and very intense. The number of players that can play depends on the map chosen, but if you want to, you could load up the map editor provided and make your own battleground for all your buddies to enjoy. The multiplayer section of the game seems to be the strong point. It's fast, furious and entertaining.
Saga: Rage of the Vikings unfortunately was a disappointment. I expected a lot more. I felt that the multiplayer section didn't let me down and was quite superb, but that still can't be the foundation for the entire game. This certainly isn't a groundbreaking game, but if your PC isn't fast enough to run the latest fast-paced games, then this might be a choice for you.
Great Multiplayer section
Unit speech
Units move too quickly
Not polished at all
Written By: Lobo | 计算机 |
2014-23/1194/en_head.json.gz/41987 | Keep Talking and Nobody Explodes: The deadly combination of isolation and cooperation
Started by Ironman273
Keep Talking and Nobody Explodes: The deadly combination of isolation and cooperation
By Ben Kuchera on Jan 30, 2014 at 8:01a
You and the bomb are alone together. The good news is that you're the only one who will die if you fail to defuse the device.
The better news is that your friends are speaking to you through a radio, and they have access to the bomb’s documentation. The bad news is that it’s still your life hanging in the balance.
Keep Talking and Nobody Explodes is a game created by three developers and a musician for the Global Game Jam, and the player defusing the bomb wears an Oculus Rift headset to isolate themselves from the rest of the team. The rest of the players look at a hard copy of the bomb’s manual, which is laid out almost like a logic problem.
The player next to the bomb has to describe what they see, as they are the only person who can "touch" or examine the explosive device. The team has to use those descriptions to decipher the instructions and tell the player next to the bomb what to do. It’s a game about communication, pressure, and the ability to work together.
It also requires a whole lot of specialized hardware.
YOU'LL BE THE ONLY ONE WHO DIES
"The key to the whole experience is that one person feels isolated and in their own place, and the other players have no concept about what they could be seeing," Ben Kane told Polygon.
Kane was one of the developers who created the project and, although they didn’t start out thinking about the game as a commercial product, interest in a for-pay release has exploded. The team is discussing a possible next step towards turning their prototype into a full game. But it won't be easy.
"The bomb is actually fairly simple in the current iteration. It’s made of independent components that get randomly generated, so the bomb is different every time," Kane explained. "Within those components there is some variation on those properties."
Designing a randomly generated bomb is possible, but the game also requires the use of a hard copy manual that the other players use to decipher the steps needed to defuse each version of the bomb. The challenge is to make both the bomb’s design and the paper manual dynamic. This would be easy if players used a device such as a phone or tablet to view the manual, but that loses some of the joy of the game.
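As a minimal sketch of the kind of design described here (purely illustrative, not the game's actual code or manual rules), the bomb can be a list of randomly generated, independent modules, and the "manual" a lookup table the other players read from:

import random

WIRE_COLORS = ["red", "blue", "yellow", "black"]

# One hypothetical manual rule set for a wire module (invented for illustration).
MANUAL_RULE = {
    "red":    "cut the second wire",
    "blue":   "cut the last wire",
    "yellow": "press the button twice",
    "black":  "do nothing for ten seconds",
}

def generate_bomb(num_modules=3, seed=None):
    """Randomly generate independent modules so every bomb is different."""
    rng = random.Random(seed)
    return [{"type": "wires",
             "wires": [rng.choice(WIRE_COLORS) for _ in range(rng.randint(3, 5))]}
            for _ in range(num_modules)]

def manual_lookup(first_wire_color):
    """What the players holding the manual would read back to the defuser."""
    return MANUAL_RULE[first_wire_color]

for module in generate_bomb(seed=42):
    first = module["wires"][0]
    print(f"Defuser sees wires {module['wires']} -> manual says: {manual_lookup(first)}")

Making the manual itself dynamic would amount to regenerating a table like MANUAL_RULE alongside each bomb and printing it for the players who hold the paper copy.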
"I personally love the hard copy concept, just the idea of having people huddled around a table, spreading out papers everywhere, adding to the chaos of it," Kane said. "If you had people standing around looking at their iPhones it wouldn’t be quite so interesting."
This is where the tension comes in. The player inside the virtual reality simulation has to examine the bomb and explain what he sees, while the rest of the team pores over the documentation for clues about how to successfully defuse the explosive before it kills the person in the room.
The game currently requires the use of the Razer Hydra, a now-discontinued motion controller for the PC, but that will have to be changed for the final release.
"The ideal way to play it is with some sort of virtual reality headset and some sort of one-to-one motion controller, like the STEM system or the Hydra," Kane said.
"Having said that, if this ever does goes commercial, we’d pretty much be forced to support as many things as we could, keyboard and mouse support would have to be implemented in some way, or you’d never be able to reach a big enough audience."
There are interesting ways to do that without requiring virtual reality, however. You could even play the game over Skype: One person sees the bomb on a standard monitor, while the players in a remote location use only voice chat to explain the process needed to defuse the explosive.
A NEXT-GENERATION GAME
It may be optimal to allow players to use a mouse and keyboard along with standard screens from a business perspective, but the real joy in the game comes from the immersion of being alone in the room with the bomb, and that's a feeling that requires the virtual reality headset and the motion controls. This is the sort of game that causes enthusiasts to get excited about the possibility of retail virtual reality hardware in 2014.
The game requires a large amount of specialized hardware, but it gives players in the same room the illusion of being isolated, and the feeling that at least one life is in danger if they mess up. The player wearing the headset also has to deal with a high degree of concentration, immersion, and of the fear of what happens if anyone messes up.
In fact, Kane's original idea was to map the player's view to a physical object in the game that would be thrown around the room if the bomb went off. The result would be a first-person view of having their head blown off.
"People weren't on board with that," he explained. "It was a bit too grotesque."
Source: Polygon
Cool idea. It's this kind of experimentation that can lead to some amazing experiences beyond traditional gaming.
That has to be the worst game name ever. I don't want people to keep talking OR not explode.
madd-hatter
Crazy Hat Salesman
Location: World Citizen
That has to be the worst game name ever. | 计算机 |
2014-23/1194/en_head.json.gz/42231 | Computational Sciences & Mathematics Division Staff Awards & Honors
Bora Akyol New IEEE Senior Member
In August 2013, Dr. Bora Akyol was named an Institute of Electrical and Electronics Engineers Senior Member, the highest grade for IEEE members. To achieve the upgrade, Bora demonstrated significant performance in his scientific field, earning recognition from his peers for technical and professional excellence.
Bora Akyol
Before joining PNNL in 2009, Bora was a technical leader at Cisco Systems, where his work involved service blades for the Catalyst 6500 Series switches, 1250 and 1140 Series 802.11n access points, and Internet Key Exchange and Internet Protocol Security protocols, as well as next-generation, identity-based networking products. He has published two Internet Engineering Task Force (IETF) Requests for Comment and holds 15 patents in the areas of wireless and Ethernet networks, network security, congestion control, and software engineering. He is a longtime active member of both IETF and IEEE. At PNNL, Bora is a senior research scientist in CSMD's Data Intensive Scientific Computing group, conducting research and development in network security, information sharing protocols, and Smart Grid. He also serves as cyber security lead for the Pacific Northwest Smart Grid Demonstration project. He earned both his MS and Ph.D. in electrical engineering from Stanford University. Page 29 of 154 | 计算机 |
2014-23/1194/en_head.json.gz/42671 | 'Temple Run' could become the latest mobile game to be turned into a movie
Paranormal romance 'Ghost' may be floating to TV screens
Computer History Museum releases Apple II's DOS source code from 1978
The Computer History Museum has posted scans of a particularly historic document: the original DOS source code for the 1978 Apple II. It was the first computer with a built-in floppy drive ready to use out of the box, requiring intricate software to match Steve Wozniak's elegant and simple hardware. The new documents include both the code itself and the original documents, showing the crinkled punchcards and sprocket-feed paper that defined the early Apple days.
Written in just seven weeks, the code was a rush job for Apple contractor Paul Laughton, who was paid $13,000 for his work. The result is something between a file management system and a modern-day operating system, allowing applications a simple way to access disk files through BASIC commands. The source code is officially still under copyright to Apple, circa 1978, but in the case of the Computer History Museum, it seems Cupertino was willing to let it slide.
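As a conceptual sketch only (this is not Laughton's code, and it is in Python rather than 6502 assembly; the disk geometry and names here are assumptions), the layer described above can be pictured as a small file manager that maps named files onto raw sectors so a program never has to address tracks and sectors itself:

SECTOR_SIZE = 256                      # assumed bytes per sector

class ToyDisk:
    def __init__(self, sectors=560):   # assuming 35 tracks x 16 sectors
        self.sectors = [bytes(SECTOR_SIZE)] * sectors
        self.free = list(range(sectors))
        self.catalog = {}              # file name -> list of sector numbers

class ToyFileManager:
    """A toy stand-in for a file-management layer between programs and raw sectors."""
    def __init__(self, disk):
        self.disk = disk

    def save(self, name, data):
        needed = max(1, -(-len(data) // SECTOR_SIZE))          # ceiling division
        chain = [self.disk.free.pop(0) for _ in range(needed)]
        for i, sector in enumerate(chain):
            chunk = data[i * SECTOR_SIZE:(i + 1) * SECTOR_SIZE]
            self.disk.sectors[sector] = chunk.ljust(SECTOR_SIZE, b"\x00")
        self.disk.catalog[name] = chain

    def load(self, name):
        chain = self.disk.catalog[name]
        return b"".join(self.disk.sectors[s] for s in chain).rstrip(b"\x00")

    def listing(self):
        return sorted(self.disk.catalog)

fm = ToyFileManager(ToyDisk())
fm.save("HELLO", b"HELLO FROM 1978")
print(fm.listing(), fm.load("HELLO"))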
9to5 Mac
Source Computer History Museum
Related Items dos apple ii computer history museum floppy disks Apple | 计算机 |
2014-23/1194/en_head.json.gz/42793 | About User Experience Magazine
UXPA
Select a pageSelect a page... id="menu-item-15">Home
id="menu-item-1768">Past Issues
id="menu-item-2447">Book Reviews
id="menu-item-1081">About User Experience Magazine
id="menu-item-5885">UXPA
Search: User Experience
The Magazine of the User Experience Professionals Association UXPA Facebook page
UXPA Twitter
UXPA LinkedIn profile
Search: World Usability Day 2006: Forty Thousand People Hear How to Make Life Easier
Elizabeth Rosenzweig, Caryn Saitz
The second World Usability Day built on the success of its inaugural year. On 14 November, more than forty thousand people worldwide celebrated World Usability Day 2006. With over 225 events in 175 cities in forty countries, thousands of visitors joined the over ten thousand volunteers who helped organize and execute these events.
The year 2006 brought an increase of more than 90 percent in the number of events and an increase in the number of attendees at each event, making the growth exponential, significant, and very exciting. Additionally, the concept of usability was promoted for the first time in Paraguay, Iceland, Egypt, Kuwait, and Japan.
The focus this year was on “Making Life Easy.” Of the more than forty thousand people participating in World Usability Day 2006, the largest percentage were professionals involved in usability, design, engineering, technology, research, product development, government, and marketing. However, there was strong outreach to general consumers and youth as well. Events, which addressed issues in healthcare, education, communications, government, and more, were hosted at museums, libraries, art galleries, shopping malls, cafes, and offices.
Museums The U.S. cities of St. Louis and Boston focused on education and hosted their events in their science museums. These events featured activities for adults and youth focused on how user-centered design and engineering affects their everyday lives. Both museums sponsored an alarm clock rally: usability volunteers interviewed visitors about how long they thought it would take to set the time on six different alarm clocks. The visitors then were timed while they set the alarms and interviewed about the outcomes. The volunteers encouraged the visitors to figure out for themselves what factors affected their speed and accuracy. Visitors rated the rally as both highly enjoyable and educational, which surprised the museum staff. The staff said, “We need more of this—permanently.”
(For more information about the Boston events, see the UPA Voice article.)
Shopping Malls Brazil featured twelve events around the country. University students in Curitiba spent time in the largest shopping mall speaking to shoppers about remote control usage. This event made front page headlines and was available via multiple radio and television interviews.
Communities In Johannesburg, South Africa, the World Usability Day organizing committee and the Computer Society of South Africa presented a seminar with the theme of “Making Life Easier Using IT.” They awarded two organizations with World Usability Day “Making Life Easy” awards. These awards recognized contributions to the everyday lives of South Africans.
One award went to the African Drive Project in Pretoria. The project develops models for blended learning—live and online teaching—in developing regions. The intent of the project is to provide secondary school teachers with better learning opportunities in physical science, mathematics, technology, business studies, English communication skills, and computer literacy. (See www.adp.org.za for more information.)
The other award went to the eInnovation Academy at the Cape Peninsula University of Technology. The objective of one of the Academy’s projects is to determine if cultural factors affect interface choices by learners, and then to test two different interfaces with a group of culturally diverse, advanced learners.
Both of these groups have worked to achieve usability in their application areas, to employ information technology to assist people from the local communities in their quest for information, and to take new initiatives to rural and urban areas.
Virtual WUD If you didn’t live or work near one of the 175 cities worldwide that hosted events, you could still participate in World Usability Day 2006. In an effort to support “Making Life Easy,” this year’s event featured multiple opportunities for people to support World Usability Day from anywhere in the world at any time. The opportunities included signing the World Usability Day Charter, taking a red balloon for a walk, taking photos or videos, attending a webcast or event, and participating in the world’s largest card sort.
Signing the Charter The World Usability Day Charter was officially launched as part of World Usability Day 2006. The charter addresses key worldwide sectors affected by usability: Healthcare, Education, Government, Communications, Privacy and Security, and Entertainment. The charter received more than a thousand signatures and is gaining signatures daily. To sign the charter, visit www.worldusabilityday.org/charter.
Red Balloons Take a Walk “Walking your Red Balloon” was an international effort to identify products—including maps, road signs, forms, and other items—that are either usable or not. The initiative began in London and Auckland, New Zealand and inspired people to walk around their communities with a red balloon and photograph or video the products they’ve identified. Hundreds of photos from this initiative and from local events have been placed on Flick’r photo galleries. Access them at www.flickr.com/groups/makinglifeeasy and www.flickr.com/groups/worldusabilitygallery2006. To see videos, go to YouTube at www.youtube.com/groups/worldusabilityday2006.
Webcasts With this year’s focus on accessibility, the number of webcasts more than doubled from 2005. With over forty webcasts available, people were able to join in from their office or home. Throughout the day, the webcasts enjoyed strong support and interest from participants.
Media coverage for World Usability Day 2006 was extensive. Efforts were spearheaded by PR sponsor, Weber Shandwick Worldwide. This work involved local media support from their offices in sixteen cities including London, Sydney, Bangalore, Toronto, Boston, New York, St. Louis, and more.
Planning for World Usability Day 2006 Efforts for World Usability Day 2006 officially began in March 2006, with the development of a global core committee. The website, developed by Different Solutions in Sydney, Australia, was the central point of access for participants and was launched in June 2006. The website was built using Ruby on Rails open-source web technology and followed accessibility guidelines.
In its initial phase, it featured registration modules for volunteers and event organizers and offered stories from World Usability Day 2005. In August 2006, the website launched phase two, the events module, and enabled event organizers to register their events and to post them live. The website featured the ability to view events on a map, by country, by hour, and by webcast.
The site included toolkits and resources for event planning, media coverage, and marketing opportunities. Marketing efforts included a direct mail and online campaign with posters and postcards in several languages. These can still be downloaded from the website at www.worldusabilityday.org/tools/world-usability-day-posters. Tee shirts provided by our sponsor, Techsmith, were mailed worldwide and additional tees are available for purchase at www.cafepress.com/worldusability.
Worldwide sponsors for World Usability Day 2006 included Apogee, BusinessWire, Different Solutions, Human Factors International, Intuit, Noldus, Oracle, SAP, Techsmith, Usability.ch, and Weber Shandwick Worldwide. Their generous support was critical to the success of World Usability Day and we appreciate it very much. Additionally, there were hundreds of companies that supported local events and enabled their success as well.
World Usability Day is meant to raise awareness of everyone’s right to have things that work well. The intent was to bring together, for one day, everyone interested and affected by usability. In our case, the whole is truly greater than the sum of the parts. We value the collaboration of our supporting organizations—Human Factors and Ergonomics Society (HFES), SIGCHI, STC, and User Experience Network (UXnet)—and look forward to enhancing these relationships in the future.
The impact of World Usability Day is created by the tens of thousands of people who participate and volunteer their time, skills, offices, equipment, and more. It is their commitment and energy that makes this a unique event. With all of our creativity, energy, and collaboration, World Usability Day can make a strong impact on our society. We look forward to getting together with everyone again on 8 November, World Usability Day 2007!
Topics: World Usability Day Published in: March, 2007 in Usability Around the World Rosenzweig, E., Saitz, C. (2007). World Usability Day 2006: Forty Thousand People Hear How to Make Life Easier. User Experience Magazine, 6(1).
Retrieved from http://www.uxpamagazine.org/forty_thousand_people_/
About the authorsElizabeth RosenzweigElizabeth Rosenzweig is the founder of World Usability Day, a day that seeks to recognize and promote usability as a means to making products and services easier to use. Starting in 2004, WUD has taken place every November, on the second Thursday of the month.Caryn SaitzCaryn Saitz is a creative business development and marketing executive, entrepreneur, and consultant with over twenty years of experience working in high technology, life sciences, financial services, publishing, and retail developing new business, strategic alliances, and marketing programs. Caryn is executive director of marketing and sponsorship for World Usability Day.Share this article Article topicsWorld Usability DayPublished March, 2007 in Usability Around the WorldLanguagesEnglish中文 한국어Português日本語Español User Experience Magazine Past Issues
© Copyright 2014 UXPA | All rights reserved | uxmagazine@usabilityprofessionals.org | 计算机 |
2014-23/1194/en_head.json.gz/43181 | & News
EuroDNS Blog > ICANN > ICANN show and tell, June 13, 2012 EuroDNS Blog
ICANN show and tell, June 13, 2012
by Luc // 13 . 06 . 2012 // ICANN
June 13, 2012, five months after the opening of the TLD Application System (TAS), the Internet Corporation for Assigned Names and Numbers (ICANN) is revealing the list of applied-for new gTLDs, along with the names of the applicants. The day is rather cleverly named, Reveal Day. On January 12, 2012 ICANN began accepting applications for new generic TLDs, the process being completed on May 30, after suffering from delays due to technical problems. With over 1900 applications, there are several big name companies with multiple applications, one of the most notable being Google. Aiming to promote their brand name and business, the company has, allegedly, made 50 applications that include .GOOGLE, .DOC, and .LOL. Reveal Day, June 13, ICANN will publicly post all new TLD character strings and the applicants behind.
Application Comment period begins June 13 and lasts for 60 days. Applicants and members of the public are invited to submit comments on any of the applications. For this process, and to remain unbiased, ICANN has hired an Independent Objector to review the comments. ICANN states, “Professor Alain Pellet has agreed to serve as the Independent Objector for the New Generic Top-level domain program. The Independent Objector will act solely in the interests of the public who use the global Internet. He is a highly regarded professor and practitioner of law and has represented governments as Counsel and Advocate in the International Court of Justice in many significant and well-known cases.” Objection period kicks off on June 13 and runs for seven months, during which objections may be filed by anyone who feels they have justification, based on legal grounds, public interest, or community standing.
Batching process is running from June 8 to June 28, 2012. ICANN’s decision “to open the domain name space to competition and choice” invited an unlimited number of applications; realizing that evaluating such a quantity could be considerable, a decision was made to implement a batching process. July 11, ICANN will announce the results of the batching process.
Initial Evaluation begins July 12, and concentrates on String and Applicant reviews. String reviews judged whether the applied for TLD is too similar to another, meets technical requirements, and whether it is a geographical name. Applicant reviews investigate whether the appropriate technical, operational and financial criteria has been met by the applicant to successfully run a registry,
Additional Program Phases run through December/January 2013. Applications passing the initial valuation without objection will, eventually, go live as a TLD, but some may be delayed. Multiple applicants for a single TLD could, if deemed equally worthy, go to auction; a process that could be very expensive for the winning bidder; there is also a dispute process in place to handle any legal issues such as trademark infringement.Trademark Clearinghouse
June 1, 2012, ICANN announced that it will be working with Deloitte and IBM on implementation of the Trademark Clearinghouse (TMCH), which will be offering full support for protection of trademark rights in the new gTLD domains.
The TMCH will be a globally available database, providing services for the authentication and validation of brands. Deloitte will provide control and data validation, whilst IBM will give technical database administration services. ICANN said, “Both providers are highly qualified, with significant experience, technical capacity, and proven ability to manage and support processes.” ICANN wanted service providers with, “a demonstrated understanding of the issues concerning global intellectual property rights and the Internet, global capacity to authenticate and validate trademark information, and experience designing, building, and operating secure transaction processing systems with 24/7/365 availability.”
Applicants gaining approval to run a registry will be required to implement a rights protection model, supported by the TMCH; during a TLD’s start-up period when a domain name is registered that matches Clearinghouse records, participating rights holders will be alerted. Whilst ICANN are still finalizing fees for access to the Clearinghouse database, the preliminary cost model – June 1, 2012, estimates, “a set-up fee of $7,000 – $10,000 will be due per TLD registry.” There is a bundled, low fee for authentication and validation services for Rightsholders, expected to be under $150 per submission for a ‘straightforward submission’; additional services due to errors in the original submission, or any appeals will increase the cost.
If all goes to plan, the first of the new gTLDs should start to appear online at the beginning of 2013… if all goes to plan.
← EuroDNS to attend ICT Spring Europe 2012
ICANN reveals applicants and new domains →
Stay up to date !
Join our newsletter and stay up to date with the latest domain industry news.
Categories Customer stories
Our team is on hand to help...
International Domain Names
Your trusted domain name registrar. Accredited with worldwide leading registries.
© 2002 - 2014 EuroDNS S.A. All rights reserved.
*Except where stated, all prices exclude VAT (15%) | 计算机 |
2014-23/1194/en_head.json.gz/43431 | Sandra Kurtzig Kenandy, ASK Group
Software industry pioneer Sandra Kurtzig founded Kenandy in 2010 with the vision to once again transform the world of manufacturing management software. She did this the first time starting in 1972 when she founded ASK Computer Systems and created the groundbreaking MANMAN product family. During her 20-year tenure as founder, chairman, and CEO of the ASK Group, Kurtzig grew the company into one of the 10 largest software companies in the world.
Now, as founder and Chairman and CEO of Kenandy, she is helping to create and drive the new industry paradigm of manufacturing management in the cloud, supporting a growing, global, social community for collaborative manufacturing. Kurtzig has been widely covered in the media and has received numerous business awards. She was the first woman to take a technology company public, and was included on Business Week's list of the top 50 corporate leaders. Her best-selling autobiography, CEO: Building a $400 Million Company from the Ground Up is published by Harvard Business School Press and is available on Amazon.com. Since retiring from ASK in 1993, Kurtzig has been the managing partner of SLK Investment Partners, a private equity investment partnership. She has also been a mentor for and investor in entrepreneurial technology companies and has taught the Business for Engineering class at Stanford University. She has served on the boards of Harvard Business School, Hoover Institution, Stanford's School of Engineering, Stanford Engineering Strategic Council, UCLA's Anderson Graduate School of Management, and UC Berkeley's Haas School of Management. Kurtzig has a B.S. in mathematics from UCLA and an M.S. in aeronautical engineering from Stanford University. Related Links:
http://kenandy.com/ Last Updated: Wed, Mar 7, 2012
Author/Speaker
Two Generations of Entrepreneurship
In this special lecture, mother and son serial entrepreneurs Sandra and Andy Kurtzig share smart reasons for starting companies that matter. Sandra Kurtzig outlines similarities and differences between her previous ventures and her current company, Kenandy. Andy Kurtzig discusses his company, JustAnswer, and key lessons for entrepreneurs. Sandra Kurtzig · Andy Kurtzig Kenandy, JustAnswer
48:44 03/2012
Two Generations of Entrepreneurship [Entire Talk]
Starting Out at 23
Entrepreneur Sandra Kurtzig shares the personal story of starting ASK Computer Systems at age 23. Kurtzig explains challenges she faced during this time, including limited startup funds, taking on IBM, and gaining business experience on the fly.
Sandra Kurtzig Kenandy, JustAnswer
No Need for Venture Funding
Entrepreneur Sandra Kurtzig describes the unique way in which her first venture, ASK Computer Systems, gained early funding without taking venture capital. Kurtzig also articulates how the company was successful by being ahead of the market. | 计算机 |
2014-23/1194/en_head.json.gz/43464 | (Redirected from Status update)
Microblogging is a broadcast medium that exists in the form of blogging. A microblog differs from a traditional blog in that its content is typically smaller in both actual and aggregated file size. Microblogs "allow users to exchange small elements of content such as short sentences, individual images, or video links".[1] These small messages are sometimes called microposts.[2][3]
As with traditional blogging, microbloggers post about topics ranging from the simple, such as "what I'm doing right now," to the thematic, such as "sports cars." Commercial microblogs also exist to promote websites, services and products, and to promote collaboration within an organization.
Some microblogging services offer features such as privacy settings, which allow users to control who can read their microblogs, or alternative ways of publishing entries besides the web-based interface. These may include text messaging, instant messaging, E-mail, digital audio or digital video.
Services[edit]
The first microblogs were known as tumblelogs. The term was coined by why the lucky stiff in a blog post on April 12, 2005, while describing Christian Neukirchen's Anarchaia.[4]
Blogging has mutated into simpler forms (specifically, link- and mob- and aud- and vid- variant), but I don’t think I’ve seen a blog like Chris Neukirchen’s Anarchaia, which fudges together a bunch of disparate forms of citation (links, quotes, flickrings) into a very long and narrow and distracted tumblelog.
Jason Kottke described tumblelogs on October 19, 2005:[5]
A tumblelog is a quick and dirty stream of consciousness, a bit like a remaindered links style linklog but with more than just links. They remind me of an older style of blogging, back when people did sites by hand, before Movable Type made post titles all but mandatory, blog entries turned into short magazine articles, and posts belonged to a conversation distributed throughout the entire blogosphere. Robot Wisdom and Bifurcated Rivets are two older style weblogs that feel very much like these tumblelogs with minimal commentary, little cross-blog chatter, the barest whiff of a finished published work, almost pure editing...really just a way to quickly publish the "stuff" that you run across every day on the web
However, by 2006 and 2007, the term microblog was used more widely for services provided by established sites like Tumblr and Twitter. Twitter for one is especially popular in China, with over 35 million users tweeting in 2012, according to a survey by GlobalWebIndex.[6]
As of May 2007, there were 111 microblogging sites in various countries.[citation needed] Among the most notable services are Twitter, Tumblr, FriendFeed, Cif2.net, Plurk, Jaiku and identi.ca. Different versions of services and software with microblogging features have been developed. Plurk has a timeline view that integrates video and picture sharing. Flipter uses microblogging as a platform for people to post topics and gather audience's opinions. Emote.in has a concept of sharing emotions, built over microblogging, with a timeline.[citation needed] PingGadget is a location based microblogging service. Pownce, developed by Digg founder Kevin Rose among others, integrated microblogging with file sharing and event invitations.[7] Pownce was merged into SixApart in December 2008.[8]
Other leading social networking websites Facebook, MySpace, LinkedIn, Diaspora*, JudgIt, Yahoo Pulse, Google Buzz, Google+ and XING, also have their own microblogging feature, better known as "status updates". Although status updates are usually more restricted than actual microblogging in terms of writing, it seems any kind of activity involving posting, be it on a social network site or a microblogging site, can be classified as microblogging.
Services such as Lifestream, SnapChat, and Profilactic will aggregate microblogs from multiple social networks into a single list, while other services, such as Ping.fm, will send out your microblog to multiple social networks.[citation needed]
Internet users in China are facing a different situation. Foreign microblogging services like Twitter, Facebook, Plurk, and Google+ are censored in China. The users use Chinese weibo services such as Sina Weibo and Tencent Weibo. Tailored to Chinese people, these weibos are like hybrids of Twitter and Facebook. They implement basic features of Twitter and allow users to comment to others' posts, as well as post with graphical emoticons, attach an image, music and video files.[citation needed] A survey by the Data Center of China Internet from 2010 showed that Chinese microblog users most often pursued content created by friends, experts in a specific field or related to celebrities.
Usage[edit]
Several studies, most notably by the Harvard Business School and Sysomos, have tried to analyze the user behaviour on microblogging services.[9][10] Several of these studies show that for services such as Twitter, there is a small group of active users contributing to most of the activity.[11] Sysomos' Inside Twitter [10] survey, based on more than 11 million users, shows that 10% of Twitter users account for 86% of all activity.
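A toy illustration of the kind of concentration statistic these surveys report (the per-user post counts below are fabricated):

# What share of all posts comes from the most active 10% of users?
post_counts = sorted([1, 1, 2, 2, 3, 3, 4, 5, 8, 120], reverse=True)
top_10pct = post_counts[: max(1, len(post_counts) // 10)]
share = sum(top_10pct) / sum(post_counts)
print(f"Top 10% of users produce {share:.0%} of all posts")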
Twitter, Facebook, and other microblogging services are also becoming a platform for marketing and public relations,[12] with a sharp growth in the number of social media marketers. The Sysomos study shows that this specific group of marketers on Twitter is much more active than the general user population, with 15% of marketers following over 2,000 people and only .29% of the Twitter public following more than 2,000 people.[10]
Microblogging has also emerged as an important source of real-time news updates for recent crisis situations, such as the Mumbai terror attacks or Iran protests.[13][14] The short nature of updates allow users to post news items quickly, reaching its audience in seconds.
Microblogging has noticeably revolutionized the way information is consumed. It has empowered citizens themselves to act as sensors or sources of information that could lead to consequences and influence, or even cause, media coverage. People now share what they observe in their surroundings, information about events, and their opinions about topics from a wide range of fields. Moreover, these services store various metadata from these posts, such as location and time. Aggregated analysis of this data includes different dimensions like space, time, theme, sentiment, network structure etc., and gives researchers an opportunity to understand social perceptions of people in the context of certain events of interest.[15] [16] Microblogging also promotes authorship. On the microblogging platform Tumblr, the reblogging feature links the post back to the original creator.
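A small, purely illustrative sketch of such aggregated analysis, grouping invented microposts by city, hour, and a naive tone label (a real study would use proper sentiment models and far more data):

from collections import Counter, defaultdict
from datetime import datetime

posts = [
    {"text": "power is out downtown", "city": "Mumbai", "time": "2008-11-26T22:05"},
    {"text": "stay safe everyone",     "city": "Mumbai", "time": "2008-11-26T22:40"},
    {"text": "lovely quiet evening",   "city": "Pune",   "time": "2008-11-26T21:15"},
]

NEGATIVE_WORDS = {"out", "attack", "fire"}     # toy lexicon, not a real sentiment model

by_city_hour = defaultdict(Counter)
for p in posts:
    hour = datetime.fromisoformat(p["time"]).strftime("%Y-%m-%d %H:00")
    tone = "negative" if NEGATIVE_WORDS & set(p["text"].split()) else "other"
    by_city_hour[(p["city"], hour)][tone] += 1

for (city, hour), counts in sorted(by_city_hour.items()):
    print(city, hour, dict(counts))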
The findings of a study by Emily Pronin of Princeton University and Harvard University's Daniel Wegner have been cited as a possible explanation for the rapid growth of microblogging. The study suggests a link between short bursts of activity and feelings of elation, power and creativity.[17]
While the general appeal and influence of microblogging seem to be growing continuously, mobile microblogging is still moving at a slower pace. Among the most popular activities carried out by mobile internet users on their devices in 2012, mobile blogging or tweeting was last on the list, with only 27% of users engaging in it.[18]
Organizational usage[edit]
Users and organizations often set up their own microblogging service – free and open source software is available for this purpose.[19] | 计算机 |
2014-23/1194/en_head.json.gz/43662 | Katamari Damacy and the return of “play” in videogames
by David Shimomura
@DHShimomura
At Indiecade East, I had the chance to speak with someone whom I truly admire, Miguel Sicart. Sicart is a professor of game design and ethics at IT University of Copenhagen’s Center for Computer Games Research and his work on ethical videogames and procedurality are renowned. He is not afraid to call himself an “ivory tower academic” and doesn’t hesitate to cast down stones from this perch. Ready for one of those such stones? Videogames don’t matter and play isn’t necessarily fun, he says. There’s a lot of language that gets thrown around when it comes to videogames, but it’s a diction based in other media: film, literature, and theater, among others. Sicart wants to introduce back into that conversation something he fears videogames might be losing: playfulness.
“There was a beautiful moment in time with LocoRoco and Katamari Damacy where all these playful, colorful, very toyish games were coming out,” he told me. I can’t help but agree: those were two of the best times I had with videogames, before or since. Sometimes I wonder what the games I play look like to other people, but with Katamari Damacy the answer was always pretty to the point. It looked like it was: fun. Whoever was watching definitely wanted a turn.
Sicart’s concern is not without merit. In the years since, videogames have begun taking themselves very seriously. The Last of Us, the last two GTAs, the Uncharted series, and a slew of others constantly remind us that videogames have become serious business. This “seriousness” has shifted the focus away from games as objects of play and onto games as objects themselves. Instead of caring about play itself, we’ve placed the discs (and the data on them) on a pedestal. Sicart believes that play is expressive, not consumptive, that games exist for us to, well, play with, not consume.
Play is the very reason that Sicart wants us to deemphasize videogames as objects. In our conversation, he said he found importance in the way that "everybody will play in slightly different ways" and it is the "way we meet and talk about play[ing]" that makes play and videogames important and interesting. He stressed, "when we play we take over the world, we reshape it, we create it, we even create the world itself."
Instead of caring about play itself, we’ve placed the discs (and the data on them) on a pedestal. However, Sicart also wants us to be aware that play doesn’t have to be “fun” in the traditional sense—and sometimes it shouldn’t be. “It’s not like the game tells us, ‘You’re going to be playing because it’s fun.’ It’s more like, ‘I find the fun in this game but I’m going to make it harder,’” he says. Things like toggling permadeth, removing the HUD, or otherwise handicapping the player are all part of what he calls “negotiating play”: creating our own enjoyment by altering the game or the way we play. It’s not that there is anything inherently joyful in a game where you can be permanently killed but there is a specific flavor of joy or excitement created by the tension you have created."
Another example he mentions is Takeshi Kitano’s legendary game Takeshi’s Challenge. This is a game which requires the player to not interact with it for one hour, sing into the 1986 Famicom controller, and punch out an old man. It’s not a game about “fun”; it warns that it cannot be beaten or even attempted with “conventional gaming skills.” It’s a game about the strangeness of videogames.
But there are games on the horizon that seem more in line with those more playful games of the mid-aughts. I ran into Sicart the next day and talked about the upcoming Hohokum, a game he says reminds him of that era. Playing Hohokum and watching others play really helps you understand his point about play being more important than the game itself. The game itself is not a magical, fun generator; instead it’s a tool and a toy that, like Katamari Damacy, encourages the players to simply try things, to experiment. I left my time with the game a little proud that instead of using the joystick I immediately attempted to use the PS4’s touch pad which, much to the surprise of a Sony rep and me, worked. In that moment I discovered a new joy and new way of playing, and so too did the people watching me. My attempt to break the game had failed but the power of play, of experimentation and negotiation, was in full effect.
David Shimomura
David Shimomura is a freelance writer based out of the Chicago area. One day, he dreams of owning a corgi. He also writes a blog which can be found here.
Box Art Review: The surreal consumption of Katamari Damacy
At their very core, many videogames are about consumption and collection. Whether it’s collecting gold coins, extra skills and spells, or seeing how many helpless women you can save with your brooding, malcontent masculinity (hint: too many, still, in 2014), there’s a certain rush in filling our digital coffers with stuff. In fact, videogames could be seen as ...
Inside the failed, utopian New Games Movement
The fun theorist Bernie De Koven, whose influential book The Well-Played Game was recently reissued by MIT Press, introduces himself as a visionary of the New Games Movement, a playful crusade that grew out of ‘60s counterculture, when free love, sustainable living, and psychedelic bus murals were the spirit of the time. It was in the aftermath of that era ...
How games like Katamari help us deal with consumerism and the wealth gap
In the opening scene of Wall-E, the easiest thing to do is to believe the mounds of items covering the Earth is garbage. The camera flies past desolate nuclear plants, crumbling skyscrapers, abandoned cities, and settles on great piles of… stuff. What else could they be if, if not junk?
As the camera zooms in to observe the robot’s ...
Astronaut games are simple pleasures
The human capacity to be playful never ceases to amaze. Even while hurtling through space at around 17,500 MPH (a statistic memorized during a childhood of astronautic ambitions) in a metal tube, with almost no leisure time, astronauts still make the time to make up – and play – games.
Mostly, they play with their food.
A recent Quora post by ... | 计算机 |
2014-23/1194/en_head.json.gz/43666 | Wesley W. Chu, Distinguished Professor
Dr. Wesley W. Chu is a Distinguished Professor and past chairman (1988 to 1991) of the Computer Science Department at the University of California, Los Angeles.
He received his B.S.E. (EE) and M.S.E. (EE) from the University of
Michigan in 1960 and 1961, respectively, and received his Ph.D. (EE)
from Stanford University in 1966.
From 1964 to 1966, Dr. Chu worked on the design of large-scale
computers at IBM in San Jose, California. From 1966 to 1969, his
research focused on computer communications and distributed databases
at Bell Laboratories in Holmdel, New Jersey. He joined the University
of California, Los Angeles in 1969. He has authored and co-authored
more than 100 articles and has edited three textbooks on information
technology and also edited a reference book (co-edited with T.Y. Lin)
on data mining.
During the first two decades of his research career, he focused
on computer communication and networks, distributed databases, memory
management, and real-time distributed processing systems. Dr. Chu has
made fundamental contributions to the understanding of statistical
multiplexing, which is widely used in current computer communications
systems and which influenced the development of the asynchronous
transfer mode (ATM) networks. He also did pioneering work in file
allocation, as well as directory design for distributed databases and
task partitioning in real-time distributive systems. This work has
influenced and has been applied in the design and development of Domain
Name Servers for the web as well as in current peer-to-peer (P2P) and
grid systems. Dr. Chu was awarded IEEE fellow for his contribution in
During the past decade, his research
interests have evolved to include intelligent (knowledge-based)
information systems and knowledge acquisition for large information
systems. He developed techniques to cluster similar objects based on
their attributes and represented this knowledge into Type Abstraction
Hierarchies to provide guidance in approximate matching of similar
objects. Using his methodology for relaxing query constraints, Dr. Chu
led the development of CoBase, a cooperative database system for structured data, and KMed, a knowledge-based multi-media medical image system. CoBase
has been successfully used in logistic applications to provide
approximate matching of objects.
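(As a rough illustration of the relaxation idea only: the following simplified Python sketch is not code from CoBase or KMeD, and the hierarchy and data in it are invented. A query constraint is stepped up an abstraction hierarchy until approximate matches are found.)
# Simplified sketch: relax a query constraint by stepping up an
# invented abstraction hierarchy until matches are found.
HIERARCHY = {
    "sedan": "car",
    "suv": "car",
    "car": "vehicle",
    "truck": "vehicle",
}
INVENTORY = [
    {"id": 1, "type": "suv"},
    {"id": 2, "type": "truck"},
]
def relax(value):
    # One step up the hierarchy, or None at the top.
    return HIERARCHY.get(value)
def covered(query, item_type):
    # True if item_type falls under the query term at some abstraction level.
    t = item_type
    while t is not None:
        if t == query:
            return True
        t = relax(t)
    return False
def approximate_match(query):
    # Answer exactly if possible; otherwise relax the constraint and retry.
    q = query
    while q is not None:
        hits = [item for item in INVENTORY if covered(q, item["type"])]
        if hits:
            return q, hits
        q = relax(q)
    return None, []
# A query for "sedan" finds no exact match, relaxes to "car", and returns the SUV.
print(approximate_match("sedan"))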
Together with the medical school staff, the KMed project has been extended to the development of a Medical Digital Library,
which consists of structured data, text documents, and images. The
system provides approximate content-matching and navigation and serves
as a cornerstone for future paperless hospitals. In addition, Dr. Chu
conducts research on data mining of large information sources,
knowledge-based text retrieval, and extending the relaxation
methodology to XML (CoXML)
for information exchange and approximate XML query answering in the Web
environment. In recent years, he has also conducted research in the areas of using
inference techniques for data security and privacy protection (ISP).
Dr. Chu has received best paper awards at the 19th International Conference on Conceptual Modeling
in 2000 for his work (coauthored with D. Lee) on XML/Relational schema
transformation. He and his students have received best paper awards at
the American Medical Information Association Congress in 2002
and 2003 for indexing and retrieval of medical free text, and have also
been awarded a "Certificate of Merit" for the Medical Digital Library
demo system at the 89th Annual Meeting of the Radiological Society of North America in 2003. He is also the recipient of the IEEE Computer Society 2003 Technical Achievement Award for his contributions to Intelligent Information Systems.
Dr. Chu was the past ACM SIGCOMM chairman from 1973 to 1976, an associate editor for IEEE Transactions on Computers
for Computer Networking and Distributed Processing Systems (1978 to
1982) and he received a meritorious award for his service to IEEE. He
was the workshop co-chair of the IEEE First International Workshop on Systems Management (1993) and received a Certificate of Appreciation award for his service. He served as technical program chairs for SIGCOMM (1975), VLDB (1986), Information Knowledge Sharing (2003), and ER (2004), and was the general conference chair of ER (1997). He has served as guest editor and associate editor for several journals related to intelligent information systems. | 计算机 |
2014-23/1194/en_head.json.gz/43823 | Hotel may collect and use personal information that you submit at the Site in any manner that is consistent with uses stated in this Privacy Policy or disclosed elsewhere at the Site at the point you submit such personal information. At the time you submit personal information or make a request, the intended use of the information you submit will be apparent in the context in which you submit it and/or because the Site states the intended purpose.
By submitting personal information at the Site, you are giving your consent and permission for any use that is consistent with uses stated in this Privacy Policy or disclosed elsewhere at the Site at the point you submit such personal information, and such consent will be presumed by Hotel, unless you state otherwise at the time you submit the personal information.
Secure Reservations
If you decide to make an online reservation at the Site, you will be linked to a reservation interface and a third-party booking engine (“Booking Engine”) provided by Travelclick. While it appears to be part of our site, the Booking Engine is, in fact, provided by a third party and is governed by its privacy practices. We understand that security remains the primary concern of online consumers and have chosen the Booking Engine Vendor carefully.
Protecting your information
We would like our Site visitors to feel confident about using the Site to plan and purchase their accommodations, so Hotel is committed to protecting the information we collect. Hotel has implemented a security program to keep information that is stored in our systems protected from unauthorized access. Our Site is hosted in a secure environment. The Site servers/systems are configured with data encryption, or scrambling, technologies, and industry-standard firewalls. When you enter personal information during the reservation process, or during a customer e-mail sign-up, your data is reasonably protected to ensure safe transmission.
Withdrawing Consent to Use
If, after permitting use of your personal information, you later decide that you no longer want Hotel to include you on its mailing list, electronic or otherwise, contact you or use your personal information in the manner disclosed in this Privacy Policy or at the Site, simply tell us by sending an e-mail.
Use of Aggregated Data
Hotel is interested in improving the Site and may develop and offer new features and services. We monitor aggregated data regarding use of the Site for marketing purposes and to study, improve and promote use of the Site. In connection with such purposes, Hotel may share aggregated data with third parties collectively and in an anonymous way. Disclosure of aggregated data does not reveal personal information about individual Site users in any way that identifies who they are or how to contact them.
Exceptions to the Privacy Policy
Hotel has two exceptions to these limits on use of personal information:
Hotel may monitor and, when we believe in good faith that disclosure is required, disclose information to protect the security, property, assets and/or rights of Hotel from unauthorized use, or misuse, of the Site or anything found at the Site.
Hotel may disclose information when required by law; however, only to the extent necessary and in a manner that seeks to maintain the privacy of the individual.
To enable features at the Site, Hotel may assign one or more “cookies” to your Internet browser. Cookies, among other things, speed navigation through our Site, keep track of information so that you do not have to re-enter it each time you visit our Site, and may provide you with customized content. A cookie is an Internet mechanism composed of a small text file containing a unique identification number that permits a web server to send small pieces of information or text by means of your browser and place them on your computer’s hard drive for storage. This text lets the web server know if you have previously visited the web page. Cookies by themselves cannot be used to find out the identity of any user.
We use cookies to collect and maintain aggregated data (such as the number of visitors) to help us see which areas are most popular with our users and improve and update the content on our site. While in the process of browsing our site, you also provide us with information that doesn’t reveal your personal identity. We use this aggregated data only as explained in this Privacy Policy. We do not connect aggregated data to any name, address, or other identifying information.
You may occasionally receive cookies from unaffiliated companies or organizations, to the extent they place advertising on our Site or are linked to the Site. These third-party cookies may collect information about you when you “click” on their advertising or content or link. This practice is standard in the Internet industry. Because of the way in which the Internet operates, we cannot control collection of this information by these third parties, and these cookies are not subject to this Privacy Policy.
Children’s Privacy & Parental Consent
Please be aware that Hotel has not designed this Site for, and does not intend for it to be used by, anyone under age 18. Accordingly, this Site should not be used by anyone under age 18. Our Privacy Policy prohibits us from accepting users who are under the age of 18. Hotel specifically requests that persons under the age of 18 not use this Site or submit or post information to the Site. Should Hotel inadvertently acquire personal information or other data from users under the age of 18, Hotel will not knowingly provide this data to any third party for any purpose whatsoever, and any subsequent disclosure would be due to the fact the user under age 18 used the Site and submitted personal information without solicitation by or permission from Hotel.
Links Provided To Other Sites
Hotel may provide links to a number of other web sites that we believe might offer you useful information and services. However, those sites may not follow the same privacy policies as Hotel. Therefore, we are not responsible for the privacy policies or the actions of any third parties, including without limitation, any web site owners whose sites may be reached through this Site, nor can we control the activities of those web sites. We urge you to contact the relevant parties controlling these sites or accessing their on-line policies for the relevant information about their data collection practices before submitting any personal information or other sensitive data.
Your Consent To This Privacy Policy
Use of the Site signifies your consent, as well as the consent of the company for whom you use the Site and whose information you submit (if any), to this on-line Privacy Policy, including the collection and use of information by Hotel, as described in this statement, and also signifies agreement to the terms of use for the Site. Continued access and use of the Site without acceptance of the terms of this Privacy Policy relieves Hotel from responsibility to the user.
Policy Modifications & Contacting Hotel
The Nautical Beachfront Resort, referred herein as Hotel reserves the right to change this Privacy Policy at any time; notice of changes will be published on this page. Changes will always be prospective, not retroactive. If you have questions about our policies, please contact:
The Nautical Beachfront Resort
1000 McCulloch Blvd. N.
Phone: 928.855.2141 Fax: 928.855.8460 Toll Free: 800.892.2141 | 计算机 |
2014-23/1194/en_head.json.gz/44287 | MIT Reports to the President 2001–2002
Laboratory for Computer Science
In This Report
The principal goal of the Laboratory for Computer
Science (LCS) is to conduct research in all aspects of computer
science and information
technology, and to achieve societal impact
with our research results. Founded in 1963 under the name Project MAC,
it was renamed the Laboratory for Computer Science in 1974. Over the
last four decades, LCS members and alumni have been instrumental in the
development of many innovations, including
time-shared computing, ARPANet, the internet, the ethernet, the world
wide web, RSA public-key encryption, and many more.
As an interdepartmental laboratory in the School of Engineering, LCS
brings together faculty, researchers, and students in a broad program
of study, research, and experimentation. It is organized into 23
research groups, an administrative unit, and a computer service support
unit. The laboratory's membership comprises a total of just over 500
people, including 106 faculty and research staff; 287 graduate
students; 73 visitors, affiliates, and postdoctoral associates and
fellows; and 35 support and administrative staff. The academic
affiliation of most of the laboratory's faculty and students is with
the Department of Electrical Engineering and Computer Science. Our
research is sponsored by the US Government, primarily the Defense
Advanced Research Projects Agency and the National Science Foundation,
and many industrial sources, including NTT and the Oxygen Alliance.
Since 1994, LCS has been the principal host of the World Wide Web
Consortium (W3C) of nearly 500 organizations that helps set the
standard of a continuously evolving world wide web.
The laboratory's current research falls into four principal
categories: computer systems, theory of computation, human/computer
interactions, and computer science and biology.
In the areas of computer systems, we wish to
understand principles and develop technologies for the architecture and
use of highly scaleable information infrastructures that interconnect
human-operated and autonomous computers. This area encompasses research
in networks, architecture, and software. Research in networks and
systems increasingly addresses research issues in connection with
mobile and context-aware networking, and the development of
high-performance, practical software systems for parallel and
distributed environments. We are creating architectural innovations by
directly compiling applications onto programmable hardware, by
providing software controlled architectures for low energy, through
better cache management, and easier hardware design and verification.
Software research is directed towards improving the performance,
reliability, availability and security of computer software by
improving the methods used to create such software.
In the area of the theory of computation, we study
the theoretical underpinnings of computer science and information
technology, including algorithms, cryptography and information
security, complexity theory, distributed systems, and supercomputing
technology. As a result, theoretical work permeates our research
efforts in other areas. The laboratory expends a great deal of effort
in theoretical computer science because its impact upon our world is
expected to continue its past record of improving our understanding and
pursuit of new frontiers with new models, concepts, methods, and
algorithms.
In the human/computer interactions area, our technical goals are to
understand and construct programs and machines that have greater and
more useful sensory and cognitive capabilities so that they may
communicate with people toward useful ends. The two principal areas of
our focus are spoken dialogue systems between people and machines and
graphics systems used predominantly for output. We are also
exploring the role computer science can play in facilitating better
patient-centered health care delivery.
In the computer science and biology area, we are interested in
exploring opportunities at the boundary of biology and computer
science. On the one hand, we want to investigate how computer science
can contribute to modern day biology research, especially with respect
to the human genome. On the other hand, we are also interested in
applying biological principles to the development of next generation | 计算机 |
2014-23/1194/en_head.json.gz/45002 | ISPs Go It Alone on E-Mail Identity Issue
The top Internet service providers have begun endorsing competing e-mail identity systems, putting in doubt which one will gain widespread adoption. America Online confirmed yesterday that it is testing the SPF (Sender Permitted From) protocol to fight forged e-mail addresses, or spoofing, which is used commonly by spammers. "We felt that it was a critical and important part of meeting the challenge of spammers' tactics, specifically the practice of spoofing AOL addresses," AOL spokesman Nicholas Graham said. AOL's move comes six weeks after Yahoo announced it would implement another e-mail identity protocol, DomainKeys. The different proposals dampen hopes that the industry would work in concert toward developing a solution to the e-mail identity problem. Under the current system, Simple Mail Transfer Protocol, a spammer easily can forge the return address on e-mail, making tracing the source of spam extremely difficult. Fixing SMTP's shortcomings tops the list for ISPs, though each has a different view about the best way to do so. AOL, Yahoo, Microsoft and EarthLink formed the Anti-Spam Technical Alliance last April to cooperate on various spam-fighting fronts, including the development of a standard for e-mail identity protocols. However, as the anti-spam alliance bogged down in discussions, Yahoo went its own way with the DomainKeys protocol, which uses public-private keys to determine sender identity. Yahoo said it would make DomainKeys freely available to other ISPs and propose it for adoption to the anti-spam alliance, though it has yet to publish the technical details of the system. Yahoo plans to implement DomainKeys sometime this year. SPF is a free open-source protocol that uses publicly available domain registration records and a list of servers the domain owners use to send mail. If e-mail is received from a sender whose records do not match up, the receiving ISP or domain could block the mail because of the suspect identity. The system is voluntary, Graham stressed, and each receiver would decide its own policies for using the tool. Pat Peterson, general manager of information services at IronPort, a San Bruno, CA, e-mail infrastructure provider, said both protocols had good points and shortcomings. The advantage of SPF, he said, is that it is light and easy to implement. The biggest downside is forwarded e-mail or e-mail sent from a different location would foul up the system. Peterson said DomainKeys would not have such problems, but would face a longer adoption cycle because of the technical requirements. "The big problem today is none of those options are on the table because there's no critical mass," he said. The proposals from Yahoo and AOL have gained little initial support, though both companies said they would propose their identity protocols to the anti-spam alliance. No other e-mail receivers have endorsed the Yahoo proposal. In addition to AOL, only a few small domains are publishing SPF, and AOL itself is not checking mail it receives for SPF. Graham said AOL would wait for the results of its test, along with feedback from others in the industry, before proceeding. Graham said AOL remained committed to fighting spam in a coordinated industry effort but that it was only natural that each ISP would have its own opinion about the best approach to establishing identity. "This is part of the effort to bring the best and brightest ideas to the table," he said. "We're very much interested in the DomainKeys proposal from Yahoo. The bottom line is the more the merrier." 
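In practice, the sender data SPF relies on is published in a domain's DNS records, so a receiving server's first step is simply to fetch and parse that record before comparing it against the connecting mail server. The following is a minimal sketch of that lookup step in Python, assuming the third-party dnspython package; the domain and the sample record shown in the comment are illustrative, not any ISP's actual implementation.
import dns.resolver

def get_spf_record(domain):
    # Fetch the domain's TXT records and return the one declaring an SPF policy.
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("utf-8", "ignore")
        if txt.lower().startswith("v=spf1"):
            return txt
    return None

# Example: a published record might look like
# "v=spf1 ip4:192.0.2.0/24 include:mail.example.com -all"
print(get_spf_record("example.com"))
A full check would then test whether the connecting server's IP address matches one of the mechanisms listed in the record.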
E-mail service providers would prefer to have a linked system of identity, along the lines of the confederated model proposed by the E-mail Service Provider Coalition's Project Lumos plan. Such a system would make establishing identity once sufficient, instead of meeting different criteria for each ISP and e-mail receiver. "It has to come to one solution," said John Matthew, vice president of operations at New York e-mail service provider Bigfoot Interactive. "That's the only way it will work." Peterson holds little hope for such a neat solution in the near term. "I think, unfortunately, it's going to be very fragmented," Peterson said. "There will be a multitude of solutions, which will make it complicated for senders for a while. It will still make it better than it is today." Other ISPs have joined the e-mail identity debate. Last week, a group of 22 ISPs and telecoms comprising 80 million subscribers formed the Messaging Anti-Abuse Working Group. A priority for the coalition is to work on solving the identity problem. MAAWG has not endorsed an authentication standard. Omar Tellez, a senior director of product development at Openwave, a Redwood City, CA, messaging-software company and a MAAWG organizer, said it would examine each proposal and look to the Internet Engineering Task Force's anti-spam research group for guidance. "We definitely believe having a whole variety of sender protocols is not the way to go," he said. | 计算机 |
2014-23/1194/en_head.json.gz/45252 | Microsoft should kill Internet Explorer
Continually creating new versions of Internet Explorer is not the way to go
Jason Cross (PC World (US online)) on 14 January, 2010 06:57
It's time for Microsoft to kill Internet Explorer. It has to be done quickly, before it's too late to rebound. The browser is bleeding market share in a way that a new version alone cannot stop. It's time for the company to rethink the browser and come at it from a fresh perspective. Microsoft needs a new browser, not a new version of an existing one.
Mind you, I'm not arguing that Microsoft should get out of the browser game. I still think the software giant has a lot to offer. Features like WebSlices and InPrivate are nice advances, and Microsoft has been at the forefront of anti-phishing technology. I'm just saying that continually creating new versions of Internet Explorer is not the way to go.
IE is in a terrible market share slide, and a new version isn't going to solve anything. Wikipedia averages out IE's market share to 62.69% as of December 2009. It's losing about one percent a month. Here at PCWorld.com, IE accounts for only 43.9% of browsers visiting the site - it's still ahead of any other one browser, but Firefox, Chrome, and Safari combined account for 53.7% of our visitors. IE is losing a percentage point every month worldwide, and that's just the start. There will come a tipping point where that loss will accelerate, and simply kicking out a new and shiny IE9 isn't going to stop the slide. After several years of lackluster versions, the IE name is now poison. It's time for a new browser with a new name.
Certainly, Microsoft is cooking up some good tech for IE 9. This video on Channel 9 showcases some of it. Drawing the browser window with Direct2D for perfect font rendering and smoother web apps? That's pretty cool! Unfortunately, nobody will care about these things if the browser doesn't become something much more than what it is today.
Here are six ways Microsoft can revive its browser business.
New Name, New Face
1. Drop the Internet Explorer name
Calling a browser Internet Explorer in 2010 would be like Ford still flogging the Taurus brand. It conjures up images of the old Netscape days, when the web was an entirely different place. And by the way, there are plenty of folks out there that believe, right or wrong, that Microsoft won that battle by cheating. Why not call it the Bing Browser? You're hot to push that brand name, and people actually seem to like the Bing search stuff you're doing (those who have tried it, anyway).
2. Redesign the interface from scratch
Every new version of IE looks like a modified version of the one that came before. You can't ship a 2008-looking piece of software in 2010, and changing the name (see #1 above) won't get you anywhere if it still "looks like IE." Don't leave the interface design in the hands of the current Internet Explorer team, or the team that redesigned Office, or the Windows team. Give it to a group of fresh thinkers, like the designers that came up with the excellent Zune desktop software. The Zune app is smooth and clean, borderless, and doesn't rely on the staid icons and scroll bars of the general Windows UI. In other words, it looks decidedly un-Microsoft, and that's the goal here. This isn't to say this future browser should look like the Zune desktop app. Consider, however, that the Zune app does what Windows Media Player does, fundamentally (organize and play music and video files) without looking or behaving anything like it. Don't make IE like you're making another Windows Media Player, make it like you're making a new Zune client.
Start axing buttons and drop-down boxes left and right. Who uses the Home button anymore?
People just create new tabs with a default loading page. Reclaim as much space for actual web pages as possible, so we can see more of the sites we visit on those netbooks and ultraportable laptops with small 1280x600 screens. Dump the search box, as Google has done with Chrome - if it's not a URL, the browser should be smart enough to search our bookmarks, history, and the web. While you're at it, get rid of the title bar and status bar, and consider redesigning or moving the tabs and bookmarks bar to maximize the main browser window.
3. Pile on the useful new features
People don't want to browse their search history, they want to search their search history. And why make it a boring list? Let users open a "history tab" that shows sortable thumbnails of previously-visited sites that filter down instantly as you search. Work in some of the super-cool magic of that Pivot application the labs guys kicked out. That's just one simple example of how you should be re-thinking basic browser functions like bookmarks and new tabs/windows. I mean, keeping bookmarks synchronized across your PCs is a no-brainer...you need to go much further.
Add-ons, Standards, and Speed
4. Develop a new, robust, easy-to-develop-for add-on interface
Do people love Firefox because it's faster? Maybe. Half the people I talk to love it because they love their add-ons. There are add-ons for Internet Explorer, but let's face it, they pale in comparison to what you get on Firefox and now Chrome. Developers should be able to create FTP clients, bookmark managers, password managers, and yes even ad- and script-blockers. Look at where Firefox and Chrome are today and ask yourself "if the browser is a platform, then what is the next logical step in add-ons?"
5. Forget backward compatibility
You're already working hard on making the next Internet Explorer more standards-compliant and offering neat new features like Direct2D acceleration, much faster javascript performance, and so on. Great. Now forget about the ridiculous "compatibility mode" present in IE 8. If people still want to see something the way IE will show it, keep making IE 8 available to them. The new browser should have one mode and one mode only - full standards compliance mode. Yes, it might break some web pages. Guess what? Ask any web developer and they'll tell you every browser presents some formatting challenges from time to time, and what they want is an honest attempt to nail the standards, not to nail the existing sites. Sites are constantly being tweaked to work correctly in one browser or another; you're not breaking anyone's back by dumping support for "the way IE used to draw things."
If you really want to earn some cred with the alpha-geeks that influence the usage of everyone else, you should totally ace the Acid 3 test and the CSS Selectors suite. Yes, it's objectively more important to really get right all those features actually in use by the most popular sites in the world, but end users can't test across that the way you can. What they can and will do is run these sorts of publicly available tests and judge your browser on them, quickly proclaiming the slightest hiccup as "yet another total failure of Microsoft to understand the importance of standards compliance." Unfair? Perhaps, but that's the reality.
6. You can't be too fast or too light
Your next browser is going to be gauged quite heavily on how fast it starts, renders pages, runs javascript benchmarks, and how well it utilizes multi-core systems (especially with a lot of tabs open).
Don't be fooled by IE's current leadership in market share. It's like the stock market - it doesn't matter how high or low you are, only whether you're moving up or down, and IE is on a serious and prolonged downward slide. You're coming from "distant third" position in desirability, and you need to have the kind of performance that will make people switch back from browsers that are already really fast (think of the performance of future versions of Firefox and Chrome!). It simply can't go fast enough. That Direct2D acceleration trick is a great start.
With all the focus on netbooks and small ultraportables these days, and the extended battery life they provide, your ability to render the tough web pages in an energy efficient manner is going to be key. Mark my words - battery benchmarks on demanding websites are the next great browser benchmark battleground. Plan to be in the lead.
Can Microsoft make a comeback in the battle for the browser? I think they can, but only if the company makes a major mental shift. I know what you're thinking. "A rose by any other name..." Just because Microsoft makes another browser doesn't mean it will be just another Internet Explorer with a new coat of paint - but only if it is developed with the passion and purpose of creating a new browser out of whole cloth. The mindset of making a "new version of Internet Explorer" has to go. The company needs to build a completely new browser product, with a new name, new look and feel, and new features - possibly built by a new team (or at least, with a lot of fresh new blood in the existing team). Microsoft needs to approach the browser as if they don't already have one and are trying to introduce a totally new and hot consumer product into a fierce, crowded, and rapidly changing market. Are they up to the task? I think products like Zune HD and Xbox 360 are examples of how MS can make critically successful game-changing products with the right mindset - it only needs to apply this thinking to the browser.
Tags: web browsers, Internet Explorer
Jason Cross | 计算机 |
2014-23/1194/en_head.json.gz/46042 | The Official Soundtrack of Skyrim Available to Pre-Order
Category: Gaming
Posted: November 3, 2011 02:33PM
Author: bp9801
If you are anything like me, the music of a video game can make or break the experience. Sometimes a game's music can really set the stage, while other times it just does not work. In the case of The Elder Scrolls series, the music is a stunning score that can transport you to the world of Tamriel with relative ease. The Elder Scrolls V: Skyrim sounds like it has just as stunning of a musical score as Oblivion and Morrowind before it, if the trailers are any indicator. I have been waiting for news on the availability of Skyrim's soundtrack, and today Bethesda has confirmed its existence. You are now able to pre-order the official The Elder Scrolls V: Skyrim soundtrack, which is a four disc set that will ship after November 11th. The soundtrack is available exclusively through DirectSong, and if you order it before December 23rd, your copy will be signed by the composer, Jeremy Soule.
The Elder Scrolls V: Skyrim - The Original Game Soundtrack is available for pre-order for $29.99. | 计算机 |
2014-23/1194/en_head.json.gz/46547 | The stick, the carrot and the desktop virt project
Robin Birtstone,
The world would be a better place if it weren’t for all the users. Even the best laid technological plans can go awry when computer-hugging individuals decide that they don’t want to abandon their conventional systems or ways of working.
Nowhere is this more true than in the nascent world of desktop virtualisation. Many users treat their desktops like their phones – items of personal jewellery fashioned from silicon and software. People become attached to their desktop applications, and even the wallpaper that they look at while processing spreadsheets.
How can IT departments get users to buy into a desktop virtualisation concept that takes their treasured hardware away from them and spirits their prized applications and data off to a central server they’ve never seen?
Compared with the intrapersonal issues IT executives will have to deal with, the technology underpinning desktop virtualisation is simple, according to Richard Blanford, managing director of infrastructure integrator Fordway.
“The organisational and political arena is where desktop virtualisation projects often fail. Success relies heavily on good communications management,” he says.
Fraser Muir, director of information services and the learning resource centre at Edinburgh's Queen Margaret University, is familiar with the challenge of winning hearts and minds. He had to persuade 4,500 students and staff to move their desktop computers to a virtualised desktop infrastructure as part of a long-term project conceived in 2004.
The project involved integrating Wyse thin clients and HP blades. The IT team deployed about 30 of them as part of a six-month pilot in January 2005 and decided to proceed with the rollout after gaining broad approval. Muir focused on virtualising the applications rather than the whole desktop operating system, using a Linux-based agent on the thin client to access applications as needed.
However, Muir faced stiff resistance from some users who didn’t want to give up the fat clients they were used to. “It was a tiny minority, but we did have to spend a lot of energy and time on them. People came to this with a lot of misconceptions,” he recalls. Persuading them involved liberal applications of both carrot and stick.
The carrot
Muir had to dispel users’ concerns that the technology wouldn’t work, so the IT team ran a roadshow to demonstrate it. Their other trick was to bring users on-side by winning over influential members of university staff.
“We selected some pilot users at the start who were challenging but also well established in their area so we could spend a lot of time with them and they could then go and talk to their colleagues,” he says. In this, Muir was taking a leaf out of Malcolm Gladwell’s 2001 book The Tipping Point, which described how encouraging well-connected social influencers to spread your message can accelerate a nascent concept’s acceptance into the mainstream.
The stick
The University was planning to move from three older buildings into a new campus. It was to be a green building, and its heat envelope did not support the use of many PCs with a heavy heat output.
“Desktop virtualisation allowed us to put low-
2014-23/1195/en_head.json.gz/754 | Johnny Hughes has announced the availability of a fourth update to CentOS 4 series, a Linux distribution built from source RPM packages for Red Hat Enterprise Linux 4: "The CentOS development team is pleased to announce the release of CentOS 4.4 for i386. This release corresponds to the upstream vendor U4 release together with updates through August 26th, 2006. CentOS as a group is a community of open source contributors and users. Typical CentOS users are organisations and individuals that do not need strong commercial support in order to achieve successful operation. CentOS is 100% compatible rebuild of the Red Hat Enterprise Linux, in full compliance with Red Hat's redistribution requirements. CentOS is for people who need an enterprise class operating system stability without the cost of certification and support.
CentOS is a freely available Linux distribution which is based on Red Hat's commercial Red Hat Enterprise Linux product. This rebuild project strives to be 100% binary compatible with the upstream product, and within its mainline and updates, to not vary from that goal. Additional software archives hold later versions of such packages, along with other Free and Open Source Software RPM based packagings. CentOS stands for Community ENTerprise Operating System.
Red Hat Enterprise Linux is largely composed of free and open source software, but is made available in a usable, binary form (such as on CD-ROM or DVD-ROM) only to paid subscribers. As required, Red Hat releases all source code for the product publicly under the terms of the GNU General Public License and other licenses. CentOS developers use that source code to create a final product which is very similar to Red Hat Enterprise Linux and freely available for download and use by the public, but not maintained or supported by Red Hat. There are other distributions derived from Red Hat Enterprise Linux's source as well, but they have not attained the surrounding community which CentOS has built; CentOS is generally the one most current with Red Hat's changes.
CentOS's preferred software updating tool is based on yum, although support for use of an up2date variant exists. Each may be used to download and install both additional packages and their dependencies, and also to obtain and apply periodic and special (security) updates from repositories on the CentOS Mirror Network.
CentOS is perfectly usable for an X Window based desktop, but is perhaps more commonly used as a server operating system for Linux web hosting servers. Many big name hosting companies rely on CentOS working together with the cPanel Control Panel to bring the performance and stability needed for their web-based applications.
Major changes for this version are: Mozilla has been replaced by SeaMonkey
Ethereal has been replaced by Wireshark
Firefox and Thunderbird have moved to 1.5.x versions
OpenOffice.org has moved to the 1.1.5 version
1 DVD for x86-64 based systems back to top | 计算机 |
2014-23/1195/en_head.json.gz/1207 | Parental controls on multiple computers
Didn't see this one coming: earlier today, Robby (my 7 year-old son) mentioned in passing: "Ricky [my 9 year-old son] knows the password on the computer upstairs." I didn't immediately grasp what he meant - after all, I'd set each boy up with their own account with a password and customized their desktop so they could get access to their e-mail, Club Penguin, etc. The two computers - one XP, one Vista - each had parental controls enabled, with an explicit whitelist indicating which sites they could visit.
Then it hit me: he knew the parental controls password.
Whoa.
Sure enough, one thing (on a fairly short list) Vista's parental controls does well is it provides a report of sites that each account has accessed. Since the account can only get access to sites which have been explicitly whitelisted, this list shouldn't be that interesting. Unless there are new sites on the whitelist! Sure enough, Vista shows you which sites were unblocked in the last week. Gotta give the kid credit: he's discovered a couple adventure games online (no clue where/how - that's a discussion for another day) and logged in as me to whitelist the site so he could continue to play.
Once he had the power to whitelist sites, he had the power to remove the time restrictions on his account. Turns out almost the entire time we were cooking on Thanksgiving day, he was battling ogres and advancing to a level 17 knight with an upgraded sword and a shield with magic powers. (Do I sound proud? I shouldn't, right?)
As I started poking around looking for a better solution, I had a hard time finding something that would work. Here's my wish list:
- individual accounts for each child
- time-based restrictions, both for time of day and cumulative time logged in
- content filtering (i.e., no adult sites) as well as a whitelist/blacklist to enable or disable specific sites
- centralized account config, ideally web-based (this allows Robin or me to administer from our own computers, instead of needing to log into theirs - and avoids having to set up duplicate controls on each computer for each user)
- traffic logs
Before this sounds like I'm trying to delegate responsibility for managing my kids' online experience: I'm not. I actually want them to explore, and learn to use the machines beyond pointing and clicking on things. (Looked at that way, the whole 'figure out Dad's password and then reverse-engineer the parental controls mechanism so I can get what I want' thing looks like a big success. +1 for me, I guess.)
But Robin and I aren't always looking over their shoulders - whether we're cooking Thanksgiving dinner, or putting their sister to bed, or, yes, hanging out by ourselves - there are times when they're on their computer by themselves and I want them to be safe. The setup we had - XP & Vista's default controls - just didn't cut it. Things weren't centrally managed, there was no ability to restrict the total time on the computer (i.e., 'no more than 2 hours on the computer per day'), and the ability to override settings using the admin account password (which I've since changed, thank you very much!) all made for a less-than-ideal setup.
I asked on Twitter, and got a couple replies but nothing that seemed tailored to what I wanted. I asked on Facebook: nothing. And a couple hours of looking online produced surprisingly little: most solutions were either single-computer solutions or, in a few cases, were hardware based. Then I stumbled on a post on flyertalk.com, of all places, looking to do exactly what I was looking to do.
And the recommendation was to a service I hadn't yet found in my online research: Safe Eyes. The key for me? You can install it on up to 3 computers for no additional cost.
I've now installed it on both computers the boys use, and Robin's PC as an administrator. The accounts for the boys are managed by Safe Eyes - so when they log into their accounts on either the XP or Vista machines, the Safe Eyes app logs them in (their Safe Eyes account credentials can be saved, so that they're logged in automatically); if they're logged in during a time when they're not allowed to be online, they get a dialog box telling them that.
Controls are well done: it took about 20 minutes to configure what types of sites are OK (see below), which specific URLs are OK, whether they can IM, etc. The time limits are both time-of-day as well as elapsed-time, and various other controls let you ID specific programs you allow/disallow. A browser toolbar sits on Robin's computer (IE only, unfortunately - doesn't work for Chrome) that lets her add a site to the whitelist with one click - a nice feature if the kids hear about a new site they want to add to their list of visited sites.
(Admin screen showing summary of each account)
(Whitelist setup is centrally managed across accounts)
I had both boys read and sign the "Internet Game Plan" - a good, common-sense list of things that both boys should be aware of as they spend more time online. As tech-savvy as Robin and I both are, it was good to go back over the basics as much for our benefit to make sure that the boys felt comfortable with these guidelines.
Mostly, Safe Eyes is a nice technical solution to a problem that's only partly technical: as I explained to Ricky, by stealing our password he violated our trust. Had he asked us for permission to play that game, we could have looked at it together - but he didn't, and got caught. So we've dialed back his access - and he will earn it back. Safe Eyes will make it easier for us to manage that process, and give him more confidence that his effort will be rewarded.
Couple of things the geek in me would like to see: IM notifications that a kid's time has expired (perhaps asking for an OK to extend the time?), a simple way to see how much time remaining in a day each child has. It's also important to note that Safe Eyes is primarily focused on Internet usage, so if your interest is more towards limiting specific apps, this may not be the right fit for you. (Almost 100% of what we do on a computer is in the browser - e-mail, IM, games, etc. - so this works just fine for us.) Safe Eyes does support "program blocking" - but as near as I can tell, it's for programs that access the Internet, not any app on your computer.
In the end, Safe Eyes is pretty close to my ideal solution. I didn't need to buy new hardware, it's not hard to install, it allows me to manage everything on the web, and it will grow as we let the kids do more online without sacrificing their safety. If I wanted to install it on my Mac to simplify admin even further, though, I'd have to upgrade my license: by default, Safe Eyes allows you to install on up to 3 computers - don't get me wrong, I love their approach... but we have 4 computers. :) They also support up to 10 users across those 3 computers, which seems more than enough for any family.
(Disclaimer: I signed up for Safe Eyes' affiliate program after buying my own subscription. I'm really impressed with the service so far.
If you decide to sign up after clicking on that link, Safe Eyes will pay me a few bucks as a referral fee. I've done this for similar services in the past - SitterCity, Click 'n Kids - not for the compensation, just because they're great services. What little money I tend to make simply goes to off-setting the cost of using the services themselves.)
U.S. Caselaw in Google Scholar
Last night, an important new feature launched on Google Scholar: more than 80 years of US federal caselaw (including tax and bankruptcy courts) and over 50 years of state caselaw are now fully searchable online, for free at Google Scholar.
This project is the culmination of much work, led by a remarkable engineer at Google named Anurag Acharya. Shortly after I arrived at Google, I heard about a small group of people working to make legal information available through Google. Given my background, I was particularly interested to see if there was a role for me - and thanks to Google's culture of encouraging employees finding 20% projects to contribute to, I was able to not only find a role but to dive in.
It's been a thrill to be part of this project, but most importantly it's exhilarating to know that for the first time, US citizens have the ability to search for - and read - the opinions that govern our society. Matt DeVries, a law school roommate, has a great overview of what this means for him as a lawyer here. Tim Stanley, a pioneer in this space who I first met when he built a search engine to index the articles published in the law journal I founded, said simply, "Thanks, Google!" and then did a good job evaluating what Scholar does (and doesn't) do with the opinions. Rex Gradeless, a law student, pointed out that while this may be of interest for lawyers and law students, the real winner here is citizens who've historically not had comprehensive access to this information at all.
It probably goes without saying, but in case it's not abundantly clear: working at a company that embraces projects like this is incredible. This was a labor of love for a number of co-workers (past and present), all of whom instinctively grasped why this is important and how connected it is to Google's mission. I'm very proud to work at Google today.
Daemon Sequel Freedom(tm) available for pre-order
Could not be more excited about this news: Freedom(tm), Daniel Suarez's sequel to Daemon is now available for pre-order at Amazon. Let the count-down begin: the book is available in just over 8 weeks!
In case anyone doesn't remember me raving about Daemon, here's my original review, and January's follow-up post discussing the soon-to-be re-released Daemon in hard-back.
Paramount has Daemon in pre-production, where the screen-writer who wrote WarGames is co-writing the screenplay.
Daemon,
University of Richmond Law School
(Image via Wikipedia)
Though I don't practice law, I'm a proud graduate of the University of Richmond Law School - it was an extraordinary three years of my life. It was there that I really learned how to think critically, learned how to argue (much to my wife's chagrin), and learned how to be an entrepreneur.
Yeah, you read that right.
Law school is hardly where one thinks about being (let alone becoming) an entrepreneur. Yet along with a group of fellow students, I founded a law journal that was the first in the world to publish online - with the Dean's active support and the encouragement of faculty. Publishing a scholarly law journal exclusively on the Internet was unheard of at the time, and represented a gamble for the law school. We (the students) received academic credit for our time - something up until that point only afforded to Law Review and Moot Court participants. The school's brand was closely tied to JOLT's, and it wasn't clear in the early days that this was a venture likely to succeed.
But succeed it has - JOLT now counts more than 400 students as alumni, has contributed to the scholarship in the technology law space, and is very much an accepted outlet for scholars to seek out when looking to publish their work. And some of my best memories of my time at Richmond were those focused on the actual creation of the Journal - working with the administration, recruiting students to join our crazy idea, convincing professors around the country that we really would pull it off and they should submit their articles to us, evangelizing to the press and academia once we'd launched to generate buzz about the Journal. All of those skills I use today - because this was in a very real sense my first entrepreneurial endeavor.
It had never occurred to me before writing this post - but the biggest gift Richmond gave me was the environment in which it was possible to be an entrepreneur. Too often you hear about entrepreneurs who fail, and fail again, and fail a third time before they find the recipe for success. But Richmond created an environment in which it was quite possible to succeed - and that became an invaluable launching point for my career.
I write all of this because the Law School has produced a 10 minute video detailing the school, the surrounding community, and what the students mean to the Law School. It's a great video, and if you're thinking about going to law school, I think it's a terrific introduction to a school that should be on your short list.
(Full disclosure: if you hang on long enough, you'll see my mug for about 2 seconds.)
Twitter list word cloud
I've been enjoying Twitter Lists in the last week... for those that don't know, Lists gives others the ability to "curate" Twitter IDs into groups. You can see which lists I've been added to by clicking here. It struck me that this is the first time I've had such a view into how others categorize me.
I took the words others used to describe me, and then went to Wordle to generate a word cloud based on those words. (I'll bet someone builds a service to generate these word clouds automatically within a week.)
Gets it about right, actually.
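If you wanted to skip Wordle and roll your own, a quick-and-dirty version is only a few lines of Python. This is just a sketch using the third-party wordcloud package, and the list names below are made-up placeholders rather than my actual lists:
from wordcloud import WordCloud

# Placeholder words pulled from the names of the Twitter lists someone appears on.
list_words = ["tech", "seattle", "startups", "googlers", "politics", "media", "tech"]

# Word size tracks how often a word shows up across lists.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(list_words))
cloud.to_file("twitter_list_cloud.png")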
What Would Augsburg Do?
A couple weeks ago I attended the fall board meeting for Augsburg Fortress. Augsburg is a publisher affiliated with the ELCA, which is the largest Lutheran denomination in the United States. My connection to Augsburg is a result of a speech I gave to a group of leaders in the ELCA several years ago, and has been a remarkable experience for the last two years.
It's remarkable for several reasons: it's my first experience sitting on a board, so that alone makes it a worthwhile effort. But what makes it so rewarding – and so challenging – is the difficulty of being part of a traditional publisher in 2009.
Add to that that my day job – working at a company often blamed for many of the publishing industry's difficulties – and it has made for quite the learning process.
Last spring, Augsburg's CEO Beth Lewis asked if I'd consider leading a discussion at our board meeting focused on Jeff Jarvis's book, What Would Google Do? We ended up delaying the talk, in part because we'd have a new class of board members joining us in the fall and it felt like a better way to kick things off with the "new" board.
On Friday, I tweeted that I'd be leading the discussion, and almost immediately Jeff tweeted right back that he'd love to eavesdrop. A few e-mails later, Jeff and I had it settled: I'd surprise the board by starting off our session by hearing from none other than the author himself – and thanks to Skype video chat, we had him projected full screen and plugged into the A/V so he could speak to us. (Miraculously, the mic on my MacBook Pro even picked up comments from people 30 feet away, making it a completely easy dialogue from 1,000 miles away.)
Jeff had some great ideas to frame the discussion: ask what business you're really in was the key, of course. But as a brother to a Presbyterian minister, he also had a rather good insight into the challenges faced by leaders in the church: how to admit mistakes, how to foster communities in the midst of declining church membership – he spoke to these challenges as someone more than passingly familiar with the dual challenges Augsburg faces.
Beth asked what is probably the most critical question of Jeff: how do we avoid the "cash cow in the coal mine" – the part(s) of our business that generate revenues today but are neither core to the business nor likely to be a part of Augsburg's future.
Jeff was blunt: "pretend you needed to get rid of your print business tomorrow. Just turn it off. And imagine that there's a kid or group of kids in a dorm room today, thinking about how to re-engage people of faith. What are they working on? What are they going to do that will threaten you?"
I wanted to be respectful of Jeff's time – he was terribly gracious to give up a part of his Saturday morning to chat with us – and we said thanks and then dove in. While I will not go into the confidential aspects of our board discussion, I did warn the board that I'd be blogging the meeting, with the goal of inviting a broader discussion – from Lutherans, from techies, from publishing vets – to figure out if there isn't a way to be public about the challenges facing us, and hopefully identify some creative paths forward.
Our first step was to throw out the key words that the board felt mattered most from Jeff's book. More than a dozen words went up... several of the core themes of the book, many of which were obviously applicable to our challenge: trust, transparency, platform, links, beta, imperfect, abundance.
But I pointed out that a biggie – perhaps the biggest – was missing: free.
This isn’t easy for an established business to confront: how can we just give stuff away? We talked through the mechanics of free: it’s not what you give away, but how giving things away can expand the market for your other products (and/or create entirely new ones). I recommended Free to the group (one of the many reasons I love our CEO: she had a copy on her Kindle within a minute of my recommendation), and threw out a couple examples from Chris Anderson’s book to talk about how Free can be, as Jeff pointed out in WWGD, a business model.We also talked about data: what data could we collect – not personally identifiable data, but data about congregations, about product adoption, about customer life cycles (do families whose children attend Sunday School have adults who go to adult bible study more often? Do families who attend adult bible study volunteer more at church, donate more money to the church, or recruit friends to join?) – and how could that data be valuable to others?What’s exciting to me is that Augsburg is already a company asking “what if?” and acting on it. The best example of this is sparkhouse, a completely new effort funded by Augsburg as an entrepreneurial startup intended to completely reimagine faith-based publishing. And that’s not the only one: Augsburg has built up a number of social networks – see Creative Worship Tour as an example of how Augsburg is connecting like-minded individuals around the world to facilitate interactions and foster community around new ways of managing weekly worship.While these are great steps, they are by no means guarantees of success. Jeff talks a lot about the news industry: declining circulation, uncertain revenue future, competition from new players who didn’t even exist three years ago. But he could just as easily be talking about the church: membership is down, the average age of congregations is going up, and people are less and less focused on denominations at all when it comes to their faith. Add to that the well-known challenges of being a book publisher today and it’s clear that Augsburg has its work cut out for it.Which is why I wanted to have this discussion out in the open. In WWGD, Jeff talks repeatedly about “publicness” – and he spoke movingly of a comment left on his blog over that weekend about a widow who lost her husband to prostate cancer. (Jeff has been documenting his own battle with prostate cancer – and his successful surgery and ongoing recovery – for months.) Someone (or someones) out there will have ideas that we need to be thinking about. If you’re that kid in her dorm room thinking about reinventing publishing and community for people of faith, I want to hear from you. What would an Augsburg platform look like? (I got to define API to the board during our meeting – I doubt there are too many other publishing boards talking about APIs!) Which questions aren’t we answering? Which aren’t we asking?My fellow board members are going to be hanging out here; I’m hoping that we can foster an ongoing discussion about our future here. Thanks again to Jeff – for writing a thought-provoking book, for giving of his time this morning – and thanks to all of you, whose input and guidance I cannot wait to read.
Augsburg Fortress,
Business Strategy,
WWGD | 计算机 |
2014-23/1195/en_head.json.gz/1474 | Peter RodwellInternational Organ Foundation
Some 20 years ago I was, for my sins, the Editor of what was at that time Europe's biggest-selling computer magazine. I was frequently asked to make predictions about The Future of the Industry, the Next Killer Product, Will Anyone Ever Earn a Living from Unix, etc, etc, and sometimes I was rash enough to oblige. I won't bore everyone with a breakdown of the results; suffice it to say that I learnt two things: that the prediction business is at best a minefield; and that -- fortunately -- people have short memories.
At that time, the early 1980s, with both the IBM PC and the original Macintosh still very wet behind the ears and the industry in turmoil, nobody - but nobody, not even me - would have been rash enough to predict the Internet as we know it today. Sure, there was the Arpanet, but that was for the academics. And there were various embryonic commercial e-mail systems, few of which lasted very long. But the idea that I could sit in my home in a small village (pop. 3,500) in the mountains in central Spain yet be in constant communication with an entire community of people spread all over the world� well, ridiculous, of course.
To me, this is what PIPORG-L has become over these last 10 years: a community. As in all communities, it has a wide and refreshing variety of characters: the purists, the flippant, the witty, the learned, the learners and - most treasured of all - those who so generously share their knowledge and wisdom with the rest of us. Unlike many communities, this one is nearly always well-mannered and considerate, too.
It's difficult to think of any organ-related subject that hasn't been thoroughly discussed on PIPORG-L over these last 10 years, often several times over. One theme that does crop up regularly is the fate of organs, the general disregard of the public towards the instrument, the lack of interest from the young in learning to play it, the invasion of electronic organs, and so on. A non-organ person reading the list could be forgiven for thinking that the organ is dying fast.
Personally, I don't think that after over 2000 years, the pipe organ is dying. New organs continue to be built, although the level of this activity varies wildly between countries. I happen to live in what is probably the least musical and definitely least organ-minded country in Europe, yet even here, half a dozen or so organ builders manage to make a living both building new instruments and restoring old ones. This despite disheartening news such as that one church has installed a sort of karaoke system, a large video screen on which the words of hymns appear so that the public can sing along (and it's difficult to think of them as a 'congregation' under such circumstances). One becomes accustomed to entering a church to find an abandoned, rotting 18th century pipe organ in the gallery and a cheap electronic keyboard in the sanctuary.
(People often ask why I live in Spain, especially in view of the above. The answer is: Because I like it here, despite the above. Also, being half Irish and half British, I feel out of place both in Britain and Eire.)
Elsewhere, countries such as France and Germany have led the way in developing systems that declare certain pipe organs as belonging to the 'national heritage' or being 'national monuments', conferring on these instruments a special status that guarantees their protection. UNESCO does not include pipe organs in its World Heritage scheme, but plenty of individual churches and cathedrals are on the list, meaning that their pipe organs are also protected to at least some degree.
Conserving what we already have is, of course, essential, but it does relatively little to promote the organ and to expand awareness of the instrument. Is more needed? After all, the organ 'industry' is quite healthy in many countries, especially the United States, with its plethora of denominations and donors - both individual and institutional - who are prepared to fund the building of new organs. It would be nice to see some of those funds being used to promote the organ outside the US, in countries where such wealth is simply not available - in Latin America, for instance, where thousands of instruments languish for lack of money. In fact, one of my original intentions when I set up the IOF back in 1990 was that it could act as a channel for such funding, although this has yet to happen.
Harking back again to my computer magazine days, we copied an American idea and encouraged the setting up of something called ComputerTown UK. Briefly, the idea was that, although at the time personal computers were rare, it was clear that they would soon become all-pervasive, and that a lot of people were uncertain or even fearful of this probability. ComputerTown UK encouraged PC owners to take their machines down to the local library or other community gathering point, set them up and let people try them out, all for free. There was no official organization - we simply told people to go away and do it. We devoted a page a month in the magazine for people to report their experiences and ideas. The basic idea was to reduce technofear -- when you are familiar with something, you lose your fear of it -- and I like to think that it helped in some small way.
I don't think the public at large is afraid of pipe organs, obviously, but perhaps the same principle could be applied: those 'in the know' teaching the rest.
Perhaps this is where the PIPORG-L community can play a role. While it is of course useful for many to hear other people's experiences with different makes of organ shoes, I personally am more interested in the efforts being made by a few to promote the organ, through formal events or simply by letting a child press a few keys - and perhaps thus changing his or her life completely.
Of course there are many organ people who are doing exactly this, with Pipe Organ Encounters, etc, which should be encouraged and supported by all of us. At an individual level, organists could 'open the loft' for, say, a half hour after a service so that both children and adults could get to see the instrument close up and even have a go themselves. Organ builders - admittedly busy people with a living to earn - could hold the occasional open day or school visit; after all, theirs is a highly-skilled craft that is fascinating to many people but which few get to see. It is tempting to suggest that such a field trip should be compulsory for every Organ Committee!
Reporting on these initiatives through the medium of PIPORG-L would be invaluable to others attempting to do the same. Experience-sharing always leads to a wealth of new ideas, especially among such a highly creative community as PIPORG-L.
Let's make the next 10 years of PIPORG-L really count!
About the author: After serving as a teenage apprentice to an organ builder in the UK, Peter Rodwell wandered into, firstly, journalism and then computers (he holds degrees in both) as well as working briefly as a helicopter pilot. He is a former Editor of the UK monthly magazine "Personal Computer World", the author of 15 computing books and the co-founder of two successful software companies. He founded the International Organ Foundation in 1990 as a way of getting back to his life-long interest in pipe organs. He has lived in Spain since 1986. | 计算机 |
2014-23/1195/en_head.json.gz/6075 | Home | About Folklore
The Original Macintosh: 6 of 122 Texaco Towers
Steve Jobs, Dan Kottke, Brian Howard, Bud Tribble, Jef Raskin, Burrell Smith, George Crow
Origins, Lisa, Buildings
The office where the Mac became real
The main Apple buildings on Bandley Drive in Cupertino had boring numerical appellations (Bandley 1, Bandley 3, etc.), but from the beginning the Lisa team gave the buildings they inhabited more interesting names. The original office for the Lisa team was adjacent to a Good Earth restaurant (in fact, it was Apple's original office in Cupertino), so it was called the "Good Earth" building. When the team grew larger and took over two nearby office suites, they were designated "Scorched Earth" (because it housed the hardware engineers, who were all smokers) and "Salt of the Earth".
When the Lisa team became a separate division in 1980, they moved to a larger, two-story office building a block or two away from the main building on Bandley Drive. Everyone was so impressed at having two stories (all the other Apple buildings were single story) that the building was dubbed "Taco Towers", although I'm not sure where the "Taco" part came from.
In December 1980, the embryonic Macintosh team was residing in the Good Earth building, which was abandoned by the Lisa team for Taco Towers earlier in the year. When Steve Jobs took over the project, he moved it to a new building that was large enough to hold about fifteen or twenty people, a few blocks away from the main Apple campus at the southeast corner of Stevens Creek Boulevard and Saratoga-Sunnyvale Road.
There was a Texaco gas station at the corner, and a two-story, small, brown, wood paneled office building behind it, the kind that might house some accountants or insurance agents. Apple rented the top floor, which had four little suites split by a corridor, two on a side. Because of the proximity of the gas station and the perch on the second story, as well as the sonic overlap between "Taco" and "Texaco", the building quickly became known as "Texaco Towers".
Burrell Smith and Brian Howard took over the side of the building closest to the gas station and built a hardware lab, while Bud Tribble and Jef Raskin set up shop on the other side, installing desks with prototype Lisas to use for software development. Bud's office had four desks, but he was the only one occupying it at first. Steve didn't have an office there, but he usually came by to visit in the late afternoon.
In the corner of Bud's office, on one of the empty desks, was Burrell's 68000 based Macintosh prototype, wired-wrapped by Burrell himself, the only one currently in existence, although both Brian Howard and Dan Kottke had started wire-wrapping additional ones. Bud had written a boot ROM that filled the screen with the word "hello", rendered in a small bitmap that was thirty two pixels wide for easy drawing, which showed off the prototype's razor sharp video and distinctive black on white text.
When I started on the project in February 1981, I was given Jef's old desk in the office next to Bud's. Desk by desk, Texaco Towers began to fill up, as more team members were recruited, like Collette Askeland to lay out the PC boards, or Ed Riddle to work on the keyboard hardware. When George Crow started, there wasn't an office available for him, so he set up a table in the common foyer and began the analog board design there.
Burell and I liked to have lunch at Cicero's Pizza, which was an old Cupertino restaurant that was just across the street. They had a Defender video game, which we'd play while waiting for our order. We'd also go to Cicero's around 4pm almost every day for another round of Defender playing; Burrell was getting so good he would play for the entire time on a single quarter (see Make a Mess, Clean it Up!).
In May of 1981, Steve complained that our offices didn't seem lively enough, and gave me permission to buy a portable stereo system for the office at Apple's expense. Burrell and I ran out and bought a silver, cassette-based boom box right away, before he could change his mind. After that we usually played cassette tapes at night or on the weekends when there was nobody around that it would bother.
By early 1982, the Mac team was overflowing Texaco Towers and it was obvious that we'd have to move to larger quarters soon. Steve decided to move the team back to the main Apple campus, into Bandley 4, which had enough space for more than 50 people. The 68000 based Macintosh was born in the Good Earth building, but I still think of Texaco Towers as the place where it came of age, transitioning from a promising research project into a real, world-changing commercial product.
Back to The Original Macintosh
I Invented Burrell
• Make a Mess, Clean it Up!
• Pineapple Pizza
• Good Earth
Login to add your own ratings
Your rating:<
from Buzz Andersen on January 25, 2004 19:43:04
Interesting--when I moved out to California to work for Apple about six months ago, I ended up moving into a building in that *exact* location (southeast corner of Stevens Creek & DeAnza). How appropriate :-).
from Steve Hix on February 05, 2004 20:07:21
The taco part of Taco Towers was because the brick facing of the building roughly paralleled the architecture of Taco Bell franchises being built around the same time. Or so we were told when we visited the building from one of the boring numbered Bandley Drive offices.
from Paul Tavenier on July 15, 2004 15:45:52
As I recall, the name Taco Towers was used by Apple's facilities department to describe the ...ugly... faux Spanish style building that faced De Anza blvd. I don't recall what group used it, but it was an admin building of some sort.
The Lisa "We're the future of Apple, and we're better than you" folks were housed in Bandley 4. The windows in that building were blacked out, 'cause they were so important...
Also (nit picking here) Texaco Towers was at the northeast corner of Stevens Creek/DeAnza. The building has been replaced but the gas station is still there. The souteast corner housed the Cali Brothers feed business, and 1/2 block south was Cicero's.
from Graham Metcalfe on March 01, 2005 18:54:52
The appelation of Taco Towers was definitely in reference to the architecture of Taco Bell "restaurants" in the 70's and 80's. Same color and size of faux-adobe bricks, arches etc. For a period in the early 90's this housed the Claris offices when they were just starting up.
from Drew Page on September 23, 2005 17:16:52
You know, its funny that the Lisa people thought they were so important and the future of Apple. If that was true, then why did they put out such a problematic computer? It was difinitely possible to put out a quality computer in the early 80s. Radio Shack did it. Commodore did it. Apple did it with its own Apple II line. I wonder what went so wrong. Ego probably. Ditto for the Apple III. Taco Towers sounds pretty funny. Reminds me of Taco Bell of course. I remember seeing buildings with that type of architecture in the early 80s. Judging from the first post, that building is gone, replaced by new structures. Probably a good thing. It would be kinda neat to work for Apple in a location that made history. Too bad all of Apple is there and I am way out in DC on the east coast.
from Bill Hartter on September 29, 2006 16:53:04
I spent a week in the S.F. Bay area and it was beautiful place, but very windy. Still, it would have been nice to talk to someone.
The text of this story is licensed under a | 计算机 |
2014-23/1195/en_head.json.gz/6912 | Contact Advertise Lyx: the Multi-Platform Document Editor
posted by Killermike on Tue 27th Feb 2007 16:49 UTC "Lyx, 2/2"
External tools and extensions
For some functionality, Lyx requires the assistance of external tools.
One example of this would be when making use of a Bibtex format bibliography database. For example, if I wanted to cite the source of a quotation, I first click on the "insert citation" icon. Having done this, I can then search within and select an entry from the database as my source. Different academic disciplines have different conventions for citation format but I have Lyx setup so that it places a number within square brackets. At the end of the document, Lyx can place a key to all of the citations in the document. However, to actually insert an entry into the Bibtex database, an external tool must be used. This means that the same Bibtex database can be used between multiple documents and with any piece of software that understands the format. There are a lot of Bibtex tools available but I use a KDE app called KBib.
Obviously, Lyx isn't an office suit. Graphics editing, for example, must be done with external tools.
Document classes
Another way that Lyx can be expanded is through the use of document classes. As supplied, Lyx comes with document classes for various types of book, report, article and some esoteric formats such as those that adhere to various scientific and academic journal specifications. In addition, it also comes with a screenplay and a stage-play class.
Lyx is widely used and has great community supporting it. Check out the mailing list archive on the website. If you have a problem that you can't answer via the documentation, the user mailing list should be your next stop. In addition, whenever I have interacted with the developers, I have found them to be helpful and genuinely interested in user opinions.
I think that the quality of a support community is an important feature that is often overlooked when people are assessing a new piece of software.
New in 1.5.x
Lyx is so big that that it contains many features that I will probably never even visit, although their omission might have been a deal-breakers for other people. The new Unicode support would be an example of such a feature. That said, I might benefit from Unicode support in the future if it enables interoperability with new third-party tools.
As I said above, within this overview, I am only able to scratch the surface of the new features, but the ones that I am particularly looking forward to exploring include: the glossary support, the enhanced table support, the aforementioned enhancements of the section browser and the new MDI interface.
The GUI has had a revamp for 1.5.x and it now makes use of QT4. The 1.4.x series had somehow lost some of its GUI speed when compared with 1.3.x. The developers were aware of this and have solved the performance problems that had gradually crept in.
In general, while writing this article in the 1.5.x beta, it seems as though every menu and toolbar has been features an enhancement of some sort.
Criticisms of Lyx
Obviously, Lyx isn't suitable for every type of document creation.
Strict formatting
Lyx is designed for the creation of documents such as reports, articles and books. If I were given the task of creating a document with loose formatting, such as a leaflet, I would probably use Open Office Word. Also, although I am sure that journals and such have been created with Latex in the past, given the task of creating a print magazine, I would be more inclined to use Open Office or perhaps even a fully-fledged DTP package.
In general, one is sometimes better doing things ``the Lyx way'' as opposed to working against Lyx in terms of formatting.
Quirky user interface
It has to be said that Lyx is a bit quirky in comparison with a standard word processor. To create anything beyond the most basic document, most users would need to look at the Lyx documentation. Lyx ships with document classes for things like letters or a CV but I would doubt that even the most experienced word processor user would be able to create documents of those types without making use of the Lyx manual.
I think that too much is made of GUI consistency these days. For an application that is designed for occasional, casual use, a user interface that complies with standard conventions is a must. However, in the case of an application that is going to be used for long periods of time, for serious work, I think that a user interface that requires some learning time is acceptable. This is why people who do actual content creation favour features such as keyboard shortcuts.
In summary, I consider it to be acceptable to invest some time in learning a tool that is going to be used for serious project work. On the other hand, a person who might write two or three letters a year, might be better off with a more standard word processor.
Lyx is a wonderfully useful tool, in both conception and execution, and I would recommend that anyone who is interested in writing check it out.
Lyx certainly isn't a "do everything" text editor but I think that it's a shame that more people don't know about it. It's also a shame that there isn't some serious corporate interest in developing it. I'm left wondering what tools big organisations actually use to create their documentation. Hand written mark-up? A word processor?
A tool like Lyx creates allows content creators to concentrate on what they should be concentrating on: the creation of the content. As needed, the documents can be reliably exported to whatever format is needed at the time. The beauty of Lyx, is that you can create all of your content from within one piece of software, regardless of the eventual output format.
Mike is an average super-turbo-geek and once tried to ask a woman out using set-theory. By the time he drew a big circle around the symbol that represented him and the symbol that represented her, she had realised what he was getting at and made a run for it. Check out his website: The Unmusic Website.Table of contents
"Lyx, 1/2"
(1) 41 Comment(s) Related Articles
The death of the Urdu scriptGoodbye, Lotus 1-2-3LibreOffice 4.0 released | 计算机 |
2014-23/1195/en_head.json.gz/6953 | Microsoft's Visual Studio update addresses the con...
Microsoft's Visual Studio update addresses the connected app
Visual Studio 2013 will allow developers to store settings in the cloud, where they can be accessed by multiple computers
Joab Jackson (IDG News Service) on 26 June, 2013 16:21
How user input and follow-up interactions are parsed by Facebook.
Microsoft kicked off its Build conference in San Francisco this week by releasing a preview of the next version of its Visual Studio IDE (integrated development environment), as well as updates to other development tools."If you are interested in building a modern, connected application, and are interested in using modern development lifestyles such as 'agile,' we have a fantastic set of tools that allows you to take advantage of the latest platforms," said S. "Soma" Somasegar, corporate vice president in Microsoft's developer division, in an interview with IDG News Service.Somasegar noted, for instance, how the new Visual Studio provides more tools to help developers build applications for Windows 8.1, a beta of which is also being released this week.Microsoft is releasing a preview of Visual Studio 2013, the final version of which is due to be released by the end of the year. The company is also releasing Visual Studio 2012 update 3, and a preview of the .NET 4.5.1 runtime framework.Many of the new features in Visual Studio 2013 address the kinds of mobile, connected applications that developers need to build these days, Somasegar said. For instance, it provides new tools to profile energy and memory usage, both of which must be considered when building applications for mobile devices. It also includes a new tool for providing metrics on how responsive an app is for users.Visual Studio 2013 is also tackling the challenge of writing an application that relies on cloud services in some fashion. Microsoft is providing interface from Visual Studio to its Azure Mobile Services, which synchronizes data and settings for a program used across multiple Windows devices.Visual Studio 2013 itself will also be easier to use across multiple devices. It will allow developers to define environmental preferences, or the settings and customizations for their own versions of Visual Studio, that then can be applied to other copies of the IDE. Microsoft can store these environmental settings in the cloud, so they can be downloaded to any computer connected to the Internet."People go through a lot of trouble to set up their environment. Once they go to a different machine, they must go through the same hoopla again to get to recreate the environment they are comfortable with," Somasegar said. "Once you set up your environment, we store those settings in the cloud, and as you go to another machine, you won't have to recreate your environment."Another new feature, called Code Lens, provides "a class of information that, as a programmer, has been historically hard to get." It can show, for example, which part of a program is calling a particular method and what other methods that method calls. Visual Studio 2013 also expands its support for C++ 2011, the latest version of the C++ language. Visual Studio's feature for debugging the user's own code (as opposed to running a debugger against the entire set of code) now works with C++ 2011.Beyond Visual Studio, Microsoft is building more developer hooks into the next release of its browser, Internet Explorer 11, which is expected to be released with Windows 8.1.Microsoft has completed "a major revamp" of the tools the browser provides to developers. The browser will come with a source-code editing tool, as well as a number of built-in diagnostic tools, Somasegar said. The idea is that the developer won't have to toggle back and forth between the browser and the IDE. 
A Web application or page can be run, and mistakes can then be fixed, directly from within the browser.With .Net, Microsoft worked on improving performance of the runtime environment. It can also provide more diagnostic information on how much memory a .Net program is using, and provide more information in a dump report should a program crash. Also, once a developer chooses a particular platform for a .Net project, such as an ASP.Net project, .Net will only display the components that can be used on that platform.Microsoft is also releasing a white paper that offers a road map of where .Net is headed. The paper will be "one cohesive document that talks about .Net as it relates to Windows, Windows Phone, Windows Azure," Somasegar said. "It is a comprehensive document that shows people how to think about the future as it relates to their current .Net investments."When Windows 8 and Windows RT were introduced, many Windows developers voiced concerns about the future of .Net, due in no small part to how little the platform was mentioned in Microsoft's initial instructions on building Windows 8 modern applications.Somasegar said Microsoft has always encouraged, and will continue to encourage, the use of .Net as a way for developers to write "managed code" for Windows 8 and Windows RT modern applications, as well as for Windows desktop applications.In addition to issuing previews of Windows 8.1 and Visual Studio 2013 this week, Microsoft is also releasing a preview of the latest edition of the company's application lifecycle management software, Team Foundation Server 2013.Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com
Tags: Development tools, application development, Microsoft, software | 计算机
2014-23/1195/en_head.json.gz/7922 | New WebGL standard aims for 3D Web without browser plugins
The Khronos Group has formed the WebGL workgroup to define a new standard for …
The Khronos Group revealed this week that it will move forward with its plans to build a new 3D standard for the Web. Khronos, a technology industry consortium that developed OpenGL and a number of other prominent graphical standards, will devise new JavaScript APIs for natively rendering 3D graphics in webpages without requiring browser plugins. The effort is being undertaken in collaboration with Mozilla, Opera, and Google, indicating that it will receive broad support from prominent browser vendors.
Khronos first demonstrated an interest in bringing 3D to the Web back in March when it issued a joint announcement with Mozilla. At roughly the same time, Google was working on its own 3D Web technology called O3D. Google's O3D is a high-level engine that can load and display models. Mozilla's 3D Web prototype takes a very different approach and aims to expose the conventional OpenGL APIs through JavaScript. It was previously unclear how these competing visions would converge into a single standard.
The announcement this week reveals that Khronos intends to adopt Mozilla's approach. The organization has established a WebGL workgroup that will define a JavaScript binding to OpenGL ES 2.0 which can be used to build 3D engines for the Web. Model loading and other functionality will be facilitated by third-party libraries that will sit on top of the underlying OpenGL JavaScript APIs. One example of such a library is C3DL, a JavaScript framework that can load Collada models and perform other high-level tasks. C3DL is being developed by a team at Seneca University using Mozilla's early WebGL prototype.
"The Web has already seen the wide proliferation of compelling 2D graphical applications, and we think 3D is the next step for Firefox," said Mozilla's Arun Ranganathan, chair of the WebGL working group. "We look forward to a new class of 3D-enriched Web applications within Canvas, and for creative synergy between OpenGL developers and Web developers."
Google has committed itself to implementing WebGL, but will also continue developing its own O3D system. Google's view is that JavaScript is still too slow to handle raw OpenGL programming. The search giant is skeptical that WebGL will be able to deliver sufficient performance with real-world 3D usage scenarios. Google software engineer Gregg Tavares expressed his views on this subject on Tuesday in an O3D Google Group discussion thread.
"O3D is not going away. WebGL is a very cool initiative but it has a lot of hurdles to overcome. The direction of WebGL is trying to just expose straight OpenGL ES 2.0 calls to JavaScript. JavaScript is still slow in the large scheme of things," he wrote. "WebGL, being 100% dependent on JavaScript to do an application's scene graph, is going to have serious problems drawing more than a few pieces of geometry at 60hz except in very special cases or on very fast machines."
He also points out that OpenGL ES 2.0 is not ubiquitously supported on common hardware, which means that not every user will be able to view WebGL content. Despite his skepticism, he says that he and others at Google are still enthusiastic about WebGL and hope to make it work. A single team at Google is responsible for implementing both O3D and WebGL, he says, and they are strongly committed to the success of both technologies.
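To make the low-level nature of the proposal concrete, here is a rough sketch of what "exposing straight OpenGL ES 2.0 calls to JavaScript" looks like in practice. It is illustrative only: the "webgl" context name assumed below had not been finalized when this article was written (early prototypes used vendor-prefixed names), and the fields on the model objects are hypothetical placeholders, not part of any real library.

```javascript
// Rough sketch of per-frame WebGL work driven entirely from JavaScript.
// The context name and the "model" object fields are assumptions for illustration.
var canvas = document.getElementById("scene");
var gl = canvas.getContext("webgl");

function drawFrame(models) {
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.clearColor(0.0, 0.0, 0.0, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  // No built-in scene graph: every object means several state-setting
  // calls plus a draw call, all issued from JavaScript on every frame.
  for (var i = 0; i < models.length; i++) {
    var m = models[i];
    gl.useProgram(m.program);                       // shader program compiled at load time
    gl.bindBuffer(gl.ARRAY_BUFFER, m.vertexBuffer); // vertex data uploaded earlier
    gl.vertexAttribPointer(m.positionAttrib, 3, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(m.positionAttrib);
    gl.drawArrays(gl.TRIANGLES, 0, m.vertexCount);
  }
}
```

Libraries such as C3DL, or a retained scene graph like Google's O3D, sit one layer above calls like these and decide which of them to issue each frame, which is exactly where the argument about JavaScript speed comes in.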
O3D product manager Henry Bridge, who we spoke with about the project back in April, posted a message in the thread to further clarify the nature of Google's plans for 3D. He says that O3D and WebGL are both suited for different kinds of 3D workloads right now and that Google wants to make it easy for developers to use both simultaneously.
Khronos hopes to have the first official public release of the WebGL specification ready for publication in the first half of 2010. The group is encouraging industry stakeholders to participate in the effort and contribute to producing the specifications. | 计算机 |
2014-23/1195/en_head.json.gz/7998 | Issue: Jul, 1978
Posted in: Computers, History
A Short History of Computing (Jul, 1978)
A Short History of Computing
A few weeks ago a master’s degree candidate in computer science confided, with an embarrassed laugh, that he had never seen a computer. His experience with the machines of his chosen vocation had consisted entirely of submitting punched cards through a hole in a wall and later getting printed results the same way. While his opportunities to see equipment are restricted due to his student status, there are also thousands of working programmers and analysts using large scale equipment who have no contact with existing hardware and will never have a chance to see any first or second generation computers in operation.
This is in sharp contrast with the way programmers worked in the late 1950s and early 1960s. Before 1964, when multiprogramming computers were introduced, the typical programmer had opportunities to come in contact with the computer if he or she wanted to do so. Prior to 1960, in fact, most programmers actually operated the machine when debugging their programs. These people learned of the computer as a physical device; the current programmer is more likely to think of it as a vague logical entity at the other end of a terminal. Thus, many large system programmers have the rare distinction of using a tool without knowing how it works or what it looks like. This is in spite of the fact that many important computer developments have occurred within the average programmer’s lifetime.
However, in the past year or two, dramatic reductions in the cost of minicomputer components and the advent of the microcomputer have returned the hands-on computer to respectability in two ways. First, it is now possible to justify hands-on debugging on a small computer, since the hourly rate of the programmer is higher than that of the machine. Second, the decreasing cost of home computing has fostered the birth of a new class of “renaissance programmers”: people who combine programming expertise with hardware knowledge and aren’t afraid to admit it. Renaissance programmers can learn much from the lessons of computer history; simple and inelegant hardware isn’t necessarily best, but it’s frequently cheapest.
In short, the stored program computer became a necessary tool only recently, even though various mechanical aids to computation have been in existence for centuries.
One of the first such aids was the abacus, the invention of which is claimed by the Chinese. It was known in Egypt as early as 460 BC. The Chinese version of the abacus (as shown in photo 1) consists of a frame strung with wires containing seven beads each. Part of the frame separates the topmost two beads from the lower five. The right-hand wire represents units, the next tens, the next hundreds, and so on. The operator slides the beads to perform addition and subtraction and reads the resulting sum from the final position of the beads. The principle of the abacus became known to Roman and early European traders, who adopted it in a form in which stones (called by the Latin calculi, hence the word “calculate”) are moved around in grooves on a flat board.
The use of precision instruments dates back to the Alexandrian astronomers. Like the mathematics of the period, however, the development of scientific instruments died away with the demise of the Alexandrian school. The Arabs renewed interest in astronomy in the period between 800 and 1500 AD, and it was during this time that the first specialists in instrument making appeared. The center of instrument making shifted to Nuremberg, beginning about 1400. By the middle of the 16th Century, precise engraving on brass was well advanced due in part to the interest in book printing.
Calendrical calculators used for determining the moon’s phases and the positions of the planets crop up in all the major periods of scientific thought in the past two thousand years. Parts of a Greek machine about 1800 years old, apparently used to simulate the motions of the planets, were found in 1902 in the remains of a ship off the island of Antikythera. The gears of the machine indicate amazing technical ability and knowledge. Later calendrical calculators, which were usually of the type in which two or more flat disks were rotated about the same axis, came to include a means of telling time at night by visually aligning part of the Big Dipper with the pole star.
Trigonometric calculators, working on a graphical principle, were in use in the Arabic period. Such calculators were used mainly to determine triangular relationships in surveying. The popularity of this device was renewed in 14th Century Europe; in fact, calculating aids of all kinds grew rapidly in popularity as well as in scope from this time onward, largely due to the difficulty of the current arithmetic techniques. Napier was continually seeking ways to improve computational methods through his inventions. One such invention, “Napier’s bones,” consisted of a number of flat sticks similar to the kind now used in ice cream bars. Each stick was marked off into squares containing numbers. To perform calculations, the user manipulated the sticks up and down in a manner reminiscent of the abacus. Of particular interest is the fact that Napier’s invention was used for general calculation at a time when many other devices were used for the specific determination of one measurement, such as the volume of liquid in a partly full barrel, or the range of an artillery shot.
Pascal invented and built what is often called the first real calculating machine in 1642 (shown in photo 2). The machine consisted of a set of geared wheels arranged so that a complete revolution of any wheel rotated the wheel to its left one tenth of a revolution. Digits were inscribed on the side of each wheel. Additions and subtractions could be performed by the rotation of the wheels; this was done with the aid of a stylus. Pascal’s calculator design is still widely seen in the form of inexpensive plastic versions found in variety stores.
In 1671 Leibniz invented a machine capable of multiplication and division, but it is said to have been prone to inaccuracies.
The work of Pascal, Leibniz, and other pioneers of mechanical calculation was greatly facilitated by the knowledge of gears and escapements gained through advances in the clock. In the 13th Century, a clock was devised for Alfonso X of Spain which used a falling weight to turn a dial. The weight was regulated by a cylindrical container divided into partitions and partly filled with mercury. The mercury flowed slowly through small holes in the partitions as the cylinder rotated; this tended to counterbalance the weight. By the 15th Century, the recoil of a spring regulated by an escapement had made its appearance as a source of motive power. Gear trains of increasing complexity and ingenuity were invented. Clocks could now strike on the hours, have minute and second hands (at first on separate dials), and record calendrical and astronomical events. Gears opened the door to wonderful automata and gadgets such as the Strasbourg clock of 1354. This device included a mechanical rooster which flapped its wings, stretched its metal feathers, opened its beak and crowed every day at noon. Later, important improvements in timekeeping included Galileo’s invention of the pendulum; and the accurate driving of a clock without weights or pendulum which led to the portable watch.
Although mechanical and machine shop techniques still had a long way to go (consider the 19th Century machinist’s inability to fit a piston tightly into a cylinder), the importance of mechanical inventions as aids to computation was overshadowed by electrical discoveries beginning with the invention of the battery by Volta in 1800.
During the 1700s, much experimental work had been done with static electricity. The so-called electrical machine underwent a number of improvements. Other electrical inventions like the Leyden jar appeared, but all were based on static electricity which releases very little energy in a very spectacular way. In 1820, following Volta’s discovery, Oersted recognized the principle of electromagnetism that allowed Faraday to complete the work leading to the dynamo, and eventually to the electric motor. It was not until 1873, however, that Gramme demonstrated a commercially practicable direct current motor in Vienna. Alternating current (AC) was shown to be the most feasible type of electric power for distribution, and subsequently the AC motor was invented in 1888 by Tesla. The value of electric power for transportation was quickly recognized and employed in tramways and electric railways. This led to improvements in methods for controlling electricity. Electric lighting methods sprang up like weeds during the latter half of the 19th Century. The most successful were due to the efforts of Swan in England and Edison in the United States. Work on electric lighting, the telegraph and the telephone led to the wonder of the age: radio. In 1895, Marconi transmitted a radio message over a distance of one mile, and six years later from England to Newfoundland.
As a consequence of the rapid growth of interest in the radio, much work was done on the vacuum tube. Lee de Forest discovered the principle of the triode in 1907. Until the development of the transistor, the vacuum tube was the most important device in computer technology due to its ability to respond to changes in electrical voltage in extremely short periods of time. The cathode ray tube, invented by William Crookes, was used in computers for a few years prior to 1960. It faded temporarily from view but returned in 1964 due to advances in technology that improved its economic feasibility as well as its value as a display tool. In 1948 Bardeen, Brattain and Shockley developed the transistor, which began to replace the vacuum tube in computers in 1959. The transistor has many advantages over the vacuum tube as a computer component: it lasts much longer, generates much less heat, and takes up less space. It therefore replaced the vacuum tube, only to fall prey in turn to microminiaturization. Of course, the transistor principle didn’t go away, but the little flying saucers with three wires coming out of their bases did.
Oddly enough, one of the most fundamental devices in the early history of computing predates the electronic computer by more than two hundred years. The punched card was first used to control patterns woven by the automatic loom. Although Jacquard is commonly thought to have originated the use of cards, it was actually done first by Falcon in 1728. Falcon’s cards, which were connected together like a roll of postage stamps, were used by Jacquard to control the first fully automatic loom in France, and later appeared in Great Britain about 1810 (see photo 3). At about the same time, Charles Babbage began to devote his thinking to the development of computing machinery. Babbage’s first machine, the Difference Engine, shown in photo 4, was completed in 1822 and was used in the computation of tables. His attempts to build a larger Difference Engine were unsuccessful, even though he spent £23,000 on the project (£6,000 of his own, and £17,000 of the government’s).
In 1833 Babbage began a project that was to be his life’s work and his supreme frustration: the Analytical Engine. This machine was manifestly similar in theory to modern computers, but in fact was never completed. During the forty years devoted to the project, many excellent engineering drawings were made of parts of the Analytical Engine, and some parts of the machine were actually completed at the expense of Babbage’s considerable personal fortune. The machine, which was to derive its motive power from a steam engine, was to use punched cards to direct its activities. The Engine was to include the capability of retaining and displaying upon demand any of its 1000 fifty-digit numbers (the first suggestion that a computing machine should have a memory) and was to be capable of changing its course of action depending on calculated results. Unfortunately for Babbage, his theories were years ahead of existing engineering technology, but he contributed to posterity the idea that punched cards could be used as inputs to computers.
Herman Hollerith put punched cards to use in 1890 in his electric accounting machines, which were not computers, but machines designed to sort and collate cards according to the positions of holes punched in the cards (see photo 5). Hollerith’s machines were put to effective use in the United States census of 1890.
In 1911, the Computing-Tabulating-Recording Company was formed, which changed its name to International Business Machines in 1924. In the period between 1932 and 1945 many advances were made in electric accounting machines, culminating in 1946 with IBM’s announcement of the IBM 602 and 603 electronic calculators, which were capable of performing arithmetic on data punched onto a card and of punching the result onto the same card. It was Remington Rand, however, who announced the first commercially available electronic data processing machine, the Univac I, the first of which was delivered to the US Census Bureau in 1950. In 1963, just thirteen years after the beginning of the computer business, computer rental costs in the United States exceeded a billion dollars.
Univac I was not the first computer, even though it was the first to be offered for sale. Several one of a kind computers were built in the period between 1944 and 1950 partly as a result of the war. In 1939 work was begun by IBM on the Automatic Sequence Controlled Calculator, Mark I, which was completed in 1944 and used at Harvard University (see photo 6). Relays were used to retain numbers; since relays are electromechanical and have parts that actually move, they are very slow by modern standards.
In 1943, Eckert, Mauchly and Goldstine started to build the ENIAC (Electronic Numerical Integrator and Calculator), which became the first electronic computer using vacuum tubes instead of relays (see photo 7). The next year John von Neumann became interested in ENIAC and by 1946 had recognized a fundamental flaw in its design. In "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument," von Neumann pointed out the advantages of using the computer's memory to store not only data but the program itself. Machines without stored program capabilities were limited in scope, since they had to be partly rewired in order to solve a new problem (as was the case with ENIAC). This process sometimes took days during which time the machine could not be used. If rewiring of such machines was to be avoided, instructions had to be entered and executed one at a time, which greatly limited the machine's decision making capabilities. Machines with stored program capabilities automatically store not only numeric data but also the program (which looks like numbers and can be treated like numbers) in memory. In short, stored program instructions can be used to modify other instructions, a concept that leads to programs which can modify themselves. It is the von Neumann stored program concept which is universally used in modern computers from the smallest microcomputer to the largest number crunchers.
The growth of the missile industry in the 1950s greatly stimulated the progress of computers used for scientific work. The nature of missile data handling at that time was such that work loads were very high during the week or so after a firing and virtually nonexistent in between. Computers were too expensive to leave idle, which led managers to look for other work for the machines. Business data processing grew from these roots to its present status, accounting for the lion’s share of machine usage today.
The latter part of 1959 saw the arrival of the transistorized computer. As a consequence of this innovation, air conditioning and power requirements for computers were reduced. Several new computers in that year were announced by IBM, Control Data Corporation, General Electric, and other manufacturers. Among the IBM announcements were the 7070 general purpose computer; the 7090, a high speed computer designed for a predominance of scientific work; the 1401, a relatively inexpensive computer aimed at the medium sized business and the 1620, a low priced scientifically oriented computer. The fantastic growth of the computer field continued through 1961 and 1962 with the announcement of more than 20 new machines each year. In 1963, continuing the family line from the grandfather 704 (as shown in photo 8), the IBM 7040 was announced. This machine embodied many of the features of the 7090 at a reduced cost. In the same year at least 23 other computers were announced by several different manufacturers. In 1964, IBM announced the 7010, an enlarged and faster version of the 1410, and the 360, which came in many different sizes and embodied many features not found in previous computers. Control Data Corporation announced the 6600, and General Electric their 400 series. The IBM 360/370 is typical of a trend in computer manufacturing which is currently followed by most manufacturers: upward compatibility. In the years prior to 1965, every manufacturer spent huge sums of money on research and programming support for several types of computers; several went out of business doing so. Likewise, computer users spent a lot of money to develop their systems for a particular computer only to find it had been superseded by a faster, less expensive machine. As a consequence, the deadly management decision of the period was, “Do we get the cheaper machine and spend the money on reprogramming, or do we risk staying with an obsolete computer and losing our programmers to the company across the street?”
Current developments point to a new trend away from the bigger machines. The combination of lower prices for components and programmable read only memories is attracting many manufacturers to the field of minicomputers and microcomputers. The current trend is clearly toward the personal computer, with TV game microprocessors leading the way.
Andrew L. Ayers says: August 5, 2011 9:33 am
Photo 7: Is that really the ENIAC? If it is, it’s a view that I have -never- seen published anywhere else (and I have read and own an absolute ton of books on computer history). Take a look at the pictures on Wikipedia, for example:
Do you see any of the clean lines of Photo 7 in the article in any of those pictures? Take a look around the museum:
http://www.seas.upenn.e…
Where are the cables? Where are rolling plugboards? I did a Google Image Search for “ENIAC”, and got a ton of pictures, all looking similar (cables, plugboards, rat nests); of those pictures (on the first “page”), only two looked anything remotes like Photo 7; one was another copy of Photo 7:
http://www.nordhornanti…
The other was this one:
http://alfredo.octavio….
It seems strange to me these two would stand out – I know that ENIAC was moved around a few times, and upgraded over the years; if anything, these two images would be from its very latter years before it was decommissioned. I just find it strange that there would be so few images of it in this configuration (with tape drives and a console)…
Hopefully someone else here can shed some light on this…
Actually – do a Google Image Search on “Univac”:
Another Eckert and Mauchly machine (essentially the first commercialized computer for businesses and a follow-on to ENIAC). Notice how it (as well as the family of machines) looks really similar to Photo 7?
In fact, I am almost certain that this earlier image link I posted:
…is actually a UNIVAC – you can see similar tape drives in the UNIVAC GIS results.
Was this a photo mixup by the Byte editors…?
Charlene says: August 6, 2011 10:45 am
Under normal practices the writer of an article like this one wouldn’t even see the images the magazine was intending to run with his work until after publication, so Reid-Green shouldn’t be held responsible. Unfortunately misidentified images are common in image banks even now, but thirty years ago the problem was far worse; some doozers, like the image I found in the CP archives of a horse labelled “US President Richard M. Nixon”, were obvious, but something like this? It’s unlikely they would even question it.
Andrew L. Ayers says: August 6, 2011 12:06 pm
@Charlene: I was actually thinking it might be the other way around; that perhaps the author supplied the wrong images, and that the Byte editorial staff wasn’t responsible. I tend to hold Byte of this era to a higher standard, though I am willing to entertain the idea that there simply was a screwup in the selection of images due to mis-labeling or other means. We’ll likely never know; anything is speculation at this point (and pointless speculation at that, probably).
At the same time, I am only speculating that the image is that of a Univac – but not likely a Univac I, as the console of the Univac I (shown on the last page) doesn’t match that in Photo 7; I suspect it’s of a later model Univac – or possibly, even probably, of some other manufacturer’s machine (which – I don’t know right now).
Andrew L. Ayers says: August 6, 2011 1:18 pm
You know – the more I look at Photo 7, the more I wonder just -what- computer it is; it seems to be a very singular image of a computer. I am fairly certain it is not the ENIAC, as the ENIAC was a plug-board programmed computer, and AFAIK, didn’t have a console like the lady is sitting at. Also, if you note in the background on the left of the photo, there appears to be a wide-format paper tape reader of some sort. For the life of me, I am unable to find any similar photos online or in my various computer history book collection (which consists both of “current” books looking back; aka history – as well as contemporary books of the 1950s and 1960s regarding computer technology) – of that machine, the console, or the tape reader. It’s an utter mystery just what machine it is. I sincerely hope someone comes along here and puts my mind at rest (alright, this isn’t going to keep me up at night, but it is interesting to me). @Charlene, or anyone else: Does the clothing and/or hairstyle of the woman seem to indicate to you the 1950s? What about the country (ie – is this an American hairstyle/mode of dress – or European, or British)? Does it indicate some other era?
JMyint says: August 6, 2011 1:28 pm
Andrew photo number 7 isn't the ENIAC nor is it a Univac or any other Eckert-Mauchly computer (for a little bit I thought it might be the assembled BINAC). The control station is wrong for a Univac. It turns out that it is the IBM SSEC of 1948.
http://www.computerhist… A close up of the control station.
www-03.ibm.com/ibm/hist…
Here are more pictures of the SSEC in operation. It was built into the lobby of the IBM headquarters in New York City so that passers-by and visitors could see it working. http://www.columbia.edu…
Toronto says: August 6, 2011 8:26 pm
JM: I’m tearing up – that’s just so beautiful.
Four hundred pound rolls of seven inch wide punched manilla paper? I’d hate to have been the operator! | 计算机 |
2014-23/1195/en_head.json.gz/8559 | Prince Of Persia Warrior Within PC FULL Version AMW3 Prince Of Persia Warrior Within PC FULL Version AMW3
FULLVersion
AssassinMW3
7468dfd0090a0e1f88cb692aa0e161cf8d11e9aa
Prince of Persia: Warrior Within is a video game and sequel to Prince of Persia: The Sands of Time. Warrior Within was developed and published by Ubisoft, and released on December 2, 2004 for the Xbox, PlayStation 2, GameCube, and Microsoft Windows.[citation needed] It picks up where The Sands of Time left off, adding new features, specifically, options in combat. The Prince now has the ability to wield two weapons at a time as well as the ability to steal his enemies' weapons and throw them. The Prince's repertoire of combat moves has been expanded into varying strings that allow players to attack enemies with more complexity than was possible in the previous game. Warrior Within has a darker tone than its predecessor adding in the ability for the Prince to dispatch his enemies with various finishing moves. In addition to the rewind, slow-down, and speed-up powers from The Sands of Time, the Prince also has a new sand power: a circular "wave" of sand that knocks down all surrounding enemies as well as damaging them. The dark tone, a vastly increased level of blood and violence as well as sexualized female NPCs earned the game an M ESRB rating.
Following Warrior Within, a second sequel and a prequel were made, expanding the Sands of Time story. Prince of Persia: The Two Thrones was released on November 30, 2005 and Prince of Persia: The Forgotten Sands was released on May 18, 2010.[citation needed] A port of Warrior Within was done by Pipeworks, renamed as Prince of Persia: Revelations, and it was released on December 6, 2005 for Sony's PlayStation Portable.[citation needed] The port includes additional content including four new areas not available in the original release.[citation needed] On the 3rd of June 2010, a port of Warrior Within was released for the iOS.[citation needed] A remastered, High-Definition, version of Warrior Within was released on the PlayStation Network for the PlayStation 3 on December 14, 2010.[1] | 计算机 |
2014-23/1195/en_head.json.gz/9505 | Rakuten.com Shopping $629.00eBay $619.00eBay Deals $599.00
Android Development Tools Getting Revamped
By Ed Hardy, Brighthand Editor | | 3763 Reads
Developers have had the tools they need to write applications for the forthcoming Android operating system for a couple of months now, but many are not pleased with them. That's why the Open Handset Alliance (OHA) has announced plans to release an improved Software Development Kit (SDK).Exactly what the changes will be are not known, as Quang Nguyen, OHA's Developer Advocate, writing on the Android Developers Blog, simply refers to them as "significant updates to the SDK."Nguyen also didn't give an exact date on when this update will be released, only that it will be out "in several weeks".$10 Million Developer Challenge DelayedApparently, developers have been having enough problems with the software tools that the OHA has decided to push back the deadlines for the Android Developers Challenge in which the OHA will award developers $10 million in prizes.The award money will be distributed equally between two phases. Under the new schedule, Android Developer Challenge I will run until April 14. The 50 most promising entries received will each receive a $25,000 award to fund further development. Those selected will then be eligible for ten $275,000 awards and ten $100,000 awards in phase II.More About AndroidAndroid is an operating system for smartphones being put together by the Open Handset Alliance, a collection of 30+ companies, including Intel, TI, Sprint, T-Mobile, HTC, Motorola, Samsung, and Wind River, but being led by Google.This group is putting the finishing touches on this platform, which will consist of a Linux-based operating system, middleware, and key mobile applications. Many of these are likely to tie into Google's services, like Gmail and Google Maps.Because this platform will be open source, the Alliance hopes it will be quickly extended to incorporate new technologies as they emerge. In addition, it will be open to third-parties to create applications using Java. Related ArticlesDell May Unveil Its First Smartphone Next Month Based on AndroidAn Initial Overview of the Android User Interface HTC Commits to Multiple Android Smartphones Next Year Tweet
| 计算机 |
2014-23/1195/en_head.json.gz/10037
Performance Myths
part of Perl for the Web
Before any performance problems can be solved, it's important to understand the pitfalls a Web developer can encounter while attempting to optimize a Web application. It's difficult enough to come to the conclusion that an application should be tested for performance before it's offered to the public. But after the decision is made, it's sometimes even more difficult to get beyond technology preconceptions to discover the aspects of an application that might cause performance bottlenecks when put into production. For applications that are already being used by site visitors, it can be difficult to realize that problem areas aren't always obvious and that the solutions to performance problems aren't necessarily straightforward.
It isn't always easy to pinpoint the real reason why an application is perceptibly slow. There are many links in the chain between the Web server and the site visitor, and any one of them can suffer from performance degradation that is noticeable by the user. A perceived lag in image load time, for instance, can be the result of a slow connection between the user and the Internet, or it could be caused by the Web server's connection. In addition, load from other users accessing the same images could cause the same effect, as could load from users accessing unrelated applications and tying up server resources. Performance problems also can affect each other, causing greater or lesser-perceived slowdowns. In the case of slow graphics, a slow connection from the user to the Internet could mask a slow connection from the Web server to the Internet, but it also could exacerbate any delays in the transmission caused by server lag. When a site is deemed slow overall by disgruntled visitors, it's difficult to tell which of these factors had the most effect on the slowdown and which of the factors are solvable within the scope of the Web server.
Perception itself can be misleading. A slow site can appear slower when site elements rely on unusual browser tricks or high-bandwidth technologies for basic navigation or form input. A Flash interface that takes two seconds to download might take ten seconds to initialize on a computer with a slow processor, for instance, and the user might not be able to tell the real reason for the slowdown. Even a site that relies on mouseovers to activate meaningful elements of a visual navigation interface might run into trouble when the "on" graphics take more time to load than it takes the user to overshoot the graphic and move on to the next menu choice. In cases like these, simple confusion over interface elements might make a site seem less responsive than it should, which makes duplicating the source of the slowdown even more difficult for site personnel who have no such difficulties.
With Web applications, the same uncertainties apply. Perceived lag can be caused by server-side slowness or network delays. Server-side performance loss can come from many sources, including system architecture, network connection, and simple server overload from too many interested visitors. Because each visitor feels like the unique user of the Web application, it's all too likely that performance loss from any source is intolerable and will cause users to go elsewhere.
Luckily, common gateway interface (CGI) applications all share a few common performance bottlenecks that can be easily identified and removed. On the flip side, Perl CGI programs won't be helped by many performance enhancements that would seem obvious in other environments. Optimizing the application at the source code level, benchmarking application at runtime to gauge the performance differences between one style and another, and switching to a different database based on external benchmark scores are all unlikely to solve performance problems that plague Perl CGI programs. Even worse, switching the development language from Perl to a compiled language such as C or Java is likely to do more harm than good. Each "solution" is more likely to drain development than fix performance issues because it doesn't address the real reasons for performance loss.
Program Runtime
As odd as it might seem, the time it takes a Perl CGI application to run is rarely ever a factor in the application's performance. To understand this, it's important to know the difference between program runtime and the total time and system resources taken when using a program to complete a specific task.
With a normal instance of a program written in Perl, the program is run interactively at the command line. After this happens, the program is loaded off disk, and the Perl compiler:
- Also is loaded off disk
- Is executed
- Configures itself based on files it loads from the disk
- Accepts the Perl program as an argument
- Parses the program and checks for syntax errors
- Compiles the program and optimizes it
- Executes the resultant bytecode
Perl CGI programs are run using all the same steps, but the relative importance of each step becomes skewed greatly because of the demands placed on a Web server in normal use. Although an administrative Perl program is likely to be run once by a single user willing to wait for a response, a Web application written in Perl is more likely to be accessed hundreds of times per second by users expecting subsecond response timesmost of whom are likely to run the same or similar Web applications again within a few seconds. With this environment, the time spent in the first seven steps is no longer trivial; that is, "less than a second" becomes much too long when compared to a runtime of a few milliseconds. The processor power used in performing the main task of the Web application has to be divided up between the running instance of an application and dozens of other copies, which are in the process of being loaded off disk, compiled, and instantiated.
Compile Time is Much Longer
When any Perl program is run, the Perl runtime first compiles it from source code. This process isn't trivial even for the tiniest of Perl programs; in those cases, Perl spends much more time and processing power compiling the program than it does running the program itself. This should make perfect sense to anyone familiar with the process necessary to compile programs in a language such as C. For C programs, the compile step is one that takes minutes or hours, even for a program designed to run for only a few seconds. Perl is similar in this regard, even though the time scales are seconds and milliseconds, respectively, for most Perl Web applications. The effects of the compile step aren't usually noticed when executing a Perl program. This is because the compile step of most Perl programs is only a second or two when executed by a single user, far less than most users would notice when executing an average system program. This happens no matter how long the program itself takes to run. The program either executes very quickly, in which case, the added compile time is seen as a reasonable total runtime, or it executes over a longer time period than the compile time, in which case, the time spent compiling is inconsequential compared to runtime.
If the same circumstances were to apply to a C program, the effect would be much more noticeable. If a standard program took half an hour to initialize before performing a simple two-second task, the overall perceived runtime would be interminable. Even if the program took an hour to run after the initial half hour of compiling, it would still seem like an incredible waste to compile the program before every use.
On the time scale used by most Web applications, this is exactly the case. A Web application usually needs to respond in milliseconds to keep up with the rate of incoming requests, so a half second spent compiling the program is an eternity. Compared to that eternity, a millisecond faster or slower isn't likely to make much of a difference.
Disk I/O is Much Slower
The first optimization any Web server makes to its operation is caching files in memory instead of accessing them from the disk. The reason for this is raw speed; a file in memory can be accessed much more quickly than a file on disk. Because a Web site is likely to be made up of hundreds of small text files that change very infrequently, it's possible to keep a large number of them in memory for quicker access. If files are accessed a thousand times a minute, the Web server gets an orders-of-magnitude speed increase by loading the files off disk once a minute instead of each time they're accessed. On top of this, it's usually possible to check each file to see if it's been changed since it was last read from disk. This saves the need to reload files unless they've been altered.
The Web server isn't able to cache files used by a CGI Web application, however. The main program file has to be read from disk each time, as do Perl module files, data files, and any other supporting files for the application. Some of these files might be cached within the Perl CGI program if they're being called from within the program itself; however, none of the files associated with compiling the application is going to be cached by Perl because the files are used only once throughout the compilation process.
Even if files used during runtime are cached for use later in the same program, within the CGI model, there's no way to cache these files for use by later instances of the program, let alone for other Perl programs that use the same files. A text file being searched for the words "dog" and "cat," for instance, could be loaded into memory once and used both by the "dog" search routine and the "cat" search routine; however, the next time the program is called to perform the same search on the same file, it has to load the file from disk all over again. This repeated disk access is likely to be much more time-consuming than the algorithms that actually perform the search.
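For comparison, the following is the sort of modification-time-checked cache a persistent process could keep between requests; under CGI, the %cache hash below is thrown away with the process at the end of every request, so nothing is ever saved. The file name in the usage line is made up, and the code is only a sketch:

  use strict;
  use warnings;

  my %cache;    # path => { mtime => ..., text => ... }

  sub read_cached {
      my ($path) = @_;
      my @st = stat $path or die "can't stat $path: $!";
      my $mtime = $st[9];

      # Reload only if the file has changed since it was last read
      if ( !$cache{$path} || $cache{$path}{mtime} != $mtime ) {
          open my $fh, '<', $path or die "can't open $path: $!";
          local $/;                              # slurp mode
          $cache{$path} = { mtime => $mtime, text => <$fh> };
      }
      return $cache{$path}{text};
  }

  my $text = read_cached('pages/header.html');   # hypothetical file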
Data Structure Setup Takes Time
A program that provides access to an external data set has to have access to that data internally before it can deal with the data in a meaningful way. An Extensible Markup Language (XML) document, for instance, has to be translated from the text representation of the document into a Perl data structure, which can be directly acted upon by Perl functions. (This process is generally called "parsing the document," but it can involve many more steps than simply breaking the document into chunks. More detail on handling XML can be found in Chapter 16, "XML and Content Management.") Similar processes have to occur when accessing a database, reading a text file off disk, or retrieving data from the network. In all cases, the data has to be translated into a form that's usable within the Perl program.
The process of setting up data structures as a program runs, also known as instantiation, can be very time-consuming. In fact, the time taken by instantiating data might be hundreds of times greater than the time spent actually processing it. Connecting to a busy database can take as long as five or six seconds, for instance, while retrieving data from the same database might take only a fraction of a second under the same circumstances. Another example would be parsing a log file into a hash structure for easier access and aggregation, which could take seconds while the actual aggregation functions take only milliseconds.
Again, note that the time scales on which a Web application operates are likely to seem miniscule, but the important relationship to consider is the relative time and processing power taken by instantiating a Web application as compared to running it. The combined total of these two steps might still be much less than a second, but when evaluating the source of performance bottlenecks, it would be counterproductive to concentrate on a process that takes up only a small percentage of the total runtime while ignoring parts of the process that take up the majority.
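A quick way to see the split between instantiation and processing in the log-file case is to put a timer around each phase separately. The path and log format below are assumptions made for the sake of illustration:

  use strict;
  use warnings;
  use Time::HiRes qw(gettimeofday tv_interval);

  my $log = '/var/log/httpd/access_log';   # hypothetical path

  # Phase 1: instantiate the data structure
  my $t0 = [gettimeofday];
  my %hits;
  open my $fh, '<', $log or die "can't open $log: $!";
  while (my $line = <$fh>) {
      my ($host) = split ' ', $line;       # first field of a common log line
      $hits{$host}++;
  }
  my $load = tv_interval($t0);

  # Phase 2: aggregate it
  $t0 = [gettimeofday];
  my $total = 0;
  $total += $_ for values %hits;
  my $aggregate = tv_interval($t0);

  printf "loading: %.3f s   aggregating: %.6f s   (%d requests)\n",
         $load, $aggregate, $total;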
The Effect of Minor Optimization
Understandably, the first thing a Perl CGI programmer looks at when trying to make an application run faster is the program code, even when the time it takes to run that code is a fraction of the time spent loading and compiling it. The urge comes from Perl's reputation as a beginner language. If it's so easy to write a program in Perl, the idea goes, most early Perl programs must be written badly with many inefficiencies. This idea is particularly popular among programmers with a background in C. Much of the design work that goes into a C program addresses the need to optimize the program's memory footprint, processor load, and runtime. From a Perl programmer's perspective, though, optimization is a secondary concern. The primary concern for many Perl programmers is to complete the tasks required of the program using as many existing Perl techniques and modules as possible. In this respect, Perl can be seen more accurately as a solution set rather than a simple programming language. The final result is to make creating a particular application possible with the tools Perl makes available. For Web applications, this becomes even more crucial; in many cases, it's more important for a Web programmer to meet a deadline and deliver a complete and functioning Web application than it is to achieve the greatest possible performance within that application. Performance is seen more as a pleasant side effect than a primary goal, especially when the goal is getting the job done, not merely getting it done faster.
Luckily, Perl does most of the optimizing for the programmer. Because Perl programs tend to use similar styles and techniques, many of those techniques have been quietly optimized over the years to give common programs a performance boost without the need to explicitly optimize the program. In addition, many Perl optimizations are expressed in programmatic hints that have made their way into desired Perl programming style. Because of these subtle optimizations that Perl recognizes and encourages, it's actually possible to waste time modifying a particular piece of Perl code that already is being quietly optimized behind the scenes. In fact, it's more likely than not that optimizing code written in a Perl style by rewriting it using the rules of another language (such as C) might actually cause more harm than good.
There is an exception to this rule, but it involves major optimization outside the bounds of Perl rather than minor optimization within them. Modules can be written in C or other compiled languages for use with Perl, and within these modules, it's possible to optimize data structures and algorithms that give real performance improvements in cases in which Perl isn't capable of providing them on a general scale. Many common Perl modules use compiled code to provide performance improvements to often-used algorithms and data structures, so it's possible to benefit from these optimizations without having to develop them from scratch.
Optimization Can Increase Development Time
Web programming is a special discipline that spans a variety of skills, but it usually ends up falling between the cracks when it comes time to plan the budget. Web applications are likely to be misunderstood because the possibilities of Web sites are still being explored, and the timelines and decisions surrounding Web applications are likely to vary from nebulous to terrifying. In a situation like this, the cost of optimizing a Web application can be prohibitive when compared to the potential benefits of spending that time improving the application by adding new features or fixing interface errors. Better yet, it would be a boon to Web programmers to find ways to improve performance across the board without incurring so much of a penalty to development time. In this way, simple optimizations that cause large improvements in performance are far more valuable than minor optimizations that cause only incremental performance increases. Minor optimization always can be carried out at a later time, but in most Web development environments, it's much more likely that new features and updated interfaces will create new work and tighter schedules before that point is ever reached.
An Exception: XS-Optimized Modules
One optimization that can make a real difference when dealing with Web applications is XS, which is the interface language provided by Perl for extending the language using compiled modules written in C. The idea is analogous to the programmer's ability in C to optimize algorithms by writing them in assembly language for a specific processor, a technique that is widely used in programs that need to perform specific calculations with high performance. XS bridges the gap between compiled C, with all the restrictions and processor-specific performance optimizations, and Perl.
Modules written using XS can provide noticeable improvements to both runtime and processor load by compiling optimized handlers for large data structures and complex algorithms that Perl would be less than efficient in handling by itself. It does this by enabling a module developer to specify the interface to a Perl module in the XS language, which is then compiled into a shared library. The XS language can contain complete C functions or it can reference existing C header files to provide a direct or indirect interface to those functions. After it is compiled, the shared library and its associated Perl wrapper module can be used as any other Perl module would be. This enables XS-optimized modules to replace existing all-Perl modules if there's a need to improve performance or to provide a link to existing C code.
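A complete XS module involves an .xs file, a typemap, and a Makefile.PL, which is more than can be shown inline here. For a rough feel of the underlying idea, compiled C doing the tight loop while Perl handles everything else, the CPAN module Inline::C can do the same thing in a single script, assuming it and a C compiler are installed; the function below is an invented example, not part of any real module:

  use strict;
  use warnings;

  use Inline C => <<'END_C';
  /* A hot loop implemented in C and exposed to Perl as an ordinary sub */
  int sum_to(int n) {
      int i, total = 0;
      for (i = 1; i <= n; i++)
          total += i;
      return total;
  }
  END_C

  print "result: ", sum_to(10_000), "\n";

The first run pays a one-time compile cost; after that, calling sum_to() from Perl is a plain subroutine call into compiled code, which is the same effect a hand-written XS module achieves in a more controlled way.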
Two areas in which XS noticeably improves performance are database access and XML processing. In the case of database access, the Perl DBI module provides a Perl interface to database driver modules written in C. This enables the individual drivers for each database to be written using C-based interfaces, which are more commonly found in vendor-provided development toolkits than pure Perl interfaces would be. It also encourages driver modules to optimize the raw transfer of data to and from the Perl interface routines in the main DBI module, while providing a seamless interaction between the driver layer and those routines. For XML processing, similar XS modules enable Perl XML processors to use existing parsers and algorithms written for C applications, which optimize the handling of data structures that would otherwise be unwieldy if represented as native Perl references and variables. Luckily, most of the common cases in which XS is needed already have been addressed by the modules that are most commonly used. DBI, as mentioned before, uses XS to optimize data transfer speeds whenever possible. XML parsing performance is improved using the expat parser (through the XML::Parser module), and interactions with the parsed documents can be optimized by using Xerces for Document Object Model (DOM) processing, Sablotron for XSLT transformations, and Orchard for XML stream handling. (More details on XML processing using Perl are available in Chapter 16.) In each case, the interface to the module is left up to Perl, and C is used simply to accelerate those parts of the modules that are likely to get the most frequent use.
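From the application programmer's point of view, none of that C machinery is visible; an expat-backed parse looks like any other Perl module call. The sketch below counts elements in a hypothetical file (the file name and element name are made up):

  use strict;
  use warnings;
  use XML::Parser;

  my $count  = 0;
  my $parser = XML::Parser->new(
      Handlers => {
          # Called for every start tag; bump the counter for <item> elements
          Start => sub { my (undef, $element) = @_; $count++ if $element eq 'item' },
      },
  );

  $parser->parsefile('catalog.xml');   # hypothetical document
  print "found $count <item> elements\n";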
Of course, additional XS modules can be created as the need arises to incorporate C libraries or compile code for performance enhancements. Graphics libraries, windowing toolkits, and proprietary communications protocols are some of the many uses that XS modules can be written to address. This frees the core of Perl to handle the interfaces between these libraries. The development of compiled modules using XS is a topic that is outside the scope of this book, but it can be noted here that most of these performance enhancements are similarly outside the realm of minor optimization and are generally exceptions to the rule.
Perl and C Differences
It would seem logical that the performance problems Perl sees are due to it being an interpreted language. There's no distinct compilation step seen when running a Perl program, so it's easy to come to the conclusion that the Perl compiler is actually an interpreter; it's also common to hear Perl referred to as a scripting language, which lumps it into the same category as shell scripting languages or high-level languages, such as Lisp. Both of these are interpreted and neither is known for high performance. These ideas are reinforced by Perl programmers' recent forays into graphic user interface programming, which usually results in an application that is less responsive overall than a comparable C program.
Most programs in the Web environment that can be compared to Perl CGI applications are both compiled and optimized for performance. The Web server, for instance, is almost always a compiled C program optimized to serve files to remote clients as fast as possible. Server modules, as well, are compiled in with the Web server and optimized to provide their services, such as authentication or encryption, with as little overhead as possible. Because these applications are optimized for performance and all are compiled, the connection between compiled applications and performance optimization is a strong and obvious one. Writing a compiled application in C to avoid the overhead of interpreting Perl programs might seem to be the answer. This also would seem to enable more thorough optimizations to be made using the standard techniques developed for other compiled C programs. As common as it is, however, this answer is wrong. Compiled CGI applications are just as slow as their Perl brethren because the differences seen between Perl and C programs in other arenas aren't nearly as pronounced when encountered in Web applications. Even if it did improve the performance of a Web application in use, compiling a Web application presents its own problems during development. Traditional single-user applications have slow development cycles that end when the application is shipped to a user. This comes from the complexity and robustness of end-user applications, such as Microsoft Word, that are required to perform many tasks in a coordinated fashion with few errors and very little chance to fix errors. On the other hand, Web applications are likely to change rapidly, and they are likely to be created and modified by the same developers who use them. This comes from the Web tradition of editing text files, usually HTML or plain text, for immediate distribution by way of the Web site. Changes are made interactively as errors show up on the pages displayed. In many cases, a page is changed and changed again many times over the course of a few minutes as variations of page layout or word choice are made and tested in context. Because Web applications are edited and tested in an environment that encourages near-instant feedback, the time spent compiling a C application after every change might become cumbersome when developing for the Web. Developers might be discouraged from trying different variations of an implementation because the time spent compiling each possibility would be prohibitive.
C CGI is Still Slow
Compiled CGI programs are still CGI programs. Chapter 5, "Architecture-Based Performance Loss," describes a bottleneck that can afflict any CGI process. A compiled C program is no exception; C programs still have to be loaded from disk and executed, program variables and data structures still have to be instantiated, and any files related to the program still have to be loaded fresh from disk each time the program runs. Any optimizations a C process would normally benefit from in single-user use are still tiny in comparison to the overhead caused by CGI invocation of the program. In fact, the major difference in overhead between a CGI program written in C and a work-alike written in Perl is the compilation step necessary before the Perl program is executed. Because the Perl program has to pass through this compile step every time it is executed, and because the C program is compiled once before it is executed the first time and then run as a system binary from then on, it would seem that Perl could never catch up to the performance achievable by the equivalent C program. Fortunately, both compilation overhead and instantiation overhead are artifacts of the way CGI programs handle requests, so even this difference between Perl and C can easily be overcome by switching away from the CGI protocol.
C Optimizations Aren't Automatic
It's been said that it's easier to write a bad C program than it is to write a bad Perl program. Conversely, it's easier to write an optimized Perl program than it is to write an optimized C program. C enables great flexibility when developing a program, but this flexibility can be detrimental when writing Web applications. Issues such as memory allocation and garbage collection are an anathema to Web programmers because they add complexity and difficulty to the application design process. A Web programmer doesn't want to have to deal with transitory memory leaks, data type mismatches, and system corruption due to incorrect memory allocation when there are more important issues with which to be concerned. These issues include consistent user interfaces and easy access to data sources. Perl's assumptions might not produce the pinnacle of optimized code, but that is more than made up for by the sheer number of optimizations and corrections Perl makes that the Web programmer never has to know about.
Perl offers optimizations that are automatic, like the optimized regular expression engine and the XS-optimized DBI module, which interfaces with a database (see Chapter 14, "Database-Backed Web Sites"). In Perl 5.6, for instance, many optimizations were made to the Perl compiler itself, which affected common Perl keywords, such as sort, by increasing the optimization of nonstandard ways of calling the function. These improvements are present in each version of every module or core release made available.
Because of the high-level abstraction inherent in developing a Perl program, future optimizations to Perl will be utilized by Perl-based applications automatically. This can't be said for a compiled C program for two reasons. First, the optimizations present in a particular version of a C compiler or in a particular version of a system library aren't likely to be so common as to improve the performance of all, or even a majority of, the Web applications written in C. Even if that was the case, though, the applications would have to be recompiled against the new version of the compiler or libraries, which requires a separate step to be performed each time the improvements were made available.
C Programs Still Connect to a Database
One similarity between Perl programs and C programs when creating Web applications is the supporting applications and services to which either application would need to connect. Because a Web application is likely to interact with system resources such as database servers, network support applications, groupware, and other server applications, a Web application written in Perl is likely to rely on transactions between these system applications as much as it relies on internal processing within the Perl application. Similarly, a Web application written in C or Java would have to integrate the same system-level applications, so any bottlenecks caused by system applications would cause the same performance problems for Web applications written in either Perl or C. A database server, for example, is going to be accessed in similar ways by both a Perl application and a C application. Either one is likely to use the same underlying network interface because Perl module developers generally use the C libraries made available by database server developers. Even if the modules so created behave differently from the standpoint of the programmer who is developing Web applications in Perl, transactions between the database and the Perl application are still likely to be identical to those between the database server and a C application. This is especially true for database servers such as Oracle that rely heavily on network protocols to facilitate database interaction because the network protocols bridge the gap between any specific implementations of the protocols in C or Perl.
One important difference between database access in C and Perl is the ease with which Perl applications can be written to access a database. Database access from Perl has become both efficient and easy to develop in the past few years due to the DBI interface module, which creates a generalized database for use within Perl programs. Optimized interfaces to each supported database can then be written to conform to the DBI specification, which enables database driver developers to concentrate on the details of interfacing with a particular database without having to develop an understandable application program interface (API) for use by application developers. It also enables application developers to learn one database interaction API and apply it to any database made available to DBI. The DBI module is covered in more detail in Chapter 14.
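As a sketch of what that single API looks like in practice, here is the usual connect/prepare/execute/fetch cycle. The DSN, credentials, and table are placeholders; switching databases would mostly mean changing the driver name in the DSN, assuming the corresponding DBD driver is installed:

  use strict;
  use warnings;
  use DBI;

  # Swapping "mysql" for "Oracle" or "Pg" is largely a one-line change
  my $dbh = DBI->connect('dbi:mysql:database=forum;host=localhost',
                         'web_user', 'secret',
                         { RaiseError => 1, AutoCommit => 1 });

  my $sth = $dbh->prepare('SELECT title, posted FROM messages WHERE author = ?');
  $sth->execute('bill');

  while (my ($title, $posted) = $sth->fetchrow_array) {
      print "$posted  $title\n";
  }

  $dbh->disconnect;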
Java is a language commonly used for Web development. It illustrates this myth perfectly. Although Java is a precompiled language, database performance from a Java application is likely to be slower overall than database performance from a Perl application. This is due in large part to the available Java Database Connectivity (JDBC) drivers being used to access database servers from Java servlets and Java applications. These drivers aren't nearly as optimized for performance as are their C or Perl counterparts, so any application that sees heavy database use is likely to be slowed considerably by the inefficiency of JDBC drivers.
Misleading Benchmarks
Another myth when testing a Perl CGI program for Web performance is that reasonable information can be garnered from the benchmark facilities provided within Perl. The mere presence of these tools would seem to indicate that they would be useful in diagnosing the cause of performance loss, but the tools themselves aren't suited to diagnosing the problems a Web application would face. In fact, most of the results given when using these tools on a Web application are confusing at best and ludicrous at worst.
Perl benchmarks are designed only to test the relative performance of algorithms within the scope of a Perl program. These benchmarks, even when run against an entire Perl program, would measure only runtime and would completely ignore compile time and the other steps necessary to run a program. Because those other steps are much more likely to affect the overall performance of a Web application, the results themselves are useless even when they seem to make sense.
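That narrower job, comparing one chunk of Perl against another inside a single running program, is the job the module actually does well. A sketch of that legitimate use follows, with two arbitrary sorting styles standing in for whatever algorithms are being compared; no claim is made here about which one wins:

  use strict;
  use warnings;
  use Benchmark qw(cmpthese);

  # A pile of random eight-letter "words" to sort by length
  my @words = map { join '', map { ('a'..'z')[rand 26] } 1..8 } 1..1000;

  cmpthese(-2, {    # run each candidate for at least two CPU seconds
      plain => sub {
          my @sorted = sort { length($a) <=> length($b) } @words;
      },
      schwartzian => sub {
          my @sorted = map  { $_->[1] }
                       sort { $a->[0] <=> $b->[0] }
                       map  { [ length $_, $_ ] } @words;
      },
  });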
Benchmarking also doesn't take the special nature of Web requests into account. Differing connection speeds cause variable effects on a Web server, which in turn can change the performance characteristics of the Web application being accessed. Because these effects can be duplicated only within the Web server environment itself, the aspects of the applications that need to be tested are outside the scope of the benchmarking tools. Better tools are available, but they would have to operate outside of Perl entirely.
Benchmarks Measure Only Runtime
One common mistake when benchmarking a Perl CGI Web application is using the Benchmark module to time the program from start to finish. It would seem reasonable that capturing the time when the program starts and the time when the program ends, and then subtracting the former from the latter, would give a reasonably accurate measure of the total time the program takes to run. Benchmark.pm provides a group of convenience methods that gives a precise system time that is partitioned into user and system CPU time as well as "wallclock" or perceived time. It also can perform calculations on pairs of these times to determine the difference between them. Listing 8.1 gives an example of how Benchmark is used in such a fashion.
Listing 8.1 Benchmarking a Program
01 #!/usr/bin/perl
02 
03 require 5.6.0;
04 use warnings;
05 use strict;
06 
07 use Benchmark;
08 
09 my $t1 = Benchmark->new();
10 
11 print "Content-type: text/plain\n\n";
12 
13 print "This is a CGI script...\n";
14 
15 for my $x (1..99)
16 {
17   my $square = $x * $x;
18   print "The square of $x is $square.\n";
19 }
20 
21 print "...which does something silly.\n";
22 
23 my $t2 = Benchmark->new();
24 
25 my $total_time = timediff($t2,$t1);
26 
27 print "\nThis script took ". timestr($total_time) . " to run.\n\n";
The program in Listing 8.1 illustrates a basic way in which the Benchmark module might be used to determine the total runtime of a program. Lines 03 through 05 include the basic reference checks and modules that are essential to any well-written Perl program. Line 07 includes the Benchmark module itself, and line 09 instantiates a new Benchmark time object, which is assigned to the variable $t1. (Note that this object is more complex than the time stamp given by the built-in localtime function, but for the purposes of this program, it can be thought of in the same way.) The time stored in $t1 is the start time of the program proper. It should be noticed at this point that this isn't really the start time of the benchmarking program, but the start time of the first accessible part of the program; it isn't feasible to get a time before this point.
Lines 11 through 21 of the program make up the core of the program being benchmarked. In theory, if we weren't benchmarking the program, these lines would comprise the program in its entirety. In this case, the program does very little and does it pretty inefficiently; it prints a line of text, uses a simple but inelegant algorithm to determine the square of each number from 1 to 99, and then prints each result. Line 21 declares the folly of such an enterprise and ends the program proper. This section of the program could be replaced with any Perl program, CGI or otherwise.
After the central program has finished, line 23 instantiates a second Benchmark time object to capture the moment. Because we now have the start time and end time of the program, line 25 uses the timediff function provided by the Benchmark module to determine the difference between time $t2 and time $t1, which is returned as a third time object and stored in $total_time. Line 27 then converts that into a string and displays it.
The first problem with timing a program in this fashion becomes apparent as soon as the program is run. Running this program from the command line produces the following output:
Content-type: text/plain
This is a CGI script...
The square of 1 is 1.
[...]
The square of 97 is 9409.
[...]
...which does something silly.
This script took 0 wallclock secs ( 0.01 usr + 0.00 sys = 0.01 CPU) to run.
Something doesn't look right in this output; the last line states that the program took no time to run. As it turns out, the Benchmark module records "wallclock" time only in increments of seconds, so a program like this, which takes only a few milliseconds to run, won't even register as having taken any time. Also, it would appear that this process required only 10 milliseconds of CPU time, which gives some indication of the load on the processor caused by executing the program; however, that still doesn't translate to any meaningful demarcation of time that could be used to test the overall performance of a Web application. It might be argued that the program itself does nothing useful, causing the output to be skewed toward the zero direction. Unfortunately, most common Web applications return a similar value because the time taken by Perl to execute this type of application is usually less than a second. For instance, substituting Listing 7.7 from Chapter 7, "Perl for the Web," gives very similar output:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<HTML><HEAD><TITLE>SQL Database Viewer</TITLE>
</HEAD><BODY>
[...]
</BODY></HTML>
Again, very little information was actually given about the total runtime of the program, even though the CPU time taken by this program appears to be about 30 times greater than the previous program. It would seem that even a program that does real work should take so little time as to be practically instantaneous, so the idea that Perl CGI is slow would seem to have no merit.
No Internal Benchmarks for Compile Time
Benchmarks like this only tell half the story when dealing with a Perl program. The idea of benchmarking a compiled program from within the program itself would make some sense; there isn't much that happens before the program is run, and the runtime of the program (or parts of the program) is important only if it's noticeable to the single user interacting with it. In Perl, however, steps that occur before the benchmarking section of the program is ever reached cause much of the effective runtime and processor load. Notably, the time taken by loading the Perl compiler and compiling the program are ignored completely by benchmarks of this type. Because the Perl compiler also compiles the benchmark code, there's no way to start the timer earlier and catch the time spent before the core program is reached. The beginning of the program in terms of source code is executed only after it is compiled at runtime. Before that point, no Perl can be executed because it hasn't yet been compiled.
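The closest thing to a workaround, described and then dismissed in the next paragraph, is to load the target program's source at runtime and compile it with a string eval so that a timer can at least bracket that compile step. A minimal sketch of the idea follows; note that the time spent loading the Perl interpreter itself is still invisible to it:

  use strict;
  use warnings;
  use Time::HiRes qw(gettimeofday tv_interval);

  my $file = shift @ARGV or die "usage: $0 some_script.pl\n";

  my $source = do {
      open my $fh, '<', $file or die "can't read $file: $!";
      local $/;              # slurp the whole program into a string
      <$fh>;
  };

  my $t0 = [gettimeofday];
  eval $source;              # compiles *and* runs the loaded code
  die $@ if $@;
  printf "compile + run of %s via eval: %.4f s\n", $file, tv_interval($t0);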
It's not possible to get around this problem by embedding the entire program inside a separate timing program, as was the case with Listing 8.1. Any code embedded in this way is considered a part of the main Perl program, and it is compiled at the same time. As a result, the Perl compiler has already been loaded and the program has already been compiled before any embedded code is reached and executed, so the timing results still would exclude compile time. In fact, there is no way to get a benchmark of the total runtime of a Perl program from within itself or an encompassing Perl program. In all cases, the time spent loading the Perl compiler and compiling the program falls outside the scope of the Perl-based timing code. There are ways to trick the Perl compiler into compiling the timed code after the rest of the program is running, but they result in benchmarks that are so unique as to be useless. You can compile and evaluate the code at runtime using eval, for instance, by loading the program off disk, assigning it to a string variable, and processing it with eval. Tricks like these are made possible because Perl enables the program to take a finer grain of control over sections of code, if necessary. (Applications like this are explored in Chapter 9, "The Power of Persistence.") However, there's no guarantee that the process used to compile and execute the code in this modified way will have any relationship to the process used by Perl to compile and execute the program independently; thus, any benchmarks given by this process would not be indicative of real performance. This problem is compounded further by the fact that time taken by loading the Perl compiler is still being ignored.
The Effect of Connect Time
An additional factor in application performance that shouldn't be ignored is the speed of visitors' Internet connections and the time it takes to connect to the site and retrieve the results of a page. Connection speed can affect site performance in more ways than one, and it's even possible for visitors with slow connections to disrupt the performance of an otherwise-fast site for visitors with more bandwidth. It's very difficult to test a site based on connection types, but considering the effects in a general sense might avoid some surprises.
When benchmarking a Web application written in Perl, it's easy to forget that the application will be accessed through a network connection that doesn't provide instantaneous access to the information being provided. A Web application might respond in a matter of milliseconds, but high latency between the visitor's computer and the Web server might add seconds to the time it takes to establish a connection. Independent of latency, available bandwidth might be limited to the point in which page contents returned by the Web application take even more precious seconds to be transferred downstream to the visitor. Although there isn't much that can be done within Perl to improve the performance of an application over a slow network connection, it's still important to keep this kind of overhead in mind when determining whether an application is providing reasonable performance at a particular rate. (This idea is discussed in more detail in Chapter 15, "Testing Site Performance.")
A slow connection can affect other visitors' experience as well. Depending on the Web server and the Web application, it's possible that an application will have to stay connected and running as long as it's processing the request, which would include the time necessary to transmit the results back to the visitor's computer. This means that an application that technically takes only a few milliseconds to process a request can remain open hundreds of times longer while transferring the results. This prevents that server process from answering any other requests in the interim. If enough of these slow requests come in simultaneously to clog the pipes, it's possible that visitors with fast connections could be kept waiting for a connection to open up before their requests can even start to be processed. Situations like these are more common than they should be, unfortunately, because network congestion can create an environment in which otherwise fast connections can become slow enough to create the same effect.
Slow upstream connections can become as big a performance drain as slow downstream connections. A Web application that requires large files or other data streams to be sent from the client to the Web server for processing can suffer doubly from a slow connection speed. A forum application that accepts long text submissions, for instance, will have to wait until the entire request is transmitted to the server before it's possible to start processing the request. Because the Web server process is occupied while this is happening, and then occupied while the information is being processed and the result is being returned through the same slow connection, it's possible to have a Web application that posts decent benchmarks in local testing take long enough to time out the visitor's Web browser connection. This kind of upstream lag is very common due to the asymmetric nature of most DSL and cable modem connections; a connection that has a decent downstream transfer rate might have only a tenth of that bandwidth open for an upstream response. Because upstream and downstream connections can interfere with each other on both the client and the server, it's doubly important to check the performance of a Web application under slow network circumstances.
Unfortunately, it's very difficult to test a site based on connection speeds. Most benchmarking applications assume that the site should be tested by overloading it with as many requests as possible as quickly as possible. With these, the goal is to saturate all available bandwidth with simultaneous requests and see how many requests the Web application can process before losing performance or shutting down entirely. In many cases, it's not even possible to set preferences on the benchmarking application to test connections that are slower than the one used for testing. Generally, the test connection is a LAN with hundreds of times more bandwidth than a site visitor would have. On top of that, it's very difficult to tell the average speed of visitor connectionseven after the fact. Chapter 15 discusses a few ways to simulate slower connections while maintaining a reasonable server load.
Database Benchmarks Expect Different Circumstances
Despite the lip service paid to testing the performance of Web applications, in practice they aren't likely to be tested or benchmarked frequently. Database servers, on the other hand, are some of the most aggressively benchmarked applications available. As a result, it would seem that the task of performance testing the database-enabled aspects of a Web application has already been done to satisfaction. Most database benchmarks take for granted circumstances that are very different from the kind of usage patterns a Web application would impose on a database server; thus, benchmarks produced in regular server testing, even those from third-party groups, are likely to be inadequate when testing the performance of a database in terms of the Web applications it's supporting.
Databases are likely to be benchmarked in terms of transactions per minute. Few databases are likely to be compared in terms of the number of seconds it takes to connect, however, so the connection time from a CGI application still has to be factored in when testing the performance of a database-backed Web application. The usage patterns of a Web application are unlike most database transactions. Most database front-end applications are likely to access the database in a straightforward fashion; they connect to the server, log in, and start processing a stream of transactions in response to interactive commands. When testing a database in this context, the most important aspects would be the number of concurrent connections, the maximum rate at which transactions can be processed, and the total bandwidth available to returning results. These would correspond respectively to the number of users who could access the system at the same time, the complexity of the programs they could use to access the database, and the amount of time they would have to wait for a complete response. This makes it more reasonable to test the database by having a group of identical applications access the database simultaneously, process as many transactions as possible to determine the maximum transaction rate, and retrieve as much data as quickly as possible to determine the maximum transfer rate.
With a Web application, however, the usage pattern would be very different. A CGI Web application is likely to connect to a server, log in, process a single transaction, retrieve a subset of the results, and disconnect, only to reconnect again a moment later. As a result, testing in a Web context would need to concentrate more on the total time taken by connecting to and disconnecting from the database, the overhead due to logging in and processing a transaction, and the elapsed time necessary to retrieve the necessary subset of data needed by the application. (Chapter 14 has more detail about how to make Web applications access a database in a more continuous manner.) As it turns out, the best way to simulate such odd request patterns is by running the Web application itself in production testing.
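Short of a full production test, the shape of that pattern can at least be approximated by timing a connect-query-disconnect cycle in a loop and comparing it with a single persistent handle issuing the same queries. The connection details below are placeholders, and the loop only roughly mimics real CGI traffic:

  use strict;
  use warnings;
  use DBI;
  use Time::HiRes qw(gettimeofday tv_interval);

  my @conn = ('dbi:mysql:database=test;host=localhost', 'web_user', 'secret',
              { RaiseError => 1 });
  my $sql      = 'SELECT 1';
  my $requests = 100;

  # CGI-style: a brand-new connection for every "request"
  my $t0 = [gettimeofday];
  for (1 .. $requests) {
      my $dbh = DBI->connect(@conn);
      $dbh->selectrow_array($sql);
      $dbh->disconnect;
  }
  my $cgi_style = tv_interval($t0);

  # Persistent-style: one connection shared by every request
  $t0 = [gettimeofday];
  my $dbh = DBI->connect(@conn);
  $dbh->selectrow_array($sql) for 1 .. $requests;
  $dbh->disconnect;
  my $persistent = tv_interval($t0);

  printf "reconnecting each time: %.2f s   persistent handle: %.2f s\n",
         $cgi_style, $persistent;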
***begin sidebar
The Real Cost of Optimization
When I first went to a meeting of the San Diego Perl Mongers, a local Perl user group, another attendee explained a problem he was having with the performance of a Web application. Bill was working for a university in San Diego as a Web programmer, and one of his duties was the care and feeding of an application used to process class registrations through the school's Web site. At the start of every quarter, the site would get pounded by traffic as the students all logged in at the same time to search through and register for classes. Unfortunately, the application wasn't able to handle the traffic, and it responded to the flood of requests slowly, if at all. This made the students furious with the site, and it caught the attention of Bill's superiors.
He had inherited this application from a previous Webmaster. It was written in Perl for use with CGI, and the application itself was full of twists, convoluted functions, and hard-to-follow pathways. Bill was having a devil of a time just understanding the program enough to fix errors or add features. So, he had no idea how to optimize it to make it react more efficiently to the temporary surges of traffic it was likely to see. Previous attempts had been made to improve the speed of the program by optimizing the code in sections of it, but none had improved the application's performance to the point where it was usable under load. (Unfortunately, they didn't realize this until the next registration period started and more students got angry at the system.) At some point, the decision was made to rewrite the entire application in a compiled language such as Java. As a result, Bill's job wasn't to fix the application completely, but to patch parts of it wherever possible to improve performance while the new application was being written.
With this in mind, he latched on to an idea that the database accessed by his Web application was itself slow, so if it was possible to increase the performance of the database, it might be possible to improve the performance of the application to the point where it was usable. Bill came to the Perl Mongers meeting with this goal in mind, but he ended up getting a tutorial on the mechanics of CGI instead. Bill's problem was not that the application itself was slow, but that it was being overwhelmed by overhead whenever a large group of users accessed it simultaneously. Having read this book up to this point, you should recognize that Bill's problem wasn't going to be solved by tuning the database or optimizing sections of code, and it wasn't going to be solved by translating the whole mess into Java, either. The crux of the problem was that Bill's program was running as a CGI process, which by now should send shivers of apprehension down your spine. The overhead of starting, compiling, and instantiating his complex program and connecting to an already-overloaded database was completely overshadowing the main task of the program: processing the students' requests. At that meeting, Bill was told what you've read here: tuning the application itself wouldn't help more than incrementally, but running the application in a persistent environment would give exponential performance improvements, especially in the circumstances he was seeing.
I don't know if Bill was ever able to act on that knowledge, or whether the university eventually did rewrite the entire application in Java. Hopefully clear heads and reasonable performance testing prevailed. I do know that I've seen the same situation many times since then, which is the main reason why I've written this book.
***end sidebar
When it comes to Perl CGI, performance myths abound. Most people place the blame for poor performance on Perl because it's not precompiled or hand-optimized, but in truth, neither of these factors has much effect on CGI performance. Optimizing the runtime of Perl programs doesn't help (aside from a few exceptions), and rewriting Web applications in C doesn't make a difference if the application is still being called through CGI. When determining the cause of a performance problem, internal Perl benchmarks are likely to give misleading data because there's no way to add in the compilation and instantiation time required to get a complete picture of how long a CGI Web application really takes from start to finish.
Perl for the Web contents ©2001 New Riders Publishing. Used with permission. Other site contents ©2000, 2001 Chris Radcliff. All rights reserved. Page last updated: 15 August 2001 | 计算机 |
2014-23/1195/en_head.json.gz/11505 | CD Projekt – Pirated games are not lost sales, DRM is “a lot” for legitimate users to put up with
Saturday, 19th May 2012 19:36 GMT
GOG’s managing director, Guillaume Rambourg, has said that surprisingly enough, it wasn’t the DRM-free version of The Witcher II which was pirated the most, but the retail version which shipped with DRM.
The reason, according to Rambourg and CD Projekt CEO Marcin Iwinski, has less to do with sharing and more to do with the reputation gleaned from cracking a title’s DRM.
“Most people in the gaming industry were convinced that the first version of the game to be pirated would be the GOG version, while in the end it was the retail version, which shipped with DRM,” Rambourg told Forbes.
“We were expecting to see the GOG.com version pirated right after it was released, as it was a real no-brainer,” added Iwinski. “Practically anyone could have downloaded it from GOG.com and released it on the illegal sites right away, but this did not happen. My guess is, that releasing an unprotected game is not the real deal, you have to crack it to gain respect and be able to write: “cracked by XYZ.” How would “not cracked by XYZ, as there was nothing to crack” sound? A bit silly, wouldn’t it?
“The illegal scene is pretty much about the game and the glory: who will be the first to deliver the game, who is the best and smartest cracker. The DRM-free version at GOG.com didn’t fit this too well.”
Back in December, Iwinski estimated the game had been pirated over 4.5 million times, and by now, that figure has probably risen significantly. Still, these numbers do not constitute lost sales, according to the CEO.
“It really puzzles me how serious software companies can consider each pirated copy to be a lost sale,” said Iwinski. “Maybe it looks nice in an official report to say how threatening pirates are, but it is extremely far from the truth.
“I would rather say that a big part of these 4.5 million pirated copies are considered a form of trial version, or even a demo. Gamers download [pirate copies] because it’s easy, fast, and, frankly, costs nothing. If they like the game and they start investing the time, some of them will go and buy it. This is evident in the first Witcher, where the total sales are 2.1 million units at present and the game is still doing well, although it is already five years old.”
Iwinski went on to say he doesn’t see a future in DRM, as it simply “does not work,” and the technology, which is supposed to be protecting a company’s investment, not only gets hacked within hours of release, but does nothing more than alienate the consumer.
“DRM, in most cases, requires users to enter serial numbers, validate his or her machine, and be connected to the Internet while they authenticate – and possibly even when they play the game they bought,” he explained. “Quite often the DRM slows the game down, as the wrapper around the executable file is constantly checking if the game is being legally used or not.
“That is a lot the legal users have to put up with, while the illegal users who downloaded the pirated version have a clean–and way more functional–game. It seems crazy, but that’s how it really works.”
| 计算机 |
2014-23/1195/en_head.json.gz/12219 | HTML 5 on The Road to Standardization
We've all heard lots of talk lately about browsers that support HTML 5, both on the desktop and on mobile devices. Versions of Chrome, Firefox, Internet Explorer, and Opera all now have varying degrees of HTML 5 support. And all the mobile operating systems are also loudly trumpeting their HTML support. What you may not realize is that while HTML 5 is getting more solid, it's still years away from becoming an official standard -- although in practice, it should be gaining more acceptance very quickly.
Yesterday, the World Wide Web Consortium (W3C) announced it was extending the charter of the HTML 5 Working Group, saying the standard should have its "Last Call" for people to comment on the standard in May, but targeting 2014 for it becoming a solidified standard.
At Mobile World Congress, W3C CEO Jeff Jaffe explained that the Last Call date typically means a standard is sufficiently stable for inclusion in products like browsers that are updated frequently - but that for it to become an official standard, W3C wants it to be "really hardened" so it can go into devices such as TVs or even automobiles that tend never to be updated. One of the things holding it back, he said, is the lack of a sufficiently comprehensive test suite to fully test interoperability. Obviously, there are lots of test sites out there now, with different browsers looking better or worse depending on the test.
Jaffe said the 2011 browser releases have many innovative features and are likely to be very useful, but they may not support all of the features in the final HTML 5 standard. He also noted that W3C was interested in the broader "one web" concept, including new versions of the cascading style sheet (CSS) specification, scalable vector graphics (SVG), its new font framework, and a device API standard, so that browsers could access items such as geolocation from a phone through a consistent method.
The most controversial part of HTML 5 may be the codec for the video tag. In the past few months, Microsoft and Apple have come down in favor of H.264, with Google, Mozilla, and others in favor of Google's WebM (VP8) codec and Theora. Jaffe said the W3C "would prefer to endorse a codec" but hasn't yet. He said the Web has been successful because it is an open platform not owned by anybody, so the W3C would like an "adequate-quality royalty-free codec". He noted that work by Web creator and W3C Director Tim Berners-Lee and others built the Web as an open platform, and that's what has driven the value of the platform, so he said "it's only fair" that any video codec be royalty-free, even if it is limited to a specific field of use on the Internet. Jaffe said he thought HTML 5 "will be transformative to many industries," allowing broad distribution and creation of media, for instance.
Internet, Show Reports, Software
web browser, internet, Mobile World Congress
Conformance Testing
Martha Gray, Alan Goldfine, Lynne Rosenthal, Lisa Carnahan
With any standard or specification, eventually the discussion turns to "how will we know if an implementation or application conforms to our standard or specification?" The following discussion defines conformance and conformance testing as well as describes the components of a conformance testing program.
1. Definitions - what is conformance (to a standard)?
Conformance is usually defined as testing to see if an implementation faithfully meets the requirements of a standard or specification. There are many types of testing including testing for performance, robustness, behavior, functions and interoperability. Although conformance testing may include some of these kinds of tests, it has one fundamental difference -- the requirements or criteria for conformance must be specified in the standard or specification. This is usually in a conformance clause or conformance statement, but sometimes some of the criteria can be found in the body of the specification. Some standards have subsequent documentation for the test methodology and assertions to be tested. If the criteria or requirements for conformance are not specified, there can be no conformance testing.
The general definition for conformance has changed over time and been refined for specific standards. In 1991, ISO/IEC DIS 10641 defined conformance testing as "test to evaluate the adherence or non-adherence of a candidate implementation to a standard." ISO/IEC TR 13233 defined conformance and conformity as "fulfillment by a product, process or service of all relevant specified conformance requirements." In recent years, the term conformity has gained international use and has generally replaced the term conformance in ISO documents.
In 1996 ISO/IEC Guide 2 defined the three major terms used in this field.
conformity - fulfillment by a product, process or service of specified requirements
conformity assessment - any activity concerned with determining directly or indirectly that relevant requirements are fulfilled.
conformity testing - conformity assessment by means of testing.
ISO/IEC Guide 2 also mentions that "Typical examples of conformity assessment activities are sampling, testing and inspection; evaluation, verification and assurance of conformity (supplier's declaration, certification); registration, accreditation and approval as well as their combinations."
Conformance tests should be used by implementers early-on in the development process, to improve the quality of their implementations and by industry associations wishing to administer a testing and certification program. Conformance tests are meant to provide the users of conforming products some assurance or confidence that the product behaves as expected, performs functions in a known manner, or has an interface or format that is known. Conformance testing is NOT a way to judge if one product is better than another. It is a neutral mechanism to judge a product against the criteria of a standard or specification.
2. Testing for conformance?
Ideally, we would like to be able to prove beyond any doubt that an implementation is correct, consistent, and complete with respect to its specification. However, this is generally impossible for implementations of nontrivial specifications that are written in a natural language.
The alternative is falsification testing, which subjects an implementation to various combinations of legal and illegal inputs, and compares the resulting output to a set of corresponding "expected results." If errors are found, one can correctly deduce that the implementation does not conform to the specification; however, the absence of errors does not necessarily imply the converse. Falsification testing can only demonstrate non-conformance. Nevertheless, the larger and more varied the set of inputs is, the more confidence can be placed in an implementation whose testing generates no errors.
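To make the falsification approach concrete, here is a minimal, hypothetical harness sketched in Python. The test inputs, expected results, and the run_implementation() stand-in are invented for illustration; they are not drawn from any actual NIST test suite.

```python
# Minimal sketch of a falsification-style harness; inputs, expected values,
# and the implementation stub are placeholders.

def run_implementation(test_input):
    """Stand-in for invoking the implementation under test."""
    return test_input.upper()          # placeholder behavior only

# Each case pairs a legal or illegal input with the result the spec requires.
TEST_CASES = [
    {"id": "T001", "input": "legal input",   "expected": "LEGAL INPUT"},
    {"id": "T002", "input": "illegal input", "expected": "ERROR"},
]

def run_suite(cases):
    errors = []
    for case in cases:
        actual = run_implementation(case["input"])
        if actual != case["expected"]:
            errors.append((case["id"], case["expected"], actual))
    return errors

if __name__ == "__main__":
    failures = run_suite(TEST_CASES)
    for test_id, expected, actual in failures:
        print(f"{test_id}: expected {expected!r}, got {actual!r}")
    # An empty failure list does not prove conformance; falsification testing
    # can only demonstrate non-conformance.
    print("result:", "non-conforming" if failures else "no errors found")
```

In this sketch the second case fails, which is enough to deduce non-conformance; a clean run would only raise confidence in the implementation, not prove conformance.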
3. Conformance clause and specification wording
The conformance clause of a standard specification is a high-level description of what is required of implementers and application developers. It, in turn, refers to other parts of the standard. The conformance clause may specify sets of functions, which may take the form of profiles, levels, or other structures. The conformance clause may specify minimal requirements for certain functions and minimal requirements for implementation-dependent values. Additionally it may specify the permissibility of extensions, options, and alternative approaches and how they are to be handled.
3.1 Profiles and/or levels
Applications often do not require all the features within a standard. It is also possible that implementations may not be able to implement all the features. In these cases, it may be desirable to partition the specifications into subsets of functionality.
A profile is a subset of the overall specifications that includes all of the functionality necessary to satisfy the requirements of a particular community of users.
Levels are nested subsets of the specifications. Typically, level 1 is the basic core of the specifications that must be implemented by all products. Level 2 includes all of level 1 and also additional functionality. This nesting continues until level n, which consists of the entire specification.
It is possible for a standard to have both profiles and levels defined for the entire specification (i.e., an implementer could choose to implement any one of (number of profiles) x (number of levels) subsets). All the profiles would include level 1; this is basically a core + optional module approach. Alternatively, each profile could have its own set of levels.
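The combinatorial structure described above can be pictured with a small, hypothetical sketch; the profile names, level contents, and feature identifiers below are invented, since a real standard would enumerate its own.

```python
# Illustrative sketch of profiles and levels as nested feature subsets.
# All names are invented for the example.

LEVELS = {
    1: {"core-parse", "core-render"},                              # level 1: basic core
    2: {"core-parse", "core-render", "styling"},                   # level 2 adds functionality
    3: {"core-parse", "core-render", "styling", "scripting"},      # level 3 is the full set
}

PROFILES = {
    "archival": {"metadata-export"},
    "interactive": {"metadata-export", "event-handling"},
}

def required_features(profile, level):
    """A conforming implementation of (profile, level) must support all of these."""
    return LEVELS[level] | PROFILES[profile]

def conforms(supported, profile, level):
    missing = required_features(profile, level) - supported
    return not missing, missing

ok, missing = conforms({"core-parse", "core-render", "metadata-export"}, "archival", 1)
print(ok, missing)   # True, set()
```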
3.2 Extensions to the specifications
There are two main approaches to handling implementation specific extensions to a standard. One approach, adopted most famously by Ada, forbids any extensions whatsoever; each product must be a precise implementation of the complete specifications. This is called strict conformance.
The other approach adopts the weaker overall requirement that, while extensions are allowed, an implementation must perform all the functionality in the specifications exactly as specified. This more common approach usually includes some additional, more specific, requirements in the conformance clause, along the lines of:
Extensions shall not re-define semantics for existing functions
Extensions shall not cause standard-conforming functions (i.e., functions that do not use the extensions) to execute incorrectly
The mechanism for determining application conformance and the extensions shall be clearly described in the documentation, and the extensions shall be marked as such
Extensions shall follow the principles and guidelines of the standard they extend, that is, the specifications must be extended in a standard manner.
One approach that has been used successfully is to have a register for extensions. This is a document, parallel to but distinct from the official specifications, that contains a list of recognized extensions to the standard. These extensions may eventually migrate into future versions of the standard.
3.3 Options
A standard may classify features as "mandatory" vs. "optional," and provide a table listing features so classified. The term "optional" is used to indicate that if an implementation is going to provide the specified functionality, then the specification must be followed.
3.4 Implementer defined values
Specifications sometimes need to address:
Implementation dependent ranges, minimum or maximum allowed sizes, etc.
Values that may be different for different conforming implementations of the standard
The color model(s), if any, supported by the standard
Features reserved for registration.
3.5 Alternate approaches
A specification may describe several different ways to accomplish the same operation (e.g., a choice of file formats or codes). In such a case, the conformance clause should specify whether an implementation is considered to be conformant if it does not implement each approach. (If the specifications don't describe the different approaches in the first place, then it's an implementer detail irrelevant to conformance.)
To ensure testability of a specification/standard, care should be taken in the wording of the specification itself, for example in statements indicating that an implementation "shall," "should," or "may" implement a certain feature. The meaning of the words "shall," "should," and "may," in the context of the given standard, could be defined in the conformance clause or be as defined by organizations such as ISO.
Some recent standards include, as official parts of the standard, lists of assertions. These assertions are statements of functionality or behavior derived from the standard, and which are true for conforming implementations. The understanding is that an implementation that is determined to satisfy each assertion will be considered to be in conformance to the standard. Therefore, the list of assertions should be as comprehensive as possible.
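Where a standard does publish a list of assertions, a test suite can be organized directly around it. The sketch below is hypothetical: the assertion identifiers, texts, and the DemoImplementation class are placeholders for whatever the standard and the product under test actually define.

```python
# Sketch of a machine-checkable assertion list. Each assertion is a statement
# derived from the standard that must hold for a conforming implementation.

ASSERTIONS = [
    ("A-001", "The implementation shall accept every syntactically legal document.",
     lambda impl: impl.accepts("<legal document>")),
    ("A-002", "The implementation shall reject a document with an undeclared element.",
     lambda impl: not impl.accepts("<illegal document>")),
]

def check_assertions(impl):
    """Return the assertions the implementation fails to satisfy."""
    return [(aid, text) for aid, text, check in ASSERTIONS if not check(impl)]

class DemoImplementation:
    """Stand-in for the product under test."""
    def accepts(self, document):
        return "illegal" not in document

failed = check_assertions(DemoImplementation())
print("unsatisfied assertions:", failed)   # an empty list is necessary, not sufficient
```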
4. Conformance Testing Program
It is well recognized that conformance testing is a way to ensure that "standard-based" products are implemented. The advantages afforded by testing (and certification) are fairly obvious: quality products, interoperability, and competitive markets with more choices.
Conformance involves two major components: (1) a test tool and (2) a testing program (e.g., certification or branding). A testing program cannot exist without a test tool or test suite, but a test tool/suite can exist without a testing program. Not all specifications or standards need a testing program. Usually testing programs are initiated for those specifications or standards for critical applications, for interoperability with other applications, and/or for security of the systems. The decision to establish a program is based on the risk of nonconformance versus the costs of creating and running a program.
A conformance testing program usually includes:
Standard or specification
Test tool (e.g., tool, suite, and/or reference implementation)
Procedures for testing
Organization(s) to do testing, issue certificates of validation, and arbitrate disputes
A specification/standard that includes a conformance clause and a test tool are essential to defining and measuring conformance.
Generically, a conformance test suite is a collection of combinations of legal and illegal inputs to the implementation being tested, together with a corresponding collection of "expected results." If such a list is provided, the starting point for the development of the test suite is the list of assertions for the standard. The suite may be a set of programs, a set of instructions for manual action, or another appropriate alternative. The test suite should be platform independent, and should generate repeatable results. Development of the test suite may well be the costliest part of the conformance program.
A reference implementation is an implementation of a standard that is by definition conformant to that standard. Such an implementation provides a proof of concept of the standard and also provides a tool for the developers of the conformance test suite (by generating expected values, testing the test suite, etc.) A reference implementation has maximum value in the early stages of a conformance program.
The conformance testing policies and procedures should be spelled out before testing begins. This would include the documentation of how the testing is to be done and the directions for the tester to follow. Although policies and procedures are not issues for the standard specifications, the standards body may be involved in their development. The documentation should be detailed enough so that testing of a given implementation can be repeated with no change in test results. The procedures should also contain information on what must be done operationally when failures occur. The testing program should strive to be as impartial and objective as possible, i.e., to remove subjectivity of both the procedures and the testing tools.
Finally, the testing policy and procedure should identify and define the actions of the organization(s) responsible for conducting the tests, handling disputes, and issuing certificates (or brands) of validation.
Who does the testing? Some standards have no official testing organizations. They rely on self-assessment by the implementer (1st party testing) and acceptance testing by buyers. Others have qualified third party testing laboratories that apply the test suite according to the established policies and procedures. A testing laboratory can be an organization or individual, and can either be accredited from a formal accreditation organization such as NIST's National Voluntary Laboratory Accreditation Program (NVLAP) or recognized by the buyer, seller, and certificate issuer, as qualified to perform the testing.
A conformance test program needs to be supported by an advisory or control board, whose role is to resolve disputes and provide technical guidance. This board should be a body of impartial experts. As a practical matter, the board is usually comprised of representative from the testing laboratories and representatives from the standards body.
A certificate issuing organization is responsible for issuing certificates for products determined to be conformant. The decision to issue a certificate is based on the testing results and established criteria for issuing certificates. These criteria may or may not require implementations to pass 100% of the specifications. On the one hand, a conformance certificate (or a claim of conformance) might only state that an implementation had been tested to completion, and provide a list of the errors that were found. It would then be up to a purchaser to decide the criteria (how many or what kinds of errors) it wishes to use to make implementations eligible for purchase. On the other hand, the policy might be that a certificate is issued (or a claim of conformance is made) only if no errors are found. Often a hybrid of these is appropriate -- i.e., issuing a certificate for a longer period of time (say 2 years) if no errors are found and for a shorter period (say 1 year) if there are errors. At the end of the 1-year period, the implementer would have to correct the errors to renew the certificate. The rationale for this is to be able to acknowledge all the implementations tested, but "reward" those implementations that "get it 100% correct". Another issue is whether or not a certificate expires (e.g., good for 2 years or never expires). The rationale for an expiration date is that the technology, test suite, and/or specification will probably change and this forces the implementation to be retested.
While actual testing (1st or 3rd party testing) and certification (branding) can be carried out by various organizations, it is essential that there be a centralized sponsor or owner of the conformance testing program. The sponsor has a fundamental interest in ensuring the success of the program. Typically, the sponsor establishes and maintains the program. It assumes responsibility for insuring that the components of the program are in place and becomes the centralized source for information about the program.
Lisa J. Carnahan
Information Technology Gaithersburg, Md. 20899
Email: lisa.carnahan@nist.gov
Prepared by Robin Cover for The SGML/XML Web Page archive. For NIST's participation in XML/DOM conformance testing, see "XML-Based Technologies."
Making the Switch-06: Software Management is not that different
zridling
What's the single biggest advancement Linux has brought to the industry? Package management. Or more specifically, the ability to install and upgrade software over the network in a seamlessly integrated fashion — along with the distributed development model package management enabled. The distro (short for "distribution") model for GNU/Linux has componentized the OS, and blurred the line between applications and OS. You choose a distro based on what you need to perform your own tasks, how you like to work, what hardware is available, and of course, your environment (scientific, gaming, video editing, database work, a desktop that closely mimics Windows, etc.). When you get down to it, Ubuntu is not an OS as much as a complete software "package" or set of components built on top of the Linux Kernel, in Ubuntu's case, Debian.

This is good news, because unlike Microsoft, which takes years to bring out another version of Windows, it becomes far easier to push new innovations out into the marketplace and generally evolve the OS over time. For many distros, this is every six months, which means keeping up with innovation, and [software] package management is the key to hold it all together.

Let's start with what we know. With Windows, you visit download sites like FileForum, Portable Freeware, osalt.com, File Hippo and others to search for or download the latest. In many Windows programs, you can also set them to automatically check for updates each time you open them, or you can use a program like WebSite-Watcher to scan the web for page updates of selected programs. Windows itself has long had its own updater which maintains the OS with Windows Update.

GNU/Linux is somewhat similar, only the process is almost entirely automated for system, drivers, and user-installed software. Like Windows, there are thousands of programs, many of which aren't that good, or a percentage of which has been abandoned. The most common open source repository is SourceForge.net, which hosts over 153,000 projects! Even DonationCoder.com hosts its own star coders at DonationCoders. Each distro comes with an installed base of applications. "Small" distros like Puppy Linux limit this to a bare minimum, allowing the user to build their own system's software. Larger distros like Ubuntu, SLED 10 (SUSE Linux Enterprise Desktop), or Fedora can install as much or as little as you want during setup. It's up to you to decide which is you. Here's an example, along with the configure subdialog to the right, of automatic updates in a GNU/Linux distro:

Within each distro (a specifically built copy of Linux) there is a [software] package manager. By default, many package updaters are set to scan, download, install, and clean up new versions of the software already installed on your system. In many distros, including Fedora and SUSE, they use a specific format called RPM, which originally stood for "RedHat Package Manager"; now, however, it stands for RPM Package Manager. For Ubuntu and Debian-based Linux distros, they use .DEB packages. Also, each distro manages software via a package manager such as APT (Advanced Package Tool), YUM (Yellow dog Updater, Modified), YaST (Yet Another Setup Tool), and SMART (supports RPM, DEB and Slackware packages on a single system, but does not permit relationships among different package managers). Each of these allows you to manage RPM/DEB packages via either the command line or a GUI. I'll use SUSE Linux Enterprise Desktop as an example.
The easiest way is to use the graphical application YaST2 (Yet Another Setup Tool). It can handle all your software management, can resolve dependencies, check for updates, and even use mirrors on the fly because Novell is using a dynamic referrer on the main download address. YaST safely guides you through the installation procedure, or if you want, you can set all software, both system and user-installed, to update automatically (with various conditions, such as final versions, with or without alerts, and so on). YaST (and this is true for other installers mentioned above) automatically detects the available system hardware and submits proposals for driver updates, even proprietary ones.

YaST et al. are also reliable aids for user administration, security settings, and the installation of additional software. For instance, with the respective YaST module and Samba, even Linux n00bs like me can easily network Linux and Windows hosts. Graphical dialogs facilitate the configuration of DNS, DHCP, and web servers in the home network. If you want to add another source to your updater (YUM, YaST, etc.), click Software and there start the module "Installation Source." There you will find a list of all currently configured package repositories. Click the Add button to add another one. Click on the button and choose the protocol you want to use (usually http or ftp). After that, enter the source line into the first field. There are lots of software repositories around, but you have to make sure that they work with your distro is all. It's not a big deal, much like checking whether a Windows app will run on Vista or on an older OS like Win98.

Just like Windows' Add/Remove Software (or Control Panel > Programs and Features in Vista), open the YaST control center, select Install and Remove Software. A second window will open from which you can search for a particular package. Let's say you wanted to install a video conferencing application, but you didn't know what the application was called. Enter the word "video" in the search field and all of the packages that have video in either their package name or description will appear in the window to the right (you can specify other search criteria on the page). Click on a package name and a description of the software will appear in the tabbed 'Description' window in the right lower half of the screen (check the figure below). If this is the package you want, click on the check box next to the package name, then click the accept button in the bottom right-hand corner. Should there be dependencies associated with the package you chose to install, a popup window will appear informing you of this fact. Click Continue and the installation will proceed. That's all there is to it!

________________________________________________
Part-01: My journey from Windows to Linux
Part-02: Which Linux distro to choose?
Part-03: First impressions and first problems after installation
Part-04: The "User Guide" as life raft, more n00b problems
Part-05: Ten Great Ideas of GNU/Linux
Part-06: Software Management is not that different
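If you'd rather script these steps than click through YaST, the same operations can be driven from the command line. The sketch below is a rough illustration only: it assumes a SUSE-style system with zypper on the PATH and root privileges, and the repository URL, alias, and package name are placeholders (APT or YUM users would substitute their own tool and arguments).

```python
# Minimal sketch of driving a package manager from a script; repository URL,
# alias, and package name are placeholders.
import subprocess

def zypper(*args):
    """Run a zypper subcommand non-interactively and fail loudly on errors."""
    return subprocess.run(["zypper", "--non-interactive", *args], check=True)

# Add an installation source, refresh metadata, then search and install.
zypper("addrepo", "http://example.com/repo", "example-repo")
zypper("refresh")
subprocess.run(["zypper", "search", "video"])            # like YaST's package search
zypper("install", "some-video-conferencing-package")     # placeholder package name
```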
« Last Edit: July 29, 2007, 03:02:08 PM by zridling »
- zaine (on Google+)
Latest AACS revision defeated a week before release
The latest AACS revision has already been cracked a week before its official …
Despite the best efforts of the Advanced Access Content System (AACS) Licensing Administration (AACS LA), content pirates remain one step ahead. A new volume key used by high-def films scheduled for release next week has already been cracked. The previous AACS volume key was invalidated by AACS LA after it was exposed and broadly disseminated earlier this month. The latest beta release of SlySoft's AnyDVD HD program can apparently be used to rip HD DVD discs that use AACS version 3. Although these won't hit store shelves until May 22, pirates have already successfully tested SlySoft's program with early release previews of the Matrix trilogy.

AACS LA's attempts to stifle dissemination of AACS keys and prevent hackers from compromising new keys are obviously meeting with extremely limited success. The hacker collective continues to adapt to AACS revisions and is demonstrating a capacity to assimilate new volume keys at a rate which truly reveals the futility of resistance. If keys can be compromised before HD DVDs bearing those keys are even released into the wild, one has to question the viability of the entire key revocation model. After the last AACS key spread far and wide across the breadth of the Internet, AACS LA chairman Michael Ayers stated that the organization planned to continue clamping down on key dissemination, despite the fact that attempts to do so only encouraged further dissemination. In a monument to comedic irony, the AACS LA has elected to put out the fire by pouring on more gasoline.

AACS clearly has yet to stop those determined to break the DRM scheme from copying movies, but its key revocation model does create additional burdens for device makers, software developers, and end users. As the futility of trying to prevent copying continues to become more apparent and the costs of maintaining DRM schemes escalate, content providers will be faced with a difficult choice of whether to make their content more or less accessible to consumers. We are already seeing the music industry beginning to abandon DRM, but it doesn't look like the movie industry is ready to take the same logical step. Instead, the MPAA wants to have the best of both worlds by making DRM interoperable and designing it in a manner that, according to MPAA head Dan Glickman, will permit legal DVD ripping "in a protected way." Although the MPAA's plans for DRM reform could reduce the incentives for hacking AACS, the war between hackers and DRM purveyors will continue for the foreseeable future.
Insekt
"This Article is about Insekts the in-game monsters, you may be looking for the Comic."
Monster Concept Art.
Insekts are XANA monsters that have appeared only in the games Code Lyoko DS: Fall of X.A.N.A. and Code Lyoko: Quest for Infinity. Insekts look like hornets except for being bigger, stronger and having black shells. In the DS game, they appear to reside mainly in the Forest Sector. Their main weapons are exploding mines similar to those of a Manta, except that they do not require contact to explode and they also don't home in on targets. The mines are simply aimed at your position and shot there. Most of the time, one is able to evade the mines, which causes them to fall on the ground and explode in a matter of seconds; this, however, becomes harder to do when several Insekts are throwing mines at you at the same time.
Another reference to the fact that they originate in the Forest sector is that the boss of the Forest Translation bore a striking resemblance to the Insekts. Other than that, they are also seen in other sectors such as the Desert sector. Like most other monsters, to destroy them one must target the XANA symbol on them; it doesn't blow up from the first impact, though: the first hit causes it to fall to the ground, and the second hit destroys it. While flying, the monster is unreachable with Ulrich's Sabers; it can only be shot down by O
D-Lib Magazine
Volume 5 Number 1, ISSN 1082-9873
A Common Model to Support Interoperable Metadata
Progress report on reconciling metadata requirements from the Dublin Core and INDECS/DOI Communities
David Bearman Archives & Museum Informatics dbear@archimuse.com
Eric Miller OCLC Online Computer Library Center, Inc. emiller@oclc.org
Godfrey Rust
godfreyrust@dds.netkonect.co.uk
Jennifer Trant
Art Museum Image Consortium
jtrant@amico.net
Stuart Weibel OCLC Online Computer Library Center, Inc. weibel@oclc.org
The Dublin Core metadata community and the INDECS/DOI community of authors, rights holders, and publishers are seeking common ground in the expression of metadata for information resources. Recent meetings at the 6th Dublin Core Workshop in Washington DC sketched out common models for semantics (informed by the requirements articulated in the IFLA Functional Requirements for the Bibliographic Record) and conventions for knowledge representation (based on the Resource Description Framework under development by the W3C). Further development of detailed requirements is planned by both communities in the coming months with the aim of fully representing the metadata needs of each. An open "Schema Harmonization" working group has been established to identify a common framework to support interoperability among these communities. The present document represents a starting point identifying historical developments and common requirements of these perspectives on metadata and charts a path for harmonizing their respective conceptual models. It is hoped that collaboration over the coming year will result in agreed semantic and syntactic conventions that will support a high degree of interoperability among these communities, ideally expressed in a single data model and using common, standard tools.
Introduction
The Dublin Core Element Set (DC) was defined to support information discovery in the networked environment. The name "Core" indicates an assumption that DC will coexist with other metadata sets. Perhaps because of the focus on defining a simple semantic for the DC element set, it sometimes appears that DC is in conflict with other uses of metadata (such as those of intellectual property holders). However, analysis has shown that many shared requirements exist. To enable the identification of common requirements, the various users of metadata need a common conceptual model. An IFLA semantic model, presented in "Functional Requirements for the Bibliographic Record1," provides a meeting point for what had at first seemed two very different perspectives. This article provides an outline for promoting consensus on the structure and relationship of Dublin Core metadata and Rights Management metadata, but our hope is that this structure may be appropriate for groups and projects beyond those currently involved in these discussions as well.
Working together to define common requirements and articulate common metadata structures benefits both communities. The same information objects (books, journals, articles, sound recordings, films, and multimedia, in physical or electronic form) are encountered in traditional library and commercial environments2. Indeed these two worlds intersect continuously when resources are acquired for library collections. A shared standard for core discovery metadata that integrates easily into future information use models has great benefits for interoperability across domains and for information interchange.
I. Background
I.a The Dublin Core in 1998
Readers of D-Lib will be familiar with the process that led to the Dublin Core Element Set. Focussed on requirements for improved information discovery in the networked environment, the DC initiative grew out of a process of debate and consensus building3. The DC effort has attracted a broad cross section of resource description communities, including libraries, museums, government agencies, archives, and commercial organizations. International interest has led to translations of the Dublin Core to 20 languages thus far. It has drawn its strength from the diversity of communities represented, and the ability to bridge many differing perspectives and requirements. What began as an effort to identify simple semantics for resource discovery has generated wide recognition of larger information architecture issues.

Well before the 5th Dublin Core workshop in Helsinki (October, 1997),4 it was recognized that the Dublin Core needed to accommodate more specific metadata than could be supported by the basic model of 15 elements containing strings of text as values.5 The semantic refinements of qualifiers and substructure, present to some degree in all DC applications, required a more sophisticated underlying representation. Using "dot notation" in HTML6 to achieve this has helped to jump-start many early applications of Dublin Core, and is in some ways responsible for progress made so far. However, the inherent structural limitations of HTML assured that broad-scale interoperability and interchange of structured metadata would never be satisfactory for anything but very simple resource description. The broad adoption of at least some qualifiers in virtually all Dublin Core applications to date attests to the wide acceptance that qualifiers are a necessary addition to the resource description landscape. Implementers need the additional power of specification and substructure that qualifiers afford. It is clear that a conceptual model is necessary to support the broad spectrum of semantic requirements expected by implementers (from very simple to highly structured). In addition, the conceptual model needs to recognize that the metadata requirements are not static and must allow for graceful changes as the DC standard evolves.

A Data Model Working Group7 was constituted following the Helsinki meeting to address these issues. It was in part the efforts of this group that has highlighted some inconsistencies in the formulation of the basic element set. For example, the element "Source" records a specific type of "Relation" between two information objects; the elements "Creator", "Publisher" and "Contributor" can be usefully thought of as types of Agents.8 Several groups running workshops on how to use Dublin Core metadata with practitioners also reported confusion among users as to how to choose between these elements, suggesting that there are practical as well as theoretical reasons to revisit the structural foundations of the Dublin Core.
At the same time as these more sophisticated requirements began to be expressed within the DC Community and a model was being developed to support them, the Dublin Core Initiative was criticized by others concerned with the limitations of basic DC semantics for expressing their requirements. These limitations were of particular concern when DC metadata is seen as a core drawn from a broader metadata universe designed to support particular applications or uses that may begin (but do not end) with information discovery. Thus the challenge to the DC community is how simultaneously to support the current users of DC while ensuring that the development of DC enhances interoperability between metadata sets and communities.
I.b Rights and Publishing Metadata in 1998
In the first half of 1998 a number of leading international intellectual property rights owners' organizations came together with a common technical aim. They had recognized that in the digital environment the traditional distinctions between market sectors (such as records, books, films, and photographs) were collapsing into a common "content" environment where different rights were traded (or, where appropriate, given away freely) in increasingly complex ways. The various identifiers used by these sectors9 needed to work together, with their accompanying metadata.
The INDECS (Interoperability of Data in E-Commerce Systems) project was established to integrate a number of sectoral initiatives. These include the copyright societies' CIS (Common Information System) plan, the record industry's ISRC and MUSE project, the audiovisual ISAN initiative, the text publishing industries' ISBN and ISSN and the Digital Object Identifier (DOI). It is designed as a "fast-track" process, to develop a number of specific common standards and tools that enable interoperability of identifiers and metadata.
The INDECS project has the backing of the international trade bodies of many major content provider groups including IFPI (recording industry), CISAC and BIEM (copyright societies), IPA and STM (book publishing), IFRRO (reprographic rights) and ICMP (music publishers), all of whom are affiliated, and constituent members of which are the active partners. Its sponsors also include the International DOI Foundation. It has a formal evaluation procedure including an open conference in London in July and culminating in a final approval committee to be held at WIPO in Geneva in October 1999. The INDECS project is concerned with the same resource discovery elements as Dublin Core, but in addition embraces metadata for people (human and legal) and intellectual property agreements and the links between them. Its basic model has evolved from the copyright societies' CIS plan, initiated in 1994 and the initiator of the ISO proposals for the International Standard Work Code (ISWC) and International Standard Audiovisual Number (ISAN). INDECS has a simple rubric expressing its underlying commerce model: People Make Stuff, People Use Stuff, and People Do Deals About Stuff. Each of these three primary entities and the links between them -- People ("Persons", whether natural or legal), Stuff ("Creations", whether tangible, spatio-temporal or abstract) and Deals ("Agreements" between persons about the use of creations) -- requires unique identifiers and standardised descriptive metadata. Based on work already established within the CIS Plan, INDECS has developed a first draft of its detailed generic metadata model integrating these three entities in a way that may be applied to any type or combination of creation or agreement. The model also attempts to elaborate a comprehensive set of high-level semantics for rights-based metadata. Some of the work of the project will be taken up with the mapping of the model and its semantics against the various constituent identifiers and metadata sets.
In preparation for the project, which began formally in November 1998, the INDECS technical co-ordinator Godfrey Rust voiced concerns about the limitations of DC that were published as part of an article in the July 1998 D-Lib [http://www.dlib.org/dlib/july98/rust/07rust.html] that also outlined elements of the INDECS model. Weaknesses in the unqualified fifteen element Dublin Core, as seen from the perspective of INDECS requirements, include the arbitrary separation of Creator and Contributor, the ambiguity of Publisher, the confusion of Source and Relation, the limited definitions of Date and Rights, the absence of a "Size" element (an "Extent" in INDECS terminology), the vagueness of the DC definitions, and a bias in its terminology towards text-based resources. More significantly, given the desire to see a discovery element set within a larger process, Rust felt the Dublin Core did not recognize the inherent complexity of "creations" that are commonly made up of a network of constituent elements, each with its own "core" metadata and rights. The flat fifteen elements of DC, which are important in its widespread acceptance, may be too limiting in some important contexts.
But the heart of the matter lay in another kind of dependency. One thing that had emerged clearly from the CIS/INDECS model is that effective rights metadata ("agreements") are heavily reliant on well-structured descriptive resource metadata of the kind identified in the DC core set, and set out in related but somewhat different terms in the INDECS analysis. Without unambiguous identifiers and classifications, rights cannot be securely protected or granted. The terms of agreements are dependent (among other things) on the identity and characteristics of the works owned or traded. For example, the precise role of a Contributor, Creator or Publisher, the Date and Place of creation or publication, the Format, Genre or Subject of a work are all elements that may affect rights ownership, licensing or royalty entitlement. Because this descriptive metadata is an integral part of rights metadata, Rust argued the two have to be tightly-structured to interoperate. Rights owners need well-structured metadata for their electronic resources, but it does not make sense to develop a separate standard. No one wants to maintain parallel sets of metadata for different functions, particularly when the function of discovery is so closely tied to commerce and other uses. Rights owners, like all those involved in metadata management, are seeking the best models for creating, storing, using and re-using metadata for many purposes.10 Despite the apparent difficulties, INDECS, not wishing to re-invent existing wheels, would much prefer to find common ground now with DC than face a proliferation of overlapping standards in the future.
Harmonisation should also bring substantial benefits for the DC community, which requires "people" identification systems and licensing mechanisms as much as the creators and rights owners. The models are neutral on the matter of rights and commercial terms: whether usage is free or paid for, the chain of rights enabling the user to use a given resource requires identification, for the security of user and owner alike.
I.c Seeking Rapprochement
The Web environment offers the possibility for the general distribution of authoritative metadata, associated with a resource at the time of its creation. Organizations could be attracted to using the Dublin Core for discovery (even if their primary interest is to create metadata for data exchange, income collection or rights management) if a broad standard for descriptive metadata in the Web environment supported the activities that are the motivations for discovery. The Warwick Framework, which grew out of the second Dublin Core workshop, already reflected the DC community's recognition of the need for distinct but inter-related metadata packages that enabled interoperation beyond discovery alone.11
The opportunity to seek common solutions with other groups who are working on metadata, including rights holders, and in the process to clarify implications of the DC model for Dublin Core users, led to discussions between representatives of the DC and INDECS/DOI communities at DC-6. Initial technical discussions (in September of 1998) revealed many commonalities in requirements, including the likelihood that each could be expressed using the Resource Description Framework (RDF). A subsequent meeting immediately prior to DC-6 further strengthened these understandings and began to identify specific semantics that would benefit from mutual attention. The results of these discussions were well received at a joint meeting of the DC-Technical Advisory Committee and DC-Policy Advisory Committee immediately prior to DC-6. A presentation to a plenary meeting of DC-6 itself introduced the collaboration to the DC community. During and after the meeting, the authors continued to explore areas of possible commonality. A more fully elaborated version of a possible framework was presented to the Data Model Working Group immediately following the DC workshop, to ensure that the semantics under discussion could find common expression. Thus the concepts described in this paper represent work in progress, which will require detailed scrutiny by the DC, DOI, and INDECS community.
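As a small illustration of the RDF point, the sketch below uses the third-party Python rdflib library to make a few Dublin Core statements about a single resource and serialize them as RDF/XML; the resource URI and the literal values are invented for the example, not drawn from any actual record.

```python
# Minimal sketch of Dublin Core statements expressed as RDF with rdflib
# (a third-party library: pip install rdflib); URI and values are placeholders.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

resource = URIRef("http://example.org/resources/1")

g = Graph()
g.bind("dc", DC)
g.add((resource, DC.title, Literal("An Example Title")))
g.add((resource, DC.creator, Literal("An Example Author")))
g.add((resource, DC.date, Literal("1999-01")))

print(g.serialize(format="xml"))     # RDF/XML; other serializations are available
```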
II. A Logical Model of Common Semantics
II.a Developing a Logical Model
The start point for a common understanding which began to emerge at DC-6 between the DC and INDECS/DOI approaches was a shared appreciation of a third piece of metadata analysis: the recently published, but long gestating, IFLA Functional Requirements for the Bibliographic Record (FRBR). The INDECS entity relationship analysis, although broader in scope as it embraced rights as well as resources, had already quite independently reached similar conclusions to the FRBR, and many in the DC community had recognised the FRBR's potential value.
Translating both the INDECS requirements and the DC requirements into the IFLA model provided the framework of a common logical expression for the two perspectives. It reveals a substantial, probably complete, overlap between the scope of INDECS metadata requirements (in relation to resources) and the scope of metadata requirements in Dublin Core and its emerging qualifications. Common semantics can be identified for each metadata element. More detailed, application-specific semantics will need to be articulated, though, not just for these two broad communities, but for other communities and for implementations within them. We believe that by sharing a common higher level semantic framework, both communities can support a high level of interoperability.
II.b The IFLA Model
The working method we adopted was to use the IFLA FRBR model as a common logical model for metadata. This model has as its focus an information resource in one of four "states". In this model, an original creator conceives a WORK. The work is an abstraction. It must be realized through an EXPRESSION. The same work may be realized through numerous expressions -- as when two directors produce the same WORK. Each EXPRESSION may be embodied in one or more manifestations, as when we have the printed script, the video of a performance of the play and an audio-CD of the performance of the musical numbers. When MANIFESTATIONS are mass-produced, each MANIFESTATION may be exemplified in many ITEMS (typically called copies). Many WORKS find a single EXPRESSION in only one MANIFESTATION. Some very successful WORKS are expressed in many genres and/or performed at many times and may be produced in numerous MANIFESTATIONS. Many unique information resources, such as museum objects, or natural specimens are not "copies" of mass produced manifestations. We believe that with further careful thought and consultation the DOI/INDECS and DC communities can agree fully on fundamental categories of information resources based on this analysis.
FIGURE 2: Works, Expressions, Manifestations and Items [based on IFLA FRBR Figure 3.1] [Cardinality is expressed here with arrows.]
We proposed two significant clarifications to the FRBR analysis for our purposes. The first was to recognise that an item is a type of manifestation, albeit an extremely important one. The second overcomes, we believe, what had seemed a common confusion about the nature of an Expression. An expression, we propose, is a creation which exists in space and time (identified, if it is public, as a performance), but not in tangible form. The INDECS model provides a number of ways of describing the structural differences between the main creation types:
WORK: Concepts, thoughts, ideas ("I conceived it")
EXPRESSION: Spatio-Temporal ("I did it")
MANIFESTATION/ITEM: Atoms, bits ("I made it")
This analysis falls out naturally to the INDECS/DOI community, as each of these "creation types" gives rise to a different set of rights. Perhaps the clearest example is of an audio or audiovisual CD (manifestation) which contains recordings of performances (expressions) of songs (abstract works). Each of these entities has different identities (UPC/EAN, ISRC, ISWC) and metadata and, commonly, different rights owners. In textual or static visual works the identification of the "expression" which gave rise to the manifestation of the resource (the act of creating this draft or version) may be less evident or important, but our analysis at this stage suggests that it is no less real or central to an understanding of the effective organisation of metadata.
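To make the four states concrete, here is a small, hypothetical Python sketch of the Work / Expression / Manifestation / Item chain, populated with the audio-CD example above. The class and field names are our own shorthand for illustration, not a normative schema.

```python
# Hypothetical sketch of the Work / Expression / Manifestation / Item chain;
# class and field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Work:                    # abstract: concepts, thoughts, ideas
    title: str

@dataclass
class Expression:              # spatio-temporal: a performance, a recording session, a draft
    realizes: Work
    description: str

@dataclass
class Manifestation:           # tangible: atoms or bits
    embodies: Expression
    form: str
    identifier: str = ""

@dataclass
class Item(Manifestation):     # a single copy, treated here as a kind of manifestation
    copy_number: int = 1

song        = Work("An example song")
performance = Expression(realizes=song, description="Studio recording, 1998")
cd          = Manifestation(embodies=performance, form="audio CD", identifier="UPC 000000000000")
my_copy     = Item(embodies=performance, form="audio CD", identifier="UPC 000000000000", copy_number=42)
print(cd, my_copy, sep="\n")
```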
The two communities have no common view as yet about the adequacy of the second part of the IFLA model, which relates the bibliographic resource to people and organizations responsible for its creation, expression, distribution and ownership. The IFLA model [expanded further in figure 3 below] reflects the library community's traditional interest in the creation of works and the ownership of physical works. It does not adequately represent the interest of the INDECS community in the ownership of intellectual property rights. Intellectual property rights arise from initial creation (though they may depend on what specific kinds of creation -- origination, compilation, excerpting, media transformation, replication), but they can be transferred from one person or organization to another independently of the ownership of a physical item.
FIGURE 3: People, Organizations, Works and their Expressions [based on IFLA FRBR Figure 3.2]
II.c Enhancing the IFLA Model
We propose the explicit representation of an ACTION as an enhancement to the IFLA model. This is missing in the IFLA model, but is needed to recognize the transferability of ownership (of both rights and objects) and to acknowledge the multiplicity of roles that Agents play with respect to information resources. Figure 4, below, shows ACTIONs within the model, taking place in a specific time and place.
FIGURE 4: Integrated Expression of DC and INDECS Semantics for Agents, Actions, and Resources
ACTIONS support another element missing in the IFLA model, the role instruments may play in the creation and dissemination of information resources. People have created many devices (such as the Hubble Space Telescope, weather satellites or robocams) that automatically create, or automatically distribute, information resources. It is not sufficient, however, to articulate only who created, realized, produced or owned a work, expression, manifestation, or item, or the rights thereto. Both communities are further interested in describing what IFLA calls the "Subject" Relationship attributes of WORKS. The IFLA model recognizes that WORKS may have as their subject:
1) works, expressions, manifestations or items
2) persons, organizations and instruments
3) concepts, objects, events, or places
The DC community has gone a step beyond "Subject Relations" (something is "about" something else) and records what might be called "Object Relations" (something is "made of", or "part of" something else). In some communities dealing with informational resources, particularly those concerned with images, museum objects, and archives, these "of-ness" relations are extremely important. They fall into two classes -- provenance and coverage.
With these extensions of the IFLA model, we feel that we have a strong basis on which to construct any divergent, discipline- or domain-specific, metadata sets including extensions to DC and INDECS. The common model enables us to share what we feel we hold in common.
FIGURE 5: A High Level Model for DC and INDECS Semantics
III. Using the Model
We've been pleased to find that this simple, high-level model is capable of explaining quite a few things that were intriguing or troubling to the DC community before it was developed. Some of these questions are discussed below, as examples. For the INDECS community, some of whose members have identified resources in categories similar to these for many years as different rights and rightsholders apply to the different levels, the analysis provides an invaluable point of convergence with the bibliographic tradition.
III.a Relations
At and following the Dublin Core workshop in Helsinki, a "Relations Working Group" considered a wide range of possible relations between information resources. Following an analysis, the group arrived at five reciprocal relations:12
- References / Is Referenced By (to point to other information resources)
- IsBasedOn / IsBasisFor (to express intellectual derivation)
- IsVersionOf / HasVersion (to express historical evolution)
- Is Format Of / Has Format (to identify transformations of media or layout)
- Is Part of / Has Part (to record Part/Whole)
During the DC/INDECS discussions, and in efforts to articulate a representation in XML/RDF, other relations were determined to be required, including:
- Is [to express that something is original or sui generis]
- IsMetadataAuthorOf / HasMetadataAuthoredBy [to name the creator of the metadata]
- IsDefinitionOf / IsDefinedBy [to point to the URI of the definition of the semantics]
- IsOwnerOf / IsOwnedBy [to name the owner/repository with custody of a physical thing]
None of these "Relations" require any further elaboration of the model, for each is, in some way, a reflection of the extensions we made earlier to the IFLA model:
- Is reflects our introduction of the concept of natural, or non-bibliographic, items.
- IsMetadataAuthorOf reflects the need for the explicit recording of the person or organization responsible for the content of a metadata set.
- IsDefinitionOf reflects the need for metadata to be specifically defined within the context of a particular intellectual schema; it exists in an information universe and needs to carry the address of its name space.
- IsOwnerOf reflects the need to track physical custody separately from intellectual property rights (the latter are described by the fifteenth DC element, and aim to be comprehensively covered by the "agreements" structure in INDECS/DOI).
This articulation of relationships and their types helps resolve several issues within the Dublin Core community, five of which came to the fore during the most recent workshop in Washington (each discussed further below):
1) the problem with separate metadata elements for Creator, Contributor, and Publisher, as all are Agents related to Works, Expressions, Manifestations or Items
2) the confusion between Genre and Format, as both are Form, related to either Work/Expression or Manifestation/Item
3) the many qualifiers that have been proposed for Date, as a Work, its Expression, a Manifestation and an Item can each have a particular Date
4) the apparent redundancy of the element "Source", as Source is expressed more clearly as a particular Relation
5) the reasons for the 1:1 relationship between metadata and an information resource and why application of this principle seems to lead to confusion.
III.b Agents
The terms Creator, Contributor and Publisher are often used as if they roughly correspond to the relationships between an Agent and a Work (he wrote this play), an Agent and an Expression (she acted in this play), and an Agent and a Manifestation (they published this script). However, the Creator, Contributor, Publisher distinction does not provide a fine enough classification to specify such vastly different roles as editor, composer, or actor, or distinguish these roles from others with a lesser creative impact on a manifestation such as typographer, foundry, or audio engineer. Nor do Creator/Contributor/Publisher permit us to specify the owner and rights owner(s) of an item. Finally, without a fairly tortured usage, they do not allow us to identify devices involved in creation and dissemination of information resources. By combining the various types of Agents, we can assign them roles from a vocabulary scheme that has all the variation required in each application.
III.c Form (DCType and Format)
As recognized in the IFLA model, the concept of "Form" is problematic unless we are careful about whether we are speaking of the form of the "Work", the "Expression", the "Manifestation"/"Item". English had to borrow a concept for the "form" of a work or expression from the French language term genre. These kinds of descriptors have been referred to in Dublin Core discussions as suitable for recording in DCType. The "physical form" of a manifestation or item is often called "format" and was assigned a separate element in the 15 element Dublin Core. Unfortunately, in English again, the actual terms used to name genre and format are often identical (such as photograph, drawing, film). Clarifying whether the term qualifies a work, expression, or a manifestation/item, can prevent confusion between homonyms, and aid in the distinction between these two DC elements.
III.d Date
The Dublin Core Date Working Group has heard arguments for many qualifiers to be applied to the Date element. Refinements have included date written, date performed, date published, date issued, date valid, date recorded, etc. All are dates of particular actions; describing when a person did something in relation to a resource. The IFLA model does not entirely eliminate this confusion, and cannot because it does not separately identify the entity "Action."13 If Actions are explicitly documented as part of the model, unqualified dates can be explicitly tied to acts of creation, arrangement, performance, dubbing, release, etc. This is much clearer than seeing these dates as properties of the information resource itself, where they lose their meaning without qualification.
III.e Source
Before the articulation of the "Relation Types", the Dublin Core workshops defined a metadata element "Source" as: "A string or number used to uniquely identify the work from which this resource was derived, if applicable. For example, a PDF version of a novel might have a SOURCE element containing an ISBN number for the physical book from which the PDF version was derived." The Source element could therefore record any of the relations IsFormatOf, IsBasedOn and IsVersionOf. The example given actually refers to the Identifier (ISBN) of a Manifestation (physical book) from which another Manifestation was derived. But the "Identifier" of the PDF file could be considered to be the same ISBN. More accurately, we need to indicate that the physical book is a Manifestation of a particular Expression of a Work, while the PDF file is a different Manifestation of the Manifestation (physical book).
III.f "1:1"
One of the most difficult issues for implementers of the Dublin Core has been defining logical clusters of metadata (called "metadata sets" during the Helsinki workshop). A "logical cluster of metadata" refers to one occurrence of the fifteen repeatable DC metadata elements, referring to a particular state of an information resource. In discussions at the Helsinki DC meeting, we found this concept of metadata clusters or sets clarifying and avoided confusing it with "metadata records" -- e.g., the physical record in a specific implementation, which could contain many logical clusters of metadata (as a nested hierarchy for example). Each cluster references one, and only one, state of the information resource -- the elements that it contains describe that instance. We referred to this as the 1:1 principle, and the resulting clusters as embodiments of 1:1 structures.
An implementation system can respect 1:1 structures even when a single physical record in that system may describe the Work, its Expression, a Manifestation, and even other Works that are related to it. The model proposed here makes it clearer that multiple Agents and Actions may be related to a Work, its Expression, a Manifestation or an Item. The metadata about these relations must, therefore, be kept discrete. Traditional bibliographic control systems have not handled well the implicit or explicit linkages among entities, primarily perhaps due to technological limitations. The Web, however, for which linkage is the dominant underlying metaphor, solves the technological problem. We now have the means to document and execute complex semantic relationships, but are without a body of best practice to guide us. The IFLA Model makes the logic of 1:1 structures clearer, and may help resolve this problem.
If metadata is exchanged using 1:1 expressions, different metadata authorities can create different parts of the metadata: the publisher can report publication metadata; the rights owner can report rights metadata; and the author or an information provider can report metadata about the work. Those who document a particular performance of a play or work of music can confine themselves to reporting the many individuals involved in this expression, and "point to" or reference metadata about the work or works performed. Music distributors who release the performance on CD, on tape, and in a RealAudio file, can confine themselves to describing these three Manifestations. Indeed, in considering a complex multimedia object containing hundreds or thousands of component "creations", it is impossible to see how adequate metadata could be assembled by any other means.
This recognition of the "modular dependence" of metadata lies at the heart of the INDECS project. Because each of the different resource types attracts different rights and rights holders, the "1-to-1" identification and description of components of a single creation has long been recognised as sine qua non for e-commerce metadata.
It is important to envision how these various metadata sets might be reassembled into a human readable form, or how a "record" might fully contain everything that a user might want to find out about a particular manifestation of a work. Depending on the local implementation, this information might be held in a single "record" or a number of related (or hierarchically structured) records. Each implementation will have a different interface, and there may be different views of the data for input and searching. But for metadata to be interchanged among systems unambiguously, the relations implicit in local systems must be made explicit (for not all implementations will be constructed on the same assumptions, particularly when interchange is cross-domain).
A common record syntax is required; both INDECS and the Dublin Core have been exploring the Resource Description Framework (RDF) as a means to carry complex metadata. In order to build tools that function predictably across millions of metadata sets created in different environments, these metadata must have the same structure and explicit relations. The proposed RDF expression of the Dublin Core (discussed further below) offers a syntax for recording complex relations. It ensures that one metadata element is about one logical or physical thing, and that the relation between that thing and other things is noted explicitly using Relation Types. It then becomes possible to "operate" or trigger actions based on these relations. A rigorous structure helps retrievability too, because it makes it possible to search for an individual whether they made a work or a part of a work. And it makes it possible to collocate all the possible digital manifestations of a work by tracing their relation back to the work.
The 1:1 rule, then, represents a logical simplification, but at the cost of system complexity. As tools to support RDF become widespread (a reasonable expectation, given the indications that industry support for RDF is increasing rapidly), implementing such systems will seem much less difficult. At this stage in the collaboration the possibility that the DC model may be extended to meet the requirements of the INDECS/DOI community, and in doing so enable for itself a new range of functionality, is an attractive consideration. The alternative -- independent and incompatible models -- is equally unattractive.
IV. A Common Syntactic Model - RDF
The Resource Description Framework (RDF), developed under the auspices of the World Wide Web Consortium (W3C),14 is an infrastructure that enables the encoding, exchange, and reuse of structured metadata. This infrastructure promotes metadata interoperability by helping to bridge semantic conventions and syntactic encodings with well defined structural conventions. RDF does not stipulate semantics, but rather provides the ability for each resource description community to define a semantic structure that reflects community requirements. RDF can utilize XML (eXtensible Markup Language) as a common syntax for the exchange and processing of metadata.15 The XML syntax is a subset of the international text processing standard SGML (Standard Generalized Markup Language) specifically intended for use on the Web. The XML syntax provides vendor independence, user extensibility, validation, human readability, and the ability to represent complex structures. By exploiting the features of XML, RDF imposes a structure that provides for the unambiguous expression of semantics and, as such, enables the consistent encoding, exchange, and machine-processing of standardized metadata.
RDF supports the use of conventions that will facilitate interoperability of modular metadata sets and their reuse within community defined semantics. These conventions include standard mechanisms for representing semantics that are expressed in a simple, yet powerful, formalism. RDF additionally provides a means for publishing both human-readable and machine-processable sets of properties, or metadata elements, defined by resource description communities.
The Dublin Core Data Model Working Group agrees that RDF provides adequate flexibility and functionality to fully support encoding DC metadata. Early implementation prototypes and the thinking of a substantial cross section of metadata implementers and theoreticians suggest that this assertion will stand up to the test of practical implementation. The broad scope and flexibility of RDF makes it desirable, however, to define conventions for structuring metadata to maximize the potential for interoperability.
The Dublin Core Data Model applies RDF as an enabling mechanism, providing consistent guidelines to support Dublin Core applications. It declares the semantics defined by the Dublin Core community using RDF Schemas. To avoid confusion, and be explicit when combining semantics from multiple domains, or "namespaces", a namespace identifier token (i.e., "dc") is prefixed to each element in order to identify it uniquely; the element <dc:Title> would be interpreted as "the Title as defined by the Dublin Core community". <dc:Title> could then appear in a number of different applications, maintained by publishers and libraries for example, and still be recognized as equivalent.
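As a rough sketch of what this looks like in an RDF/XML encoding (the resource URI, element values and serialization details below are illustrative only, not normative examples from the Dublin Core or RDF specifications):

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://www.example.org/report">
    <!-- each property carries the "dc" prefix, tying it to the
         Dublin Core namespace declared above -->
    <dc:Title>Annual Report</dc:Title>
    <dc:Creator>Example Organization</dc:Creator>
    <dc:Date>1998-12-01</dc:Date>
  </rdf:Description>
</rdf:RDF>

Any application that recognizes the Dublin Core namespace can interpret these three statements, whatever else the record may contain.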
Namespaces also facilitate meeting the requirement for additional substructure or qualification within DC. The Data Model Working Group recommends that qualifiers be managed under a separate, but closely related namespace: Dublin Core Qualifiers (DCQ). This compartmentalization will facilitate modular accommodation of different levels of Dublin Core. Applications accommodating only the 15 Dublin Core elements without qualifiers need recognize only the Dublin Core namespace; additional complexity can be provided in a modular fashion with the addition of DCQ.
FIGURE 6: RDF expression of Qualified and Unqualified Dublin Core, using dc:Date
RDF encourages this type of semantic modularity by creating an infrastructure that supports the combination of distributed attribute registries (i.e., multiple namespaces). Thus, a central registry is not required. This permits communities to declare element sets which may be reused, extended and/or refined to address application or domain specific descriptive requirements. The rights holding community can therefore declare an additional set of elements to support the detailed documentation of rights transfer agreements (or deals). This Deal-making metadata may make use of some DC elements, which could be recognized by DC applications. Building a modular rights package of metadata onto a DC discovery set would avoid redundancy, and enable a DC-aware application to discover where a particular work may be affected by rights restrictions. Resolving those rights would require an awareness of the rights metadata and the rules for its processing.
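A rough sketch of that arrangement (the "deal" element names and namespace URI below are hypothetical, invented only to illustrate the pattern -- they are not part of any published INDECS or DOI schema, and the other URIs and values are likewise illustrative):

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:deal="http://rights.example.org/deal/">
  <!-- discovery metadata drawn from the Dublin Core namespace -->
  <rdf:Description rdf:about="http://www.example.org/song">
    <dc:Title>Example Song</dc:Title>
    <dc:Rights>Copyright 1999 Example Music</dc:Rights>
  </rdf:Description>
  <!-- rights metadata declared by a different authority in its own namespace,
       attached to the same resource without redefining the DC elements -->
  <rdf:Description rdf:about="http://www.example.org/song">
    <deal:LicenseTerms>Streaming permitted; copying requires a separate agreement</deal:LicenseTerms>
  </rdf:Description>
</rdf:RDF>

A DC-aware application that knows nothing of the deal namespace can still harvest the discovery elements; a rights-aware application can process both packages and resolve the restrictions.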
V. The Process for Moving Forward
The Dublin Core Metadata Initiative evolved as a loose association of interested stakeholders and practitioners. As interest and recognition in DC has grown, and the constituencies that the initiative serves have broadened, it has become increasingly evident that a formal process is necessary to promote stability (by way of standardization) and support evolution as the effort matures. The goals of such a process include maintaining the coherence of the DC element set, while assuring both broad representation of the perspectives of various sectors and disciplines, and international input. The culture of the Dublin Core initiative is strongly imbedded in consensus building; the emerging DC Process Guidelines16 will validate that process.
The Process Guidelines being developed are based on the identification of work items from a wide variety of sources, and their assignment to open working groups. Proposals for solutions are vetted by open community discussion and ratified by a Technical Advisory Committee and a Policy Advisory Committee. The Dublin Core Directorate (based at OCLC) administers these committees and maintains a website that constitutes the repository of decisions and supporting documents.17
The integration of the conceptual data models for DC, INDECS, and IFLA is one of the first work items to be developed de novo under the new Process Guidelines. The Schema Harmonization Working Group represents a significant early challenge for this process, requiring as it will some changes to the Dublin Core which will have ramifications for existing applications. It will be necessary to manage changes to ensure that the functional requirements of each of the represented communities are accommodated, and the least possible disruption is made to current practice. As a practical matter, these objectives are likely to be met by engineering a new Dublin Core version that is backwardly compatible with existing applications, while supporting a more robust data model expressing the common requirements of DC and INDECS/DOI. Early discussions among the stakeholders suggest that this is a realistic goal.
Change inevitably causes uncertainty, and uncertainty leads to instability. The benefits and costs of such change must be considered carefully, and if changes are to be made, they must be introduced with the minimum possible disruption to existing applications. In the present case we may be reassured somewhat by the fact that the semantics of what we hope to be able to say about resources is not changing significantly. Rather, it is the structure of how assertions are made that is in question. The objective is to structure metadata assertions using a general model with greater flexibility than the implicit ad hoc model that has evolved in the past. The clarification of the model will allow the resulting metadata to be simpler and clearer (and thus, more likely to be consistent from application to application).
Recent Web history gives us cause for reassurance. HTML is deeply flawed as a markup language, mixing as it does procedural markup (where to put dots on the screen) with structural markup (what is the function of an entity). It is nearly useless for purposes of serious publishing, and could only be made to be parseable SGML through a year of contorted standardization effort. It is, nonetheless, one of the cornerstones of the success of the Web, and its evolution through several versions and metamorphosis into XML have brought to the Web a more mature markup idiom that will meet the demands of far more sophisticated applications than were practical just four years ago. It is significant to note that most Web software continues to do a reasonable job at rendering the early versions of HTML as well as its descendants.
The Dublin Core can be thought of as the HTML of Web metadata. It is crude by many resource description standards, an affront to the ontologists, and suffers some of the foibles of committee design. Notably, it is useful, and it is this useful compromise between formality and practicality on the one hand, and simplicity and extensibility on the other that has attracted broad international and interdisciplinary interest. The growing base of legacy applications need not be abandoned, even as more sophisticated applications are developed. Careful attention was invested in making the transition of HTML versions as smooth as possible. Similar attention will be paid to making changes in the Dublin Core minimally disruptive.
The process being established to govern the evolution of the Dublin Core Initiative is the guarantee that this transition can be made with the broad interests of the community always at the fore. The Dublin Core Initiative is at its heart an open and populist endeavor. It succeeds through the efforts of many dozens of committed stakeholders who work together in a co-operative spirit of compromise. The formalization of the effort is taking place with the goal of retaining this spirit of pragmatic consensus building.
VI. Conclusions
Libraries want to share content; publishers want to sell it. Museums strive to preserve culture, and artists to create it. Musicians compose and perform, but must license and collect. Users want access, regardless of where or how content is held. What all of these stakeholders (and more) share is the need to identify content and its owner, to agree on the terms and conditions of its use and reuse, and to be able to share this information in reliable ways that make it easier to find. An infrastructure for supporting these goals is now emerging, and the prospects for broad agreement on how to support these shared requirements have never been brighter. But new technology does not solve our problems or resolve our differences. It simply makes it possible for us to address the problems more clearly.
By exploring the development of a shared conceptual model, and its expression in a common syntax, the DC and INDECS/DOI communities have affirmed the value of creating and interchanging modular metadata sets. Just as creative works move through a process, from their creation, distribution, discovery, retrieval and use, so too will their metadata need to be interchanged in order to support these functions in an integrated environment of networked information. Simple resource description was among the primary motivations for embarking on the development of the Dublin Core. The idea of an intuitive semantic framework that anyone -- from creators of the work on the one hand to a skilled cataloger on the other -- could use to describe resources is appealing. The modeling described in this paper seems to fly in the face of this goal of simplicity. It is often the case, however, that simplicity can only be achieved through detailed engineering that helps to mitigate the underlying complexity of a problem. Lego(tm) is child's play. Any three year old can use them. But the interoperability of Lego blocks across 6 decades (and that satisfying "click" as they snap together) is the result of precise engineering to tolerances that approach those necessary for internal combustion engines.
The authors assert that coping with the broad spectrum of complexity expected of metadata applications in the future requires this same sort of engineering. A sound underlying model will help achieve the broadest possible interoperability, from simple embedded metadata on the one hand up through the intricacies of complicated, multi-functional descriptions. The potential benefits of reconciling these closely related sets of requirements are great. It will be difficult otherwise over the next few years to ensure that each community with somewhat distinct needs does not invent incompatible metadata models for achieving what are fundamentally overlapping objectives. Educators have proposed models, as have stakeholders in electronic commerce. It is of paramount importance that we identify common frameworks for exchange of information, and recognize where requirements converge. The needs of various stakeholders are often different, and frequently conflicting. Nonetheless, all may benefit when the frameworks for expressing descriptions are interoperable. The immediate audience for this paper comprises the current DC constituency more than that of INDECS or DOI, so it has concentrated more on the issues which this collaborative work raises for DC. However, in parallel with the DC process, the INDECS and DOI initiatives are on their own fast track, and while developing their technical proposals and consolidating consensus within their own communities, they face the same issues as DC with cross-sector co-operation. The needs, governance, objectives and timetables of each of our communities need to be respected by the others if common Web metadata standards are to be broadly successful.
The Dublin Core Working Group on Schema Harmonization will be working on these issues in conjunction with the INDECS project. The leaders of these groups will seek out the most effective mechanisms for co-operation. At the same time there are initiatives in other communities, including developments related to the DOI and other international standard identifiers. We welcome the participation of others interested in ensuring that our conceptual models and the tools developed from them support the interoperability we all desire in the networked information universe.
VII. Notes
[Note 1] <http://www.ifla.org/VII/s13/frbr/frbr.pdf>
[Note 2] For the purposes of discussion we have oversimplified the divisions between the "bibliographic" community of those who manage information resources after their creation, and the "rights owners" community of those whose interests are in the commercial exploitation of intellectual property. The lines between these groups are much less clear than are portrayed here, and most individuals and organizations will at some point play all the roles of information creator, information user, and information repository.
[Note 3] Regular reports of the DC workshops have been published in D-Lib; they can also be found linked to the DC Web site at <http://purl.org/DC/about/workshop.htm>.
[Note 4] See Stuart Weibel and Juha Hakala, "DC-5: The Helsinki Metadata Workshop: A Report on the Workshop and Subsequent Developments" in D-Lib, February 1998, <http://www.dlib.org/dlib/february98/02weibel.html>.
[Note 5] IETF RFC 2134.
[Note 6] Suggested in Canberra as "The Canberra Qualifiers." See <http://www.dlib.org/dlib/june97/metadata/06weibel.html>.
[Note 7] <http://purl.org/dc/groups/datamodel.htm>.
[Note 8] The decision to define three categories rather than identify roles within a more general category dates from the first workshop. In the absence, at that time, of any explicit method to express roles within a more general category, it was decided to sweeten broad acceptance of DC with what Terry Allen then called "syntactic sugar." Subsequent formalization of the data model, and the experience of a number of groups who have had difficulty teaching the distinctions to catalogers, led to the "Agents Proposal" which was discussed at DC-6. <http://www.archimuse.com/dc.agent.proposal.html>.
[Note 9] Such as the ISBN (International Standard Book Number), ISSN (International Standard Serial Number), ISRC (International Standard Recording Code), ISAN (International Standard Audiovisual Number), ISWC (International Standard Work Code) and the newly-developed DOI (Digital Object Identifier).
[Note 10] As Dublin Core was "the only game in town" for Web metadata, rights owners were beginning to view the Dublin Core as precisely that, and were either starting to use it inappropriately as a set of entities on which to model their databases, or to ignore it entirely because of its inadequacy for the same purpose.
[Note 11] See Lorcan Dempsey and Stuart L. Weibel, "The Warwick Metadata Workshop: A Framework for the Deployment of Resource Description," D-Lib Magazine, July/August 1996, <http://www.dlib.org/dlib/july96/07weibel.html>.
[Note 12] See the Relation Working Group report at <http://purl.org/dc/documents/working_drafts/wd-relation-current.htm>.
[Note 13] The IFLA document defines the attribute Date of the Work as: "The date of the work is the date (normally the year) the work was originally created. The date may be a single date or a range of dates. In the absence of an ascertainable date of creation, the date of the work may be associated with the date of its first publication or release."
[Note 14] The RDF has the status of a proposed recommendation of W3C, see: <http://www.w3.org/TR/PR-rdf-syntax/>.
[Note 15] RDF is a simple yet powerful data model predicated on the notion of triples (resource, property, value) which is based on classic research. Generally the utility of this abstract data model, however, requires some syntax for transmission across the wire. For a variety of reasons we (in this case the W3C) have chosen XML as this syntax, but this is not a requirement of RDF per se. Sony Research is using RDF for describing HDTV broadcasts using their own proprietary syntax. Nokia is additionally using RDF instance data in a compressed binary format for strictly bandwidth transmission issues, etc.
[Note 16] Available on the DC web site at <http://purl.org/dc/>.
[Note 17] <http://purl.org/dc/>.
Copyright © 1999 David Bearman, Eric Miller, Godfrey Rust, Jennifer Trant, and Stuart Weibel
Correction made to ISWC hyperlink and INDECS URL removed at the request of the authors, The Editor, January 19, 1999 9:10 AM.
DOI: 10.1045/january99-bearman | 计算机 |
2014-23/1195/en_head.json.gz/18215 | About Creature House
Creature House was founded in 1994 by Alex S C Hsu and Irene H H Lee.
Their vector graphics software Expression was built around research in Skeletal Strokes (presented at SIGGRAPH'94). As computer scientists and artists, Hsu and Lee understood the needs of their users; the Expression design featured a functional, intuitive interface, accompanied by a quirky, funny website.
Expression was initially released in September 1996 by publisher Fractal Design. It went on to win many awards. Creature House continued research into computer graphics, and further developed its software. Skeletal Strokes technology enabled animation sequences to have bold outlines and fluid movements. This was unlike any other kind of animation. Expression was on its way to becoming a major software tool for illustration and cel animation.
LivingCels was the last flagship product from Creature House. It used the same drawing engine as Expression, but added a sophisticated animation system. The beta release of LivingCels for Windows and Mac was available for only a few weeks, from August until October 2003. At the end of October 2003, Microsoft acquired the company, and the software vanished. | 计算机 |
2014-23/1195/en_head.json.gz/19109 | Fakers beware: no more MS updates for you
WGA goes live
Microsoft is no longer providing updates to non-genuine versions of its Windows XP operating system. From today, the company has switched over to a full launch of its Windows Genuine Advantage Programme as part of its ongoing anti-piracy campaign.Users will now have to join the WGA authentication program if they want to receive software updates from the Microsoft Download Centre or from Windows Update. However, MS says it will still provide security patches for pirated systems, which will be available via Automatic Updates in Windows.
To register for the WGA, users just need to visit the Microsoft Download Centre, Windows Update or Microsoft Update. There they will be prompted to download an ActiveX control that checks the authenticity of their Windows software and, if Windows is validated, stores a download key on the PC for future verification.Microsoft stresses that this process "does not collect any information that can be used by Microsoft to identify or contact the user".Back in January 2005, Microsoft extended the pilot scheme - which had been running in English since September 2004 - to include 20 different languages. It also broadened the kind of content available to participants. Microsoft says many of the 40 million people who signe | 计算机 |
2014-23/1195/en_head.json.gz/19374 | What Is a Multi-User Operating System?
Multi-user systems often require a network of servers and other components. Watch the Did-You-Know slideshow
G. Wiesen
A multi-user operating system is a computer operating system (OS) that allows multiple users on different computers or terminals to access a single system with one OS on it. These programs are often quite complicated and must be able to properly manage the necessary tasks required by the different users connected to it. The users will typically be at terminals or computers that give them access to the system through a network, as well as other machines on the system such as printers. A multi-user operating system differs from a single-user system on a network in that each user is accessing the same OS at different machines.The operating system on a computer is one of the most important programs used. It is typically responsible for managing memory and processing for other applications and programs being run, as well as recognizing and using hardware connected to the system, and properly handling user interaction and data requests. On a system using a multi-user operating system this can be even more important, since multiple people require the system to be functioning properly simultaneously. This type of system is often used on mainframes and similar machines, and if the system fails it can affect dozens or even hundreds of people. Ad
A multi-user operating system allows multiple users to access the data and processes of a single machine from different computers or terminals. These were previously often connected to the larger system through a wired network, though now wireless networking for this type of system is more common. A multi-user operating system is often used in businesses and offices where different users need to access the same resources, but these resources cannot be installed on every system. In a multi-user operating system, the OS must be able to handle the various needs and requests of all of the users effectively.
This means keeping the usage of resources appropriate for each user and keeping these resource allocations separate. By doing this, the multi-user operating system is able to better ensure that each user does not hinder the efforts of another, and that if the system fails or has an error for one user, it might not affect all of the other users. This makes a multi-user operating system typically quite a bit more complicated than a single-user system that only needs to handle the requests and operations of one person.
In a multi-user system, for example, the OS may need to handle numerous people attempting to use a single printer simultaneously. The system processes the requests and places the print jobs in a queue that keeps them organized and allows each job to print out one at a time. Without a multi-user OS, the jobs could become intermingled and the resulting printed pages would be virtually incomprehensible. Ad
What Is System Time?
What Is a Superuser?
What Is the Administrative Share?
What Is a Backplane?
What Is the System Folder?
What Is a Power User?
What Is a Distributed Operating System?
I'm Andres and I want to know the difference between a multiprocessing operating system and multi-user processing operating. view entire post
@topher – Windows is a single user operating system, even if it’s networked. It has a single administrative account, and that is the only true user. Other logins operate under this one account. view entire post
@topher – Nowadays, Linux is the typical network os for most multi-user operating systems. Years ago the IBM AS400 mainframe computer was also standard hardware for these kinds of systems. I never used the IBM myself but did know someone from work who had used it. view entire post
Are these usually Windows operating systems or Linux operating systems? When I've used systems like this, they've tended to be bespoke software, so I've never known what the underlying OS was (or if there was one). view entire post | 计算机 |
2014-23/1195/en_head.json.gz/20375 | Dead Space 3 Dead Space 3 is a Third-Person Shooter with Survival-Horror gameplay elements that challenges players to work singly, or with a friend to stop the viral/monster Necromorph outbreak. The game features the return of franchise hero, Isaac Clarke and the necessity of his weapons making abilities and precision skill in using them against in order to defeat enemies. Other game features include, drop-in/out co-op support, the additional character John Carver, evolved Necromorph enemies, a new cover system, side missions, and more... Platform: Xbox 360
Download Dead Space 3 here - http://tinyurl.com/PCDDeads3
Dead Space 3 brings Isaac Clarke and merciless soldier, John Carver, on a journey across space to discover the source of the Necromorph outbreak. Crash-landed on the frozen planet of Tau Volantis, the pair must comb the harsh environment for raw materials and scavenged parts. Isaac will then put his engineering skills to the ultimate test to create and customize weapons and survival tools. The ice planet holds the key to ending the Necromorph plague forever, but first the team must overcome avalanches, treacherous ice-climbs, and the violent wilderness. Facing deadlier evolved enemies and the brutal elements, the unlikely pair must work together to save mankind from the impending apocalypse...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Download here - http://tinyurl.com/PCDDeads3
Roslan Kasim
dead space,
2014-23/1195/en_head.json.gz/20780 | Windows XP Media Center Edition 2005 Review
Paul Thurrott's Supersite for Windows
Paul Thurrott
I've been a fan--and a steady user--of Windows XP Media Center Edition since for almost three long years now, and have watched this trend-setting product evolve through three revisions, all of which have built on the successes of the past and added new features, fixed problems, and made the underlying platform more stable. Now, three years after I first set my sights on this intriguing multimedia champion, XP Media Center Edition (XP MCE) is at a cross-road. It still offers the premium PC experience, with amazing and unparalleled digital media features. But too, it's still a computer, and not necessarily the type of device one would want in the living room. In other words, the same old arguments about Media Center seem to apply today as much as they did when the product first shipped in 2002.
If you're not familiar with Windows XP Media Center Edition, and the Media Center PCs on which it runs, please refer back to my earlier reviews of Windows XP Media Center Edition and Windows XP Media Center Edition 2004, the versions of this product that preceded XP MCE 2005. We've got a lot of ground to cover, and I don't want to waste any time or space revisiting the past. Let's see what's going on with Media Center these days.
Three years of feedback
While Microsoft can hardly be faulted for basing the feature set of its first Media Center version on internal testing, field tests, and surveys, the company now has a large body of dedicated users who are clamoring to provide the company with feedback about the product. Some of the feedback was surprising, according to Windows eHome Division General Manager Joe Belfiore, who noted that while almost 50 percent of all Media Center buyers were using the machines in their dens, studies, or home offices, 27 percent use the machines in their living rooms, and 23 percent use them in bedrooms.
The usage patterns are interesting as well, and point to the success of Microsoft's current strategy of augmenting Media Center PCs with Media Center Extenders, which allow users to enjoy Media Center content remotely on other TVs in their home (see below for more details, and also my Media Center Extender review). 58 percent of Media Center users watch TV on their PCs, while 27 percent have their Media Center PC connected directly to a TV set. Most Media Center customers are happy with their machines, too: 89 percent say they are "satisfied," while 66 percent say they are "very satisfied." That said, there is still certainly plenty of room for improvement. Customers told Microsoft that the features they'd like to see most in XP MCE 2005 include improved TV quality, easier music management, the ability to save recorded TV shows to DVD, multi-tuner and HDTV support, archiving/backup of personal memories (photos, home videos), and the ability to enjoy Media Center content in other rooms in the house, and on the go.
To that end, the company began working on "Symphony," the product that became Windows XP Media Center Edition 2005. Don't be fooled by reports that this release is a minor upgrade, it's not. First, there's been a not-so-subtle change in the way in which Microsoft perceives XP MCE. In the past, XP MCE was sort of a high-end niche product, since it was available only with special types of PCs which came with bundled TV tuner cards. Now, that vision has changed dramatically. "When we talked about Media Center in the past, we tended to refer to it as the version of Windows that came with a remote [control]," Belfiore said. "But it's also worth mentioning that in this version in particular Media Center represents the version of Windows that is the highest end, most complete, and best version of Windows, even when you're sitting at your desk using your mouse and keyboard. It's the best PC experience you can get as a consumer."
Second, for the first time since XP shipped in 2001, Microsoft has taken the opportunity to provide XP MCE 2005 with a brand new visual style (discussed below), highlighting its prominence and importance in the XP line-up.
And finally--and perhaps most importantly--Microsoft has imbued XP MCE 2005 with a rousing set of upgrades that takes a previously excellent but somewhat flawed product to a whole new level. The visuals in XP MCE 2005 are stunning. So are the new features, which are, yes, refinements, but also major usability wins that will have customers grinning to themselves as they discover the product's improved functionality. You'll see what I mean by that in a bit.
Introducing XP Media Center Edition 2005
Windows XP Media Center Edition 2005 is a major improvement over previous versions of XP MCE, with changes both to the underlying Windows desktop and the Media Center experience. If you're an existing Media Center customer, you'll want to get this upgrade as soon as possible. If you've never considered a Media Center PC, this release will likely change you mind. Unlike previous MCE versions, XP Media Center Edition 2005 improves both the core Windows desktop as well as the remote-enabled Media Center experience. In the next two sections, we'll examine those improvements.
Desktop changes
From the first boot, it's clear that Microsoft has not just changed the Media Center environment in this release, but has also updated the underlying Windows desktop to give XP MCE 2005 a unique, stylish, and professional new fascia (Figure). "We've updated the visuals in this release, and you'll see enhancements to the digital media features, making it the premium version of Windows," Belfiore said. The new default visual style, Energy Blue (also seen in Windows Media Player 10 and Windows Media Player 10 Mobile), is a subtle improvement on the default Luna style used by other XP versions, with shiny, sharp edges.
XP MCE 2005 also includes a number of features that Microsoft previously made available only in its $20 Plus! Digital Media Edition for Windows XP, including the Windows Audio Converter, Windows CD Label Maker, Windows Dancer, and Windows Party Mode (Figure). And, it includes screensavers and themes that it previously made available only in its $20 Plus! for Windows XP package. Bundled screensavers include the ever-popular Aquarium (now with more fish) (Figure), Da Vinci, Nature, and Space. Themes include Aquarium, Da Vinci, Nature, and Space. XP MCE 2005 also ships with a unique screensaver called My Pictures Premium: This screensaver presents an animated slideshow of your My Pictures folder, with optional background music, and is quite nice.
Additionally, the version of Windows Movie Maker 2.1 that ships with XP MCE 2005 supports burning DVD movies (Figure), a feature that isn't available to other versions of WMM 2.1 on other versions of XP. It also sports new transitions and video effects (Figure). And while this isn't new to XP MCE 2005, if you like the Media Center interface, but want to use it as a media player, you can run it in a window alongside your other applications.
Finally, XP MCE 2005 ships with all of the core XP improvements from XP Service Pack 2 (SP2, see my review) and is the only version of Windows to ship with Windows Media Player 10 (see my review) in the box. "Even if you haven't run Media Center yet, or even picked up the remote, you've still got a bunch of new features," Belfiore noted.
Improvements to the core Media Center experiences
The Media Center environment--or Media Center experience, as Microsoft likes to call it--has undergone far more impressive improvements. The first thing you'll notice when you bring up Media Center and navigate through the entries in the Start Page (Figure) is the cleaner, easier-to-read new font. Also, most of the Start Page entries now spawn pop-up previews of commonly accessed, or recently accessed, content (Figure). So, for example, if you hover over the My TV choice, you will see pop-ups for Recorded TV, Live TV, and Movies (the latter of which is a new feature I'll explain below), three of the most commonly accessed features of My TV. However over My Videos, and you'll see your most recently accessed videos.
This pop-up effect isn't just pretty, it's useful. That's because it helps you get to the content you want more quickly. For example, we have friends over for dinner every Sunday night, and typically play a random selection of New Age music over a photo slideshow from all of our pictures from 2004. Since we've done this so many times in the past, the "New Age" genre is now the first pop-up preview choice off of My Music on the Start Page, and the "2004" photo folder is the first choice next to My Music. A couple of clicks of the remote later, and we're good to go.
What's interesting is that Microsoft used to bubble up this "most recently used" content on the main page for each subsection. So, for example, to access New Age on XP MCE 2004, I'd have to go to the My Music page. Now, I can save a click. Nice!
Microsoft has also subtly changed the behavior for shutting down Media Center. Previous versions supported the standard Minimize, Maximize, and Close buttons in the upper right side of the Start Page. In XP MCE 2005, there is a single Shut Down icon to the left of the clock. When you select this, it turns bright red and displays the text "Shut Down." Press the button and Media Center displays a handy Shut Down pop-up (Figure), from which you can close Media Center, log off, shut down the system, restart the computer, or enter standby mode.
One of the most dramatic changes in XP MCE 2005 is the new context menu-like pop-ups (Figure) that appear whenever you right-click on an item in Media Center (or click the Details/More Info button on the remote control). "We wanted to provide Media Center with a consistent way to get more information about any object you see onscreen," Belfiore said. You can click the More Info button and you get this list, and say, 'here's this thing, and here's all the stuff I can do with it.' This feature is also extensible, so third parties can add new options to the pop-up menus that will expose functionality in their services.
"Media Center is the most entertaining version of Windows ever," Belfiore said. "If you're into digital photos, TV or DVD, music, or video, this is the place to be. Windows XP Media Center Edition 2005 is the first consumer operating system with built-in support for high definition TV (HDTV) programming, and we're covering HDTV broadcasts, HD-DVD, and HD Web downloads in this release.
Windows XP Media Center Edition 2005 features a simpler new setup experience that walks you through the process of configuring your Internet connection, TV signal, speakers, TV or monitor, and other features. Particularly nice are the new options for optimizing the display to match your equipment (Figure). You can also easily setup your speakers to support 2-, 5.1-, or 7.1-speaker setups.
Billed as the ultimate experience in Media Center television viewing and recording features in XP MCE 2005 have, predictably, received the most attention. "As far as we're concerned, TV watching is the new mission critical application for Windows," Belfiore said.
Indeed the number one request from customers was that Microsoft improve the picture quality for those Media Center systems that were connected to televisions. In previous versions, the company focused on PC-based connections like DVI and VGA, because it felt (and rightfully so) that most people would use Media Center PCs solely as PCs, with traditional PC displays. "Last year, we made a really big step forward with good TV quality on computer displays," Belfiore said. "But one weak area--and a lot of reviews noted this--was that the picture quality of TV on some Media Center PCs wasn't as good as what you'd get from a less expensive DVR device or by jus | 计算机 |
2014-23/1195/en_head.json.gz/20932 | April 2003 Volume 29, Issue 3 Format for Printing Send Feedbacknothing.but.net: Supercharge Your Browserby Rick KlauRatchet up the power and speed of your Internet experience with add-on toolbars and bookmarklets.Hard as it may be to believe, it's been nearly eight years since Newsweek proclaimed "The Year of the Internet." When the Web was in its infancy, the word "browser" was synonymous with Mosaic, and then later Netscape. Today, if you're like most people, browser means Internet Explorer. Regardless of your particular interpretation of Microsoft's competitive stance, it's hard to deny the company's success in bundling the browser with the operating system.But with that bundling comes a curious phenomenon: Most users accept their browsers "as is," without any enhancements whatsoever. Yet there is now a wide array of ways to boost your browser experience. Not only will these enhancements make your browsing a more positive experience, they will likely save you considerable time.The Current Browser Market: Guess Which Is Top Dog?Microsoft Internet Explorer (IE) commands about three quarters of the overall browser market in the United States, with Netscape grabbing the next 10 to 15 percent. (The remainder is split primarily between two organizations, Mozilla and Opera.) What's perhaps most interesting is that the most popular version of IE is IE 6, which has been out for slightly more than a year. By contrast, the most popular version of Netscape is Netscape 3, which was released in 1996.As a result, most browser enhancements tend to focus on the IE market first and the Netscape market second. From a practical perspective, there is a much larger market for IE enhancements-and more mature browsers tend to support a wider variety of methods for improving the browser.Fortunately, however, all hope is not lost. Let's examine the available options depending on your browser and its version number, starting with toolbars.Yahoo Companion Toolbar-Plus Lookups and BookmarksIE 4.x and later, Netscape 4.x and laterWith the Yahoo Companion toolbar, at http://companion.yahoo.com, you get to add the power of Yahoo's search engine, index and a few other features-right inside your browser. Having the Yahoo search bar right up front is nice. It means that regardless of where you are in your browser, you can instantly search Yahoo for any content in which you're interested. But it goes much further. If, for example, you want to search a dictionary or thesaurus, you can do so from the same toolbar. (In the search box, click on the down arrow and select the resource you want to use.)Other powerful features of the Yahoo Companion toolbar include stock quote lookups, movie times, news searches and the ability to search their photo archives. Probably most useful for me is the ability to have the Yahoo Companion toolbar make all of my shared bookmarks available.Because I use several computers (one at home, a laptop at work and any number of shared Internet computers while on the road), I am often away from "my" browser with my bookmarks. Yahoo allows me to keep a set of bookmarks available and accessible from any computer. To set up this feature, go to http://bookmarks.yahoo.com, and then enable Yahoo Bookmarks on your Companion toolbar. 
(And if you share your computer with multiple people, all they have to do is sign in to the Companion toolbar with their Yahoo ID and password, and they'll get their bookmarks instead of yours.)
Google Toolbar-Plus Special Queries and Demographics
IE 5.x and later
If you're a Google addict and you're not using the Google toolbar, you're missing out on at least half of Google's power. Find it at http://toolbar.google.com. Like the Yahoo Companion toolbar, the Google toolbar provides instant access to the search engine regardless of where you are in your browser. But you also get the ability to search any of Google's services-not just its Web index, but also Google Groups (a 20-year archive of Usenet), Google Images, Google Answers and Google News.
Once you've done a search, you can have the Google toolbar highlight your search terms on any destination page you've visited. In long Web pages, this is an absolute lifesaver. Also, once you're at a site, you can revise your query in the search box and then click on Search Site. Google will then restrict your search only to the site you're currently visiting.
The Google toolbar also gives you some valuable demographic data about a site when you visit-including the site's Google PageRank (Google's proprietary algorithm for evaluating a site's importance) and its Page Info. Page Info includes the ability to see a cached version of the site. (Google maintains local copies of all sites it crawls, so you can see if a Web page's content has been changed since Google visited the site last.) Plus, Page Info includes links to "Similar Pages" (Google establishes similarity by looking for common phrases and link patterns) and "Backward Links" (which tells you who has linked to this site).
For a Little More Punch: Other Toolbars
Using the same concept, other sites provide toolbars that can be added to your browser. Here are some handy ones.
Merriam-Webster: www.m-w.com/tools/toolbar (which provides the Merriam-Webster Dictionary, Thesaurus and Word of the Day). You can use it with IE 5.x and later.
Nutshell: www.torrez.org/projects/nutshell (which provides easy access to Google, Amazon, Dictionary.com, Internet Movie Database and Daypop). You can use it with IE 5.5 and later.
Teoma: http://sp.ask.com/docs/teoma/toolbar (which is a strong competitor to Google's search engine and also includes an "e-mail page to a friend" function). You can use it with IE 5.x and later.
Girafa: www.girafa.com/index.acr?c=1 (which provides a search toolbar and lets you see thumbnails of search results before visiting sites). You can use it with IE 4.x and later.
Bookmarklets
Bookmarklets take advantage of some very useful code called "javascript." (Strangely enough, although they share the same root, javascript has nothing in common with the Java programming language.) Javascript "applets" are often just a few lines of code that execute within a browser. Frequently they are interactive within a browser-meaning that based on something you do (for example, highlight some text on the current page), a bookmarklet can act on that text.
One advantage to bookmarklets over toolbars is that bookmarklets are supported by just about every major browser available. So if your browser of choice is Opera or Netscape, you're still in luck.
There is an entire collection of bookmarklets at www.bookmarklets.com. Once you find a bookmarklet you like, simply drag the link to your Links toolbar. (If you don't see one, follow the instructions at the site to enable it.) That's it-you're all set.
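To give a flavor of what actually sits behind such a link, here is a minimal sketch of a bookmarklet (an invented example, not one of the bookmarklets from www.bookmarklets.com): it takes whatever text you have highlighted on the current page and runs a Google search on it. Everything lives in the bookmark's address field as a single javascript: URL; it is broken onto several lines here only for readability.

javascript:(function(){
  /* Grab the highlighted text. Netscape/Mozilla expose it through
     window.getSelection(); Internet Explorer uses document.selection. */
  var t = window.getSelection ? String(window.getSelection())
                              : document.selection.createRange().text;
  if (t) {
    /* Hand the selection to Google as a search query. */
    location.href = 'http://www.google.com/search?q=' + encodeURIComponent(t);
  } else {
    alert('Highlight some text first.');
  }
})()

Collapse that onto one line, paste it in as the address of a new link on your Links toolbar, and it behaves just like the downloadable bookmarklets described above.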
Now you'll have a button in your browser that corresponds to the function assigned by the bookmarklet. For example, if you used the "More Info About" bookmarklet, you can now highlight some text and click on that button. Once clicked, it populates a Web page with the text you highlighted, allowing you to visit various search engines based on your query.
For other good resources for bookmarklets, visit Google or Yahoo and search on that term.
Wanted: More Power Tools
Browser enhancements like these can make your Web work user-friendlier and save you some time (which translates into money) in the process. So let's hope that the range of options continues to expand. Do you know of a browser power tool not mentioned here? Be sure to send me an e-mail and let me know so I can share it with other readers!
Rick Klau ( rklau@interfacesoftware.com) is Vice President of Vertical Markets at Interface Software, Inc., and a co-author of the ABA LPM book The Lawyer's Guide to Marketing on the Internet (2nd edition). His blog is at www.rklau.com/tins.
2014-23/1195/en_head.json.gz/22108 | LinuxInsider > Enterprise IT > Applications | Next Article in Applications
Adobe Sends Creative Suite to the Cloud for Good
So long, boxed software: It's the cloud way or the highway for users of Adobe's Creative Suite apps, updated versions of which are now available only through a paid subscription model. "We're going to see more of these models, where the software is essentially leased to the customer more than sold," said Jagdish Rebello, research director at IHS iSuppli.
After announcing
last month that all of its Creative Suite apps would soon move to the cloud, Adobe on Tuesday made good on its promise and delivered the resulting subscription-based software.
Now included under the umbrella name Creative Cloud, the latest versions of apps including Photoshop, Illustrator, Dreamweaver and Premiere Pro are now available exclusively to Creative Cloud subscribers for prices starting at US$19.99 per month.
As a way to entice users of Adobe's existing software, Adobe is offering discounts of up to 60 percent for CS6 owners during a year-long period, or a 40 percent discount for those on CS3 to CS5.5.
"This could be attractive for the customer, but there will be some initial resistance," said Jagdish Rebello, research director at IHS iSuppli. "We're going to see more of these models, where the software is essentially leased to the customer more than sold."
Adobe did not respond to our request for further details.
A Changing Business Model
Adobe is not the first company to go from selling a product with a potentially high up-front cost to providing it via a yearly subscription model. However, customers haven't always been happy about paying for what they feel they previously had paid for, so it could take a while for this model to become widely accepted.
"This is a sort of a change in the mindset in how software makers are going to market," Rebello told the E-Commerce Times. "Microsoft had a 365 product for enterprise, but consumers are typically used to downloading the software or paying for the CD up front. Microsoft could have done more with the pricing to get customers more interested."
Pricing of course remains the issue, because many customers of the company's older boxed software opted not to update on a regular basis, and thus were content paying for new versions only when necessary. This shift could require a renewal at times when users might not see the need. Thus the benefit is more for Adobe than its customers.
"Adobe's plan makes sense from both a business and technological perspective," said Charles King, principal analyst at Pund-IT. "Subscription-based delivery allows the vendor to keep closer tabs on customers and end users, ensures that applications aren't being used outside the parameters of the EULA, and provides a mechanism that makes piracy more difficult and less profitable.
"Since the vast majority of vendors distribute software updates and fixes via the Internet, they have the mechanism and expertise in place to switch over to subscriptions," King told the E-Commerce Times.
'These Customers Are Angry'
The ability to stay continuously updated could appeal to some customers, but for hobbyists it could be more of a burden.
"These customers are angry," said Amy Konary, research vice president for software licensing and provisioning at IDC.
"This approach works really well in situations where customers want the updates and see value in the added functionality," Konary told the E-Commerce Times. "However, the hobbyists are not going to see value for features they don't use. The subscription is for the continuous value, and if you don't see that value then the subscription won't be as appealing."
In fact, "it might even be objectionable if you aren't using it," she said.
Indeed, more than 32,000 consumers have signed an online petition
protesting the new model.
'This Is the Future'
While customers might not like the idea of shifting to a subscription model, the fact is that the companies aren't likely to back down given the regular revenue stream and enforcements the model provides.
On the upside, however, cloud-based software could bring consumers the advantage of sharing across multiple devices.
"This is the future as customers have more devices, and it could be more expensive to buy multiple programs for those devices," Rebello suggested. "However, Adobe needs to be a little more aggressive in the pricing to pacify the customers."
It remains to be seen how attractive Adobe's discounts prove, or whether they will be enough to get current customers to move to the cloud. Some may see them as a good deal, but others may opt to keep using the older version and upgrade only when absolutely necessary.
"What customers will make of this is anyone's guess," said Pund-IT's King. "Since the vast majority of consumers and businesses have access to high-speed bandwidth, downloading large files isn't the hassle it once was."
Adobe's customer base is also smaller and more homogeneous than Microsoft's is, "so it may have an easier time selling the subscription idea, especially if it adds some 'sugar' in the form of additional services and other features," King added. "I expect Adobe will emphasize that subscriptions ensure that customers will benefit from using the latest/greatest official software rather than out-of-date or fake versions."
'We Will See More of This'
Of course, Adobe isn't actually giving consumers a choice -- at least not beyond staying behind with older software. Microsoft, on the other hand, has opted to provide a perpetual license as well as a subscription model.
Still, that doesn't mean other companies won't try to replicate the Adobe strategy.
"We will see more of this," predicted IDC's Konary.
Judging by the way Adobe is pushing customers in a single direction, she concluded, "they must understand that there are certain customers that aren't seeing this is a better deal." | 计算机 |
2014-23/1195/en_head.json.gz/23528 | Tag Archives: Oracle Corporation
CEO Series: Harry Curtin
Harry Curtin
Founder and CEO, BestIT
How did the recession affect the IT industry?
I think it really hurt IT. It really did, especially with larger corporations, I saw. It was almost like within a couple of month period that companies just shut everything off — especially large, multinational, Fortune 100 companies. You could talk to each one of them and they all had the same kind of story where “we have to cut it off,” and if they had a program they had been working on for a year, they just shut it down. I think there was a lot of fear, absolutely. The small- and mid-sized (companies), I think, really kept it up more than the large, to me. I think they felt a little bit more nimble. They stayed a little more positive, frankly. They were fighting through it more than the large companies. I think decisions from the top kind of cut everything off all the way down. It really hurt (IT) companies that were focused with larger companies, major projects. It changed them. It changed things a lot.
What signs of recovery are you seeing in your industry?
What I’m seeing right now is mostly that companies are taking a deep breath after everything and they’re looking at, “OK, maybe I’m not ready to do something today, but what should I be doing tomorrow?” They’re starting to plan for later this year, next year; (they) don’t want to let things get to the point where things are just falling apart. (They) want to stay on top of it, but what’s (their) next step? So they are really in the planning phase in my book, and they are opening up their ears and thinking about what they need to do next. Even though they may not be ready today, they are taking those initiatives to move in the right direction.
What are the benefits of IT outsourcing?
It allows you to focus on your core business. It can reduce costs greatly, if it’s done right. It can also create more of an efficiency in your business, because you aren’t focusing on an area that you aren’t an expert in. You can stay focused on what you’re really good at and just do it that much better, rather than being distracted by an area that’s not really a core. Our company, we outsource areas where it’s not our competency or something we want to do long term.
In terms of image, does the outsourcing of services still face challenges?
I think so. I think people in the late '90s and early-2000s got a view of outsourcing that it's shipping a job or a service overseas, which is not the case. That's a piece of it, but lots of companies — I'll use manufacturing as an example. You may manufacture a whole component, but maybe a piece of that is something you've never been able to manufacture correctly, you haven't gotten the quality you wanted, you haven't gotten the pricing right, and you essentially outsource that piece. Maybe it's a local company down the street that does it … I think that if (people) are just looking at it as (work) goes overseas, that is not the right way. I think you need to look at what the solution is. (BestIT) is a U.S.-based organization; essentially when a company signs on with us, if it's an extensive enough contract, we're going to need to hire here. So we're going to be creating jobs, as well as they are going to be creating jobs for themselves, because now we've reduced what their costs are and they can hire in sales or project management — wherever they feel the gaps are in their business. If (people) look deep enough, they'll find organizations that offer outsourcing that may not be what the typical outsourcing is.
What does an IT professional need in order to be considered part of the C-level team? (There needs to be) a lot of hard work, a lot of focus, a great attitude — people that are really committed to helping support the business. And not taking this (job) as a nine-to-five, but also thinking out-of-the-box in terms of where can you take the business up to the next level, or what you can do yourself to help the business.
CEO of BestIT
Founded BestIT in 2004
Owned an investor relations firm with a focus on small-cap technology firms
Worked as an investment adviser at Charles Schwab
Worked at Oracle Direct Sales for Oracle Corporation
Attended Buffalo State College
www.bestit.com
POSTED: March 1, 2010. | 计算机
2014-23/1195/en_head.json.gz/23851 | Life simulation game
Life simulation (or artificial life games)[1] is a sub-genre of simulation video games in which the player lives or controls one or more virtual lifeforms. A life simulation game can revolve around "individuals and relationships, or it could be a simulation of an ecosystem".[1]
Life simulation games are about "maintaining and growing a manageable population of organisms",[2] where players are given the power to control the lives of autonomous creatures or people.[1] Artificial life games are related to computer science research in artificial life. But "because they're intended for entertainment rather than research, commercial A-life games implement only a subset of what A-life research investigates."[2] This broad genre includes god games which focus on managing tribal worshipers, as well as artificial pets that focus on one or several animals. It also includes genetic artificial life games, where players manage populations of creatures over several generations.[1]
Artificial life games and life simulations find their origins in artificial life research, including Conway's Game of Life from 1970.[1] But one of the first commercially viable artificial life games was Little Computer People in 1985,[1] a Commodore 64 game that allowed players to type requests to characters living in a virtual house. The game is cited as a little-known forerunner of virtual-life simulator games to follow.[3] | 计算机 |
2014-23/1195/en_head.json.gz/25384 | By Janice Helwig and Mischa Thompson, Policy Advisors Since 1999, the OSCE participating States have convened three “supplementary human dimension meetings” (SHDMs) each year – that is, meetings intended to augment the annual review of the implementation of all OSCE human dimension commitments. The SHDMs focus on specific issues and the topics are chosen by the Chair-in-Office. Although they are generally held in Vienna – with a view to increasing the participation from the permanent missions to the OSCE – they can be held in other locations to facilitate participation from civil society. The three 2010 SHDMs focused on gender issues, national minorities and education, and religious liberties. But 2010 had an exceptionally full calendar – some would say too full. In addition to the regularly scheduled meetings, ad hoc meetings included: - a February 9-10 expert workshop in Mongolia on trafficking; - a March 19 hate crimes and the Internet meeting in Warsaw; - a June 10-11th meeting in Copenhagen to commemorate the 20th anniversary of the Copenhagen Document; - a (now annual) trafficking meeting on June 17-18; - a high-level conference on tolerance June 29-30 in Astana. The extraordinary number of meetings also included an Informal Ministerial in July, a Review Conference (held in Warsaw, Vienna and Astana over the course of September, October, and November) and the OSCE Summit on December 1-2 (both in Astana). Promotion of Gender Balance and Participation of Women in Political and Public Life By Janice Helwig, Policy Advisor The first SHDM of 2010 was held on May 6-7 in Vienna, Austria, focused on the “Promotion of Gender Balance and Participation of Women in Political and Public Life.” It was opened by speeches from Kazakhstan's Minister of Labour and Social Protection, Gulshara Abdykalikova, and Portuguese Secretary of State for Equality, Elza Pais. The discussions focused mainly on “best practices” to increase women’s participation at the national level, especially in parliaments, political parties, and government jobs. Most participants agreed that laws protecting equality of opportunity are sufficient in most OSCE countries, but implementation is still lacking. Therefore, political will at the highest level is crucial to fostering real change. Several speakers recommended establishing quotas, particularly for candidates on political party lists. A number of other forms of affirmative action remedies were also discussed. Others stressed the importance of access to education for women to ensure that they can compete for positions. Several participants said that stereotypes of women in the media and in education systems need to be countered. Others seemed to voice stereotypes themselves, arguing that women aren’t comfortable in the competitive world of politics. Turning to the OSCE, some participants proposed that the organization update its (2004) Gender Action Plan. (The Gender Action Plan is focused on the work of the OSCE. In particular, it is designed to foster gender equality projects within priority areas; to incorporate a gender perspective into all OSCE activities, and to ensure responsibility for achieving gender balance in the representation among OSCE staff and a professional working environment where women and men are treated equally.) A few participants raised more specific concerns. 
For example, an NGO representative from Turkey spoke about the ban on headscarves imposed by several countries, particularly in government buildings and schools. She said that banning headscarves actually isolates Muslim women and makes it even harder for them to participate in politics and public life. NGOs from Tajikistan voiced their strong support for the network of Women’s Resource Centers, which has been organized under OSCE auspices. The centers provide services such as legal assistance, education, literacy classes, and protection from domestic violence. Unfortunately, however, they are short of funding. NGO representatives also described many obstacles that women face in Tajikistan’s traditionally male-oriented society. For example, few women voted in the February 2010 parliamentary elections because their husbands or fathers voted for them. Women were included on party candidate lists, but only at the bottom of the list. They urged that civil servants, teachers, health workers, and police be trained on legislation relating to equality of opportunity for women as means of improving implementation of existing laws. An NGO representative from Kyrgyzstan spoke about increasing problems related to polygamy and bride kidnappings. Only a first wife has any legal standing, leaving additional wives – and their children - without social or legal protection, including in the case of divorce. The meeting was well-attended by NGOs and by government representatives from capitals. However, with the exception of the United States, there were few participants from participating States’ delegations in Vienna. This is an unfortunate trend at recent SHDMs. Delegation participation is important to ensure follow-up through the Vienna decision-making process, and the SHDMs were located in Vienna as a way to strengthen this connection. Education of Persons belonging to National Minorities: Integration and Equality By Janice Helwig, Policy Advisor The OSCE held its second SHDM of 2010 on July 22-23 in Vienna, Austria, focused on the "Education of Persons belonging to National Minorities: Integration and Equality." Charles P. Rose, General Counsel for the U.S. Department of Education, participated as an expert member of the U.S. delegation. The meeting was opened by speeches from the OSCE High Commissioner on National Minorities Knut Vollebaek and Dr. Alan Phillips, former President of the Council of Europe Advisory Committee on the Framework Convention for the Protection of National Minorities. Three sessions discussed facilitating integrated education in schools, access to higher education, and adult education. Most participants stressed the importance of minority access to strong primary and secondary education as the best means to improve access to higher education. The lightly attended meeting focused largely on Roma education. OSCE Contact Point for Roma and Sinti Issues Andrzej Mirga stressed the importance of early education in order to lower the dropout rate and raise the number of Roma children continuing on to higher education. Unfortunately, Roma children in several OSCE States are still segregated into separate classes or schools - often those meant instead for special needs children - and so are denied a quality education. Governments need to prioritize early education as a strong foundation. Too often, programs are donor-funded and NGO run, rather than being a systematic part of government policy. 
While states may think such programs are expensive in the short term, in the long run they save money and provide for greater economic opportunities for Roma. The meeting heard presentations from several participating States of what they consider their "best practices" concerning minority education. Among others, Azerbaijan, Belarus, Georgia, Greece, and Armenia gave glowing reports of their minority language education programs. Most participating States who spoke strongly supported the work of the OSCE High Commissioner on National Minorities on minority education, and called for more regional seminars on the subject. Unfortunately, some of the presentations illustrated misunderstandings and prejudices rather than best practices. For example, Italy referred to its "Roma problem" and sweepingly declared that Roma "must be convinced to enroll in school." Moreover, the government was working on guidelines to deal with "this type of foreign student," implying that all Roma are not Italian citizens. Several Roma NGO representatives complained bitterly after the session about the Italian statement. Romani NGOs also discussed the need to remove systemic obstacles in the school systems which impede Romani access to education and to incorporate more Romani language programs. The Council of Europe representative raised concern over the high rate of illiteracy among Romani women, and advocated a study to determine adult education needs. Other NGOs talked about problems with minority education in several participating States. For example, Russia was criticized for doing little to provide Romani children or immigrants from Central Asia and the Caucasus support in schools; what little has been provided has been funded by foreign donors. Charles Rose discussed the U.S. Administration's work to increase the number of minority college graduates. Outreach programs, restructured student loans, and enforcement of civil rights law have been raising the number of graduates. As was the case of the first SHDM, with the exception of the United States, there were few participants from participating States’ permanent OSCE missions in Vienna. This is an unfortunate trend at recent SHDMs. Delegation participation is important to ensure follow-up through the Vienna decision-making process, and the SHDMs were located in Vienna as a way to strengthen this connection. OSCE Maintains Religious Freedom Focus By Mischa Thompson, PhD, Policy Advisor Building on the July 9-10, 2009, SHDM on Freedom of Religion or Belief, on December 9-10, 2010, the OSCE held a SHDM on Freedom of Religion or Belief at the OSCE Headquarters in Vienna, Austria. Despite concerns about participation following the December 1-2 OSCE Summit in Astana, Kazakhstan, the meeting was well attended. Representatives of more than forty-two participating States and Mediterranean Partners and one hundred civil society members participated. The 2010 meeting was divided into three sessions focused on 1) Emerging Issues and Challenges, 2) Religious Education, and 3) Religious Symbols and Expressions. Speakers included ODIHR Director Janez Lenarcic, Ambassador-at-large from the Ministry of Foreign Affairs of the Republic of Kazakhstan, Madina Jarbussynova, United Nations Special Rapporteur on Freedom of Religion or Belief, Heiner Bielefeldt, and Apostolic Nuncio Archbishop Silvano Tomasi of the Holy See. 
Issues raised throughout the meeting echoed concerns raised during at the OSCE Review Conference in September-October 2010 regarding the participating States’ failure to implement OSCE religious freedom commitments. Topics included the: treatment of “nontraditional religions,” introduction of laws restricting the practice of Islam, protection of religious instruction in schools, failure to balance religious freedom protections with other human rights, and attempts to substitute a focus on “tolerance” for the protection of religious freedoms. Notable responses to some of these issues included remarks from Archbishop Silvano Tomasi that parents had the right to choose an education for their children in line with their beliefs. His remarks addressed specific concerns raised by the Church of Scientology, Raelian Movement, Jehovah Witnesses, Catholic organizations, and others, that participating States were preventing religious education and in some cases, even attempting to remove children from parents attempting to raise their children according to a specific belief system. Additionally, some speakers argued that religious groups should be consulted in the development of any teaching materials about specific religions in public school systems. In response to concerns raised by participants that free speech protections and other human rights often seemed to outweigh the right to religious freedom especially amidst criticisms of specific religions, UN Special Rapporteur Bielefeldt warned against playing equality, free speech, religious freedom, and other human rights against one another given that all rights were integral to and could not exist without the other. Addressing ongoing discussion within the OSCE as to whether religious freedom should best be addressed as a human rights or tolerance issue, OSCE Director Lenarcic stated that, “though promoting tolerance is a worthwhile undertaking, it cannot substitute for ensuring freedom of religion of belief. An environment in which religious or belief communities are encouraged to respect each other but in which, for example, all religions are prevented from engaging in teaching, or establishing places of worship, would amount to a violation of freedom of religion or belief.” Statements by the United States made during the meeting also addressed many of these issues, including the use of religion laws in some participating States to restrict religious practice through onerous registrations requirements, censorship of religious literature, placing limitations on places of worship, and designating peaceful religious groups as ‘terrorist’ organizations. Additionally, the United States spoke out against the introduction of laws and other attempts to dictate Muslim women’s dress and other policies targeting the practice of Islam in the OSCE region. Notably, the United States was one of few participating States to call for increased action against anti-Semitic acts such as recent attacks on Synagogues and Jewish gravesites in the OSCE region. (The U.S. statements from the 2010 Review Conference and High-Level Conference can be found on the website of the U.S. Mission to the OSCE.) In addition to the formal meeting, four side events and a pre-SHDM Seminar for civil society were held. 
The side events were: “Pluralism, Relativism and the Rule of Law,” “Broken Promises – Freedom of religion or belief in Kazakhstan,” “First Release and Presentation of a Five-Year Report on Intolerance and Discrimination Against Christians in Europe” and “The Spanish school subject ‘Education for Citizenship:’ an assault on freedom of education, conscience and religion.” The side event on Kazakhstan convened by the Norwegian Helsinki Committee featured speakers from Forum 18 and Kazakhstan, including a representative from the CiO. Kazakh speakers acknowledged that more needed to be done to fulfill OSCE religious freedom commitments and that it had been a missed opportunity for Kazakhstan not to do more during its OSCE Chairmanship. In particular, speakers noted that religious freedom rights went beyond simply ‘tolerance,’ and raised ongoing concerns with registration, censorship, and visa requirements for ‘nontraditional’ religious groups. (The full report can be found on the website of the Norwegian Helsinki Committee.) A Seminar on Freedom of Religion and Belief for civil society members also took place on December 7-8 prior to the SHDM. The purpose of the Seminar was to assist in developing the capacity of civil society to recognize and address violations of the right to freedom of religion and belief and included an overview of international norms and standards on freedom of religion or belief and non-discrimination. | 计算机 |
2014-23/1195/en_head.json.gz/25842 | October 7, 2011 Convey Bends to Inflection Point
Nicole Hemsoth It’s difficult to ignore the momentum of conversations about data-intensive computing and it’s bleed-over into high performance computing, especially as we look toward November’s SC11 event, which puts the emphasis squarely on big data driven problems. A number of traditional HPC players, including SGI, Cray and others, are making a push to associate some of their key systems with the specific needs of data-intensive computing. Following a conversation this week with Convey Computer’s Director of Marketing, Bob Masson, and Kevin Wadleigh, the company’s resident math libraries wizard, it was clear that Convey plans to be all over the big data map—and that this trend will continue to demand new architectures that pull the zing of FLOPS in favor of more efficient compute and memory architectures. The duo from Convey talked at length about the inflection point that is happening in HPC. According to Masson, this shift in emphasis to data-intensive computing won’t ever replace the need for numerically-intensive computing, but opens a new realm within HPC—one that is timed perfectly with the steady influx of data from an unprecedented number of sources. A recent whitepaper (that prompted our chat with Convey) noted that HPC is “no longer just numerically intensive, it’s now data-intensive—with more and different demands on HPC system architectures.” Convey claims that the “whole new HPC” that is gathered under the banner of data-intensive computing possesses a number of unique characteristics. These features include data sizes in the multi-petabyte and beyond range, high ratio of memory accesses to computing, extremely parallelizable read access/computing, highly dynamic data that can often be processed in real time. Wadleigh put this move in historical context, pointing to the rapid changes in the 1980s as the industry cycled through a number of architectures meant to maximize floating point performance. While it eventually picked its champion, this process took many years—one could even argue decades—before the most efficient and best performing architecture emerged.
He says this same process is happening, hence the idea of the “inflection point” in high performance computing. While again, the power of the FLOP will not be diminished, when it comes to efficient systems that are optimized for the growing number of graph algorithms deployed to tackle big data problems, massive changes in how we think about architecture will naturally evolve. Of course, if you ask Bob or Kevin—that evolution is rooted in some of the unique FPGA coprocessor and memory subsystem designs their company is offering via their so-called Hybrid Core Architecture. While these are all traits that are collected under the “big data” or “data intensive” computing category, another feature—the prevalence of graph algorithms—is of great importance. Problems packing large sets of structured and unstructured data elements are becoming more common in research and enterprise, a fact that warranted a new set of benchmarks to measure graph algorithm performance.
As the preeminent benchmark for data-intensive computing, the Graph 500, measures system performance on graph problems using a standardized measure for determining the speed it takes the system to transverse the graph. While this could be a short book on its own, suffice to say, the Graph 500 website has plenty of details about the benchmark algorithm—and details about the top performing systems as announced at ISC this past summer. Convey feels confident about its position on the list (after not placing in the top ten for the last list in June) and notes that this year’s Graph 500 champions (TBA at SC11) will either have spent a boatload of money on sheer cores and memory—or will have come up with more efficient approaches to solve efficiency and performance challenges of these types of problems.
Convey claims that when it comes to architectures needed to support this type of computing, standard x86 “pizza box” systems falls way short in terms of a lack of inherent parallelism, memory architecture that is poorly mapped to the type of memory accesses, and a lack of synchronization primitives.
They say that with systems like their Hybrid Core Architecture line, some of these problems are solved, bringing a range of features those with data-intensive computing needs have been asking for. Among the “most desirable” architectural features of this newer class of systems is the need to de-emphasize the FLOPS and concentrate on maximizing memory subsystem performance. Accordingly, they stress their FPGA coprocessor approach to these needs, stating that such systems can be changed on the fly to meet the needs of the application’s compute requirements.
Convey’s high-bandwidth memory subsystem is key to refining the performance and efficiency of graph problems. Their approach to designing a memory system, for instance, that only spits back what was asked for and optimizing aggregate bandwidth, are further solutions. However, even with these features, users need to be able to support thousands of concurrent outstanding requests, thus providing top-tier multi-threading capabilities is critical. The question is, if HPC as a floating point-driven industry isn’t serving the architectural needs of the data-intensive computing user, what needs to change? According to Wadleigh, who spent his career engaged with math libraries, the differentiator between Convey (and commodity systems, for that matter) is the two-pronged approach of having large memory and a unique memory subsystem. Such a subsystem would be ideal for the kinds of “scatter gather” operations that are in high demand from graph problems.
Wadleigh said that “most memory system today have their best performance when accessing memory sequentially because memory systems bring in a cache line worth of data with 8 64-bit points. Now, as long as you’re using that, it’s great—but if you look at a lot of these graph problems, half of the accesses are to random data scattered around memory, which is very bad when you’re thinking about this for traditional architectures.” He claims that standard x86 systems, at least for these problems, have bad cache locality, bad virtual memory pagetable locality, and these are also bad patterns for distributed memory parallelism.
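To see why, it helps to sketch the kind of loop these benchmarks time. The snippet below is purely illustrative JavaScript - it is not Convey code and has nothing to do with their FPGA hardware - but it shows a breadth-first search over a graph stored in compressed-sparse-row form, roughly the workload the Graph 500 times (it reports traversed edges per second). The edge list itself streams sequentially, yet every neighbor index can point anywhere, so the reads and writes against the visited array are scattered across memory.
function bfs(offsets, neighbors, source) {
  // CSR layout: offsets[v] .. offsets[v + 1] delimit vertex v's slice of 'neighbors'
  var visited = new Uint8Array(offsets.length - 1);
  var frontier = [source];
  visited[source] = 1;
  var edges = 0;
  while (frontier.length > 0) {
    var next = [];
    for (var i = 0; i < frontier.length; i++) {
      var v = frontier[i];
      for (var e = offsets[v]; e < offsets[v + 1]; e++) {
        var w = neighbors[e];     // sequential read of the edge list
        edges++;
        if (!visited[w]) {        // but w can be any vertex at all, so this access
          visited[w] = 1;         // lands on an effectively random cache line
          next.push(w);
        }
      }
    }
    frontier = next;
  }
  return edges;                   // dividing a count like this by runtime gives TEPS
}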
One other element that Convey stresses is that data-intensive computing systems, at least in terms of their own line, need to have hardware-based synchronization primitives. With the massive parallelism involved, synchronization in read and writes to memory has to be refined. They state that “when the synchronization mechanism is ‘further away’ from the operation, more time is spent waiting for the synchronization with a corresponding reduction in efficiency of parallelization.” In plain English, maintaining this synchronization at the hardware level within the memory subsystem can yield better performance.
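As a rough illustration of what a synchronization primitive close to the data buys you - a sketch only, using standard JavaScript shared-memory APIs rather than anything from Convey's toolchain - consider many workers marking vertices of a shared visited array at once. Each claim has to be a single atomic read-modify-write on the memory word itself; otherwise threads either race on the flag or queue up behind a lock that sits far away from the data.
var numVertices = 1 << 20;                            // example size only
var shared = new SharedArrayBuffer(4 * numVertices);  // one 32-bit flag per vertex
var visited = new Int32Array(shared);

// True only for the single worker whose compare-and-swap flips the flag 0 -> 1.
function claim(w) {
  return Atomics.compareExchange(visited, w, 0, 1) === 0;
}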
With the focus on data-intensive computing at the heart of SC11 and companies with rich histories in HPC, including Convey jockeying for positions across both the Top500 and the Graph 500, it’s not hard to see why the Convey team thinks of this time as an inflection point in high performance computing, and why they think their Hybrid Core architecture is positioned to take advantage of this.
| 计算机
2014-23/1195/en_head.json.gz/26464 | End of Windows XP support spells trouble for some
Associated Press Published: April 7, 2014 - 06:54 PM
Bree Fowler, Associated Press. Copyright © 2014 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
NEW YORK: Microsoft will end support for the persistently popular Windows XP on Tuesday, and the move could put everything from the operations of heavy industry to the identities of everyday people in danger.An estimated 30 percent of computers being used by businesses and consumers around the world are still running the 12-year-old operating system.“What once was considered low-hanging fruit by hackers now has a big neon bull’s eye on it,” says Patrick Thomas, a security consultant at the San Jose, Calif.-based firm Neohapsis.Microsoft has released a handful of Windows operating systems since 2001, but XP’s popularity and the durability of the computers it was installed on kept it around longer than expected. Analysts say that if a PC is more than five years old, chances are it’s running XP.While users can still run XP after Tuesday, Microsoft says it will no longer provide security updates, issue fixes to non-security related problems or offer online technical content updates. The company is discontinuing XP to focus on maintaining its newer operating systems, the core programs that run personal computers.The Redmond, Wash.-based company says it will provide anti-malware-related updates through July 14, 2015, but warns that the tweaks could be of limited help on an outdated operating system.Most industry experts say they recognize that the time for Microsoft to end support for such a dated system has come, but the move poses both security and operational risks for the remaining users. In addition to home computers, XP is used to run everything from water treatment facilities and power plants to small businesses like doctor’s offices.Thomas says XP appealed to a wide variety of people and businesses that saw it as a reliable workhorse and many chose to stick with it instead of upgrading to Windows Vista, Windows 7 or 8.Thomas notes that companies generally resist change because they don’t like risk. As a result, businesses most likely to still be using XP include banks and financial services companies, along with health care providers. He also pointed to schools from the university level down, saying that they often don’t have enough money to fund equipment upgrades.Marcin Kleczynski, CEO of Malwarebytes, says that without patches to fix bugs in the software XP PCs will be prone to freezing up and crashing, while the absence of updated security related protections make the computers susceptible to hackers.He added that future security patches released for Microsoft’s newer systems will serve as a way for hackers to reverse engineer ways to breach now-unprotected Windows XP computers.“It’s going to be interesting to say the least,” he says. “There are plenty of black hats out there that are looking for the first vulnerability and will be looking at Windows 7 and 8 to find those vulnerabilities. And if you’re able to find a vulnerability in XP, it’s pretty much a silver key.”Those weaknesses can affect businesses both large and small.Mark Bernardo, general manager of automation software at General Electric Co.’s Intelligent Platforms division, says moving to a new operating system can be extremely complicated and expensive for industrial companies. 
Bernardo, whose GE division offers advisory services for upgrading from XP, says many of the unit’s customers fall into the fields of water and waste water, along with oil and gas.“Even if their sole network is completely sealed off from attack, there are still operational issues to deal with,” he says.Meanwhile, many small businesses are put off by the hefty cost of upgrading or just aren’t focused on their IT needs. Although a consumer can buy an entry-level PC for a few hundred dollars, a computer powerful enough for business use may run $1,000 or more after adding the necessary software.Barry Maher, a salesperson trainer and motivational speaker based in Corona, Calif., says his IT consultant warned him about the end of XP support last year. But he was so busy with other things that he didn’t start actively looking for a new computer until a few weeks ago.“This probably hasn’t been as high a priority as it should have been,” he says.He got his current PC just before Microsoft released Vista in 2007. He never bought another PC because, “As long as the machine is doing what I want it to do, and running the software I need to run, I would never change it.”Mark McCreary, a Philadelphia-based attorney with the firm Fox Rothschild LLP, says small businesses could be among the most effected by the end of support, because they don’t have the same kinds of firewalls and in-house IT departments that larger companies possess. And if they don’t upgrade and something bad happens, they could face lawsuits from customers.But he says he doesn’t expect the wide-spread malware attacks and disasters that others are predicting — at least for a while.“It’s not that you blow it off and wait another seven years, but it’s not like everything is going to explode on April 8 either,” he says.McCreary points to Microsoft’s plans to keep providing malware-related updates for well over a year, adding that he doubts hackers are actually saving up their malware attacks for the day support ends.But Sam Glines, CEO of Norse, a threat-detection firm with major offices in St. Louis and Silicon Valley, disagrees. He believes hackers have been watching potential targets for some time now.“There’s a gearing up on the part of the dark side to take advantage of this end of support,” Glines says.He worries most about doctors like his father and others the health care industry, who may be very smart people, but just aren’t focused on technology. He notes that health care-related information is 10 to 20 times more valuable on the black market than financial information, because it can be used to create fraudulent medical claims and illegally obtain prescription drugs, making doctor’s offices tempting targets.Meanwhile, without updates from Microsoft, regular people who currently use XP at home need to be extra careful.Mike Eldridge, 39, of Spring Lake, Mich., says that since his computer is currently on its last legs, he’s going to cross his fingers and hope for the best until it finally dies.“I am worried about security threats, but I’d rather have my identity stolen than put up with Windows 8,” he says. | 计算机 |
2014-23/1195/en_head.json.gz/27541 | ← 5 Tips Enterprise Architects Can Learn from the Winchester Mystery House
by The Open Group Blog | February 6, 2012 · 12:30 AM San Francisco Conference Observations: Enterprise Transformation, Enterprise Architecture, SOA and a Splash of Cloud Computing
By Chris Harding, The Open Group
This week I have been at The Open Group conference in San Francisco. The theme was Enterprise Transformation which, in simple terms, means changing how your business works to take advantage of the latest developments in IT.
Evidence of these developments is all around. I took a break and went for coffee and a sandwich, to a little cafe down on Pine and Leavenworth that seemed to be run by and for the Millennium generation. True to type, my server pulled out a cellphone with a device attached through which I swiped my credit card; an app read my screen-scrawled signature and the transaction was complete.
Then dinner. We spoke to the hotel concierge, she tapped a few keys on her terminal and, hey presto, we had a window table at a restaurant on Fisherman’s Wharf. No lengthy phone negotiations with the Maitre d’. We were just connected with the resource that we needed, quickly and efficiently.
The power of ubiquitous technology to transform the enterprise was the theme of the inspirational plenary presentation given by Andy Mulholland, Global CTO at Capgemini. Mobility, the Cloud, and big data are the three powerful technical forces that must be harnessed by the architect to move the business to smarter operation and new markets.
Jeanne Ross of the MIT Sloan School of Management shared her recipe for architecting business success, with examples drawn from several major companies. Indomitable and inimitable, she always challenges her audience to think through the issues. This time we responded with, “Don’t small companies need architecture too?” Of course they do, was the answer, but the architecture of a big corporation is very different from that of a corner cafe.
Corporations don’t come much bigger than Nissan. Celso Guiotoko, Corporate VP and CIO at the Nissan Motor Company, told us how Nissan are using enterprise architecture for business transformation. Highlights included the concept of information capitalization, the rationalization of the application portfolio through SOA and reusable services, and the delivery of technology resource through a private cloud platform.
The set of stimulating plenary presentations on the first day of the conference was completed by Lauren States, VP and CTO Cloud Computing and Growth Initiatives at IBM. Everyone now expects business results from technical change, and there is huge pressure on the people involved to deliver results that meet these expectations. IT enablement is one part of the answer, but it must be matched by business process excellence and values-based culture for real productivity and growth.
My role in The Open Group is to support our work on Cloud Computing and SOA, and these activities took all my attention after the initial plenary. If you had, thought five years ago, that no technical trend could possibly generate more interest and excitement than SOA, Cloud Computing would now be proving you wrong.
But interest in SOA continues, and we had a SOA stream including presentations of forward thinking on how to use SOA to deliver agility, and on SOA governance, as well as presentations describing and explaining the use of key Open Group SOA standards and guides: the Service Integration Maturity Model (OSIMM), the SOA Reference Architecture, and the Guide to using TOGAF for SOA.
We then moved into the Cloud, with a presentation by Mike Walker of Microsoft on why Enterprise Architecture must lead Cloud strategy and planning. The “why” was followed by the “how”: Zapthink’s Jason Bloomberg described Representational State Transfer (REST), which many now see as a key foundational principle for Cloud architecture. But perhaps it is not the only principle; a later presentation suggested a three-tier approach with the client tier, including mobile devices, accessing RESTful information resources through a middle tier of agents that compose resources and carry out transactions (ACT).
In the evening we had a CloudCamp, hosted by The Open Group and conducted as a separate event by the CloudCamp organization. The original CloudCamp concept was of an “unconference” where early adopters of Cloud Computing technologies exchange ideas. Its founder, Dave Nielsen, is now planning to set up a demo center where those adopters can experiment with setting up private clouds. This transition from idea to experiment reflects the changing status of mainstream cloud adoption.
The public conference streams were followed by a meeting of the Open Group Cloud Computing Work Group. This is currently pursuing nine separate projects to develop standards and guidance for architects using cloud computing. The meeting in San Francisco focused on one of these – the Cloud Computing Reference Architecture. It compared submissions from five companies, also taking into account ongoing work at the U.S. National Institute of Standards and Technology (NIST), with the aim of creating a base from which to create an Open Group reference architecture for Cloud Computing. This gave a productive finish to a busy week of information gathering and discussion.
Ralph Hitz of Visana, a health insurance company based in Switzerland, made an interesting comment on our reference architecture discussion. He remarked that we were not seeking to change or evolve the NIST service and deployment models. This may seem boring, but it is true, and it is right. Cloud Computing is now where the automobile was in 1920. We are pretty much agreed that it will have four wheels and be powered by gasoline. The business and economic impact is yet to come.
So now I’m on my way to the airport for the flight home. I checked in online, and my boarding pass is on my cellphone. Big companies, as well as small ones, now routinely use mobile technology, and my airline has a frequent-flyer app. It’s just a shame that they can’t manage a decent cup of coffee.
Dr. Chris Harding is Director for Interoperability and SOA at The Open Group. He has been with The Open Group for more than ten years, and is currently responsible for managing and supporting its work on interoperability, including SOA and interoperability aspects of Cloud Computing. Before joining The Open Group, he was a consultant, and a designer and development manager of communications software. With a PhD in mathematical logic, he welcomes the current upsurge of interest in semantic technology, and the opportunity to apply logical theory to practical use. He has presented at Open Group and other conferences on a range of topics, and contributes articles to on-line journals. He is a member of the BCS, the IEEE, and the AOGEA, and is a certified TOGAF practitioner.
| 计算机
2014-23/1195/en_head.json.gz/27745 | > Results > PRO 80
Public Record Office: Conservation Database
This dataset series provides details of the conservation work undertaken by staff at the Public Record Office (PRO) and now The National Archives (TNA). The original bound registers from 1882 provided a record of PRO and TNA documents on which repair and/or binding work has been carried out, and provided details of the conservation work done. From September 1957, they also gave information about materials used in the process. These datasets are comprised of automated versions of these registers, and this series is evidence of the use of such automation in the Department from 1989 to 2003. The earliest dataset in the series contains information dating back to 1978. The datasets constitute a detailed record of conservation work. Their primary function was to provide a permanent record of the fine detail of the conservation work that took place. Secondary functions were to assist with the workflow of the tasks, record significant steps in the process, and identify the members of staff who did the work.
After 1995, the datasets were also used to generate reports, to be used as evidence of meeting targets.
Between 1989 and 1995, the datasets existed as spreadsheets. They followed a uniform arrangement to record each year's work, and included the following types of data:
Information about the document. This includes the original PRO/TNA reference; its location (Portugal Street, or Kew); the name of the document; the dates it entered and left the repair system. A ticket system was used to keep track of the document as it moved between sections; this is the 'issuing department' referred to in the datasets, and department in this case means an internal section of PRO/TNA, not the Government Department that created the record. The datasets also contain some contextual information about the record and its historical importance, although users should not regard this information as a definitive archival description, and are advised to refer to The Catalogue at TNA. Information about the conservation actions, and materials used in the process. One field records a description of the physical state of the item when it arrived; the description is sometimes continued in the 'Notes' field. Objects requiring conservation were identified with broad categories: paper, parchment, maps, seals, photographs, and volumes. Specific repair and conservation actions are described in each dataset. These include, for example, the use of insecticides, fixatives, adhesives, and chemicals used for cleaning; the manufacture of boxes or cases; the use of envelopes or encapsulation; moulds used for seal repair. Bookbinding work is described in numerous related fields, concerned with sewing of sections, spines, boards, styles, covering material and finishing, etc. Information about the conservators. This includes the initials of up to three conservators involved in the process. Other fields record when they did the work, estimates of time taken, and how long the work actually took. After 1995, when the dataset became a database, it included broadly the same sorts of data as above, but with additional data relating to exhibitions and the loan of documents for exhibitions; the internal storage and retrieval of registry file materials, and electronic records; and the internal storage and retrieval of images. The database itself however contains no image material.
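Purely as an illustration of how these pieces of information sit together in a single entry - the field names and values below are invented for readability and are not the actual column headings used in the datasets - one record might be pictured like this:
var exampleEntry = {
  documentReference: "AB 1/234",            // original PRO/TNA reference (invented)
  location: "Kew",                           // Portugal Street or Kew
  dateEnteredRepair: "1996-03-04",
  dateLeftRepair: "1996-05-17",
  issuingDepartment: "Repository",           // internal PRO/TNA section named on the ticket
  objectCategory: "volume",                  // paper, parchment, map, seal, photograph or volume
  conditionOnArrival: "spine detached; sections loose",
  treatments: ["surface cleaning", "resewing of sections", "new covering material"],
  conservators: ["ABC", "DEF"],              // initials of up to three conservators
  estimatedHours: 30,
  actualHours: 42
};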
Conservation records are important for two main reasons. Firstly, they provide a detailed overview of the overall physical condition or the state of repair and subsequent conservation needs of public records deposited in The National Archives. Secondly, conservation actions constitute a part of the history of each document or volume, and should always be recorded so that the actions could be reversed in the future, or that subsequent actions can be performed safely.
The datasets in this series are available to download. Links to individual datasets can be found at piece level.
Hardware: The Conservation Database was available to conservation staff working at The National Archives on networked PCs.
Operating System: Windows NT 4 until 2001; Microsoft Windows 2000 thereafter. Application Software: SmartWare II, from 1989 to 1991; Microsoft Excel (version not known) from 1991 to 1994; Microsoft Access 97 from 1995 to 2003, latterly in Microsoft Access 2000.
User Interface: Following the adoption of Microsoft Access as the application software for the Conservation database, the user interface consisted of a number of Access forms for data entry and report generation.
Logical structure and schema: The Conservation Database as transferred to NDAD comprises eight datasets. The first seven of these datasets (INFO89 to INFO95) are single tables extracted from MS Excel spreadsheets, organised according to the year's work they represent (although the contents of the first table includes records from 1978). The eighth dataset (oldConservationV21) was extracted from an MS Access database, and has a slightly more complicated structure. How data was originally captured and validated: The data was entered by conservation staff as they started and completed their actions on the documents. Each dataset is thus a cumulative record of work, built up over time.
All the datasets derived from MS Excel spreadsheets in this series are closed, in that they represent an annual 'chunk' of data which was not updated at year end. The dataset taken from the MS Access database is likewise closed, in that it constitutes a 'snapshot' of data from 1995-2003.
Validation performed after transfer: Details of the content and transformation validation checks performed by NDAD on the datasets for the Conservation database are recorded in the Dataset Catalogues.
For Public Record Office: Registers, Repairs, please see PRO 12
Held by:
Former references:
in The National Archives: CRDA/71
Public Record Office, 1838-2003
datasets and documentation
The Conservation Database datasets and related dataset documentation are subject to Crown Copyright; copies may be made for private study and research purposes only.
Immediate source of acquisition:
United Kingdom National Digital Archive of Datasets
Custodial history:
Originally transferred from The National Archives. The United Kingdom National Digital Archive of Datasets (NDAD) then held the datasets until 2010 when The National Archives (TNA) resumed responsibility for their custody.
Accruals:
Further accruals are not expected.
Unpublished finding aids:
Extent of documentation: 23 documents, Dates of creation of documentation: c 1989 - 2006
Administrative / biographical background:
The series records the conservation actions carried out on public records stored in The National Archives (formerly the Public Record Office), by staff of the Conservation Department, between 1978 and 2003. These records of conservation work were created and used at a time when the Public Record Office operated on two sites, with buildings at Portugal Street near Chancery Lane, and a second building at Kew.
PRO Domestic Records of the Public Record Office, Gifts, Deposits, Notes and Transcripts Records of the Public Record Office's Internal Administration PRO 80 Public Record Office: Conservation Database Related research guides | 计算机 |
2014-23/1195/en_head.json.gz/27795 | Information and communication technologies for development
An OLPC class in Ulaanbaatar, Mongolia.
Inveneo Computing Station
Information and Communication Technologies for Development (ICT4D) refers to the use of Information and Communication Technologies (ICTs) in the fields of socioeconomic development, international development and human rights. The theory behind this is that more and better information and communication furthers the development of a society.
Aside from its reliance on technology, ICT4D also requires an understanding of community development, poverty, agriculture, healthcare, and basic education. This makes ICT4D an appropriate technology and, if it is shared openly, an open source appropriate technology.[1] Richard Heeks suggests that the I in ICT4D relates to "library and information sciences", the C is associated with "communication studies", the T is linked with "information systems", and the D with "development studies".[2] It is aimed at bridging the digital divide and aiding economic development by fostering equitable access to modern communications technologies. It is a powerful tool for economic and social development.[3] Other terms can also be used for "ICT4D" or "ICT4Dev" ("ICT for development"), like ICTD ("ICT and development", which is used in a broader sense[4]) and development informatics.
ICT4D can refer to assisting disadvantaged populations anywhere in the world, but it is usually associated with applications in developing countries.[5] It is concerned with directly applying information technology approaches to poverty reduction. ICTs can be applied directly, wherein its use directly benefits the disadvantaged population, or indirectly, wherein it can assist aid organisations or non-governmental organizations or governments or businesses to improve socio-economic conditions.
The field has become an interdisciplinary research area, as reflected in the growing number of conferences, workshops and publications.[6][7][8] This is partly due to the need for scientifically validated benchmarks and results that can measure the effectiveness of current projects.[9] The field has also produced an informal community of technical and social science researchers who rose out of the annual ICT4D conferences.[10]
1 Theoretical background
3 Values framework
4 Access and Use of ICT
5 ICT for different aspects of Development
5.1 Climate, Weather and Emergency Response Activities
5.2 People with Disabilities
5.3 ICT for Education
5.4 ICT For Livelihood
5.5 ICT for Agriculture
5.6 ICT4D for Other Sectors
6 ICT4D and Mobile Technologies
6.1 Mobile Technology in Education
6.2 Mobile Shopping
6.3 Mobile Telephony and Development Opportunities
6.3.1 An Example: Zidisha
6.3.2 An Example: Esoko
6.3.3 An Example: Scientific Animations Without Borders
7 Opportunities
7.1 Women and ICT4D
7.2 Artificial Intelligence for Development
8 Funding ICT4D
9.1 Analyses
9.2 Problems
9.4 Sustainability and scalability
10 Impact Assessment on ICT4D
11 Criticisms/Challenges
11.1 ICT and its Carbon Footprint
11.2 ICT and the Neoliberalization of Education
12 Country and region case studies
12.1 ICT4D in the Philippines
12.2 ICT4D in Africa
13 International Programs and Strategies for ICT4D
13.1 eLAC Action Plans for Latin America and the Caribbean
13.2 SIRCA Programme for ICTD Researchers in Asia Pacific Region
14 Organization that Support ICT4D Programmes
14.1 ICT for Greater Development Impact
15 Events Supporting ICT4D Initiatives
15.1 2000 Okinawa Summit of G8 Nations
15.2 World Summit on the Information Society (WSIS)
15.3 World Summit on the Information Society (WSIS) Stocktaking
15.4 WSIS Project Prizes
15.5 WCIT
15.6 CRS ICT4D Conference
15.7 ICT4A
19.1 Media
19.2 Video
Theoretical background
The ICT4D discussion falls into a broader school of thought that proposes to use technology for development. The theoretical foundation can be found in the Schumpeterian notion of socio-economic evolution,[11] which consists of an incessant process of creative destruction that modernizes the modus operandi of society as a whole, including its economic, social, cultural, and political organization.[12]
The motor of this incessant force of creative destruction is technological change.[13][14] While the key carrier technology of the first Industrial Revolution (1770–1850) was based on water-powered mechanization, the second Kondratiev wave (1850–1900) was enabled by steam-powered technology, the third (1900–1940) was characterized by the electrification of social and productive organization, the fourth by motorization and the automated mobilization of society (1940–1970), and the most recent one by the digitization of social systems.[11] Each one of those so-called long waves has been characterized by a sustained period of social modernization, most notably by sustained periods of increasing economic productivity. According to Carlota Perez: “this quantum jump in productivity can be seen as a technological revolution, which is made possible by the appearance in the general cost structure of a particular input that we could call the 'key factor', fulfilling the following conditions: (1) clearly perceived low-and descending-relative cost; (2) unlimited supply for all practical purposes; (3) potential all-pervasiveness; (4) a capacity to reduce the costs of capital, labour and products as well as to change them qualitatively”.[14] Digital Information and Communication Technologies fulfill those requirements and therefore represent a general purpose technology that can transform an entire economy, leading to a modern, and more developed form of socio-economic and political organization often referred to as the post-industrial society, the fifth Kondratiev, Information society, digital age, and network society, among others.
ICT4D cube: an interplay between technology (horizontal: green), society (vertical: blue), policy (diagonal: yellow/red)
The declared goal of ICT-for-development is to make use of this ongoing transformation by actively using the enabling technology to improve the living conditions of societies and segments of society.[citation needed] As in previous social transformations of this kind (the industrial revolution, etc.), the resulting dynamic is an interplay between an enabling technology, normative guiding policies and strategies, and the resulting social transformation.[11][12][13] In the case of ICT4D, this three-dimensional interplay has been depicted as a cube.[15] In line with the Schumpeterian school of thought, the first enabling factor for the associated socio-economic transformations is the existence of technological infrastructure: hardware infrastructure and generic software services. Additionally, capacity and knowledge are the human requirements for making use of these technologies. These foundations (horizontal green dimension in Figure) are the basis for the digitization of information flows and communication mechanisms in different sectors of society. When part of the information flows and communication processes in these sectors are carried out in e-lectronic networks, the prefix "e-" is often added to the sector's name, resulting in e-government, e-business and e-commerce, e-health, and e-learning, etc. (vertical blue dimension in Figure). These processes of transformation represent the basic requirements and building blocks, but they are not sufficient for development. The mere existence of technology is not enough to achieve positive outcomes (no technological determinism). ICT for Development policies and projects aim at promoting normatively desired outcomes of this transformation, minimizing negative effects, and removing eventual bottlenecks. In essence, there are two kinds of interventions: positive feedback (incentives, projects, financing, subsidies, etc. that accentuate existing opportunities); and negative feedback (regulation and legislation, etc.) that limits and tames negative developments (diagonal yellow-red dimension in Figure).[15]
A telecentre in Gambia
The intentional use of communication to foster development is not new. So-called Development Communication research during the 1960s and 1970s set the ground for most existing development programs and institutions in the field of ICT4D, with Wilbur Schramm, Nora C. Quebral and Everett Rogers being influential figures in this academic discipline. In modern times, ICT4D has been divided into three periods:[16]
ICT4D 0.0: mid-1950s to late-1990s. This was before the creation of the term "ICT4D". The focus was on broadcasting Development Communication, computing / data processing for back-office applications in large government and private sector organizations in developing countries.
ICT4D 1.0: late-1990s to late-2000s. The combined advent of the Millennium Development Goals and mainstream usage of the Internet in industrialised countries led to a rapid rise in investment in ICT infrastructure and ICT programmes/projects in developing countries. The most typical application was the telecentre, used to bring information on development issues such as health, education, and agricultural extension into poor communities. More latterly, telecentres might also deliver online or partly online government services.
ICT4D 2.0: late-2000s onwards. There is no clear boundary between phase 1.0 and 2.0 but suggestions of moving to a new phase include the change from the telecentre to the mobile phone as the archetypal application. There is less concern with e-readiness and more interest in the impact of ICTs on development. Additionally, there is more focus on the poor as producers and innovators with ICTs (as opposed to being consumers of ICT-based information).
As information and communication technologies evolve, so does ICT4D: more recently it has been suggested that Big Data can be used as an important ICT tool for development and that it represents a natural evolution of the ICT4D paradigm.[17]
Values framework
It is unusual for an objective endeavour such as research to have corresponding values. However, since ICT4D is foremost an initiative as well as an advocacy, development itself may opt for a certain ideal or state, and so values can be included in development research. The Kuo Model of Informatization has three dimensions, namely infrastructure, economy and people. These dimensions correspond to:[18]
Education and literacy levels
Economic indicators (GNP, GDP, etc.)
Telecommunications and media infrastructure
However, this may not be applicable to all countries. In the model, the three dimensions are correlated with each other, but Alexander Flor notes that in his country, the Philippines, the model is not entirely suitable for the following reasons:
The high education and literacy levels are not directly correlated with telecommunications infrastructure and degree of economic development.
The correlation between the degrees of telecommunications infrastructure and economic development cannot easily be established.
Flor proposes that a new dimension be added to the Kuo Model: a values dimension. This dimension can be operationalized through government priority indicators, subsidy levels and corruption levels, among others. He proposes the following values for this dimension: equality, complementarity, integration, participation and inclusion, development from within, and convergence.[19]
Access and Use of ICT
See also: Computer technology for developing areas and List of ICT4D organizations
ICT4D projects often employ low-cost, low-powered technology that is sustainable in a developing environment. The challenge is hard, since an estimated 40% of the world's population has less than US$20 per year available to spend on ICT. In Brazil, the poorest 20% of the population has merely US$9 per year to spend on ICT (US$0.75 per month).[20]
From Latin America it is known that the borderline between ICT as a necessity good and ICT as a luxury good is roughly around the “magical number” of US$10 per person per month, or US$120 per year.[20] This is the amount people seem prepared to spend on ICT and it is therefore generally accepted as a minimum. In light of this reality, telecentres, desktop virtualization and multiseat configurations currently seem the simplest and most common paths to affordable computing.
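To see why shared-access models such as telecentres and multiseat setups fit within these spending levels, a rough cost-per-user calculation helps. The sketch below is only illustrative; the hardware price, equipment lifetime and number of users sharing a seat are assumptions, not figures from the sources cited above.

```typescript
// Rough, illustrative cost model for shared computing access.
// All input figures are assumptions made for the sake of the example.

function monthlyCostPerUser(
  hardwareCost: number,   // total cost of the setup (US$)
  lifetimeMonths: number, // expected useful life of the equipment
  usersSharing: number    // people regularly sharing the same equipment
): number {
  return hardwareCost / lifetimeMonths / usersSharing;
}

// A US$600 multiseat station used for 36 months by 20 regular users:
const shared = monthlyCostPerUser(600, 36, 20);    // roughly US$0.83 per user per month
// The same budget spent on an individual PC for a single user:
const individual = monthlyCostPerUser(600, 36, 1); // roughly US$16.67 per user per month

console.log({ shared, individual }); // shared access stays well under the US$10/month threshold
```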
ICT4D projects need to be properly monitored and implemented, and the system's design and user interface should suit the target users. ICT4D projects installed without proper coordination with their beneficiary communities tend to fall short of their main objectives. For example, where ICT4D projects are used in farming sectors in which a majority of the population is considered technologically illiterate, projects lie idle and sometimes get damaged or are allowed to become obsolete.
Further, there should be a line of communication between the project coordinator and the users so that queries or difficulties can be addressed immediately. Addressing problems properly helps encourage users through interactivity and participation.
Peer-to-peer dialogues facilitated by Cisco's TelePresence technology are now being used, connecting 10 centers around the world to discuss best practices in the use of ICT in urban service delivery.
ICT4D has also been given a new take with the introduction of Web 2.0. With 5.2 billion internet users, the power generated by the internet should not be overlooked. With social networking at the frontier of the new web, ICT can take a new approach: updates, news and ordinances are spread readily by these applications, and feedback systems become more evident. In the Philippines, the administration now uses social media to converse more with its citizens, as it makes people feel more in touch with the highest official in the land.[21] Another innovation is a standard suite of city indicators that enables mayors and citizens to compare the performance of their city with others; consistent and comparable city-level data are important for this.
Geographic Information Systems (GIS) are also used in several ICT4D applications, such as the Open Risk Data Initiative (OpenRDI). OpenRDI aims to minimize the effect of disasters in developing countries by encouraging them to open their disaster risk data. GIS technologies such as satellite imagery, thematic maps, and geospatial data play a big part in disaster risk management. One example is HaitiData, where maps of Haiti containing layers of geospatial data (earthquake intensity, flooding likelihood, landslide and tsunami hazards, overall damage, etc.) are made available, which can then be used by decision makers and policy makers for the rehabilitation and reconstruction of the country.[22][23] The areas receiving priority attention include natural resources information assessment, monitoring and management, watershed development, environmental planning, urban services and land use planning.[24]
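As a simplified illustration of how such layered hazard data can inform decisions, the sketch below combines several hazard layers into a composite risk score. The layer names, weights and threshold are assumptions made for the example; they do not reflect the actual OpenRDI or HaitiData data model.

```typescript
// Minimal sketch: combining geospatial hazard layers into a composite risk score.
// Layer names, weights and the threshold are illustrative assumptions.

interface HazardCell {
  lat: number;
  lon: number;
  earthquakeIntensity: number; // normalised 0..1
  floodLikelihood: number;     // normalised 0..1
  landslideHazard: number;     // normalised 0..1
}

// Weighted overlay of the individual hazard layers.
function compositeRisk(cell: HazardCell): number {
  return 0.4 * cell.earthquakeIntensity +
         0.35 * cell.floodLikelihood +
         0.25 * cell.landslideHazard;
}

// Flag cells that planners might prioritise for reconstruction or early warning.
function highRiskCells(cells: HazardCell[], threshold = 0.7): HazardCell[] {
  return cells.filter((cell) => compositeRisk(cell) >= threshold);
}

const sample: HazardCell[] = [
  { lat: 18.54, lon: -72.34, earthquakeIntensity: 0.9, floodLikelihood: 0.6, landslideHazard: 0.4 },
  { lat: 18.60, lon: -72.30, earthquakeIntensity: 0.2, floodLikelihood: 0.3, landslideHazard: 0.1 },
];
console.log(highRiskCells(sample));
```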
Many of these initiatives are a mixture of donor agency support, international intervention and local community enterprise. For instance, the wireless Town Information Network implemented in the town of Slavutych (Ukraine) to address social-economic issues in the context of the closure of the Chernobyl Nuclear Power Plant was supported by international donors, development consultants and, importantly, local stakeholders.[25] E-bario (Malaysia) is an example of a grassroots initiative that was also supported by a wider network and that has endured and overcome many of the pitfalls of ICT4D projects.[26]
ICT for different aspects of Development
Climate, Weather and Emergency Response Activities
The use of ICT in weather forecasting is broad. Nowadays, weather forecasting offices use mass media to inform the public of weather updates. After Tropical Storm Ondoy in the Philippines, Filipinos became more curious about and aware of weather hazards. Meteorological offices also use advanced tools to monitor the weather and the weather systems that may affect a certain area.
Monitoring devices[27]
Weather Satellites
Doppler weather radars
Automatic weather stations (AWS)
Wind profiler
other synoptic data or weather instruments
One of these advanced tools is the Earth Simulator, a supercomputer used for detailed simulation of weather and climate conditions.
Climate change is a global phenomenon affecting the lives of mankind. In time of calamities we need information and communication technology for disaster management. Various organisations, government agencies and small and large-scale research projects have been exploring the use of ICT for relief operations, providing early warnings and monitoring extreme weather events.[28] A review of new ICTs and climate change in developing countries highlighted that ICT can be used for (1) Monitoring: observing, detecting and predicting, and informing science and decision making; (2) Disaster management: supporting emergency response through communications and information sharing, and providing early warning systems; and (3) Adaptation: supporting environmental, health and resource management activities, up-scaling technologies and building resilience.[28] In the Philippines, institutions like the National Disaster and Risk Reduction and Management Council help the public in monitoring the weather and advisory for any possible risks due to hazardous weather. NetHope is another global organization which contributes disaster management and awareness through information technology. According to ICTandclimatechange.com ICT companies can be victims, villains or heroes of climate change.
People with Disabilities
According to World Health Organization (WHO), 15% of the world's total population have disabilities. This is approximately 600 million people wherein three out of every four are living in developing countries, half are of working age, half are women and the highest incidence and prevalence of disabilities occurs in poor areas.[29] With ICT, lives of people with disabilities can be improved, allowing them to have a better interaction in society by widening their scope of activities.
Goals of ICT and Disability Work
Give disabled people a powerful tool in their battle to gain employment
Increase disabled people’s skills, confidence, and self-esteem
Integrate disabled people socially and economically into their communities;
Reduce physical or functional barriers and enlarge scope of activities available to disabled persons
At the international level, there are numerous guiding documents affecting the education of people with disabilities, ranging from the Universal Declaration of Human Rights (1948) to the Convention against Discrimination in Education (1960), the Convention on the Rights of the Child (1989) and the Convention on the Protection and Promotion of the Diversity of Cultural Expressions (2005). The Convention on the Rights of Persons with Disabilities (CRPD) includes policies on accessibility, non-discrimination, equal opportunity, full and effective participation and other issues. The key statement within the CRPD (2006) relevant to ICT and people with disabilities is within Article 9:
"To enable persons with disabilities to live independently and participate fully in all aspects of life, States Parties shall take appropriate measures to ensure to persons with disabilities access, on equal basis with others, to the physical environment, to transportation, to information and communications, including information and communications technologies and systems, and other facilities and services open or provided to the public, both in urban and rural areas. (p. 9)"
Another international policy that has indirect implications for the use of ICT by people with disabilities are the Millennium Development Goals (MDGs). Although these do not specifically mention the right to access ICT for people with disabilities, two key elements within the MDGs are to reduce the number of people in poverty and to reach out to the marginalised groups without access to ICT.[30]
ICT Programs:
Estonian e-Learning Development Centre alongside with Primus- One activity of Primus is to develop and run a support system for students with special needs. This is done by: developing different support services (e.g. digitalising and recording teaching material for students with visual impairments, creating training courses); improving learning environments (assessing physical accessibility of buildings); running a scholarship scheme for students with special needs to support their full participation in studies.[30]
European Unified Approach for Assisted Lifelong Learning (EU4ALL)- The aim of this initiative is to create an accessible and adapted course addressed to students with different disabilities – cognitive, physical and sensory. The course was designed through an Instructional Learning Design. The learner is given access to a course with activities and resources personalised according to the student’s needs profile.[30]
Plan Ceibal- aims to promote digital inclusion in order to reduce the digital gap with other countries, as well as among the citizens of Uruguay. In order to support better access to education and culture, every pupil in the public education system is being given a laptop. Within Plan Ceibal an initiative began at the end of 2008 to provide tools to improve accessibility of the laptop for learners with special needs, using particular assistive technology aids in classes equipped with these machines.[30]
Leren en werken met autisme (Learning and working with Autism)- is a DVD with several tools aimed at helping students with autism or autistic spectrum disorders in their transition from education to work, or workplace training settings. One of the tools is the wai-pass– specific e-portfolio software. This e-portfolio not only provides information about the skills and competences of a particular student, but also about his/her behaviour in particular settings and situations. This type of very relevant information is gathered by teachers throughout the student’s school career and often vanishes when a student leaves school. Through this e-portfolio tool, the information can be easily disclosed to (potential) employers. There is also a Toolkit for workplace learning and traineeship and Autiwerkt, a movie and a website with roadmaps, tips and tricks on traineeship and preparation for regular employment of students.[30]
Everyday Technologies for Children with Special Needs (EvTech)- is a collaborative initiative aiming to increase the possibilities of children with special needs to make choices and influence their environments in everyday life by developing individualised technical environments and tools for children and their families.[30]
Discapnet- world's biggest and most visited website dealing with disability issues.[29]
ICT for Education
ICT for Education (ICT4E) is a subset of the ICT4D thrust. Amid globalization and technological change, education is one of the main sectors that ICT should change and modify. ICTs greatly facilitate the acquisition and absorption of knowledge, offering developing countries unprecedented opportunities to enhance educational systems, improve policy formulation and execution, and widen the range of opportunities for business and the poor. One of the greatest hardships endured by the poor, and by many others who live in the poorest countries, is their sense of isolation. The new communications technologies promise to reduce that sense of isolation, and to open access to knowledge in ways unimaginable not long ago.
Education is seen as a vital input to addressing issues of poverty, gender equality and health in the MDGs. This has led to an expansion of demand for education at all levels. Given limited education budgets, the opposing demand for increased investment in education against widespread scarcity of resources puts intolerable pressure on many countries’ educational systems. Meeting these opposing demands through the traditional expansion of education systems, such as building schools, hiring teachers and equipping schools with adequate educational resources will be impossible in a conventional system of education. ICTs offer alternate solutions for providing access and equity, and for collaborative practices to optimize costs and effectively use resources.[31]
Countries with National Programs and Good Practice Examples of ICT Use in Education include: [32]
Chile, the Chilean experience[33]
Costa Rica, The Ministry of Education and Fundación Omar Dengo’s partnership
India (Kerala), IT@school
Bangladesh, BRAC's Computer Aided Learning (CAL) Initiative, http://e-education.brac.net
Jordan Education Initiative
Macedonia's Primary Education Project (PEP)
Malaysia, Smart School
Namibia’s ICTs in Education Initiative, TECH/NA!
Russia E-Learning Support Project
Singapore's Masterplan for ICT in Education (now in its third edition)[34]
South Korea, first aid beneficiary now donor,[35] the Korea Education Research & Information Service (KERIS)
Uruguay, small South American country, Plan Ceibal
ICT has been employed in many education projects and research over the world. The Hole in the Wall (also known as minimally invasive education) is one of the projects which focuses on the development of computer literacy and the improvement of learning. Other projects included the utilization of mobile phone technology to improve educational outcomes.[36]
In the Philippines, there are key notes that have been forwarded to expand the definition of ICT4E from an exclusive high-end technology to include low-end technology; that is, both digital and analog.[37] As a leading mobile technology user, the Philippines can take advantage of this for student learning. One project that serves as an example is Project Mind,[38] a collaboration of the Molave Development Foundation, Inc, Health Sciences University of Mongolia, ESP Foundation and the University of the Philippines Open University (UPOU) which focuses on the viability of Short Message System (SMS) for distance learning. Pedagogy, Teacher Training, and Personnel Management are some of the subgroups of ICT4E. UPOU is one of the best examples of education transformation that empowers the potential of ICT in the Philippines' education system. By maximizing the use of technology to create a wide range of learning, UPOU promotes lifelong learning in a more convenient way.
Since the education sector plays a vital role in economic development, education systems in developing countries should keep pace with fast-evolving technology, because technological literacy is one of the skills required in the current era. ICT can enhance the quality of education by increasing learner motivation and engagement, by facilitating the acquisition of basic skills and by enhancing teacher training, which will eventually improve communication and the exchange of information and thereby strengthen economic and social development.
ICT For Livelihood
Agriculture is the most vital sector for ICT intervention, especially since a majority of the population around the world relies on agriculture for a sustainable living. According to Dr. Alexander G. Flor, author of the book ICT4D: Information and Communication Technology for Development, agriculture provides our most basic human needs: food, clothing and shelter.
People have always sought ways to survive and make a living from agriculture: harvesting crops used for food and fibre; raising livestock such as cattle, sheep and poultry that produce wool, dairy and eggs; catching fish and other edible marine life for food or for sale; and growing and harvesting timber to build shelter. Farmers have traditionally learned and acquired knowledge by sharing information with each other, but this is no longer enough given the changes and developments in agriculture. Farmers should be able to get hold of updated information such as prices, production techniques, services, storage, processing and the like. This need for updated information can be addressed by the effective use of ICT (the Internet, mobile phones, and other digital technologies).
Poor families in rural areas have limited or no access to information and communication technology. However, these people also need access to ICT, since the technology would help them save resources such as time, labour, energy and physical inputs, and would thus have a greater positive impact on their livelihoods and incomes.[39]
The lives of the rural poor could be alleviated through the application of information and communication technology through the following:
By supplying information to inform the policies, institutions, and processes that affect their livelihood options.
By providing access to information needed in order to pursue their livelihood strategies, including:
Financial Capital – online and mobile banking will allow rural poor to have greater access to banking facilities and provide a secure place for cash deposits and remittances.
Human Capital – using ICT will allow intermediaries or knowledge providers impart updated knowledge, techniques and new developments in technology to the locals.
Physical Capital – service providers will be able to monitor access to local services.
Natural Capital – access to information about the availability and management of natural resources will be enhanced, and market access for agricultural products will be reinforced. Lastly, ICT could provide early warning systems to reduce the hazards of natural disasters and food shortages.
Social Capital – connectivity, social networking, and contact for geographically disparate households will be reinforced.
The advent of ICT offers new opportunities to support the development of rural livelihoods. It strengthens production and increases market coordination, the main processes that can contribute to the future of the sector and create income for the people who depend on it.
ICT for Agriculture
Farmers who have better access to ICT have better lives because of the following:[40][41]
access to price information – farmers will be informed of the accurate current prices and the demands of the products. Hence, they will be able to competitively negotiate in the agricultural economy and their incomes will be improved.
access to agriculture information – according to the review of global and national agricultural information systems done by IICD with support from DFID in 2003, there is a need for coordination and streamlining of existing agriculture information sources, both internationally and within developing countries. The information provided is usually too scientific for farmers to comprehend, so it is vital that local information relayed to farmers be simplified.
access to national and international markets – increasing farmers' level of access is vital to simplify contact between sellers and buyers, to publicize agricultural exports, to facilitate online trading, and to increase producers' awareness of potential market opportunities, including consumer and price trends.
increasing production efficiency – owing to several environmental threats such as climate change, drought, poor soil, erosion and pests, the livelihoods of farmers are unstable. A flow of information about new production techniques, built by documenting and sharing farmers' experiences, would therefore open up new opportunities for farmers.
creating a conducive policy environment – through the flow of information from the farmers to policy makers, a favorable policy on development and sustainable growth of the agriculture sector will be achieved.
For example, the following ICT4D innovation have been found in Taiwan:[42]
A rice germination electronic cooker[43]
A robotic tubing-grafting system for fruit-bearing vegetable seedlings[44]
An air bubble machine and multi-functional, ultrasonic machine for fruit cleaning[45]
ICTs offer advantages over traditional forms of agricultural training that rely on extension agents. However, these forms of communication also have limitations. For example, face-to-face farmer training often costs $50 per farmer per year, while training via radio may cost as little as $0.50 per farmer per year; yet the capacity of radio to transmit and collect information is more limited than that of face-to-face interaction.[46]
ICT4D for Other Sectors
In 2003, the World Summit on the Information Society (WSIS) held in Geneva, Switzerland came up with concrete steps on how ICT can support sustainable development in the fields of public administration, business, education and training, health, employment, environment, agriculture and science.[47]
The WSIS Plan of Action identified the following as sectors that can benefit from the applications of ICT4D:
The e-government action plan involves applications aimed at promoting transparency to improve efficiency and strengthen citizen relations; needs-based initiatives and services to achieve a more efficient allocation of resources and public goods; and international cooperation initiatives to enhance transparency, accountability and efficiency at all levels of government.
Governments, international organizations and the private sector are encouraged to promote the benefits of international trade and e-business; stimulate private sector investment, foster new applications, content development and public/private partnerships; and adapt policies that favor assistance to and growth of SMMEs in the ICT industry to stimulate economic growth and job creation.
A specific sector that has received some attention has been tourism. Roger Harris was perhaps one of the first to showcase the possible benefits. His work focused on a remote location in Malaysia[48][49] and highlighted some of the possibilities of small tourism operators using the internet. Others have shown the possibilities for small tourism operators in using the internet and ICT to improve business and local livelihoods.[50][51]
Capacity building and ICT literacy are essential to benefit fully from the Information Society. ICT contributions to e-learning include the delivery of education and training of teachers, offering improved conditions for lifelong learning, and improving professional skills.
ICTs can aid in collaborative efforts to create a reliable, timely, high quality and affordable health care and health information systems [52] and to promote continuous medical training, education, and research. WSIS also promotes the use of ICTs to facilitate access to the world’s medical knowledge, improve common information systems, improve and extend health care and health information systems to remote and underserved areas, and provide medical and humanitarian assistance during disasters and emergencies.
E-employment
The e-employment action plan includes the development of best practices for e-workers and e-employers; raising productivity, growth and well-being by promoting new ways of organizing work and business; promotion of teleworking with focus on job creation and skilled worker retention; and increasing the number of women in ICT through early intervention programs in science and technology.
The government, civil society and private sector are encouraged to use and promote ICTs as instruments for environmental protection and the sustainable use of natural resources; to implement green computing programs; and to establish monitoring systems to forecast and monitor the impact of natural and man-made disasters.
WSIS recognizes the role of ICT in the systematic dissemination of agricultural information to provide ready access to comprehensive, up-to-date and detailed knowledge and information, particularly in rural areas. It also encourages public-private partnerships to maximize the use of ICTs as an instrument to improve production.
The plan of action for e-science involves affordable and reliable high-speed Internet connection for all universities and research institutions; electronic publishing, differential pricing and open access initiatives; use of peer-to-peer technology for knowledge sharing; long-term systematic and efficient collection, dissemination and preservation of essential scientific digital data; and principles and metadata standards to facilitate cooperation and effective use of collected scientific information and data.
E-security
The prevalence of crime online and offline, local and international (including terrorism), has led to the increased development of tools (including ICT) to pre-empt threats and enforce proper security measures, making public security, peace and order a top priority.
Reporting, projects and follow-up:
Since the first edition of the WSIS Stocktaking Report was issued in 2005, biannual reporting has been a key tool for monitoring the progress of ICT initiatives and projects worldwide. The 2012 report (www.itu.int/wsis/stocktaking/docs/reports/S-POL-WSIS.REP-2012-PDF-E.pdf) reflects more than 1,000 recent WSIS-related activities, undertaken between May 2010 and the present day, each emphasizing the efforts deployed by stakeholders involved in the WSIS process.
ICT4D and Mobile Technologies
In recent years, development in mobile computing and communication led to the proliferation of mobile phones, tablet computers, smartphones, and netbooks. Some of these consumer electronic products, like netbooks and entry-level tablet computers are often priced lower as compared to notebooks/laptops and desktop computer since the target market for these products are those living in the emerging markets.[53] This made the Internet and computing more accessible to people, especially in emerging markets and developing countries where most of the world’s poor reside.
Furthermore, these consumer electronic products are equipped with basic mobile communication hardware like, WiFi and 2.5G/3G Internet USB sticks. These allowed users to connect to the Internet via mobile and wireless networks without having to secure a landline or an expensive broadband connection via DSL, cable Internet or fiber optics.
According to the International Telecommunication Union, mobile communications have emerged as the primary technology that will bridge the digital divide in the least developed countries. This trend is further supported by the strong sales reported by technology companies selling these electronic devices in emerging markets, which include some of the least developed countries. In fact, some multinational computer manufacturers such as Acer and Lenovo are focusing on bringing cheaper netbooks to emerging markets like China, Indonesia and India.[54]
Moreover, data from the ITU’s Measuring the Information Society 2011 report shows that mobile phones and other mobile devices are replacing computers and laptops in accessing the Internet. Countries in Africa have also recorded growth in using mobile phones to access the Internet. In Nigeria, for example, 77% of individuals aged 16 and above use their mobile phones to access the Internet as compared to a mere 13% who use computers to go online.[55] These developments and growth in mobile communication and its penetration in developing countries are expected to bridge the digital divide between least-developed countries and developed countries although there are still challenges in making these services affordable.[56]
Mobile Technology in Education
The field of Mobile Learning is still in its infancy, and so it is still difficult for experts to come up with a single definition of the concept.[57] One definition of Mobile Learning or mLearning is provided by MoLeNet: “It is the exploitation of ubiquitous handheld technologies, together with wireless and mobile phone networks, to facilitate, support, enhance and extend the reach of teaching and learning”.[58]
Advancements in hardware and networking technologies have made it possible for mobile devices and applications to be used in the field of education.[59] Newer developments in mobile phone technology make phones more embedded, ubiquitous and networked, with enhanced capabilities for rich social interaction and internet connectivity. Such technologies can have a great impact on learning by providing a rich, collaborative and conversational experience to both teachers and students.[60] Mobile learning has been adopted in classes because, aside from enhancing students' learning, it also helps teachers keep track of students' progress easily. Communication is possible whenever it is needed. Discipline and responsibility must accompany the content in mobile learning, since whatever is posted is made available to everyone who is given access.[61]
Despite the challenges that it presently faces, both technical and pedagogical, experts still remain positive about the concept of mobile learning. The most commonly expected advantages from adopting mobile technology in education include their potential to be engaging for students, to enable interactive learning, and to support personalization of instruction to meet the needs of different students.[62]
Future Technology in Education: Digital Devices in the Classroom
The E-reader serves as a device for reading content such as E-books, newspapers and documents. An E-reader has wireless connectivity for downloading content and conducting other web-based tasks. Popular E-readers include the Amazon Kindle and Sony Reader. Amazon Kindle E-readers may one day be as thin as the paper they replace. The retailer has predicted a light Kindle that would not require a battery, a processing unit or local storage to be built into the body. Such a device would communicate with a remote unit that transmits power and data to the display, offering a longer lifespan than a rechargeable battery. These E-readers would allow students going from class to class to use the display all day without charging a battery. Additionally, there are plans in the works for "Google Glass" to provide a portable E-reader within the windshield of an automobile.
ELMO Document Camera
You can integrate the ELMO into lessons; some examples are Modeling Writing, Shared Reading, Letter Formation, Math, Health Education, History and Geography. The future of the ELMO depends on how teachers set up and provide interactive lessons for students. The ELMO supports demonstration of practical skills, brings lessons to life and involves students in an interactive classroom, and it can serve as a video-conference system, student-response system, digital recorder and slate/tablet.
Direct instruction is done outside the classroom (usually through videos on the Internet): students watch lecture videos before the topic is visited in class. Class time can then be dedicated to deeper interaction, collaborative learning, practising skills with the instructor and receiving feedback from the instructor.
For K-12 education, most Flipped Classrooms are dedicated to secondary Math and Science classes. Every student has access to a device (laptop, tablet, iPod, smartphone), whether they bring it themselves or it is provided by the school. Teachers facilitate a deeper understanding. Students are engaged and active, and there are fewer classroom disruptions. In the near future, Flipped Classrooms will be found in all grades K-12. Additional benefits of a Flipped Classroom are that the teacher becomes a "guide on the side" rather than a "sage on the stage," and that students engage in different strategies for learning, making it more personalized.
Since learning disabilities cannot be cured or fixed, it is imperative to develop workarounds for students with learning disabilities and to accommodate them with both low-tech and high-tech tools. These tools will help students reach their full potential and provide greater freedom and independence. | 计算机
2014-23/1195/en_head.json.gz/27878 | TOSS is a Linux distribution targeted especially at engineers and developers, while giving convenience and ease-of-use to laymen. It is a spin-off from Ubuntu. The core of Ubuntu has been retained with minimal changes, enabling users to retain its more popular and useful features, but providing a completely different look and feel. Despite the eye-candy offered with a variety of user-friendly interfaces, TOSS mainly targets student developers. It offers the user gcc-build-essential, OpenSSL, PHP, Java, gEda, xCircuit, KLogic, KTechlab, and a variety of other essential programs for engineering and application development.
GPLv2Linux DistributionsOperating System
HotSpotEngine
HotSpotEngine is a Web based software for the HotSpot Billing System and all-in-one hotspot management solutions. It supports wireless or wired networking. It is designed to run on a dedicated PC, and it is available as an installable CD image (ISO). It comes with a Linux-based OS and all required software included. Its main features include the ability to create randomly generated vouchers, prepaid user accounts with time limits or data limits, the ability to refill vouchers, and user sign-up via PayPal integration.
CommercialInternetCommunicationsNetworkingLinux Distributions
openSUSE is a Linux-based operating system for your PC, laptop, or server. You can use it to surf the Web, manage your email and photos, do office work, play videos or music, and have a lot of fun.
Operating SystemLinux distribution
Comal-Linux
Comal-Linux is a Linux distribution derived from Slackware Linux. It is packaged as a live CD, and is intended for desktop users who want to use Slackware Linux without first installing it on their computers. Comal-Linux is built from "pure" Slackware Linux, making it as compatible with the original as possible, including application packages. By choosing lightweight desktop and application software, the distribution can be used on older computers. Comal-Linux is an unofficial Muslim edition of Slackware.
GPLSlackwareOperating SystemLive-CDMuslim
openmamba is a fully featured GNU/Linux distribution for desktops, notebooks, netbooks, and servers. It runs on computers based on the 32-bit Intel x86 architecture, or on 64-bit AMD processors in 32-bit mode. openmamba comes with both free and closed source drivers for the most frequently used video cards. It supports compiz out of the box. It has preinstalled multimedia codecs, and can install the most frequently used closed source applications for GNU/Linux (such as Flash Player or Skype) very easily.
GPLv3Operating SystemLinux distribution | 计算机 |
2014-23/1195/en_head.json.gz/28515 | This page is a stub. Help us expand it, and you get a cookie.
Help expand it
This article could use an infobox! If there is already an infobox on this page, it may need more information. If you have any basic knowledge of the game, please add an infobox to this page by using the infobox template.
If you need help with wiki markup, see the wiki markup page. If you want to try out wiki markup without damaging a page, why not use the sandbox?
Add an infobox!
This article does not have any categories that specifically relate to the game. Help us add some in order to make it easier for other users to find this page.
Add some categories!
http://greenheartgames.com/
Game Dev Tycoon Forums
Game Dev Tycoon is a business simulation video game released on December 10, 2012. In the game, the player creates and publishes video games. Game Dev Tycoon was inspired by Game Dev Story (by Kairosoft), which was released on the App Store for iOS. Game Dev Tycoon was created by Greenheart Games, a company founded in July 2012 by brothers Patrick and Daniel Klug.
The game's developers implemented a unique anti-piracy measure for Game Dev Tycoon. Patrick Klug, founder of Greenheart Games, knowing that the game was likely to be torrented extensively, purposely released a cracked version of the game and uploaded it himself to torrent sites. Gameplay in this version is identical except for one variation: as players progress through the game, they receive the following message:
Boss, it seems that while many players play our new game, they steal it by downloading a cracked version rather than buying it legally. If players don’t buy the games they like, we will sooner or later go bankrupt.
—Greenheart Games, Game Dev Tycoon
Eventually, players of the cracked version gradually lose money until they go bankrupt as a result of in-game piracy.
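The snippet below is a purely hypothetical sketch of how a mechanic like this could be modelled; it is not Greenheart Games' actual code, and all of the numbers are invented for illustration.

```typescript
// Hypothetical sketch of the cracked-version mechanic described above.
// Not actual game code; every value here is an invented assumption.

interface Studio {
  cash: number;
  isCrackedCopy: boolean;
  piracyRate: number; // fraction of each game's sales lost to piracy
}

function monthlyTick(studio: Studio, grossSales: number, expenses: number): void {
  let income = grossSales;
  if (studio.isCrackedCopy) {
    income *= 1 - studio.piracyRate;                            // pirated copies earn nothing
    studio.piracyRate = Math.min(1, studio.piracyRate + 0.05);  // piracy keeps spreading
    if (studio.piracyRate > 0.3) {
      console.log("Boss, it seems that while many players play our new game, they steal it...");
    }
  }
  studio.cash += income - expenses;
  if (studio.cash < 0) {
    console.log("The studio has gone bankrupt.");
  }
}

// Example: a cracked copy slowly bleeds the studio dry.
const studio: Studio = { cash: 70000, isCrackedCopy: true, piracyRate: 0.1 };
for (let month = 0; month < 24 && studio.cash >= 0; month++) {
  monthlyTick(studio, 20000, 18000);
}
```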
Progression
You start out in your garage with no employees, limited money, and limited choices for your first game. You create games, gradually making more and more money until you can upgrade to an office. Once you get the office, you can hire employees to help you with the game making. After quite a while of that, you can move on to a larger office with more space to hire employees. You will stay in this office for the rest of the game.
Guide pagesGuide images Categories | 计算机 |
2014-23/1195/en_head.json.gz/28872 | To go to other pages use the Site Map
Help for Your User Group
Whether you're an officer, editor, webmaster, or concerned volunteer, you'll find something
on these pages that will benefit you and your User Group. There's a little bit of everything from newsletters and websites, to fundraising and running a successful, interesting meeting. We've added lots of new content and more links for you. Want
some community service ideas? Click on the Service link to see what our Jerry Award
winners are doing. Need to share some ideas live? We've got a direct link to
WebBoard. You'll find articles from writers like Steve Bass, award-winning editors and webmasters, and successful, current and former User Group officers and volunteers.
To help you find your way through the wealth of information, we've categorized the tips and tricks for you. Just select one of the topics, and browse through the articles. You can return to this page at any time by clicking on the Help Main topic. To return to our main page, click Home on the top menu.
And, if you've got some successful Tips and Tricks...we'd like to know about them. If
we publish your ideas, we'll give full credit, and, who knows...you could end up famous. Let us know about it here.
Most of all, remember to check back often.....we've made a hobby of collecting Tips and
Tricks, and we like to share ! | 计算机 |
2014-23/1195/en_head.json.gz/32214 | The importance of carefully drafted contracts to IT systems developmentThis article highlights the importance of IT developers having a comprehensive and carefully drafted development contract with their clients in place before commencing any IT project. more
BBC launches new mobile appsAt the Mobile World Congress, the UK broadcaster unveiled plans to launch several new mobile applications, showcasing its news, sport, and TV content. The BBC News app will arrive first, with the iPhone version leading the way in April. more
Tearing Down the WallsIn this article, Dr. Elayne Coakes analyses the project Tearing Down the Walls, which draws upon a Web 2.0 infrastructure to provide an environment in which students develop applications in accordance with their needs. The project idea came from Roger James, Information Systems Director at the University of Westminster. He wondered what would happen if students were allowed to develop new applications for other students and maybe staff within the existing ‘garden walls’ of the university’s applications. The project is called internally TWOLER where the ‘LER’ stands for Lightweight Enterprise RSS and the ‘TWO’ for Web 2.0. more
SmartphonesInteresting article on The Economist more
Mobile Retailing is on the MoveBenjamin Dyer, director of product development at ecommerce supplier Actinic, reflects on the reality of that most elusive of future technologies: mobile commerce. more
NMK - New Media Knowledge on Facebook Guardian Mobilises
The Guardian this week launched its new mobile site. NMK talked to project lead Marcus Austin to learn more about the company's strategy over the site.
What were your main aims in commissioning the new site? User experience was our main priority. We wanted to provide the best user experience on every device, with quite a broad definition of what we mean by mobile - it might be everything connected while on the move. People don't think of a mobile website as a second thing, separate to a main site. So we wanted to put as much of the Guardian onto the mobile site as possible and to offer full stories rather than single-screen summaries. We were very much aware that if the mobile site didn't replicate the content of the site users are used to seeing on their browser, then that would lead to disappointment. At the moment, 90 per cent of the content from the main site also exists on mobile. The content is generated from RSS feeds from the main site and updated every 15 minutes. The site degrades quite well on less-capable devices and also where connection speeds are lower. What were the business objectives behind creating the new site? The Guardian's existing mobile site was created years ago for the AvantGo platform and so was very much out-of-date. Beyond this, we have three objectives. We wanted to increase the Guardian's reach - its strategy is to become the leading liberal voice in world. In the developing world, mobile devices are considerably more likely to be used than computer terminals. We also want to maintain and extend our lead in traffic. And, finally, of course we want to find ways to increase revenues. Google Adwords and display advertising from 4th Screen Advertising will both be used. Who were your partners in creating the site? Our two main partners are Bluestar Mobile, who looked after marketing and design, and Mobile IQ, who supplied the platform we're using. Internally, most of the work was around making sure that the hundreds of RSS feeds that make up the content on the site worked (every section and subsection of the paper has its own feed). How long did it take? I've been working on the project since July. However, the actual building of the site has been accomplished in a remarkably short time - 13 weeks. That's quite an achievement considering it has over sixty sections and subsections. What additions are planned for phase two of the site? We want to allow for a seamless transition between the mobile and the desktop sites, so a single sign-on, allowing users to - for example - start reading an article at their desk and the automatically be able to carry on reading where they left off on the train home. We're also keen to allow for more personalisation, so that your home page might start with Digital Media and not have any football, for example. What advice do you have for other businesses seeking to establish a mobile site? Communication within the business about the strategy, content and appearance of the site will take a lot longer than you might expect. Because everyone has a mobile phone, it can seem as though 'everyone's an expert' in what mobile sites should look like and what they should do. Depending on the angle from which you come to the project, you might have a good understanding of the best way to implement the user experience, the editorial content, the design or the commercial aspects. It's not so likely that you will have experience of all four. Therefore, those early conversations with all stakeholders are crucial. Comments | 计算机 |
2014-23/1195/en_head.json.gz/32649 | IDABC Questionnaire 2009
Revision as of 23:42, 15 November 2009 by Silvia (Talk | contribs)
Jump to: navigation, search This is a draft document. A work in progress. A scratchpad for ideas. It should not be widely circulated in this form.
1 Context
2 CAMSS Questions
2.1 Part 4: Market Criteria
2.1.1 Market support
2.1.2 Maturity
2.1.3 Re-usability
2.2 Part 5: Standardisation Criteria
2.2.1 Availability of Documentation
2.2.2 Intellectual Property Right
2.2.3 Accessibility
2.2.4 Interoperability governance
2.2.5 Meeting and consultation
2.2.6 Consensus
2.2.7 Due Process
2.2.8 Changes to the formal specification
2.2.9 Support
Context We received an e-mail from a consultant studying the suitability of Theora for use in "eGovernment", on behalf of the IDABC, an EU governmental agency responsible for "Interoperability" with an emphasis on open source. The investigation is in the context of European Interoperability Framework, about which there has been some real controversy.
The method of assessment is the Common Assessment Method for Standards and Specifications, including the questions below.
CAMSS Questions Part 4: Market Criteria This group of Market criteria analyses the formal specification in the scope of its market environment, and more precisely it examines the implementations of the formal specification and the market players. This implies identifying to which extent the formal specification benefits from market support and wide adoption, what are its level of maturity and its capacity of reusability.
Market support is evaluated through an analysis of how many products implementing the formal specification exist, what their market share is and who their end-users are. The quality and the completeness (in case of partitioning) of the implementations of the formal specification can also be analysed. Availability of existing or planned mechanisms to assess conformity of implementations to the standard or to the specification could also be identified. The existence of at least one reference implementation (i.e.: mentioning a recognized certification process) - and of which one is an open source implementation - can also be relevant to the assessment. Wide adoption can also be assessed across domains (i.e.: public and private sectors), in an open environment, and/or in a similar field (i.e.: best practices).
A formal specification is mature if it has been in use and development for long enough that most of its initial problems have been overcome and its underlying technology is well understood and well defined. Maturity is also assessed by identifying if all aspects of the formal specification are considered as validated by usage, (i.e.: if the formal specification is partitioned), and if the reported issues have been solved and documented.
Reusability of a formal specification is enabled if it includes guidelines for its implementation in a given context. The identification of successful implementations of the standard or specification should focus on good practices in a similar field. Its incompatibility with related standards or specifications should also be taken into account.
The ideas behind the Market Criteria can also be expressed in the form of the following questions:
Market support

Does the standard have strong support in the marketplace? Yes. For example, among web browsers, support for Xiph's Ogg, Theora, and Vorbis standards is now included by default in Mozilla Firefox, Google Chrome, and the latest versions of Opera, representing hundreds of millions of installed users just in this market alone. Further, a QuickTime component exists which enables use of Xiph's Ogg, Theora, and Vorbis standards in all Mac OS X applications that make use of the QuickTime framework - which includes Safari/Webkit, iMovie, QuickTime, and many others. On Windows, DirectShow filters exist which also enable all Windows applications that use the DirectShow framework to use Xiph's Ogg, Theora, and Vorbis standards.
What products exist for this formal specification? Theora is a video codec, and as such the required products are encoders, decoders, and transmission systems. All three types of products are widely available for Theora.
How many implementations of the formal specification are there? Xiph does not require implementors to acquire any license before implementing the specification. Therefore, we do not have a definitive count of the number of implementations. In addition to the reference implementation, which has been ported to most modern platforms and highly optimized for x86 and ARM CPUs and TI C64x+ DSPs, we are aware of a number of independent, conformant or mostly-conformant implementations. These include two C decoders (ffmpeg and QTheora), a Java decoder (Jheora), a C# decoder, an FPGA decoder, and an FPGA encoder.
Are there products from different suppliers in the market that implement this formal specification? Yes. Corporations such as Atari, Canonical, DailyMotion, Elphel, Fluendo, Google, Mozilla, Novell, Opera, Red Hat, Sun Microsystems, Ubisoft, and countless others have supplied products with an implementation of the Theora standard.
Are there many products readily available from a variety of suppliers? Yes. Theora has been deployed in embedded devices, security cameras, video games, video conferencing systems, web browsers, home theater systems, and many other products. A complete, legal, open-source reference implementation can also be downloaded free of charge, including components for all major media frameworks (DirectShow, gstreamer, and Quicktime), giving the plethora of applications which use these frameworks the ability to use the codec.
What is the market share of the products implementing the formal specification, versus other implementations of competing formal specifications? Theora playback is extremely widely available, covering virtually the entire market of personal computers. Theora is also increasingly available in mobile and embedded devices. Since we do not require licensing for products that implement the specification, we do not have market share numbers that can be compared with competing formal specifications. Because implementations are readily available and free, Theora is included in many products that support multiple codecs, and is sometimes the only video codec included in free software products.
Who are the end-users of these products implementing the formal specification?
The end users are television viewers, video gamers, web surfers, movie makers, business people, video distribution services, and anyone else who interacts with moving pictures.
Maturity

Are there any existing or planned mechanisms to assess conformity of the implementations of the formal specification? Yes. In addition to a continuous peer review process, we maintain a suite of test vectors that allow implementors to assess decoder conformity. We also provide free online developer support and testing for those attempting to make a conforming implementation. An online validation service is available.
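As an illustration of how such test vectors are typically used (this sketch is not part of the questionnaire or of any official Xiph tooling; the directory layout, file names and the decode_to_yuv helper are assumptions), a conformance harness can simply decode each vector and compare the output against the reference output:

    # Illustrative decoder-conformance harness (not official Xiph tooling).
    # Assumes a directory of test vectors where each "<name>.ogv" has a
    # matching "<name>.yuv" reference output, and a decode_to_yuv() hook
    # provided by the implementation under test.
    import filecmp
    import glob
    import os

    def decode_to_yuv(ogv_path, out_path):
        # Hypothetical hook: call the decoder under test here, writing raw
        # YUV frames to out_path.
        raise NotImplementedError

    def run_conformance(vector_dir):
        failures = []
        for ogv in sorted(glob.glob(os.path.join(vector_dir, "*.ogv"))):
            reference = ogv[:-4] + ".yuv"
            produced = ogv[:-4] + ".test.yuv"
            decode_to_yuv(ogv, produced)
            if not filecmp.cmp(reference, produced, shallow=False):
                failures.append(os.path.basename(ogv))
        return failures

    if __name__ == "__main__":
        failed = run_conformance("theora-test-vectors")
        print("non-conforming vectors:", failed or "none")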
Is there a reference implementation (i.e.: mentioning a recognized certification process)? Yes. Xiph maintains a reference implementation called libtheora. In addition to serving as a reference, libtheora is also highly optimized to achieve the maximum possible speed, accuracy, reliability, efficiency, and video quality. As a result, many implementors of Theora adopt the reference implementation.
Is there an open source implementation? Yes. libtheora is made available under a completely permissive BSD-like license. Its open-source nature also contributes to its quality as a reference implementation, as implementors are welcome to contribute their improvements to the reference. There are also several other open source implementations.
Does the formal specification show wide adoption? across different domains? (I.e.: public and private) Yes. In addition to the private companies mentioned in the previous section, Theora has also been specified as the sole format supported by non-profit institutions such as Wikipedia, currently the 6th largest website in the world, or as one of a small number of preferred formats supported by other public organizations, such as the Norwegian government.
in an open environment? Yes. On open/free operating systems such as those distributed by Novell/SuSE, Canonical, and Red Hat, Theora is the primary default video codec.
in a similar field? (i.e.: can best practices be identified?) Has the formal specification been in use and development long enough that most of its initial problems have been overcome? Yes. Theora was derived from VP3, which was originally released in May 2000. The Theora specification was completed in 2004. Theora has now been used in a wide variety of applications, on the full spectrum of computing devices.
Is the underlying technology of the standard well-understood? (e.g., a reference model is well defined, appropriate concepts of the technology are in widespread use, the technology may have been in use for many years, a formal mathematical model is defined, etc.) Yes. The underlying technology has been in use for nearly a decade, and most of the concepts have been in widespread use for even longer.
Is the formal specification based upon technology that has not been well-defined and may be relatively new? No. The formal specification is based on technology from the On2 VP3 codec, which is substantially similar to simple block-transform codecs like H.261. This class of codecs is extremely well understood, and has been actively in use for over 20 years.
Has the formal specification been revised? (Yes/No, number of) Yes. The specification of the encoder is continuously revised based on user feedback to improve clarity and accuracy. The specification of the decoding part has been stable for years.
Is the formal specification under the auspices of an architectural board? (Yes/No) No. Although officially maintained by the Xiph.Org Foundation, anyone is free to join this organization, and one need not even be a member to make contributions. However, the core developers will review contributions and make sure they do not contradict the general architecture and they work well with the existing code and the test cases.
Is the formal specification partitioned in its functionality? (Yes/No) No. Theora is very deliberately not partitioned, to avoid the confusion created by a "standard" composed of many incompatible "profiles". The Theora standard does not have any optional components. A compliant Theora decoder can correctly process any Theora stream.
To what extent does each partition participate to its overall functionality? (NN%) N/A.
To what extent is each partition implemented? (NN%) (cf market adoption)
Re-usability

Does the formal specification provide guidelines for its implementation in a given organisation? Yes. For example, the Theora specification provides "non-normative" advice and explanation for implementors of Theora decoders and encoders, including example algorithms for implementing required mathematical transforms. Xiph also maintains a documentation base for implementors who desire more guidelines beyond the specification itself.
Can other cases where similar systems implement the formal specification be considered as successful implementations and good practices? Xiph's standards have successfully been implemented by many organisations in a wide variety of environments. We maintain (non-exhaustive) lists of products which implement Theora support, many of them open source, so that others may use them as a reference when preparing their own products. A particularly well known, independent, but interoperable implementation is provided by the FFmpeg open source project.
Is its compatibility with related formal specification documented?
Yes. For example, the Theora specification also documents the use of Theora within the standard Ogg encapsulation format, and the TheoraRTP draft specification explains how to transmit Theora using the RTP standard. In addition, the specification documents Theora's compatibility with ITU-R B.470, ITU-R B.601, ITU-R B.709, SMPTE-170M, UTF-8, ISO 10646, and Ogg Vorbis.
Part 5: Standardisation Criteria
Note: Throughout this section, “Organisation” refers to the standardisation/fora/consortia body in charge of the formal specification.
Significant characteristics of the way the organisation operates are for example the way it gives the possibility to stakeholders to influence the evolution of the formal specification, or which conditions it attaches to the use of the formal specification or its implementation. Moreover, it is important to know how the formal specification is defined, supported, and made available, as well as how interaction with stakeholders is managed by the organisation during these steps. Governance of interoperability testing with other formal specifications is also indicative.
The standardisation criteria analyses therefore the following elements:
Availability of Documentation

The availability of documentation criteria is linked to cost and online availability. Access to all preliminary results documentation can be online, online for members only, offline, offline for members only, or not available. Access can be free or for a fee (which fee?).
Every Xiph standard is permanently available online to everyone at no cost. For example, we invite everyone to download the most up-to-date copy of the Theora specification, and the latest revision of Vorbis. All previous revisions are available from Xiph's revision control system.
Intellectual Property Right

The Intellectual Property Rights evaluation criteria relates to the ability for implementers to use the formal specification in products without legal or financial implications. The IPR policy of the organisation is therefore evaluated according to: the availability of the IPR or copyright policies of the organisation (available on-line or off-line, or not available);
The reference implementations of each codec include all necessary IPR and copyright licenses for that codec, including all documentation, and are freely available to everyone.
the organisation’s governance to disclose any IPR from any contributor (ex-ante, online, offline, for free for all, for a fee for all, for members only, not available);
Xiph does not require the identification of specific patents that may be required to implement a standard, however it does require an open-source compatible, royalty free license from a contributor for any such patents they may own before the corresponding technology can be included in a standard. These license are made available online, for free, to all parties.
the level of IPR set "mandatory" by the organisation (no patent, royalty free patent, patent and RAND with limited liability , patent and classic RAND, patent with explicit licensing, patent with defensive licensing, or none); All standards, specifications, and software published by the Xiph.Org Foundation are required to have "open-source compatible" IPR. This means that a contribution must either be entirely clear of any known patents, or any patents that read upon the contribution must be available under a transferable, irrevocable public nonassertion agreement to all people everywhere. For example, see our On2 patent nonassertion warrant. Other common "royalty free" patent licenses are either not transferable, revocable under certain conditions (such as patent infringement litigation against the originating party), or otherwise impose restrictions that would prevent distribution under common OSI-approved licenses. These would not be acceptable.
the level of IPR "recommended" by the organisation (no patent, royalty free patent, patent and RAND with limited liability, patent and classic RAND, patent with explicit licensing, patent with defensive licensing, or none). [Note: RAND (Reasonable and Non Discriminatory License) is based on a "fairness" concept. Companies agree that if they receive any patents on technologies that become essential to the standard then they agree to allow other groups attempting to implement the standard to use these patents and they agree that the charges for the patents shall be reasonable. "RAND with limited availability" is a version of RAND where the "reasonable charges" have an upper limit.]
Xiph's recommended IPR requirements are the same as our mandatory requirements.
Accessibility

The accessibility evaluation criteria describe the importance of equal and safe accessibility by the users of implementations of formal specifications. This aspect can be related to safety (physical safety and conformance safety) and accessibility for physically impaired people (design for all).
Focus is made particularly on accessibility and conformance safety. Conformance testing is testing to determine whether a system meets some specified formal specification. The result can be results from a test suite. Conformance validation is when the conformance test uniquely qualifies a given implementation as conformant or not. Conformance certification is a process that provides a public and easily visible "stamp of approval" that an implementation of a standard validates as conformant.
The following questions allow an assessment of accessibility and conformance safety:

Does a mechanism that ensures disability support by a formal specification exist? (Y/N) Yes. Xiph ensures support for users with disabilities by providing specifications for accessible technologies independent of the codec itself. Notably, the Xiph OggKate codec for time-aligned text and image content provides support for subtitles for internationalisation, captions for the hearing-impaired, and textual audio descriptions for the visually impaired. Further, Ogg supports multiple tracks of audio and video content in one container, such that sign language tracks and audio descriptions can be included in one file. For this to work, Xiph has defined Skeleton, which holds metadata about each track encapsulated within the one Ogg file. When Theora is transmitted or stored in an Ogg container, it is automatically compatible with these accessibility measures.
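A small sketch may help make the multi-track arrangement concrete. It is not part of the questionnaire; it simply scans the beginning-of-stream (BOS) pages of an Ogg file and reports which codecs (Theora, Vorbis, Skeleton, Kate, ...) are multiplexed together. The magic-byte table is limited to a few well-known codecs and the file name is a placeholder.

    # Illustrative sketch: list the logical streams multiplexed in an Ogg file
    # by reading its beginning-of-stream (BOS) pages.
    import struct

    MAGIC = {
        b"\x80theora": "theora",
        b"\x01vorbis": "vorbis",
        b"fishead":    "skeleton",
        b"\x80kate":   "kate",
        b"Speex":      "speex",
    }

    def identify(payload):
        for magic, name in MAGIC.items():
            if payload.startswith(magic):
                return name
        return "unknown"

    def list_streams(path):
        streams = {}
        with open(path, "rb") as f:
            while True:
                header = f.read(27)                      # fixed-size Ogg page header
                if len(header) < 27 or header[:4] != b"OggS":
                    break
                header_type = header[5]
                serial = struct.unpack("<I", header[14:18])[0]
                nsegs = header[26]
                lacing = f.read(nsegs)
                payload = f.read(sum(lacing))
                if header_type & 0x02:                   # BOS page: first page of a stream
                    streams[serial] = identify(payload)
                else:
                    break                                # all BOS pages come first
        return streams

    print(list_streams("example.ogv"))  # e.g. {1: 'skeleton', 2: 'theora', 3: 'vorbis'}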
Is conformance governance always part of a standard? (Y/N) No. Xiph does not normally provide a formal conformance testing process as part of a standard.
Is a conformance test offered to implementers? (Y/N) Yes. Xiph maintains a suite of test vectors that can be used by implementors to confirm basic conformance. Also, Xiph's online validation service is a freely available service that can be used by anyone to check conformance.
Is conformance validation available to implementers? (Y/N) Yes. Informal conformance testing is available to implementors upon request, and Xiph has provided such testing for a number of implementations in the past. The oggz tools contain a validation program called oggz-validate which implementers have made massive use of in the past.
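For instance (a sketch only - the file names are placeholders, and it assumes the usual convention that oggz-validate exits with status 0 for a valid file), the tool can be scripted over a batch of files:

    # Batch-check a set of Ogg files with oggz-validate (illustrative sketch).
    # Assumes oggz-validate is installed and follows the usual convention of a
    # zero exit status for a valid file.
    import subprocess
    import sys

    def validate(paths):
        bad = []
        for path in paths:
            result = subprocess.run(["oggz-validate", path],
                                    capture_output=True, text=True)
            if result.returncode != 0:
                bad.append((path, result.stdout + result.stderr))
        return bad

    if __name__ == "__main__":
        for path, output in validate(sys.argv[1:]):
            print("invalid:", path)
            print(output)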
Is conformance certification available? (Y/N) Yes. Xiph does not require certification, but maintains the right to withhold the use of our trademarks from implementors that act in bad faith. Implementors may, however, request explicit permission to use our trademarks with a conforming implementation.
Is localisation of a formal specification possible? (Y/N)
Yes. We welcome anyone who wishes to translate Xiph specifications into other languages. We have no policy requiring that the normative specification be written in English.
Interoperability governance

The interoperability governance evaluation criteria relates to how interoperability is identified and maintained between interoperable formal specifications. In order to do this, the organisation may provide governance for: open identification in formal specifications, open negotiation in formal specifications, open selection in formal specifications.

Meeting and consultation

The meeting and consultation evaluation criteria relates to the process of defining a formal specification. As formal specifications are usually defined by committees, and these committees normally consist of members of the organisation, this criteria studies how to become a member and which are the financial barriers for this, as well as how are non-members able to have an influence on the process of defining the formal specification. It analyses:

if the organisation is open to all types of companies and organisations and to individuals; Yes. Xiph welcomes representatives from all companies and organizations.
if the standardisation process may specifically allow participation of members with limited abilities when relevant; Yes. Standardization occurs almost entirely in internet communications channels, allowing participants with disabilities to engage fully in the standards development process. We also encourage nonexperts and students to assist us as they can, and to learn about Xiph technologies by participating in the standards development process.
if meetings are open to all members;
Xiph meetings are open to everyone. We charge no fee for and place no restrictions on attendance or participation. For example, anyone interested in contributing to the Theora specification may join the Theora development mailing list.
if all can participate in the formal specification creation process; Yes. All people are welcome to participate in the specification creation process. No dues or fees are required to participate
if non-members can participate in the formal specification creation process.
Yes. Xiph does not maintain an explicit list of members, and no one is excluded from contributing to specifications as they are developed.
Consensus

Consensus is decision making primarily with regard to the approval of formal specifications and review with interest groups (non-members). The consensus evaluation criterion is evaluated with the following questions:
Does the organisation have a stated objective of reaching consensus when making decisions on standards? There is no explicitly stated objective of reaching consensus.
If consensus is not reached, can the standard be approved? (answers are: cannot be approved but referred back to working group/committee, approved with 75% majority, approved with 66% majority, approved with 51% majority, can be decided by a "director" or similar in the organisation).
The standard can be approved without consensus via the decision of a "director" or similar.
Is there a formal process for external review of standard proposals by interest groups (nonmembers)?
Since anyone may participate in the development process and make proposals, there is no need for a separate formal process to include proposals by nonmembers.
Due Process

The due process evaluation criteria relates to the level of respect of each member of the organisation with regard to its rights. More specifically, it must be assured that if a member believes an error has been made in the process of defining a formal specification, it must be possible to appeal this to an independent, higher instance. The question is therefore: can a member formally appeal or raise objections to a procedure or to a technical specification to an independent, higher instance?
Yes. Even if a member fails an appeal within the organization, because all of the technology Xiph standardizes is open and freely implementable, they are always free to develop their own, competing version. Such competing versions may even still be eligible for standardization under the Xiph umbrella.
Changes to the formal specification

The suggested changes made to a formal specification need to be presented, evaluated and approved in the same way as the formal specification was first defined. This criteria therefore applies the above criteria to the changes made to the formal specification (availability of documentation, Intellectual Property Right, accessibility, interoperability governance, meeting and consultation, consensus, due process).
The exact same process is used for revisions to the standard as was used for the original development of the standard, and thus the answers to all of the above questions remain the same.
Support

It is critical that the organisation takes responsibility for the formal specification throughout its life span. This can be done in several ways such as for example a regular periodic review of the formal specification. The support criteria relates to the level of commitment the organisation has taken to support the formal specification throughout its life: does the organisation provide support until removal of the published formal specification from the public domain (including this process)? Xiph.Org standards are never removed from the public domain. Xiph endeavors to provide support for as long as the standard remains in use.
does the organisation make the formal specification still available even when in non-maintenance mode?
Yes. All Xiph.Org standards are freely licensed and will always be available.
does the organisation add new features and keep the formal specification up-to-date?
Yes. Xiph maintains its ecosystem of standards on a continuous basis.
does the organisation rectify problems identified in initial implementations?
Yes. Xiph maintains a problem reporting system that is open to the public, and invites everyone to submit suggestions for improvements. Improvements are made both to the standards documents and to the reference implementations.
does the organisation only create the formal specification?
No. Xiph also produces high-quality reusable reference implementations of its standards, released under an open license.
The rise and fall and rise of HTML
By Glyn Moody
HTML began life as a clever hack of a pre-existing approach. As Tim Berners-Lee explains in his book, “Weaving the Web”:
Since I knew it would be difficult to encourage the whole world to use a new global information system, I wanted to bring on board every group I could. There was a family of markup languages, the standard generalised markup language (SGML), already preferred by some of the world's top documentation community and at the time considered the only potential document standard among the hypertext community. I developed HTML to look like a member of that family.
One reason why HTML was embraced so quickly was that it was simple – which had important knock-on consequences:
The idea of asking people to write the angle brackets by hand was to me, and I assumed to many, as unacceptable as asking one to prepare a Microsoft Word document by writing out its binary-coded format. But the human readability of HTML was an unexpected boon. To my surprise, people quickly became familiar with the tags and started writing their own HTML documents directly.
Of course, once people discovered how powerful those simple tags could be, they made the logical but flawed deduction that even more tags would make HTML even more powerful. Thus began the first browser wars, with Netscape and Microsoft adding non-standard features in an attempt to trump the other. Instead, they fragmented the HTML standard (remember the blink element and marquee tag?), and probably slowed down the development of the field for years.
Things were made worse by the collapse of Netscape at the end of the 90s, leaving Microsoft as undisputed arbiter of (proprietary) standards. At that time, the web was becoming a central part of life in developed countries, but Microsoft's dominance – and the fact that Internet Explorer 7 only appeared in 2006, a full five years after version 6 – led to a long period of stagnation in the world of HTML.
One of the reasons that the Firefox project was so important was that it re-affirmed the importance of open standards – something that Microsoft's Internet Explorer had rendered moot. With each percentage point that Firefox gained at the expense of that browser, the pressure on Microsoft to conform to those standards grew. The arrival of Google's Chrome, and its rapid uptake, only reinforced this trend.
Eventually Microsoft buckled under the pressure, and has been improving its support of HTML steadily, until today HTML5 support is creeping into Visual Studio, and the company is making statements like the following:
Just four weeks after the release of Internet Explorer 9, Microsoft Corp. unveiled the first platform preview of Internet Explorer 10 at MIX11. In his keynote, Dean Hachamovitch, corporate vice president of Internet Explorer, outlined how the next version of Microsoft’s industry-leading Web browser builds on the performance breakthroughs and the deep native HTML5 support delivered in Internet Explorer 9. With this investment, Microsoft is leading the adoption of HTML5 with a long-term commitment to the standards process.
“The only native experience of HTML5 on the Web today is on Windows 7 with Internet Explorer 9,” Hachamovitch said. “With Internet Explorer 9, websites can take advantage of the power of modern hardware and a modern operating system and deliver experiences that were not possible a year ago. Internet Explorer 10 will push the boundaries of what developers can do on the Web even further.”
Even if some would quibble with those claims, the fact that Microsoft is even making them is extraordinary given its history here and elsewhere. Of course, there is always the risk that it might attempt to apply its traditional “embrace and extend” approach, but so far there are few hints of that. And even if does stray from the path of pure HTML5, Microsoft has already given that standard a key boost at a time when some saw it as increasingly outdated.
That view was largely driven by the rise of the app, notably on the iPhone and more recently on the iPad. The undeniable popularity of such apps, due in part to their convenience, has led some to suggest that the age of HTML is over, and that apps would become the primary way of interacting with sites online.
Mozilla responded by proposing the idea of an Open Web App Store:
An Open Web App Store should:
- exclusively host web applications based upon HTML5, CSS, Javascript and other widely-implemented open standards in modern web browsers — to avoid interoperability, portability and lock-in issues
- ensure that discovery, distribution and fulfillment works across all modern browsers, wherever they run (including on mobile devices)
- set forth editorial, security and quality review guidelines and processes that are transparent and provide for a level playing field
- respect individual privacy by not profiling and tracking individual user behavior beyond what's strictly necessary for distribution and fulfillment
- be open and accessible to all app producers and app consumers.
As the links to earlier drafts on its home page indicate, HTML5 has been under development for over three years, but it really seems to be taking off now. Some early indications of what it is capable of can be seen in projects to replace browser plugins for PDFs and MP3s with browser-native code.
HTML5 is also at the heart of the FT's new Web App:
Creating an HTML5 app is innovative and breaks new ground – the FT is the first major news publisher to launch an app of this type. There are clear benefits. Firstly, the HTML5 FT Web App means users can see new changes and features immediately. There is no extended release process through an app store and users are always on the latest version.
Secondly, developing multiple ‘native’ apps for various products is logistically and financially unmanageable. By having one core codebase, we can roll the FT app onto multiple platforms at once.
We believe that in many cases, native apps are simply a bridging solution while web technologies catch up and are able to provide the rich user experience demanded on new platforms.
In other words, the FT was fed up paying a hefty whack of its revenue to Apple for the privilege of offering a native app. And if the following rumour is true, the FT is not the only well-known name to see it that way:
Project Spartan is the codename for a new plat
VA yanks troubled computer system
The $472-million computer system being tested at Bay Pines just doesn't work, veterans officials say.
By PAUL DE LA GARZA and STEPHEN NOHLGREN
ST. PETERSBURG - The Department of Veterans Affairs has decided to pull the plug on a $472-million trial computer system at Bay Pines VA Medical Center because it doesn't work.
Technicians immediately will begin switching the hospital back to the old system by Sept. 30, the end of the VA's fiscal year, Rep. C.W. Bill Young, R-Largo, said Monday.
Young said the VA would continue to test the experimental computer in a "controlled environment ... to see whether it has any value to the VA system."
He acknowledged that the project's future is questionable, noting Congress planned to slash its funding. The VA has spent close to $265-million of $472-million budgeted for the program, according to federal investigators.
Florida Sen. Bob Graham reacted with vehemence.
"At a time when VA's health care system is stretched to the limit, it is outrageous - simply outrageous - to waste millions upon millions of dollars on a failed computer system," he said.
VA spokeswoman Cynthia Church did not respond Monday to several messages seeking comment.
Installed last October, the troubled computer system was designed to track finances and inventory for the VA's $64-billion nationwide budget. Bay Pines was one test site; other hospitals were scheduled to follow.
But the system was plagued with problems from the start. When it struggled to order supplies, surgeries were delayed. At one point, employees bought their own plastic gloves to draw blood. Staff members complained that they could not keep track of hospital expenditures.
As recently as last month, hospital administrators told congressional investigators that they could not account for almost $300,000.
The computer system and allegations of mismanagement at Bay Pines have been the subject of multiple federal inquiries as a result of stories in the St. Petersburg Times.
Graham, the ranking Democrat on the Senate Veterans Affairs Committee, launched his own investigation in February.
Committee investigators discovered that BearingPoint, the contractor charged with building the computer system, was paid a $227,620 "incentive bonus" for starting the Bay Pines test on schedule, despite warnings that the system wasn't ready.
"VA managers ... had no idea who completed the training and who did not," committee investigators wrote.
In a statement Monday, Graham lashed out at VA management.
"Just eight months ago, the contractor was given a performance bonus, and today we are told that they are literally pulling the plug on the system at Bay Pines. Where were the high-level managers who were supposed to be overseeing our taxpayers' dollars?"
Sen. Bill Nelson, the Florida Democrat who sits on the Armed Services Committee, said in a statement that he wanted to know what the VA would do to recover taxpayers' money.
The VA had planned to roll out the pilot computer system, known as the Core Financial and Logistics System, or CoreFLS, nationwide last February.
But VA Secretary Anthony Principi put that plan on hold after stories in the Times chronicled flaws. Principi said he would decide how to proceed after investigations by the VA inspector general and a $500,000 assessment by Carnegie Mellon University.
Both reports have been completed but not released.
Two weeks ago, Principi said he would form an advisory committee to help him decide the fate of the computer system.
In documents submitted to Congress last month, the VA acknowledged for the first time that the computer system may never work.
"Instabilities of software ... are creating major barriers to efficient and effective work flow," the VA documents said. "This is due mostly to extraordinary amount of time needed to perform tasks, and inconsistent reliability of critical data and many reports derived from that data."
While Bay Pines' problems have thrown the VA's financial side into turmoil, computer software that runs clinical programs gets widespread praise in the medical community.
As one hospital executive put it in a recent issue of the Physician Executive trade journal: "If you fully involve yourself in the VA computerized record system, you would never go back to any other way of caring for patients."
That clinical software was developed slowly over the years by VA programmers, using a common computer language. The CoreFLS system was a blending of three commercial software packages, from different manufacturers, that must communicate with each other, as well as with various components of the homegrown clinical system.
Since February, five VA officials have quit or been reassigned as a result of problems with the computer system and mismanagement at Bay Pines.
Young, who has kept in close contact with Principi, said Principi decided to pull the plug on the financial system because it does not work.
Young said the VA did not know how long it would take to fix the troubled software.
"CoreFLS will not be used in any other VA center until a lot of the flaws that are there are solved," Young said.
VA officials told Graham's office that "they are still holding out the option, pending the advisory committee, of taking it to other facilities, including Tampa, Miami and the West Coast," said Graham spokesman Paul Anderson.
Anderson said VA officials did not say what standards CoreFLS would have to meet before any such expansion.
Young said Principi had not decided exactly how to test the computer in a lab setting. He said the VA wasn't scrapping the system because it already had bought the software.
Young said he did not know whether BearingPoint would play a role in the future of the computer system. He noted that the contract between the VA and the contractor is the subject of "serious investigations."
BearingPoint declined to comment.
Young said he did not know whether Bay Pines would suffer disruptions as technicians reinstall the old system.
However, in documents BearingPoint submitted to Congress this year, the contractor noted there would be problems if the trial computer was yanked late in the project.
"A fallback should only be considered during the initial month," BearingPoint wrote. Afterward, "fallback will become more complex and cumbersome."
Hospital staff have never embraced the trial computer system because of the software glitches.
On Monday, some were relieved at the news from Washington.
"I'm astonished at this kind of waste," said Dr. Perry Hudson, chief of urology. "It's up close to $300-million and generally speaking, that is the feeling of the staff."
Google AdSense

Launched: June 18, 2003 (2003-06-18)[1]
Platform: Cross-platform (web-based application)
Website: www.google.com/adsense
Google AdSense is a program run by Google that allows publishers in the Google Network of content sites to serve automatic text, image, video, or interactive media advertisements, that are targeted to site content and audience. These advertisements are administered, sorted, and maintained by Google, and they can generate revenue on either a per-click or per-impression basis. Google beta-tested a cost-per-action service, but discontinued it in October 2008 in favor of a DoubleClick offering (also owned by Google).[2] In Q1 2014, Google earned US $3.4 billion ($13.6 billion annualized), or 22% of total revenue, through Google AdSense.[3]
Google uses its Internet search technology to serve advertisements based on website content, the user's geographical location, and other factors. Those wanting to advertise with Google's targeted advertisement system may enroll through Google AdWords. AdSense has become one of the popular programs that specializes in creating and placing banner advertisements on a website, because the advertisements are less intrusive and the content of the advertisements is often relevant to the website.
Many websites use AdSense to monetize their content; it is the most popular advertising network.[4] AdSense has been particularly important for delivering advertising revenue to small websites that do not have the resources for developing advertising sales programs and sales people to generate revenue with. To display contextually relevant advertisements on a website, webmasters place a brief Javascript code on the websites' pages. Websites that are content-rich have been very successful with this advertising program, as noted in a number of publisher case studies on the AdSense website. AdSense publishers may only place up to three link units on a page, in addition to the three standard ad units, and two search boxes.[5] This restriction is not applicable for premium publishers who work directly with account managers at Google.
Some webmasters put significant effort into maximizing their own AdSense income. They do this in three ways:[citation needed]
They use a wide range of traffic-generating techniques, including but not limited to online advertising.
They build valuable content on their websites that attracts AdSense advertisements, which pay out the most when they are clicked.
They use text content on their websites that encourages visitors to click on advertisements. Note that Google prohibits webmasters from using phrases like "Click on my AdSense ads" to increase click rates. The phrases accepted are "Sponsored Links" and "Advertisements".
The source of all AdSense income is the AdWords program, which in turn has a complex pricing model based on a Vickrey second price auction. AdSense commands an advertiser to submit a sealed bid (i.e., a bid not observable by competitors). Additionally, for any given click received, advertisers only pay one bid increment above the second-highest bid. Google currently shares 68% of revenue generated by AdSense with content network partners, and 51% of revenue generated by AdSense with AdSense for Search partners.[6]
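As a rough worked example of the second-price mechanics and revenue share described above (a simplification that ignores quality scores, reserve prices and other factors in the real auction; the $0.01 increment is an assumption):

    # Simplified illustration of the pricing described above: the winner pays
    # one bid increment above the second-highest bid, and the publisher
    # receives 68% of that amount for content ads. Ignores quality scores and
    # other real-world auction factors; the $0.01 increment is an assumption.
    BID_INCREMENT = 0.01
    PUBLISHER_SHARE = 0.68  # AdSense for content revenue share

    def click_price(bids):
        ranked = sorted(bids, reverse=True)
        return round(ranked[1] + BID_INCREMENT, 2)   # second price + increment

    def publisher_revenue(bids):
        return round(PUBLISHER_SHARE * click_price(bids), 4)

    bids = [1.50, 1.20, 0.75]         # sealed bids from three advertisers
    print(click_price(bids))          # 1.21 -> what the winning advertiser pays per click
    print(publisher_revenue(bids))    # 0.8228 -> what the publisher receives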
Google launched its AdSense program, originally named "content targeting advertising", in March 2003.[7] The AdSense name was originally used by Applied Semantics for a competing offering, and was adopted by Google after it acquired Applied Semantics in April 2003.[8] Applied Semantics was started in 1998 by Gilad Elbaz and Adam Weissman. Some advertisers complained that AdSense yielded worse results than AdWords, since it served ads that related contextually to the content on a web page and that content was less likely to be related to a user's commercial desires than search results. For example, someone browsing a blog dedicated to flowers was less likely to be interested in ordering flowers than someone searching for terms related to flowers. As a result, in 2004 Google allowed its advertisers to opt out of the AdSense network.[9]
Paul Buchheit, the founder of Gmail, had the idea to run ads within Google's e-mail service. But he and others say it was Susan Wojcicki, with the backing of Sergey Brin, who organized the team that adapted that idea into an enormously successful product.[10] By early 2005 AdSense accounted for an estimated 15 percent of Google's total revenues.[9]
In 2009, Google AdSense announced that it would now be offering new features, including the ability to "enable multiple networks to display ads".
In February 2010, Google AdSense started using search history in contextual matching to offer more relevant ads.[11]
On January 21, 2014, Google AdSense launched Direct Campaigns, a tool where publishers may directly sell ads.
AdSense for Content
The content-based adverts can be targeted for interest or context. The targeting can be CPC (click) or CPM (impression) based. There's no significant difference between CPC and CPM earnings; however, CPC ads are more common. There are various ad sizes available for content ads. The ads can be simple text, image, animated image, flash, video, or rich media ads. At most ad sizes, users can change whether to show both text and multimedia ads or just one of them. As of November 2012, a grey arrow appears beneath AdSense text ads for easier identification.
AdSense for Feeds
In May 2005, Google announced a limited-participation beta version of AdSense for Feeds, a version of AdSense that runs on RSS and Atom feeds that have more than 100 active subscribers. According to the Official Google Blog, "advertisers have their ads placed in the most appropriate feed articles; publishers are paid for their original content; readers see relevant advertising—and in the long run, more quality feeds to choose from."[12]
AdSense for Feeds works by inserting images into a feed. When the image is displayed by a RSS reader or Web browser, Google writes the advertising content into the image that it returns. The advertisement content is chosen based on the content of the feed surrounding the image. When the user clicks the image, he or she is redirected to the advertiser's website in the same way as regular AdSense advertisements.
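A minimal sketch of the kind of server-side flow just described - an image URL embedded in each feed entry, an endpoint that chooses an ad from the entry's context, and a click endpoint that redirects to the advertiser - might look like the following. This is purely illustrative and is not Google's implementation; Flask, the pick_ad helper and the in-memory ad table are assumptions.

    # Illustrative sketch of the feed-ad flow described above (not Google's code).
    # An image URL placed in each feed entry hits /ad-image, which picks an ad
    # from the entry's context; clicking goes through /ad-click, which redirects
    # to the advertiser's site.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    ADS = {  # hypothetical ad inventory keyed by topic
        "flowers": {"id": "ad1", "text": "Fresh flowers delivered", "url": "http://advertiser.example/flowers"},
        "default": {"id": "ad0", "text": "Generic advert", "url": "http://advertiser.example/"},
    }

    def pick_ad(entry_keywords):
        for word in entry_keywords:
            if word in ADS:
                return ADS[word]
        return ADS["default"]

    @app.route("/ad-image")
    def ad_image():
        keywords = request.args.get("kw", "").split(",")
        ad = pick_ad(keywords)
        # A real system would render ad["text"] into an image here; this sketch
        # just returns the text so the flow stays visible.
        return ad["text"], 200, {"X-Ad-Id": ad["id"]}

    @app.route("/ad-click/<ad_id>")
    def ad_click(ad_id):
        for ad in ADS.values():
            if ad["id"] == ad_id:
                return redirect(ad["url"])
        return redirect(ADS["default"]["url"])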
AdSense for Feeds remained in its beta state until August 15, 2008, when it became available to all AdSense users. On December 3, 2012, Google discontinued AdSense For Feeds program.[13]
AdSense for search
A companion to the regular AdSense program, AdSense for search, allows website owners to place Google Custom Search boxes on their websites. When a user searches the Internet or the website with the search box, Google shares 51% of the advertising revenue it makes from those searches with the website owner.[6] However the publisher is paid only if the advertisements on the page are clicked; AdSense does not pay publishers for regular searches. Web publishers have reported that they also pay a range from $5.67 to $18.42 per click.
AdSense for mobile content
AdSense for mobile content allows publishers to generate earnings from their mobile websites using targeted
Originally Posted by Wikinews
Church of Scientology related websites, such as religiousfreedomwatch.org have been removed due to a suspected distributed denial-of-service-attack (DDoS) by a group calling themselves "Anonymous". On Friday, the same group allegedly brought down Scientology's main website, scientology.org, which was available sporadically throughout the weekend. Several websites relating to the Church of Scientology have been slowed down, brought to a complete halt or seemingly removed from the Internet completely in an attack which seems to be continuous. The scientology.org site was back online briefly on Monday, and is currently loading slowly.
Really... I don't know what to think about this. On one hand, "Anonymous" strikes me as a bunch of 13-year old script kiddies posting lolcats on *chans; on the other, Scientology really, really pisses me off, in many ways... so I won't have an opinion on this until more time passes. If more attacks to Scientology on the net do happen on a constant basis, I'll give "Anonymous" credit as an organization of sorts... if the whole thing dies out in a week, I'll keep my picture of them as a bunch of 13-year old script kiddies posting lolcats on *chans (note that I'm not condoning the attacking of websites).
If anything, this is interesting as a social phenomenon the internet has brought about.
Here be an assortment of links in case any of you wish to get more info on the matter:
The Project Chanology--main Anon wiki on the matter. Note: The site undergoes daily attacks, so it might not be available all the time. Be wary of possible vandalizing actions, too.
List of local gatherings in case you wish to join a protest.
For live info on raids: irc.partyvan.org #xenu, #irl and #v (important: the last one is to mask your vhost! Be sure to do that if you use the IRC channel!)
These are alternative channels of information in case the main wiki is down.
None of these links are permanent and may suffer changes as the events develop. I'll try to keep them updated as much as I can, but I can't promise anything.
Last edited by WanderingKnight; 2008-01-31 at 15:49.
Anonymous sure makes good propaganda videos.
It's almost too good of publicity for the Church of C*** to be true.
Originally Posted by WanderingKnight
I think it's a bit frightening. The whole "anonymous" phenomenon is basically people acting out of swarm behavior - I don't recall the official term for it, but people tend to lose their individuality and resort to some more primal behaviors. This happens with people in large groups - this is part of the reason why it's so difficult for peaceful protests to remain peaceful. There's always someone who's a bit too edgy (or if you're a conspiracy theorist, someone hired by the opposing side) who sets off what becomes a chain of violence or otherwise poor behavior.
On the internet, this is just bad and also hypocritical. If you claim that information wants to be free, but then knock someone's website offline, you're preventing others from accessing that information. All the same, if the "anonymous army" realizes their full potential, what's to prevent them from going after other websites that they decide they don't like, or just want to toy with? We have no real ways of preventing it and it could be extremely disruptive. As the saying goes, "sit down - you're rocking the boat." It's bad enough that we have botnets doing this sort of thing.
In real life, this could become very dangerous. All it takes is one overly zealous participant in the "anonymous army" to deface or damage Church of Scientology structures or even people. While I disagree with the Church of Scientology and despise their methods of badgering news reporters who try to expose them for the scam that they are, trying to harass members or deface their property are crimes under our society and rightly so. I find that aspect the most worrisome - that someone might, under the influence of the swarm, do something a bit too extreme and take it just a bit too far.
Other than those worries, I can't say that I feel bad for the Church of Scientology. It'll be interesting to see what happens either way, but I will certainly not participate and would try to discourage anyone from doing any of those sorts of actions, regardless who it's targeted against.
P.S. I sort of disagree with combining this with the News Stories thread... but if it garners enough replies, I'd imagine it'll become its own thread once more.
for some reason i got this image of the Zergs (anon) vs the Protoss Zealots (Scientology).
Omfgbbqwtf
More News on Anon vs Scientology
tripperazn
The whole "anonymous" phenomenon is basically people acting out of swarm behavior - I don't recall the official term for it, but people tend to lose their individuality and resort to some more primal behaviors.
You're referring to "group think" or "mob psychology". I'm not so sure about that, I've never been on 4chan, but they seem to be highly efficient in their methods, uncharacteristic of groups who have fallen to group think. The Fox documentary on them definitely was pretty frightening in giving you an idea of what they're capable of. It's horribly dirty information warfare, but I have to admit it works. I have no love for Scientology and am pretty interested as to how this works out. Anyone know if they've attacked Fox for the documentary yet?
If you were wondering about the Anon vs. Scientology bit, check out the Insurgency Wiki. Even if a single person didn't write it, it's incredibly long and detailed. I'd imagine that there's a mixture of real concern along with the random "did it for the lulz" ideas that went into it. It's sort of hard to not feel that it's justified after reading that, especially if you've been following Scientology's tactics. It'll be interesting to see what happens.
Simply put, this entire church of scientology raid is unjust, and I really wish people would stop having these kinds of raids.
Especially since the people propagating these raids were in no way harmed by the church of scientology's demands for removal of that video. 4chan did not lose visitors because of that, nor did they lose money. However, these DDOS raids and protests are losing the church of scientology both money and people. 4chan had no reason to invade, other than for a few cheap laughs at other humans' expense. It's inhumane, I think.
This reminds me of Ghost in the Shell: Stand Alone Complex.
Originally Posted by ChrissieXD
However, these DDOS raids and protests are losing the church of scientology both money and people.
That's the point. The Church of Scientology isn't just another religious organization. Its founder was a science fiction writer, and two of his remarks are incredibly suspicious. First, he was quoted as saying that if you want to make a lot of money, you should organize a religion (and shortly after that, the Church of Scientology was born). Later, he stated that if you want to control people, you lie to them.
The CoS beliefs sound like something a sci-fi writer would come up with, but to be honest I feel that you should be free to believe what you will. We don't even need to attack the beliefs to make the whole thing look bad. When you join you're expected to pay a fee for the teachings of the church. Members go up in levels based on the teachings and activities they partake in, and these all cost money. It gets more expensive the higher up you go.
So far, the information I've given just makes the CoS look like a scam of some sort. You brainwash people and then take advantage of it to get tons of money from them - it's not nice, but it's certainly not vile. The trouble with the CoS is that part of the teachings, from what I understand, dictate that many people are evil and will try to stop the CoS. These people are open game - there is none of Christ's "love thy neighbor and thy enemies" here. Ex-members of the CoS are also targets. The CoS itself will harrass these ex-members and people it perceives to be attackers. Part of the reason why the media and other news outlets are so cautious when dealing with the CoS is because the CoS legal team is extremely aggressive, and the CoS itself will engage in information warfare (dragging up items from your personal life, creating problems, and so on). I've seen a video posted by the CoS of a journalist who had a history of investigating and being critical of the CoS breaking down into a rage after they confronted him with issues from his personal life - it was meant to disgrace and discredit him. In some ways, it's not terribly different from what Anonymous strives to do.
Other accounts from ex-CoS members make the organization sound downright awful. I've read that family members are barred from speaking to each other - unless they're current CoS members. Paying parts of the CoS are run under slave labor-like conditions, with poor pay, long hours, and an all-controlling employer. Again, it's bad that people are brainwashed into actually doing these things, but what makes it worse is the threat of what CoS will do to you if you leave. This is not a pleasant organization.
As if that's not bad enough, they've started to impact government offices, too. They make quite a bit of money off of their followers and were originally recognized as a corporation. They petitioned the IRS to make them tax-exempt but from what I heard, it failed - until they got some of their own members into the IRS staff. Now they don't even pay taxes. I can also recall a somewhat frightening story about how there was a murder or some other grave crime in a small town mostly populated by scientologists, and the judge - also a scientologist - was basically extremely biased. A small town full of brainwashed, hostile people - that's the sort of situation you read about in horror stories.
Earlier I stated that I didn't really support the whole Anonymous thing because I figured that they were just doing it for their own amusement. It's something they've done quite often in the past, and I find it to be immature and a bit of an annoyance. But read over their wiki, and take a look also at their other "enemy" pages. The CoS page is incredibly long, much longer than any of the other pages, and a lot more thought seems to have gone into much of it. I've also watched a number of their "IRL raids" on youtube, and was pretty surprised. I'd expected to see vandalism, harrassment, and other cheap forms of humor. Instead, I saw people (occasionally masked) handing out and posting flyers, and sometimes talking to people about it. They're making it an information war. I'm still afraid that someone is going to take it a bit too far, but for now I'm really pleasantly surprised. It's well organized, well meaning, and right now it's largely respectable.
I don't agree with things like prank calling pizza places and ordering it to scientology locations to run their bills up, of course. But I'm somewhat reversing my stance on their DDoS attacks and attempts to push scientology off of the internet. I think that scientology - like many other cults - preys on people and takes them in, blinding them to what's really going on. There are other societies/cults like this in America, and these have led to a profession of sorts called "deprogrammers" - people who will "kidnap" the target person from the society's grounds and then undo the brainwashing. It sounds ridiculous that we'd have such things in our present-day society, but scientology certainly matches the profile for such an organization.
So perhaps what it really comes down to is how you feel about the Church of Scientology. If you feel that organizations should be free to do with people what they please, then this "war" is unjustified. If you feel that the CoS is a harmful organization and has no place in society, then it's completely justified and we can only hope that it sets off a chain of events that ultimately either destroys the entire organization, or at least sends it back to just being a small-town cult. Admittedly, the implications of this being carried out by vigilantees/mob justice are a bit unsettling.
Religion is a touchy subject, but I don't believe that's what this is about. The big three - Judaism, Christianity, and Islam - will not persecute you for entering and leaving the religion. They are also not set up in a manner that forces you to pay to participate. To me, the Church of Scientology is set up more like a scam than a religion. I won't personally be participating in attempting to remove them. However, as you can probably tell I've made up my mind about them. I think this has the potential to do a lot of good, and I'm very much interested to see what's going to happen next.
So let us see if two wrongs make a right. I am sure that somewhere down the line, such an attack is criminal. But considering how seriously screwed scientology is as a "religion", I bet that such an attack would be casually overlooked by the authorities, because even though technically they should do something, deep down they are really cheering for this cult to cease.
On the bit about CoS being downright hostile when it comes to people outside the cult talking about Scientology, well... CoS has been almost a self-declared enemy of the internet. There's an entire article on Wikipedia called "Scientology versus the Internet". Among many questionable legal measures, one of their attacks was to force Slashdot to delete a comment that posted an excerpt of one of their "holy books", claiming copyright infringement. So yes, they're not nice guys, and from what Ledgem says it seems that Anonymous isn't just a bunch of 13-year-old kids anymore... The combative and emotional part inside me is siding more and more with them.
We'll see. For now, the intention seems to be to Google Bomb the Scientology website to make it #1 hit for "dangerous cult", "brainwashing cult", etc...
(Note: for those who don't know, Google Bombing is getting many websites to link to a page using a particular phrase as the link text - in this case, "dangerous cult" - so that the page becomes the first hit for that phrase on Google.)
PS: The situation is still developing... so I'd like the thread to take its separate shape once more. I really don't feel it's "just news".
Crystal_Method
I want dreads...
Location: a cardboard box
This whole Anon vs. Scientology thing is, to me, a very interesting experience. I would like to say I'm indifferent about the situation, but the truth is there are many things I really disapprove of about the Church of Scientology. The way I see it, this is somewhat a reenactment of the Holocaust. I'm not saying that Anon is doing exactly what the Nazis did. But their approach to the situation is somewhat scary, and as much as I dislike Scientology, the fact that things can turn out extremely horrible really makes me wonder if they are really trying to "save" people who have been brainwashed by the Church of Scientology. Though currently, I feel Anon has a pretty good standing in their cause. Anyway, the reason for my post is this: it happens to be a slideshow made a while ago (maybe a little more than a year ago) that is extremely informative about what goes on behind the scenes of Scientology. One of the scariest things about the Church of Scientology is their extremely powerful and influential legal team. They aren't normal; these guys will do anything to get rid of any bad name or obstacle against them. That's something no one should try to mess around with, which is why I feel this Anon vs. Scientology fight can turn into a very ugly situation.
While I do not agree with this post, I definitely respect it, as it is well thought out and reasoned.
Is the Church of Scientology evil? Based on what you gave me in those facts, if I were to make an opinion, yes. I haven't looked into the church of scientology itself, nor have I looked into its beliefs, its actions, or policies, since I'm honestly not interested in it. I'm a Christian, so their actions don't really make much of a difference to me.
However, I'm not arguing that what they are doing is good. I'm saying that what 4chan is doing is bad. They have no right to do what they are doing and do illegal things to that church. They have no right to take away potential profits or members from that church. It's the same as how it would be wrong to DDoS Microsoft. By taking down their website, they cost other people money and disrupt hard work.
And yes, Scientology did work hard in order to get to where they are now. Maybe in the wrong way, but it still was hard work.
Nobody has the right to do any of this, except the government. If 4chan thought something was wrong, they should have E-Mailed an authority and asked for an investigation into it. Since, after all, it isn't really a religion as much as a cult.
Keep in mind, back in early 2006 I was against the YTMND raids against Ebaumsworld for much the same reason: I think that things can be handled in much more peaceful ways. That at least had a reason behind it, as YTMND was directly hurt. This didn't hurt 4chan at all, so they had even less ground to attack.
Eh, so by your definition, scammers shouldn't be punished in any way, because they worked hard to amass their fortune? I'm not condoning the actions of Anonymous, I'm just saying that, of all their possible targets, Scientology certainly doesn't bother me at all.
Since you apparently haven't heard much about Scientology, even though you're not interested, I'd suggest you to take a look at what happens when professional journalism wants to know a bit more about their organization (it's four videos, the audio in the latter ones gets badly off sync but it's still worth it).
Read the rest of my post. Scammers should not be punished by us. They should be by the government.
I know, I read it, it's just that, by the way that particular phrase was worded, it doesn't sound right. I tend to nit-pick on these points sometimes. It's all right, I don't think it's legal either, but I'm not gonna get in their way. If you ask me what I really think, I believe Scientology deserves it, but you'll never get that out of me in court. But really, as a final recommendation, do please watch the video I pointed out. There's lots of material on the net about how nasty Scientology can be, but that's the most graphic example I've seen so far.
While I do wish anonymous luck, I cannot condone their actions. It's true scientology is a dangerous cult and should be stopped. However, it should be done legally. DDoS attacks are most certainly a crime. I cannot accept that the ends justify the means here.
Though this is interesting and a bit scary in its implications for the future of information-based warfare.
If it could have been done legally, wouldn't they (the government) have already done so? O.o
Not necessarily. If it never passed their minds to bring it down, then they couldn't have done it. And, maybe they need some people to apply to them as well or something.
The big problem is that Scientology can be considered a religion, and that makes it a First Amendment issue.
2014-23/1195/en_head.json.gz/35808 | Gaming Trend Forums > Gaming > Console / PC Gaming (Moderators: farley2k, naednek) > Is backwards compatibility a big deal?
Topic: Is backwards compatibility a big deal? (Read 2434 times)
Is backwards compatibility a big deal?
I have heard some say that backwards compatibility is a big deal for some people. For other people, it's not. What do you think?
For me, I honestly don't care. I got a PS2 for PS2 games. I can count the number of PSOne titles I have on one hand. It never was a factor, but I didn't own a PS1. I might have changed my mind if I could, but I doubt I would have sold it off if I did get one.
What does everyone else think?
It makes my life easier, but I don't think it's going to be a factor in my future console purchases.
For me it means I might be more likely to play my older games. I won't have to plug in my older systems to play my older games. I really enjoy my vast Dreamcast collection, but it's a pain to pull it out and plug it in just to play Worms in all its 2D glory.
I might like backwards compatibility with the Xbox because I enjoy the games and the system is so damn big. The PS2 is smaller, so it's a lot easier to hide.
To be honest, I really don't go back to many of my older PS games. The exceptions are the RPG's, because there are a lot of unique titles (Persona and the super Working Designs Lunar sets).
People do like to revisit older games. I won't speak for everyone, but when I am buying the latest system I am going to want to play the latest and greatest titles that show off my investment. Am I going to want to break out the Xbox just to play Project Gotham 2 when Project Gotham 3 is out?
It's not like the games or systems are going to disappear. It's our choice to buy the new system and put the past behind us, not Sony, Nintendo, or Microsoft. It's not their responsibility to be backwards compatible, but if they add it to the offering I will applaud them. It's nice of them to do that. Consumers will feel like the console maker is doing them a favor. In my mind it won't affect my purchase, but I was excited when I learned the PS2 was backwards compatible.
I feel it is more important that the GB and the upcoming DS be backwards compatible. I can find a place in my home to put a console, but I can only carry one handheld.
With very few exceptions, I'll never really revisit the gaming classics from one (or more) generations ago, so backwards compatibility isn't that much of an issue. Short of my GBA anyway. Why just the GBA? Good question. I guess it's because there isn't such a huge graphical and sound boost between the portable consoles as there are on the home ones. Ever plugged your N64 into your HDTV? Good lord does it look ugly. At least the PS2's graphical 'cheats/boosts' make PS1 games look better.
Andrew Mallon
Backwards compatibility hasn't been a big deal in the past, but I think it is going to become a bigger issue with each successive console generation. With the NES, probably about 99% of the games released for that system are unplayable today without significant tweaks to gameplay, graphics, and sound. No one wants to go back to typing in long-ass passwords in lieu of a save system. No one wants to play a 40-hour RPG listening to the "music" generated by the NES's sound chip. With those systems, there was no reason to look back once a new console was released. It's completely different now. There are a handful of SNES games (Chrono Trigger, Final Fantasy III) that are still completely playable today. And there is a whole range of 2D PSX and N64 games that are just as much fun to play now. With the current generation, depending on the art style, there are a lot of 3D games that are still going to be just as good ten years from now. Sly Cooper, Jet Set Radio Future, and Prince Of Persia will look dated in a few years, but I don't think they'll ever look bad. I can see myself wanting to play Prince of Persia every few years or so if I'm still actively gaming. The big question in my mind is how am I going to play it if the Xbox isn't backward compatible (I have the Xbox ver of Prince of Persia)? All hardware has a finite lifespan and as these consoles get more complex they're going to break more often. I've got 25 Xbox titles now and will probably have 40 to 50 by the time the Xbox life cycle is over. Five years from now what am I going to do with this software if my Xbox breaks? Scour pawn shops and eBay for used Xboxes? Play these games on the computer via an emulator? I'll probably be able to find some way to continue to use the library but it will become much harder and involve more compromises with each year. With the PS3, I don't have to worry about it, as I know that I'll be able to buy hardware in the immediate future that will play these games. The lack of backwards compatibility isn't going to preclude me from buying the next Xbox, but it will determine when I buy it and how many games I buy for it. The amount of money I spend on console hardware is much less than I spend on software. I'll appreciate a console that much more if I know that it isn't going to be a technological dead end in a few years.
There are a lot of great quality games that I'd always like to go back and play. For example, playing The Cradle mission in Thief: Deadly Shadows simply for the atmosphere alone would be cool once I've moved on a bit. That said, with the backlog that most of us have of unfinished games, think of how many times you have broken out the older systems to play through classics like Zelda. It just doesn't happen that often, or if it does do you actually finish it again?
I've never really understood the need for backwards compatibility. Usually when I finish a game I never go back to it. There are exceptions of course... games I'll keep forever (Legend of Zelda, Nintendo Pro Wrestling), but the times I go back and play those games are rarer than rare. There are several games on my XBOX that I plan to keep forever, but whether or not I'll ever come back to them in a few years, after the next few generations of consoles have been released, that remains to be seen. Of course, I plan to be playing XBOX for a long time... or at least until the next gen gets cheap enough for me to afford.
I'm all about backwards compatibility. Typically, it's because I buy games slowly, and I like knowing that I can choose from the whole library of titles when I'm at the store (for example, I only ever played Metal Gear Solid last November for the first time). Plus, call me crazy but I actually do go back and play games of yore. I just took a spin with Castlevania: Symphony of the Night the other day. Also, one of my casual gamer friends will only play Tekken 3 against me. Add this to the above point that the hardware I use for these games will almost certainly break down at some point, I enjoy knowing that I can always access these titles. The point becomes even more salient for those of us carrying a backlog into the next generation.
I can see where it might be useful, but it isn't a factor for me.If this is your first system from the company, you may want to play some of the older games that you never had the chance to.If I buy a PS3 - I bought it to play PS3 Games. I have a PS2. If I want to go back to something, I'll just play it on the system it was intended for.While it's a nice feature, if it adds any costs, I say 'dump it'.
It's definitely important... I don't want to have a closet full of ancient consoles just in case I have the urge to play game X somewhere down the road. With backwards compatibility, I can toss/sell the old console (as I did with my PS1) once the next gen console is released. Not to mention the fact that consoles will eventually break down and die... with backwards compatibility, you don't have to worry if your old console can still boot up, or go through ridiculous steps like turning your PS1 upside down or blowing on your NES cartridges until you're blue in the face.
I don't think you'll find anyone who will say "I don't want it." But I'm buying an Xbox2/PS3 for Xbox2/PS3 games. The only other thing that might factor into a purchase decision is the price.
The only argument I have seen that makes a lot of sense (besides convenience) is the idea of alienating some of the current owners. The Xbox is not that old, yet rumors have the Xbox 2 launch in 2005. Will people be upset about their game catalog becoming obsolete so quickly? Note that it won't really be obsolete, but people might feel they are being forced to buy new hardware and to turn their back on the new games that are suddenly old. Backwards compatibility would go a long way in allaying these fears and complaints.
Your Xbox will not explode or revolt just because a new system is released. If it breaks you will have to buy a new one, which you would have to do even if there is never a new console released. With millions upon millions of consoles in circulation they WILL be available. You DO NOT have to buy a new system!
Nintendo Entertainment System
Super NES
Nintendo 64
Sega Master System
Sega Genesis
Sega CD
Sega Saturn
Sega Dreamcast
Sega Nomad
TurboGrafix 16
Atari Jaguar
Atari Lynx
^^^not backwards compatible
whatever did we do?
I would be playing a lot more of my old games for those systems if I hadn't been forced to choose between clearing shelf space and maintaining a link to my childhood. Still, your point is valid. Sometimes in life we just have to move on. I'll be the guy screaming and kicking in the back.
It's always nice, but I don't consider it necessary. If that feature was left out, it wouldn't prevent me from getting the new consoles when they come out.I was wondering though, if the new consoles are more powerful, would it help games that have frame rate issues on the current ones?
Backwards compatibility is a luxury, not a necessity. But, aren't luxuries nice? Bah, backwards compatible or not, I'm not buying a new system in 2005. I'll be hard-pressed to buy them in 2006, to be perfectly honest.
Considering that despite owning all of the current consoles my current playlist consists of Final Fantasy 4, Final Fantasy 8, and Fire Emblem, I'm a big, big fan of backwards compatibility. That said, it won't stop me from purchasing a console but it could delay it a little while.
Quote: "I was wondering though, if the new consoles are more powerful, would it help games that have frame rate issues on the current ones?"
Actually I don't think so. For example, Thief was designed for X amount of RAM. Running it on the dev kit, which has double the RAM of a retail kit, doesn't make it faster, as the game isn't programmed to take advantage of the extra headspace.
2014-23/1195/en_head.json.gz/37579 | Fugitive hacker indicted for running VoIP scam
US seeks extradition of Miami man who was on the run for more than 2 years
Sharon Gaudin (Computerworld) on 19 February, 2009 10:05
Just days after his apprehension in Mexico following two years on the run from law enforcement authorities, an alleged hacker was indicted this week by a federal grand jury for hacking into the computer networks of VoIP service providers.Edwin Pena had been arrested in June 2006 on computer and wire fraud charges. The US government charged that Pena and a cohort hacked into the computer networks from November 2004 to May 2006. Pena then resold the VoIP services to his own customers.The 20-count Indictment handed down by the Grand Jury on Tuesday charges wire fraud, computer hacking and conspiracy. According to the DOJ, wire fraud carries a maximum penalty of 20 years in prison and a US$250,000 fine. The conspiracy and the computer hacking violations each carry a maximum penalty of five years in prison and a fine of US$250,000.According to the US Attorney's Office, Pena and co-conspirator Robert Moore, stole and sold more than 10 million minutes of VoIP service, causing the VoIP providers to lose more than US$1.4 million in less than a year.Pena was first charged on June 6, 2006. The government contends that he fled the country to avoid prosecution on August 12. He was apprehended by Mexican authorities earlier this month and is still being held there. US prosecutors are seeking extradition.In the fall of 2007, Moore pleaded guilty to conspiracy to commit computer fraud. He currently is serving a two-year sentence in federal prison.Federal investigators contend that Moore acted as the hacker and that Pena was the mastermind behind the scheme. But while Moore went to prison, Pena went on the run.Voice-over-IP systems route telephone calls over the Internet or other IP-based networks.As part of the scheme, Moore's job was to scan telecommunications company networks around the world, searching for unsecured ports. It was noted in the criminal complaint that between June 2005 and October 2005, Moore ran more than 6 million scans of network ports within the AT&T network alone.The complaint alleges that once Moore found unsecured networks, he would then e-mail Pena information about the types of routers on the vulnerable networks, along with corresponding usernames and passwords. Then, according to the government, Pena would reprogram the vulnerable networks so they would accept his rogue telephone traffic.The government charges that Pena ran brute force attacks on VoIP providers to find the proprietary codes they used to identify and accept authorized calls coming into their networks. He allegedly would then use the codes to surreptitiously route his own calls through their systems.
Tags hackersvoip | 计算机 |
2014-23/1195/en_head.json.gz/38410 | Wikipedia besieged by Muhammad image protest
David Price (PC Advisor) on 22 February, 2008 09:27
Angry Muslim surfers are demanding that Wikipedia remove images of the prophet Muhammad from his page on the online encyclopedia - but thus far the website is standing firm.Depictions of Muhammad have been prohibited under almost all forms of Islam since the Middle Ages in order to prevent the rise of idolatry, and thousands of Muslim Wikipedia users claim it is sinful for the website (which has four pictures of Muhammad at this page, of which only two have the face blanked out, or 'veiled') to break this rule.The discussion of the subject, as you might expect, has become both heated and repetitive. "Why are wikipedia admins insisting on inflicting pain and hatred upon muslims?" asks one complainant. "We don't have to censor ourselves to show any 'respect'," comes the response. It's the same as discussing the holocaust, say one side. It's the same as asking a Muslim website to put up pictures of Muhammad, say the other.Thank goodness technology has advanced to the point where we can rehearse millenia-old vendettas with bores on the other side of the globe.Clearly Wikipedia is right to hold firm - it's one thing to be expected to respect another religion's traditions, it's quite another to be required to obey its laws - but the debate seems to have skirted round a somewhat thorny issue that can frustrate those of us who don't live in the US. Wikipedia is meant to be international, but it's run by America, and obeys American laws. And in that respect it's kind of symptomatic of the internet as a whole. | 计算机 |
2014-23/1195/en_head.json.gz/38835 | What the WELL's Rise and Fall Tell Us About Online Community
Howard Rheingold Jul 6 2012, 2:27 PM ET
A key early member of the most influential early online community remembers the site, which is now up for sale.
Historian Fred Turner with three key players in the WELL story, Stewart Brand, Kevin Kelly, and Howard Rheingold (flickr/mindmob).
In the late 1980s, decades before the term "social media" existed, in a now legendary and miraculously still living virtual community called "The WELL," a fellow who used the handle "Philcat" logged in one night in a panic: his son Gabe had been diagnosed with leukemia, and in the middle of the night he had nowhere else to turn but the friends he knew primarily by the text we typed to each other via primitive personal computers and slow modems. By the next morning, an online support group had coalesced, including an MD, an RN, a leukemia survivor, and several dozen concerned parents and friends. Over the next couple years, we contributed over $15,000 to Philcat and his family. We'd hardly seen each other in person until we met in the last two pews of Gabe's memorial service. Flash forward nearly three decades. I have not been active in the WELL for more than fifteen years. But when the word got around in 2010 that I had been diagnosed with cancer (I'm healthy now), people from the WELL joined my other friends in driving me to my daily radiation treatments. Philcat was one of them. Like many who harbor a special attachment to their home town long after they leave for the wider world, I've continued to take an interest -- at a distance -- in the place where I learned that strangers connected only through words on computer screens could legitimately be called a "community."I got the word that the WELL was for sale via Twitter, which seemed
either ironic or appropriate, or both. Salon, which has owned the WELL since 1999, has put the
database of conversations, the list of subscribers, and the domain
name on the market, I learned.I was part of the WELL almost from the very beginning. The Whole Earth 'Lectronic
Link was founded in the spring of 1985 - before Mark Zuckerberg's first
birthday. I joined in August of that first year. I
can't remember how many WELL parties, chili cook-offs, trips to the
circus, and other events - somewhat repellingy called "fleshmeets" at
the time - I attended. My baby daughter and my 80-year-old mother
joined me on many of those occasions. I danced at three weddings of
WELLbeings, as we called ourselves, attended four funerals, brought
food and companionship to the bedside of a dying WELLbeing on one
occasion. WELL people babysat for my daughter, and I watched their
Don't tell me that "real communities" can't happen online.
In the early 1980s, I had messed around with BBSs, CompuServe, and the Source for a couple of years, but the WELL, founded by Stewart Brand of Whole Earth Catalog fame and Larry Brilliant (who more recently was the first director of Google's philanthropic arm, Google.org), was a whole new level of online socializing. The text-only and often maddeningly slow-to-load conversations included a collection of people who were more diverse than the computer enthusiasts, engineers, and college students to be found on Usenet or in MUDs: the hackers (when "hacking" meant creative programming rather than online breaking and entering), political activists, journalists, actual females, educators, a few people who weren't white and middle class.
Howard Rheingold talking about the early days of the WELL.
PLATO, Usenet, and BBSs all pre-dated the WELL. But what happened in this one particular online enclave in the 1980s had repercussions we would hardly have dreamed of when we started mixing online and face-to-face lives at WELL gatherings. Steve Case lurked on the WELL before he founded AOL and so did Craig Newmark, a decade before he started Craigslist. Wired did a cover story about "The Epic Saga of the WELL" by New York Times reporter Katie Hafner in 1995 (expanded into a book in 2001), and in 2006, Stanford's Fred Turner published From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism, which traced the roots of much of today's Web culture to the experiments we performed in the WELL more than a quarter century ago. The WELL was also the subject of my Whole Earth Review article that apparently put the term "virtual community" in the public vocabulary and a key chapter in my 1993 book, The Virtual Community. Yet despite the historic importance of the WELL, I've grown accustomed to an online population in which the overwhelming majority of Facebook users have no idea that a thriving online culture existed in the 1980s.
In 1994, the WELL was purchased from owners the Point Foundation (the successor to the Whole Earth organization) and NETI, Larry Brilliant's defunct computer conferencing software business. The buyer, Rockport shoe heir Bruce Katz, was well-meaning. He upgraded all the infrastructure and hired a staff. But his intention to franchise the WELL didn't meet with the warmest reception from the WELL community. Let's just say that there was a communication mismatch between the community and the new owner. Panicked that our beloved cyber-home was going to mutate into McWell, a group of WELLbeings organized to form The River, which was going to be the business and technical infrastructure for a user-owned version of the WELL. Since the people who talk to each other online are both the customers and the producers of the WELL's product, a co-op seemed the way to go. But the panic of 1994 exacerbated existing animosities - hey, it isn't a community without feudin' and fightin'! - and the River turned into an endless shareholders meeting that never achieved a critical mass. Katz sold the WELL to Salon. Why and how Salon kept the WELL alive but didn't grow it is another story. After the founders and Katz, Salon was the third benevolent absentee landlord since its founding. It's healthy for the WELLbeings who remain - it looks like around a thousand check in regularly, a couple of hundred more highly active users, and a few dozen in the conversation about buying the WELL - to finally figure out how to fly this thing on their own.
In 1985, it cost a quarter of a million dollars for the hardware (a Vax 11/750, with less memory than today's smartphones), and required a closet full of telephone lines and modems. Today, the software that structures WELL discussions resides in the cloud and expenses for running the community infrastructure include a bookkeeper, a system administrator, and a support person. It appears that WELL discussions of the WELL's future today are far less contentious and meta than they were fifteen years ago. A trusted old-timer - one of the people who drove me to cancer treatments - is handling negotiations with Salon. Many, many people have pledged $1000 and more - several have pledged $5000 and $10,000 - toward the purchase.
The WELL has never been an entirely mellow place. It's possible to get thrown out for being obnoxious, but only after weeks of "thrash," as WELLbeings call long, drawn-out, repetitive, and often nasty meta-conversations about how to go about deciding how to make decisions. As a consequence of a lack of marketing budget, of the proliferation of so many other places to socialize online, and (in my opinion) as a consequence of this tolerance for free speech at the price of civility (which I would never want the WELL to excise; it's part of what makes the WELL the WELL), the growth of the WELL population topped out at around 5000 at its height in the mid-1990s. It's been declining ever since. If modest growth of new people becomes economically necessary, perhaps the atmosphere will change. In any case, I have little doubt that the WELL community will survive in some form. Once they achieve a critical mass, and once they survive for twenty-five odd years, virtual communities can be harder to kill than you'd think.
Recent years have seen critical books and media commentary on our alienating fascination with texting, social media, and mediated socializing in general. University of Toronto sociologist Barry Wellman calls this "the community question," and conducted empirical research that demonstrated how people indeed can find "sociability, social support, and social capital" in online social networks as well as geographic neighborhoods. With so much armchair psychologizing tut-tutting our social media habits these days, empirical evidence is a welcome addition to this important dialogue about sociality and technology. But neither Philcat nor I need experimental evidence to prove that the beating heart of community can thrive among people connected by keyboards and screens as well as those conducted over back fences and neighborhood encounters.
Howard Rheingold is the author of many books on social media and how it shapes society, including Tools for Thought: The History and Future of Mind-Expanding Technology, The Virtual Community: Homesteading on the Electronic Frontier, and most recently, Net Smart: How to Thrive Online.
2014-23/1195/en_head.json.gz/39284 | Meandering streams of consciousness
I post about a variety of things: programming, urban homesteading, python, HCI, women in tech, conferences, Aspergers, neurodiversity, whatever catches my attention.
I also post raw emotional and psychological "processing", to provide a glimpse into the mind of a female Aspie geek.
Alex and I did Keynotes at PyCon APAC. There are videos of many talks including our keynotes. The talks were wide-ranging and well-done. We got to meet really awesome people there. Liew Beng Keat was the perfect host and speaker liaison. I especially enjoyed meeting Sandra Boesch of SingPath, her game to learn programming. She and her husband, Chris, hosted a tournament at the conference which was sponsored by the ubiquitous LucasFilms Singapore group. Great way to keep folks engaged and having fun. We had to leave early for Alex's GTUG talk, but it looked like a great way to finish off the conference. If anyone is asking themselves "should I attend PyCon APAC?", the answer, imho, is YES! You'll get a lot out of it. Including a chance to visit one of the most intriguing, perplexing places on earth. Singapore is hot, humid, crowded, noisy, but full of very friendly people. 90F with 90% humidity at 9am was normal. There's a reason there's so many airconditioned malls and underground tunnels connecting all the buildings downtown...
We got there and took a tourist pass that allowed us to hop on and off the tourist busses throughout town (including the Duck Tours - amphibious vehicles.) We got to hear on the busses about the multicultural heritage of which Singapore is very proud (Chinese 76.7%, Malay 14%, Indian 7.9%, other 1.4%) and about the cost-of-living (incredibly high for housing and cars!) and the distinctive districts of the city (Chinatown, Little India, Central Business District, Marina Bay...) The city is very clean (due to the high fines for littering). In fact, there's a joke about it being a "fine city" (referring to fines for jaywalking, littering, etc etc). It was a bit of a change to get used to all the skyscrapers after living in a place where you can't build anything over 2 stories. And the architecture is just weird. Every building is different. Alex joked that, if he were asked to design a skyscraper for the city, he'd go with the most straight rectangular boring block building as possible - because it would stand out so much among all the strangely shaped buildings! Here's an example of one of the most distinctive:
There's a swimming pool on top of the buildings! http://www.898.travel/page_en-70365.html
The food was great. We had the local specialty - Fish Head Curry. DH tried the, ahem, delicacy (the eyeball). I demurred. We also had a lot of great chinese, thai, vietnamese and indian food. Our last night, we had fried noodles (mee goreng) with egg on top. Yummy.
Everything in the city seems to revolve around shopping, food, and finance. But that could just be a tourist's view. Speaking of which, I got addicted to Singapore Slings there. Even got to taste it at the Long Bar in Raffles Hotel where it was invented.
It was nice to see a place that was so positive about its multicultural background. It's quite proud of its mix of people and its history as a British port. (It claims to be the busiest port in the world.) UK tourists (British and Aussie) were everywhere. It was also much more normal there to discuss money - how much things cost, how much someone paid for their home, stuff like that, than you hear in the US in general. Talking about money there is like talking about the weather here.
It's a very strange mix of extremely capitalist but very centrally controlled. (It's often referred to as Authoritarian Capitalism.) They're really focused on having everyone employed. The unemployment rate for Q1 2012 was 2.1%! Almost everyone (80+%) live in government housing, and housing is almost entirely apartment buildings. Note that "govt housing" means it was built and is run by the govt but you buy your apartment (over 95% of people do this). Only people who can't afford to buy even with govt aid rent their place. The tourbus even took time to distinguish between the types of housing ownership "leasehold" and "freehold" - "leasehold" is for 99 years and is way cheaper than "freehold" so almost everyone goes for that. Also - the govt won't sell to a single person typically. Most folks live with their parents until after they marry. (Most marriages are civil unions by the mayor, but you can also have a religious ceremony.) We met a guy who was working there, from Britain, whose company owned the flat he was living in. Oh - and you don't get to do any renovations to your home without permission. It's like the whole place is a big HOA except it's not an HOA - it's the government. And don't you dare vote the wrong way - districts where voters went too much for the opposition party got skipped over when the improvements to the elevators were being installed. So officially democratic but...
Singaporeans also like to claim there's no "social safety net" for folks. On the other hand, they have compulsory savings programs and government-run healthcare. All hospitals are government controlled. All health care is paid for by the person getting the healthcare (to reduce excessive and unnecessary use), but it's paid for out of the mandatory health-savings and the cost depends on the level of subsidies the person gets from the government based on income level. Many people have private health insurance to cover stuff not covered under the govt healthcare plan.
Only about 30% of S'poreans have cars. That's because, before you can buy a car, you have to buy a certificate (for the car) of around $80000 Singapore Dollars. There's also really high import taxes on the car itself. So it ends up costing over 3 times as much to buy a car in Singapore as in the US. And many places have tolls that increase in rush hour. So, driving is highly discouraged.
All in all, a very strange, interesting place. I definitely would *NOT* want to live there. Not just the weather is oppressive, to me. (I'm very sensitive to government controls and didn't even like living in a townhouse because of the HOA.) But, like I said, everyone was VERY friendly. So it's a fun place to visit. Just don't bring chewing gum. Or jaywalk.
Anna Ravenscroft
2014-23/1195/en_head.json.gz/39398 | Introducing Bravura, the new music font
May 23, 2013 • Daniel Spreadbury Today I’m at the Music Encoding Conference in Mainz, Germany, where I am giving a presentation on the work I have been doing over the past several months on music fonts for our new application. There are two major components to the work: firstly, a proposed new standard for how musical symbols should be laid out in a font, which I have called the Standard Music Font Layout (SMuFL to its friends, pronounced with a long “u”, so something like “smoofle”); and secondly, a new music font, called Bravura. Read on for more details.
SMuFL
One of the (many) barriers to interoperability between different scoring applications is that there is no agreement on how music fonts should work, beyond a very basic layout that dates back to the introduction of the first music font, Sonata, which was designed nearly 30 years ago by Cleo Huggins (@klyeaux) for Adobe.
Sonata uses a mnemonic approach to mapping. From the Sonata Font Design Specification:
The encoding for Sonata is intended to be easy to use for a person typing… The symbols are typically either visually related to the key to which they are associated, or they are related mnemonically through the actual letter on the keycap. Related characters are typically grouped on the same key and are accessed by using the shift, option, and command keys to get related characters.
For instance, the q key is associated with the quarternoteup glyph, Q (or shift-q) with the quarternotedown character, option-q with the quarternotehead character, and so on. The treble clef is located on the ampersand key (&), because it resembles an ampersand.
In general, the shift key flips a character upside down, if that makes sense for a given character, and the option key selects the note head equivalent of a note. There are many instances where this is not possible or practical, but there is a philosophy in its design that will become evident and will allow a user to easily remember the location of most of the characters in the font.
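To make the idea concrete, here is a minimal Python sketch of that kind of mnemonic layout. The key strings and glyph names are taken from the examples quoted above; they are illustrative only, not Adobe's actual font tables.

    # Illustrative only: a mnemonic key-to-glyph mapping in the spirit of Sonata.
    SONATA_STYLE_MAP = {
        "q":        "quarternoteup",    # q gives a stem-up quarter note
        "Q":        "quarternotedown",  # shift flips the character upside down
        "option+q": "quarternotehead",  # option selects the notehead on its own
        "&":        "trebleclef",       # the treble clef sits on the ampersand key
    }

    def glyph_for_keystroke(keystroke: str) -> str:
        """Return the glyph name a keystroke would produce in this toy layout."""
        return SONATA_STYLE_MAP[keystroke]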
Fonts that followed in Sonata’s stead, such as Steve Peha’s Petrucci, the first font for Finale, and Jonathan Finn’s Opus, the first font for Sibelius, initially adhered reasonably closely to Sonata’s layout, but pretty quickly they began to diverge as the applications matured, and there was no agreement on how to map additional characters.
As hundreds of new symbols were added to these font families, and new families were added, there was no standardisation at all. The Opus family, for example, now has hundreds of glyphs spread over 18 different fonts, but there is almost no overlap with how many of the same symbols are laid out in, say, Maestro or Engraver, the two font families most commonly-used in Finale.
In 1998, Perry Roland from the University of Virginia proposed a new range of symbols to be added to the Unicode standard for musical symbols, and the range of 220 symbols he proposed was duly accepted into the standard, at code point U+1D100.
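That block is easy to poke at from code. Here is a small Python check of a few of its entries; whether the characters actually render depends, of course, on having a font installed that covers the range.

    # Inspect a few code points in the Unicode Musical Symbols block at U+1D100.
    import unicodedata

    for cp in (0x1D100, 0x1D11E, 0x1D15F):  # barline, G clef, quarter note
        ch = chr(cp)
        print(f"U+{cp:04X}  {unicodedata.name(ch, 'unknown')}  {ch}")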
Unfortunately, although this range represents an excellent start, it has not caught on (to date, the only commercial font that makes use of the Unicode Musical Symbols range is the OpenType version of Sonata), and it is in any case insufficiently broad to represent all of the symbols used in conventional music notation.
So these are the problems that I set out to solve with the Standard Music Font Layout, or SMuFL: to map all of the symbols used in conventional music notation into a single Unicode range; to allow for easy extensibility in the event that new symbols are dreamed up or there are omissions; to provide a framework for the development of new music fonts; and to develop a community around the standard, so that the wisdom of experts in different fields of music can be brought to bear.
If you want to find out more about SMuFL, you can visit its own web site at www.smufl.org.
Developing a standard like SMuFL is all very well, of course, but to make it real and useful, there need to be music fonts that demonstrate the standard: enter Bravura.
The above image shows Bravura in action: it’s the first two bars of Fibich’s Nálada (Op. 41, No. 139). You can click on the image to see a larger version, or you can download a PDF of the whole page. (The music is set in Sibelius, rather than in our new application, by the way.)
The word “Bravura” comes from the Italian word for “bold”, and also, of course, has a meaning in music, referring to a virtuosic passage or performance; both of these associations are quite apt for the font. In keeping with our desire to draw on the best of pre-computer music engraving, Bravura is somewhat bolder than most other music fonts, as this comparison of the treble (G) clef shows:
Bravura’s clef is the rightmost clef in the above example. It has a very classical appearance, similar to Opus, Sonata and Maestro, but more substantial than all of them. (Emmentaler, the most stylised of the clefs above, is the font used by Lilypond and MuseScore.)
Here is another comparison, showing the eighth note (quaver) from each of the above fonts:
Again, Bravura is the rightmost example. The notehead is nice and oval, though not as wide as Opus (an exceptionally wide notehead), and relatively large in comparison to the space size, aiding legibility. The stem thickness is also boldest for Bravura, though this is only the precomposed note from the font; when a music font is used in most scoring applications, the stem thickness can be adjusted by the user.
Here are a few other symbols from Bravura, to illustrate its classical design:
That rightmost symbol is the percussion pictogram for sandpaper blocks, by the way. The small, semibold numerals immediately to the left are for figured bass.
You will notice that there are few sharp corners on any of the glyphs. This mimics the appearance of traditionally-printed music, where ink fills in slightly around the edges of symbols.
All of the basic glyphs were modeled after the Not-a-set dry transfer system, as mentioned in a previous post. Originals were scanned, examined at high magnification, and then hand-drawn using Adobe Illustrator by yours truly. I have shared the resulting designs with a number of expert engravers, who have given me invaluable feedback on details large and small, and many of the symbols have already been through many revisions.
The result of many hundreds of hours of work, I hope you will agree that Bravura gives a very fine, classical appearance. There is still much work to do, since our own application is not yet at a sufficiently advanced stage of development that all of the glyphs in the font are being used, and there are details such as ligatures and stylistic alternates to be considered as well. But we are making Bravura available now in support of the effort to continue developing SMuFL.
Even better, Steinberg is making Bravura available under the SIL Open Font License. This means that Bravura is free to download, and you can use it for any purpose, including bundling it with other software, embedding it in documents, or even using it as the basis for your own font. The only limitations placed on its use are that: it cannot be sold on its own; any derivative font cannot be called “Bravura” or contain “Bravura” in its name; and any derivative font must be released under the same permissive license as Bravura itself.
If you use Finale or Sibelius and want to use Bravura for your own scores, unfortunately you cannot use the font unmodified, as neither Finale nor Sibelius (yet?) supports SMuFL, and there are technical restrictions on accessing the font or characters at the Unicode code points where they exist.
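If you are curious which code points a copy of the font actually maps, the third-party fontTools library can list them. This is just a sketch, and the file name "Bravura.otf" is an assumption about what the download contains.

    # Sketch: list the code points mapped by a downloaded copy of the font.
    from fontTools.ttLib import TTFont

    font = TTFont("Bravura.otf")
    cmap = font["cmap"].getBestCmap()   # {code point: glyph name}
    print(len(cmap), "code points mapped")
    for cp in sorted(cmap)[:10]:
        print(f"U+{cp:04X} -> {cmap[cp]}")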
Nevertheless, if you would like to download Bravura to try it out, you can do so from the SMuFL web site.
If you are a font designer and you would like to contribute improvements or modifications to the glyphs included in Bravura, we’re definitely open to including them: just drop me a line with details.
80 thoughts on “Introducing Bravura, the new music font” Laurence Payne May 23, 2013 at 12:47 pm Very interesting. The point being how the font is constructed, not whether it looks particularly different to existing ones. Steinberg created a standard with asio, I guess they can do it again.
Back in the Atari ST days, I invested a not inconsiderable amount of money in Masterscore – a notation add-on for Steinberg’s Pro-24 sequencer that disappeared without trace. Will there be an upgrade path? Not that there’s any evidence left of my owning it I fear! Just MAYBE I kept the dongle?
Reply ↓ Matthew Hindson May 23, 2013 at 12:48 pm That you are releasing this font under the open font license is outstanding. I also very much like the extra thickness of the font. Balancing adequate barline ledger and staff line weights with character weights has been a challenge for years as dpi of printers increase.
Reply ↓ Andrew Moschou May 23, 2013 at 2:15 pm Looks amazing!
Reply ↓ David H. Bailey May 23, 2013 at 2:17 pm Thank you for releasing this font early, Daniel! I very much like the appearance of all the glyphs I have examined so far. If this is an example of the attention to detail that you and the rest of the team are working on with this new notation program, I am more eager than ever to see the final products. How can a person get on the beta-testing team?
Reply ↓ Daniel Spreadbury Post authorMay 23, 2013 at 2:31 pm @David: I’m very glad that you like the look of Bravura. It has been a labour of love, as well as one of necessity. We’ll definitely let everybody know when we are looking to recruit some beta testers.
Reply ↓ terencejonesmusic May 24, 2013 at 12:56 am I’d certainly be one of those people interested in beta-testing when the time arrives. I’m very much looking forward to seeing the further developments on this project. Reply ↓ Doug LeBow May 23, 2013 at 2:30 pm Bravo Daniel! Well done in every way.
Reply ↓ David MacDonald May 23, 2013 at 2:39 pm Very exciting to read about this, and I’m so glad you’ve decided to release this under an open license. I think the boldness of Bravura may take some getting used to. The stem of that eighth note means business! In general, though, I think it looks quite beautiful. The glyphs you showed here are very nice. I like that even with the bold font, there’s still a bit of whimsy and character (trill). | 计算机 |
2014-23/1195/en_head.json.gz/39760 | You may have seen a lot of pictures in Gmail that show messages received from people like Caitlin Roran, Nathan Wood and others. Someone tried to solve the mystery and created a Wikipedia page for Caitlin Roran, but it was deleted. Here's the full content:Caitlin Roran is a fictional character (or advertising character) devised as part of a Google advertising campaign. The ad was created to promote Google's Gmail service and its availability via mobile phone. Caitlin's name appears as having sent the second email from the top dated September 13 regarding a surprise party.The surname Roran seems extremely rare in the United States and may be nonexistent outside this ad.However the ad has been seen by enough Gmail subscribers that a Google search for the name will turn up at least one Web site dedicated to keeping track of these searches.Caitlin's e-mail appears in bold typeface, and is thus yet to be opened by the owner of the phone. The email at the top of the phone's display, from Buck regarding a recent trip to Hawaii, is also bold and thus unread. Buck's message also appears to have a file (or files) attached (presumably pictures from Hawaii, but possibly some other type of file).It has been suggested that Caitlin does not represent a real person but is a name attached to a spam message. Buck's message is under similar suspicion. The messages from Susan (third position from the top) and Nathan (fourth from the top) seem less likely to be spam, as their subject headings are less typical of computer-generated spam subject headings.It's not clear if the recipient of Caitlin's email is the organizer of the "surprise party" or is one of the guests. It is also possible that the recipient is the party's honoree and is being informed of the secret plans -- though, for what purpose is unclear.According to one theory, Nathan, whose name appears next to the message "BBQ on Saturday," is the party planner and the party is to honor Buck, the author of the simulated email about having just gotten back from Hawaii. The owner of the phone possibly is Buck's best friend and the boyfriend of Susan, who is trying to make plans to have sushi.If the owner of the phone is female, however, the sushi plan suggestion is more difficult to interpret.Another question that has been raised about this ad is whether the "BBQ on Saturday" might happen to be on the same day as the "Surprise party." No day of the week is given for the surprise party, giving rise to the possibility that Caitlin's and Nathan's mutual friend (the owner of the phone) could have a conflict between the two events. Of course, even if they were on the same day, they could be at different times, which would solve the problem.It's also noted that the owner of the phone responded to Buck's e-mail about his return from Hawaii and to Susan's message about plans for sushi but ignored the messages about the BBQ and the surprise party. One could assume that the latter two messages were sent to a mass list of guests and did not require responses. 
Or perhaps the person has not responded to either message because both events are scheduled for the same time (presumably in the afternoon of September 16, 2006) and the person has not decided which one to attend.The interface shows only two unread messages, a sign that the phone belongs to a person who has recently signed up for Gmail.Judging by the content of the messages, the owner of the phone is likely between 20 and 40 years old and has at least a moderate amount of disposable income and leisure time. There is no evidence that the person is employed or has any interests other than planning events.Judging by the month (September), the event (BBQ), and Buck's travel destination (Hawaii), the owner of the phone likely lives in Southern California, where an email advertising a fall bar-b-que would be so ordinary as to merit no response.The tentative nature of the sushi plans with Susan also suggests that Susan is likely the significant other or close friend of the phone's owner, or at least someone with whom the phone owner socializes frequently enough to make spontaneous plan making possible. | 计算机 |
2014-23/1195/en_head.json.gz/39799 | Home News Heritage Main Menu
IBM reaches its 100th anniversary
Written by Historian, Thursday, 16 June 2011
IBM reaches its hundredth birthday today, June 16, 2011. But surely commercial computers are nowhere near 100 years old? True, but International Business Machines started out as a manufacturer of tabulating machines. A hundred years is a long time for any company to have been in business, and IBM started out in a very different world from today's.
The centenary marks the date when Hollerith's Tabulating Machine Company merged with the Computing Scale Company of America and the International Time Recording Company to become the Computing-Tabulating-Recording Company, which produced punch card sorting and tabulating machines together with assorted scales and clocks.
When Thomas J Watson Senior took over its management in 1914 it was deep in debt and close to going out of business. However, his skills as a salesman, learned at the National Cash Register Company (NCR), quickly turned things around, and by 1920 CTR's gross income was $14 million. In 1924, when Watson was 50, he became its chief executive officer and changed the name of CTR to International Business Machines - IBM.
Thomas J Watson Sr (1874-1956)
IBM's first involvement with computers was a gift of 5 million dollars to Howard Aiken's Mark I project - but when the machine was completed in 1944, Aiken pointedly didn't acknowledge IBM's contribution to the achievement. Given that the money had been a gift, this seems unreasonable behaviour on the part of Aiken, and Watson was furious. But he was also motivated to compete, and two years later IBM had produced the SSEC - the Selective Sequence Electronic Calculator.
The SSEC Selective Sequence Electronic Calculator
The SSEC had 12,500 valves, 21,400 relays and was way ahead of anything else. It could store eight 20-bit numbers in electronic memory, 150 numbers on the relays and 20,000 numbers on sixty reels of punched tape. It was more powerful than ENIAC - the first true electronic computer, which was completed a year earlier - and was the first publicly available computing machine; it was used for many years to do work ranging from the design of turbine blades to oil field exploration. However, it was not really a computer; instead, the first IBM computer was the 701 in 1952.
Thomas Watson Senior was famously known for doubting the need for more than a handful of computers. However his son, Thomas J Watson Junior, had a different vision. The father and son relationship was a stormy one but in 1955 his father handed over the reins and shortly after that the ambitious System 360 was launched.
Thomas J. Watson Junior (1914-1993)
Under his leadership Big Blue, as IBM was known due to its choice of company colors, was a force to be reckoned with in early commercial computing. In its heyday, when mainframes arrived on a fleet of trucks, there was a well known saying: "nobody ever got fired for buying IBM."
Nowadays IBM hardly manufactures any hardware - I have to admit that I was under the impression it had got out of hardware altogether but the video below put me right on this. Instead the company has moved on to supplying software and services and while it is still one of the most important and influential companies in IT it doesn't have the high profile it has in the days of the IBM PC and then of the Thinkpad.
The Watson name is still remembered at IBM and was given to the AI machine that came to prominence last year by competing in, and winning, Jeopardy!.
To mark its centenary IBM is encouraging up to 400,000 employees worldwide to skip their usual office work in order to donate time to charitable causes and schools.
Fedora 18 Release Slips Another Week
from the stay-tuned-for-the-thrilling-conclusion dept.
An anonymous reader writes "The next major release of the Fedora Project's GNU/Linux distribution (named Spherical Cow) was originally scheduled for November 16th. However, an ambitious set of new features has resulted in the project slipping way past its scheduled release. It had fallen three weeks behind before even producing an alpha release and nine weeks behind by the time the beta release was produced. A major redesign in the distribution installer seems to have resulted in the largest percentage of bugs blocking its release. The set-back marks the first time since 2005 in which there was only one major Fedora release during a calendar year instead of two. Currently, the distribution is scheduled for release on January 15th."
Re:It doesn't matter.
by Ignacio (1465) writes: on Friday January 04, 2013 @12:18PM (#42476077)
And no one (sane) hates you for that. Fedora isn't a one-size-fits-all distro, nor do they ever want to be one.
I feel like Fedora 18 is a bust
on Friday January 04, 2013 @12:57PM (#42476607)
They should never have merged in the new Anaconda in the state it's in. It is basically impossible to create a new LV or btrfs subvolume and install into it, so you are left with installing into a real partition. And on most of my computers I'm using btrfs or LVM and I've given them the whole HD, so that's not really an option.

Additionally, the UI for selecting where to install into is so confusing that I cannot say with confidence that the install isn't going to wipe out any existing partitions.

The old UI was kind of fiddly, and perhaps it was a bit opaque to newer users since it required some detailed knowledge of what a partition is and how it relates to a physical hard drive, an LVM volume group or a btrfs volume. But at least it worked and you could make it do what you wanted.

Perhaps this new UI will be a lot better in the end. All I know is that merging the work into mainline Anaconda at this stage of its development was a huge mistake. It means that it will be much harder to go back to the old one should the new UI not be ready in time, or prove impossible to build.

I consider it basic software engineering to never count on a feature that isn't done (to the point of having had at least some testing) being available on release. You don't let your salespeople sell it, and you don't announce it. This is something I've always had a lot of respect for Google for: they rarely announce things until they're actually done. Software engineering is too unpredictable to do it any other way.
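For readers who haven't run into this, the storage layout the poster is talking about is normally set up by hand with a couple of commands. The sketch below is purely illustrative and is not part of the original comment; the volume group name, subvolume name, sizes and mount point are hypothetical, and it only shows the kind of pre-existing target the commenter wants the installer to reuse.

```python
# A minimal sketch (not from the original comment) of manually carving out
# an LVM logical volume or a btrfs subvolume for a new Fedora install.
# Names such as "vg_main" and "/mnt/btrfs_top" are hypothetical examples.
import subprocess

def run(cmd):
    """Echo a command and run it, failing loudly on any error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def make_lvm_target(vg="vg_main", lv="fedora18", size="20G"):
    # Create a new logical volume inside an existing volume group,
    # then put a filesystem on it for the installer to point at.
    run(["lvcreate", "-L", size, "-n", lv, vg])
    run(["mkfs.ext4", f"/dev/{vg}/{lv}"])

def make_btrfs_target(top="/mnt/btrfs_top", subvol="root_f18"):
    # Create a new subvolume on an already-mounted btrfs top-level volume.
    run(["btrfs", "subvolume", "create", f"{top}/{subvol}"])

if __name__ == "__main__":
    make_lvm_target()   # or make_btrfs_target(), depending on the layout
```

The complaint, in other words, is not that such layouts are exotic; it is that the redesigned installer UI made creating or reusing them hard to do with any confidence.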
Mobile Monday Miami - The enthusiasm is in your hand.
Set Up Business Operations With Service Desk Software
Tags: company structures, service desk applications
When you are starting out with your business, it is likely that you will have done sufficient market studies and worked out a business plan. It is best to include all the areas that will drain your resources and how you plan to address them. Infrastructure and computer hardware would definitely top the list these days, because they are what make the business scalable. The Internet has made it possible to run a global operation regardless of where your office is. Software such as service desk software would no doubt be an asset. This would be part of the wider office software suite, and your work force would need to be trained in its use.
As long as your business plan includes the tools of the trade, you should be able to see a smooth start to operations. One of the side benefits of having service desk software in place from the start is that you will not have to convince your employees to change from something they are used to! This will save time and effort that can be better used for business development. Adding something like service desk software later in the process would unnecessarily use up time in transitioning from one approach to another. Here's how this can help.
Why Do You Need Service Desk Software?
Company websites use service desk software in order to improve their service to potential customers. Basically, this kind of software helps build trust among loyal customers as well as those who are still deciding whether to buy the products or services. It automatically responds to questions or feedback coming from customers. The software is a great tool to have, considering that it also provides ample benefits to the company website. A smooth workflow is possible with service desk software because of its automatic features. Staff members working on the site will not have a hard time organizing their tasks, thanks to the software.
Reports are also organized and can be collected right away. The software gathers all the data and stores it properly in order to make use of it in the future. It gives an overview of how the company is doing in terms of the products or services provided to customers. The good thing about having this software is the convenience it gives to staff members. It provides an opportunity to get tasks done efficiently without creating hassles. Apart from this, service desk software is a tool to increase the revenue of the business.
Stop Snoring Devices And Their Side Effects
Snoring is never considered a very serious problem, but there are cases of people developing sleep apnea due to snoring. To avoid such complications, you should address snoring early. Lots of people think about using stop snoring devices, and that is a positive step, but using these devices on a permanent basis is not a very good plan. You should look for stop snoring devices that address the cause of snoring instead of providing only temporary relief. Most of the devices available in the market address the breathing issues of people who snore, and by addressing those issues they stop snoring. These kinds of devices can be addictive as well, because once you become used to sleeping with them you will no longer be able to sleep without them. To avoid becoming dependent on these devices you should try skipping a few nights and sleeping without them. If the devices are really helping, you should be able to sleep and snore less than usual. If you are not getting proper sleep without them, then you need to either change the device or consult your doctor about it.
Professional Stop Snoring Devices
If you are looking to control snoring, you should try natural methods before turning to professional stop snoring devices. Most people snore because their air passage is blocked and they cannot breathe through their nose any more. This is a very common problem, and there are some very basic techniques to avoid it. There are many types of nasal openers available in the market, and these openers are very effective as well. When the air passage is open you will be able to breathe through your nose and snoring will not occur. Some people have breathing issues that can be addressed by ordinary oral devices, and these are not very expensive either. You just have to keep the device in your mouth while sleeping, and it makes sure that you breathe properly and avoid snoring. If the cause of your snoring goes beyond all this, then you can turn to professional stop snoring devices. These devices are complex and expensive but highly effective. You can consider them the final option when nothing else works. Once you start using these devices you will have to use them permanently.
Wireless Was An Ethernet Slayer
Its promise has been easy to see: Take away the annoying configurations associated with Ethernet cables and modem lines and provide users with automatic and effortless data transfer on a LAN or WAN connection.
Those promises have been battered by the realities of the marketplace: high costs, low connection speeds, and lots of competing or stuck-in-committee standards.
The notion of small, long-life digital devices that can easily carry around a subset of your centrally managed corporate data has been illusory. To the extent that cellular phones have become business tools, their success has been borne on the shoulders of workers individually adopting the technology.
But that is finally beginning to fundamentally change.
On the cultural side, corporations are slowly integrating wireless phones and other devices into their purchasing plans and, to a lesser extent, their management plans. Management platforms such as Tivoli Systems’ are beginning to embrace wireless or occasionally connected devices with small-footprint agents. At the same time, individual users are turning their cell phones into their primary communication devices. In Europe, where lingering regulation has made wired telephone service quite expensive and not terribly flexible, many users are making rational economic decisions and forgoing a land line altogether.
That movement coincides with the vision increasingly expounded by several networking vendors. Those firms, notably Nortel Networks, have begun in recent months to describe a vision of wireless technology where consumers or corporate users can use a single wireless handset for communications. With intelligent roaming, a user on a corporate campus can use local facilities for communications; once the user leaves the campus and switches to a commercial carrier, that user’s billing and usage information also switches.
The technology to do that is not there today. Wireless LAN networking will always be one step behind wired connections (today it is running comfortably at 10M bps). But for voice and truncated data usage, the notion of corporate use is entirely feasible.
Cell phone manufacturers are also incorporating more corporate technologies into the increasingly digital devices: large directories, sophisticated voice mail and text messaging. The PalmPilot can synchronize with desktop PIM information. The step to integrate those devices with mainstream corporate data is a short one.
For broadband communications, such as high-speed Internet access, cable television services and so on, telecommunications companies are turning to fixed wireless as a quick and relatively painless answer to the “last mile” problem. With the RBOCs’ virtual monopoly on local circuits in the United States, competitors looking to provide high-bandwidth access with those ever-important added services see wireless as a way to sidestep tariffing and provisioning problems.
Sprint just bought People’s Choice to do just that with ION; much has been rumored about MCI WorldCom’s plans. AT&T is the exception: It clearly is leveraging its TCI investment. Equipment providers are eyeing this market as a lucrative one.
Perhaps wireless will soon be known as the rich relation in the telecom family.
Palm Left Bad Taste In Early Adopters' Mouths
Leveraging the Palm platform and wireless infrastructure has allowed 3Com to put together an excellent product and service with the Palm VII. Unfortunately, the per-kilobyte price model used by 3Com’s wireless partner, BellSouth, is way out of line.
Like most wireless services, 3Com and BellSouth have a usage-based pricing strategy with two levels: Basic service costs $9.99 per month for 50KB of data downloads, while a volume service plan costs $24.99 per month for 150KB of data downloads.
What doesn’t work about this is that most of the information and services that 3Com provides through the Palm.Net service, such as headline news, traffic updates and stock sales, are either free from multiple sources or worth paying for on a transactional basis.
So who should be paying for the infrastructure and connection time? Advertisers should be paying to supply information services such as news and traffic information, and for transaction-oriented services, such as stock trades, the fees should be assessed by the transaction. Ideally, the fee would stay the same; 3Com and BellSouth would simply get a cut of the action.
All you have to do is look at the user 3Com has targeted to see that advertising is the way to go. The Palm VII is intended for the highly mobile professional, the person who spends better than half his or her time out of the office. It’s my observation that most people who travel that much (and are interested in keeping up-to-date on news and trading stock at a moment’s notice) generally have money to spend.
These devices and services are all about targeting and capturing the peak marketing demographic. 3Com and BellSouth should be exploiting that opportunity, not their customers.
If the advertising and transaction fee model doesn’t work out, the other way to sell the service would be based on connection time, as with cellular phones. It broadens the appeal of the product and puts the costs in terms the layman can understand. I’m still not sure how 50KB of data translates into the 150 “screens” described in the Palm VII product literature. How many e-mail messages would that be? How many traffic reports or movie listings?
It takes about 1½ minutes to get a movie listing using MovieFone. That costs 15 cents, figuring a dime-a-minute cellular rate. I couldn’t tell you how much finding that same listing would cost with the Palm VII based on the information 3Com has provided so far, but it better not be more.
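To put rough numbers on that comparison, here is the arithmetic using only the figures quoted in this column ($9.99 for 50KB, 150 "screens", a roughly 1.5-minute MovieFone call at a dime a minute); the e-mail size at the end is my own assumption, not a 3Com figure.

```python
# Back-of-the-envelope comparison of the per-kilobyte Palm.Net plan with a
# dime-a-minute cellular call, using only the figures quoted in the column.
basic_fee = 9.99          # dollars per month
included_kb = 50.0        # kilobytes included in the basic plan
screens = 150             # "screens" 3Com says 50KB buys

kb_per_screen = included_kb / screens          # about 0.33 KB per screen
cost_per_kb = basic_fee / included_kb          # about 20 cents per KB
cost_per_screen = basic_fee / screens          # about 6.7 cents per screen

# The MovieFone alternative: about 1.5 minutes at ten cents a minute.
moviefone_cost = 1.5 * 0.10                    # 15 cents

print(f"{kb_per_screen * 1024:.0f} bytes per screen")
print(f"${cost_per_kb:.2f} per KB, {cost_per_screen * 100:.1f} cents per screen")
print(f"MovieFone call: {moviefone_cost * 100:.0f} cents")

# A short plain-text e-mail might run 1-2 KB (an assumption, not a 3Com
# figure), which works out to roughly 20-40 cents under the basic plan.
```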
The pricing problem won’t likely be as daunting for corporations using the Palm VII to give real-time query and response capabilities to a mobile field force. Right now, if a mobile sales agent wants to check a customer order, it is probably an expensive process for most companies. The call in to the office incurs the cost of the call, the cost of the sales agent sitting idle on hold for a couple of minutes and the cost of a call center worker performing the query.
With the Palm VII, at least the cost of the call center worker can be eliminated, and depending on the size of the transaction, the cost of the call could be reduced.
The Open Book Standard Didn’t Last
Adoption of Open Book standard, expected soon, likely to speed acceptance of devices, lower prices
One of Canada’s biggest cellular phone companies is getting into the book-selling business. But it’s not just any book. Beginning in the fall, Rogers Cantel Inc. will sell the SoftBook electronic book system in Canada, hoping businesses will want a portable, page-sized digital reader that’s less expensive and lighter than a laptop, but that can hold 1,500 pages of documents.
“We thought it was a pretty cool product that would open doors in corporate sales,” said Robert MacKenzie, Rogers’ vice-president of PCS and product development.
“We’ve had people say ‘Do this and this for me, I’ll buy the SoftBook and give you our wireless business.’
We think this product will start as a business tool but will become a great consumer product. It will be the ultimate personal digital assistant.
It’s the first Canadian distribution deal for one of the two U.S. electronic books on the market, which have been available over the Web for several months.
So far, most of the material available for electronic books is American. Canadian publishing house McClelland and Stewart Inc., however, has prepared five Canadian titles for the Rocket eBook system in anticipation of a major Canadian bookseller picking up sales rights here in the fall.
The SoftBook (US$599 from SoftBook Press Inc. of Menlo Park, Calif.) and the Rocket eBook (from NuvoMedia Inc. of Mountain View, Calif., list price $499, available through booksellers) are monochrome HTML-based devices. Both have software for converting electronic documents so they can be read on their systems.
Material is downloaded to the SoftBook by plugging it into a phone line. The Rocket eBook, which can hold about 4,000 pages, needs a PC to retrieve documents before being transferred to the reader.
Late this year, Everybook Inc.’s two-screen colour Everybook Dedicated Reader, a PDF-format device which will range from US$500 to US$2,000, depending on the version, will join the electronic book lineup.
The adoption of electronic books has been slow, in part because of the price. As a result, two American online booksellers cut the price of the Rocket eBook this month to US$399.
They won’t get near that price here yet, though. MacKenzie guessed Rogers will sell the SoftBook for $900.
Another move which might spur the acceptance of electronic books is the expected adoption of an Open eBook standard.
Next month, members of a standards group, which includes all three manufacturers, Microsoft Corp. and several publishers, will vote on a draft specification they’ve been negotiating based on HTML and XML.
It will define methods for formatting and delivering content.
When a standard is decided upon, book, periodical and magazine publishers will have confidence that when they convert material they won’t be caught in a standards war, said Marcus Columbano, NuvoMedia’s director of marketing.
“It’s an insurance policy for a content provider,” said Kimberly Woodward, SoftBook’s director of enterprise marketing.
“So far, the majority of our sales have been to corporate customers who are using it to distribute operating procedures, database reports and technical manuals to their business partners.”
In Phoenix, the Arizona Republic has been testing SoftBook with 100 of its newspaper carriers, who need daily updates to their subscriber lists.
Usually it’s done on paper, with the carrier keeping records on cards. But since March, a test group has been downloading a new list to their electronic books nightly with additions and deletions, which of the three publications each subscriber will get the next morning, and directions to each stop.
The pilot has been so successful that Joe Coleman, applications development manager for CNT Corp., the newspaper’s technology subsidiary, is confident all of the paper’s 1,700 carriers will be equipped with SoftBooks in three months.
MacKenzie can see municipalities equipping engineers and inspectors with electronic books to carry blueprints and reports, or service technicians carrying e-books instead of toting repair manuals.
Within 18 months, SoftBook will have a wireless modem capability, he predicted, which will expand its flexibility, as well as give it a closer link to Rogers’ cellular business.
He is also discussing the product’s potential uses with the Maclean Hunter Publishing Ltd. division of the Rogers empire, which puts out a range of trade, business and consumer publications.
“I can see the day where we bundle the SoftBook and wireless and we go after the medical profession, telling them they can get certain publications, medical information and personal applications.”
MacKenzie and others agreed that acceptance by the business sector will help spread popular acceptance of electronic books.
Bell’s Connectivity Hurt Other Providers
A recent fire in a Bell Canada central office cut off service in an area of downtown Toronto, but the effects were felt well outside the city boundaries.
One cellular service provider, Microcell Telecommunications Inc., lost communications between its switching centre in Toronto and about 30 of its cellular sites in places as far away as Barrie, which is about 100 kilometres north of Toronto.
Microcell leases lines from three companies – Bell Canada (which provides local service in Ontario and Quebec), AT&T Canada (which provides long distance service and recently bought Metronet Communications Inc., a competitive local exchange carrier) and cable service provider Shaw Communications – to connect its switching centre to its remote cellular sites.
The sites that Microcell lost represented about 15 per cent of the company’s sites in the Toronto area.
“They weren’t in one big clump,” said Anthony Schultz, vice-president of network planning and operations for Microcell Connexions, the company’s personal communications service (PCS) division. “They were scattered here and there.”
Clearnet Communications Inc. of Pickering, Ont. experienced an increase in calls (up 20 per cent from a typical Friday) despite the fact that it lost connections to some cellular sites.
“Some people said ‘My phone didn’t work where I was, but I walked a block away and got the next cell site over,’” said Mark Langton, Clearnet’s marketing director.
Most of Microcell’s disconnected sites were in rural areas where the peak demand times are during the rush hour.
The Bell service interruption lasted between 10 a.m. and 3:15 p.m. on July 16, and most service was restored by the following day.
It all started when a worker dropped a tool which caused a short circuit in an electrical panel at Bell’s central office on the western fringe of Toronto’s central business district.
Company officials believe the short circuit caused a fire, which in turn triggered the sprinkler system and prevented workers from turning on the emergency backup power.
One software project manager who works in the area and didn’t want his name used said his company could not make any calls or send e-mails to clients.
“As an engineer, I was surprised that Bell didn’t have a re-routing system set up,” he said.
“I was under the impression that if lines go down, that Bell is able to switch over service almost seamlessly, and it seemed like they weren’t able to do that.”
But an official at UUNet Canada Inc., a Toronto-based network service provider that relies on Bell, did not blame the company.
“It was simply somebody making a mistake and they did their best to get the service back up and running,” said company spokesman Richard Cantin. “It is one of those Murphy’s Law things.”
Although UUNet’s T1 network was not affected, any users that connected to the backbone through its central office would not have been able to get online.
The same went for PSINet Ltd., which, like UUNet, operates a T1 backbone network.
In the case of UUNet, however, the service outage cost the company over $20,000, Cantin said, because it has service level agreements (SLAs) with some of its customers.
In the SLA, UUNet promises to give users one free day of service for every hour of service outage – regardless of how the service was lost.
“We know we don’t control the world,” Cantin said, adding UUNet plans to continue offering SLAs.
The company plans to ask Bell for a refund for its local service, but that amount – which UUNet would not disclose publicly – will not be enough to compensate the firm for the amount that it will have to refund to customers with whom it has SLAs.
“I wouldn’t call it negligible,” Cantin said of the money lost. “(It’s) a huge chunk of dough.
“That’s chewing into the margin, not the revenue, and the margin in the Internet world is not a huge thing.”
Despite UUNet’s loss, Cantin said the service outage probably affected businesses more than it affected telecommunications firms.
“This morning I was at my chiropractor and their business literally shut down,” he said.
“They have nothing to do with communications, they have nothing to do with the Internet, but their phones weren’t working, the Interac system wasn’t working, so they couldn’t accept new bookings and they couldn’t get the people there to pay.”
Bell spokesperson Irene Shimoda said the Interac system was not shut down, but she added connections between some point of sale (POS) terminals in Toronto and the Interac system were down.
Wireless Speeding Along Its Way
Time is money. That’s never been more true than in today’s fast-twitch e-business world. For some organizations, however, time has always been more important even than money. For the New York Presbyterian Hospital Organ Preservation Unit, which procures and preserves human organs for transplant, a few minutes can be the difference between life and death. Once removed, organs remain vital for 6 to 24 hours. To cut down on time-consuming faxes and phone calls and to speed life-saving organs to patients, the hospital unit two months ago turned to a wireless Internet connection from GoAmerica Communications Corp. of Hackensack, N.J.
Now preservationists, working at a patient’s bedside or in a moving ambulance, use laptop PCs equipped with special modem cards and antennas to post digital pictures of organs and the organ’s vital statistics on a Web site operated by the International Society for Organ Preservation, in New York. On the site’s OrganView page, doctors learn of the availability of organs instantaneously.
“Fifty percent of our work is outside in the field, in transit or in other [places such as operating rooms],” said Ben O’Mar Arrington, who heads the organ preservation unit. “We don’t know if the [remote] areas are going to be ready for [Internet access]. By being mobile, and with a system we’re sure is going to work, it’s the best way to utilize the information and send it.”
As the Internet increasingly becomes their lifeblood, more and more organizations are beginning to look to wireless technology to allow employees such as traveling business executives, salespeople and field workers to log on from anywhere at any time.
But mobility is only part of the reason wireless Internet access is beginning to take off. So-called fixed wireless technologies such as Multichannel Multipoint Distribution System are emerging as viable alternatives for connecting offices and home workers to the Web at broadband speeds. Fixed wireless technologies are particularly attractive to small and midsize businesses. That’s because such companies can use wireless technologies to get high-speed Internet access for much less than the thousands of dollars a month it would cost them to lease a T-1 line. The technologies are also valuable as a way to gain high-speed access when DSL (digital subscriber line) and cable aren’t available. Despite lingering IT concerns about wireless technologies’ security, reliability and coverage, their adoption for Internet access is accelerating.
“The Internet is driving [wireless] adoption because of mobile workers,” said Roberta Wiggins, an analyst at Boston-based consultancy The Yankee Group Inc. “Data is becoming more a part of our business lives. We use the Internet so much as a resource tool, so mobile workers in the field need the same access to information.”
Buying sprees
Fueling wireless’s momentum, key vendors such as Microsoft Corp., Apple Computer Inc., Sprint Corp. and MCI WorldCom Inc. have begun to invest. For example, Microsoft last month acquired STNC Ltd., of the United Kingdom, a maker of wireless communications software for providing Internet access from mobile telephones. At the same time, MCI WorldCom and Paul Allen’s Vulcan Ventures Inc., of Bellevue, Wash., are each investing $300 million into Metricom Inc., of Los Gatos, Calif., which will allow it to expand Metricom mobile wireless Ricochet service nationwide. And MCI and Sprint have recently gone on a wireless technology buying spree, acquiring broadband wireless providers.
Vendor interest in wireless Internet access technologies doesn’t stop there. Microsoft, of Redmond, Wash., and Qualcomm Inc., of San Diego, last year formed a joint venture called WirelessKnowledge LLC to develop wireless access from smart phones, PDAs (personal digital assistants) and handheld devices to Microsoft Exchange features such as e-mail, calendaring and contact lists. In February, Cisco Systems Inc., of San Jose, Calif., and Motorola Inc., of Schaumburg, Ill., formed an alliance to develop Internet-based wireless networks. 3Com Corp., of Santa Clara, Calif., and Aether Technologies International LLC, of Owings Mills, Md., announced in June a joint venture to create a wireless data service provider called Open Sky to bring Web, e-mail and corporate intranets to cell phones, pagers and handhelds.
3Com’s Palm VII, which incorporates wireless Internet access, has motivated e-businesses to begin developing content and services for mobile users. For example, online securities brokers Charles Schwab & Co. Inc., of San Francisco, and Fidelity Investments, of Boston, have announced plans to roll out Web-based trading sites for PDA-equipped mobile investors.
For IT managers interested in adopting wireless data technologies, the arrival of more players and more robust services cannot come soon enough. For many mobile wireless Internet users, transmission speeds remain relatively slow. New York Presbyterian users, for example, typically connect at about 19K bps. And, today, the wireless connection to the Web is not always reliable.
But the benefits of being mobile and no longer having to worry about finding a landline to plug into outweigh the slower speeds and occasional interruptions, he said. The cost also fits Arrington’s budget. The GoAmerica service costs $59 a month, and the modem cards cost $400 each. New York Presbyterian currently has four wireless laptops.
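To see why a 19.2K-bps link is tolerable for this kind of work, it helps to work out the transfer times involved. The sketch below is an illustration only: the 100KB photo size is an assumed figure, not one supplied by the hospital, and the link speeds are the ones quoted elsewhere in this story.

```python
# How long a wireless upload takes at the speeds mentioned in the story.
# The 100 KB photo size is an assumed figure used purely for illustration.
def transfer_seconds(size_kb, link_kbps):
    bits = size_kb * 1024 * 8          # payload in bits
    return bits / (link_kbps * 1000)   # link speed quoted in kilobits/sec

photo_kb = 100
for label, kbps in [("GoAmerica wireless link", 19.2),
                    ("Ricochet today", 28.8),
                    ("Ricochet (planned)", 128),
                    ("T-1 / fixed wireless", 1540)]:
    print(f"{label:24s} {transfer_seconds(photo_kb, kbps):6.1f} s")
```

At 19.2K bps a 100KB picture takes well under a minute; slow, but fast enough when the alternative is a fax machine and a phone tree.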
Speeding up
Connection speeds for mobile Internet service are poised to increase in the next few months and years. Cellular and personal communications services carriers such as AT&T Wireless Services Inc., Bell Atlantic Mobile and GTE Corp.’s GTE Wireless subsidiary are moving to increase wireless network data speeds from today’s typical 14.4K bps and lower to 384K bps over the next three to five years. Speeds could reach 64K bps in the next year, said Yankee’s Wiggins.
Meanwhile, Metricom is revamping its proprietary network that runs its Ricochet service. Now available at 28.8K bps in the San Francisco Bay area, Seattle and Washington, Ricochet is scheduled to reach 128K-bps service by the middle of next year in 12 markets.
Even with a 28.8K-bps connection, Harold Mann is discovering a new level of personal freedom in using the Ricochet service. Mann is one of three wireless Internet users at Mann Consulting, a multimedia company in San Francisco.
“At first I was not convinced why I needed to have [wireless Internet access],” Mann said. “[But] it’s the self-reliance; not having to depend on anyone else for a Net connection.”
Despite his initial skepticism, Mann now views the wireless connection on his Apple Macintosh PowerBook G3 as a crucial productivity tool. Mann recalled a situation when he was at a remote California site for the filming of the movie “Deep Impact.” The producers were missing an important graphic, but no one had access to a phone line. Mann clamped on his Ricochet modem, the antenna protruding atop his laptop, and downloaded the image.
Ricochet’s biggest limitation, other than its speed, is its coverage, Mann said. But Metricom is planning to expand service from three to 46 metropolitan markets by mid-2001.
And while providers of mobile wireless Internet access push for higher speeds, fixed services are already connecting users at speeds rivaling T-1 lines, DSL and cable, often for a lot less money. Fixed wireless services are often termed “DSL in the sky” or “wireless cable.” Deployments so far, however, have been mostly only regional.
Where fixed wireless Internet access is available, some IT managers have turned to it as an alternative to the high cost of a T-1 line or in lieu of DSL or cable, which often are not available in their locations. Take Yack Inc., of Emeryville, Calif. When the Web guide to Internet events and chats moved to its new office six months ago, chief technology officer Jasbir Singh needed a way to provide the then six employees with high-speed Internet access, but he couldn’t afford $2,000 to $3,500 for a T-1 connection. Singh tried to get DSL, but both the local phone company, Pacific Bell, and competitor Concentric Network Corp., of San Jose, Calif., told him that Yack was located too far from a central office to use the distance-sensitive service. But Concentric offered a solution: a wireless service it provides through ISP Wavepath, of Mountain View, Calif., that uses a small dish placed on top of the user’s building to provide downstream service at T-1 speeds (1.54M bps) and upstream at 512K bps, Singh said. The cost: not thousands of dollars, but $499 a month. Singh was sold.
Today, the wireless connection continues to be part of Yack’s Internet-connection strategy. But the company two months ago also installed a dedicated T-1 line connecting it to one of its Web-site hosts. As Yack has grown–it now has about 30 employees–it has become able to afford the T-1 connection that Singh sees as more reliable. The wireless connection has evolved into a redundant, high-speed backup rather than the primary Internet connection, Singh said.
Reinforcing the role of fixed wireless as supplemental to T-1 are some lingering limitations in the technology.
But for Scripps Howard Broadcasting’s WXYZ-TV, of Southfield, Mich., a combination of Internet connections–including fixed wireless–provided the best way to stream live news broadcasts over the Web. When station officials began researching options for the project about a year ago, they realized they needed high-capacity bandwidth. A T-1 line seemed like the obvious solution, but Christa Reckhorn, the ABC Inc. affiliate’s director of new media, was concerned about spiking demand.
That’s where SpeedChoice fits in. The wireless Internet service from People’s Choice TV Corp., of Shelton, Conn., provides the station with a 10M-bps downstream connection, more than enough to keep reporters and assignment editors hooked into the Internet to find last-minute news sources. That keeps most of the T-1 connection’s bandwidth free to send a smooth stream of video during four daily newscasts to a hosted server in Seattle, Reckhorn said.
“There’s really no other way we could have done it,” she said. “You’re limited to the speed of a T-1, and we couldn’t afford higher bandwidth. [But] wireless allows you to add bandwidth at the same cost.”
utv question???
Started by Guest_gtc_*
Guest_gtc_*
what is the difference between "scheduled series list" and "auto-record list" in the my shows section?
Both have a place to put a checkmark for no repeats. It would list a show I placed there and say there are so many upcoming recordings. When I click on it the ones listed as upcoming include the repeats even though I put a checkmark in the no repeat box. This kinda threw me and I was hoping you might be able to explain it to me.
Oh, one last one. There is in my shows a history button. Can you delete some of them or does the list just keep on getting longer and longer?
Karl Foster
Here is the difference. Auto record will record a show regardless of the time or channel. For example, if you wanted to record all of the "Friends" episodes that are on, you could do an auto-record and it will pick up all of the "Friends" episodes that are on NBC, TBS, and other syndicated showings, no matter what time they are on. This is handy for shows that are on at different times during the week, like "Big Brother" which is on three days a week at different times. It will pick them all up. You can also narrow an auto-record search to network channels, other channels, etc. It is really handy when you want to record a movie or something that doesn't show up when you do a search, but it will continually look for it and automatically record it for you whenever it comes up. For example, my daughter has been wanting to see "A League of Their Own" and I set an auto-record about a month ago, and it just showed up with a recording scheduled for this Sunday. I didn't have to do anything else besides set the auto-record. You can also set an auto-record to pick up certain topics you might be interested in. I have an auto-record set on mine to record shows that have "F-16" whenever it appears in the title or description. A series record will record a series when it is on at the same time. For example, "Friends" on a series record would record it on NBC on Thursday nights (or any other night that it shows up at the same time). The history will show the last 250 transactions. If you recorded something it will show up, and then it will also show up when you erase it. It moves pretty fast.
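Put in rough programmatic terms, the difference is simply which fields of a listing have to match. The sketch below is only an illustration of the rule described above, not how the UltimateTV software is actually implemented, and the sample listings are made up.

```python
# Illustration of the matching rules described above -- not actual UTV code.
# An auto-record matches on title/keyword across any channel and time; a
# series record is pinned to the channel you picked the episode on.
listings = [
    {"title": "Friends", "channel": "NBC", "time": "Thu 8:00"},
    {"title": "Friends", "channel": "TBS", "time": "Mon 6:30"},
    {"title": "Big Brother", "channel": "CBS", "time": "Wed 9:00"},
]

def auto_record(listings, keyword):
    # Record everything whose title mentions the keyword, anywhere, any time.
    return [l for l in listings if keyword.lower() in l["title"].lower()]

def series_record(listings, title, channel):
    # Record only showings of that series on the channel you selected.
    return [l for l in listings
            if l["title"] == title and l["channel"] == channel]

print(auto_record(listings, "friends"))           # NBC and TBS showings
print(series_record(listings, "Friends", "NBC"))  # NBC showing only
```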
"If you took the time to get to know me, well, you'd be wasting your time. I am exactly who you think I am." - Earl Hickey
Hi karl. Thanks for the reply. I think I am beginning to understand but one thing I am still wondering about is how it does repeats. I set a series record for "the man show" using sunday (I think) as the episode I marked. In the list of upcoming shows though it lists episodes airing tomorrow and saturday even though I marked the show starting sunday and the earlier ones plus others are repeats.
Even though the "repeat" shows show up as upcoming recordings, they won't record as long as you have checked the "Don't Record Repeats" button. The series record will pick up every show that is on that channel (249) that has the same title around the same time. Also, if the channel description says "No Information Available" it will get recorded as the UTV doesn't know if it is a repeat or not. Comedy Central is famous for having many shows that say "No Information Available" in the description. Hope that helps.
Sounds good. Thanks for the help and advice. I'm still learning but so far it's great.
ERCIM News No.42 - July 2000
WWW9 attracted over 1400 Participants to Amsterdam
by Ivan Herman
The 9th World Wide Web conference hosted by CWI in Amsterdam attracted more than 1400 participants (55% Europeans, almost 40% from the USA and Canada). 20 companies and organizations (including ERCIM, as well as Sun, IBM, ACM, Philips, the Internet Society of the Netherlands and UPC) were involved through some form of sponsorship, an exhibition booth, or as co-organizer. 30% of the participants came from academic institutions (universities, research centres, museums or galleries), and over 50 reviewed academic papers were presented (out of 280 submissions).
Such dry statistical facts do not convey the exciting atmosphere during the week of 15-19 May, when the 9th World Wide Web conference was held in Amsterdam. These conferences have traveled all over the world, from Santa Clara, in California, through various European cities to Brisbane, in Australia. They have become the primary meeting places of Web experts worldwide, where the latest technologies are presented and discussed. Amsterdam was no exception.
The WWW conferences are not trade shows; they are typically attended by techies, with only a few suits around. This determines the nature of the conference programmes, too. At WWW9, the technical paper sessions were complemented by a series of Web & Industry sessions (featuring such companies as General Motors, Elsevier, or Nokia), where industry representatives presented their visions for the future and the technical challenges they face in realizing these; panels on XML protocols, WAP and its connection with the Web, graphics techniques on the Web (such as Web3D, SVG, or WebCGM), and Web internationalisation generated passionate debates; 90 posters triggered further technical discussions around a wide diversity of topics. There were five keynote speakers, coming from such companies as Ericsson, Philips or Psion. A series of tutorials and workshops preceded the core conference; a so-called Developers Day, which gave speakers the opportunity to dive into the most intricate details of their work, closed the event.
The evolution of the mobile Web was one of the main topics that spread throughout the conference. The term mobile is very general: it refers to mobile phones with WAP facilities, but also to PDA-s like Psions or Palms, or to applications used, for example, in the automobile industry. This new phenomenon raises a number of new challenges, from protocol level to application. There were tutorials and developers day sessions on the subject; the opening and the closing keynotes (Egbert-Jan Sol, Ericsson, and Charles Davies, Psion, respectively) both gave a thorough overview from their perspective. It was a nice coincidence that this conference took place in Europe this time; the Old Continent has a considerable advantage over the US in the mobile Web area, it was therefore quite appropriate that this topic dominated a conference held in Amsterdam.
The World Wide Web conferences have a traditional connection with the World Wide Web Consortium; indeed, the conference is the most important annual public appearance of W3C, where the newest developments are presented. In Amsterdam the W3C track sessions, which included presentations on new topics and specifications like SVG, P3P, XML Schemas, or the Web Accessibility Initiative, were extremely popular and well attended.
Of course, the Web has also become a social phenomenon. One new aspect of the Amsterdam conference was that social issues were brought to the fore, too. The keynote address of Lawrence Lessig (Harvard Law School), talking about the issues of government control on the Web, or about trademark and patent problems, was certainly one of the highlights of the conference. A separate, parallel track was entirely devoted to cultural activities and the Web. Virtual museums and galleries, Web-based architectural models, metadata and property right problems, etc., all raise new challenges to the technical community.
For all those who could not make it to Amsterdam, the proceedings of the technical papers are available (published by Elsevier, Amsterdam) and, of course, accessible on the Web (http://www9.org). In the coming weeks the presentation materials of the keynote speakers will be put on the web site, too, so that everybody can have an impression of the conference. The next conference in the series will be in Hong Kong, May 2001.
WWW9 website: http://www9.org
Ivan Herman - CWI, co-chair of WWW9
E-mail: Ivan.Herman@cwi.nl
War for Linux Is Lost - Almost
posted by Dmitrij D. Czarkoff on Tue 5th Dec 2006 18:30 UTC The title of this article probably seems completely wrong to you. Naturally it would, when you read something like this every day. But I maintain that all of this is a big mistake, if not worse. I am sure that Linux is now close to extinction, and is getting ever closer to the point of no return.
Before we enter the discussion, please accept the following: in this article I am not giving any opinion on the topic of software freedom or openness. Nor do I discuss the pros and cons of the UNIX way and WYSIWYG. Everything said about these issues merely describes the situation and never expresses an attitude.
Linux's success
Linux-based operating systems are now on the rise. Linux is run on numerous systems, from Internet servers to employees' and home users' desktops. More and more companies (and even administrative bodies) are moving to Linux. Many well-known software vendors are abandoning their previous system software projects in favour of Linux (as Novell and PalmSource did) or entering the OS market with new Linux-based solutions (just like Oracle). Many software developers and even software houses (like Sun) are porting their core applications and services to Linux. Linux is widely recognized as a successful project and a reliable business platform.
Linux advocates try to make us think that this process shows the success of Linux and, by extension, UNIX's victory over Windows and its kin. Is it really so?
My answer is: no.
Linux was just one of numerous projects that happened to rise because it was distributed as Free Software and supported by the FSF. This publicity made it possible to start several commercial projects based on Linux (namely Red Hat, SuSE, Mandrake and some Debian derivatives). The rise of free-of-charge Linux-based systems attracted the attention of software vendors, who began to share in the benefits of Linux's publicity by contributing to the project.
Positioned against Microsoft's monopoly over the system software market, Linux had to keep competing with its rival. Since customers believed that desktop software was exactly what Windows offered, vendors started to invest in the projects that shared this view. Ever since, Linux has become more and more like Windows, providing the same user experience and utilities resembling those of Windows and its kin.
The GNOME project is a good example. Initially intended to provide the IBM OS/2 user experience, it gained vendor attention. When the project's officials started to state that the project's next goal was the Windows UI, it became the default desktop in some commercial distributions (including Red Hat), and GNOME adoption accelerated when the developers started to reduce the number of features, making it really no more difficult to customize than Windows' Shell32 UI.
The situation is fundamentally bad. Do you want to know why? Stay tuned.
The matter of OS choice
A user should never have to mind which operating system he uses. The computer is simply a tool for completing the user's tasks, so the only valid factor to take into consideration when choosing an operating system is its default set of approaches. That is to say, your system must provide you with the instruments that best match your mindset and your way of working.
These days there is another issue that some of us take into consideration: the freedom delivered by the software. Those who believe that the software they use should be easily customizable and modifiable choose among the open source operating systems. Those who think that the user must be given the freedom to use a program any way he feels appropriate, to share it without charging a fee for such sharing, and to make his modifications of the program accessible to the general public, have to choose among the so-called Free Software operating systems.
Currently only a handful of UNIXes are both compliant with Free Software or open source ideology and stable enough to be used in mission-critical systems, so open source and Free Software followers have little choice. But this situation is starting to improve: the ReactOS, Haiku and GNUstep projects are under active development, so we are about to see Free Software operating systems in the style of Windows, BeOS and MacOS X respectively.
Today we have two different styles of operation available: WYSIWYG and non-WYSIWYG (I can't recall any good, widely known word for the latter and don't want to introduce my own term). The first is the natural domain of Microsoft Windows, Apple MacOS (including OS X) and BeOS (now ZETA). The latter once used to be the default UNIX style. Things have changed dramatically since then.
The WYSIWYG way and the traditional UNIX way of accomplishing a task are incompatible. WYSIWYG (an acronym for What You See Is What You Get) introduces the user to a graphical environment where the result of every manipulation is displayed as it will appear in the final output. For example, you are shown the document and you see how it will look. But there is no room for the logical structure of a document in WYSIWYG software. The other way puts it differently: you are presented with a plain text document with markup describing both logical structure and formatting, but you are given no idea how it will all look on paper.
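To make the distinction concrete, here is a small illustration of the non-WYSIWYG workflow (my own sketch, not the author's): the source carries only logical structure, and separate rendering steps decide how it looks.

```python
# Illustration of the non-WYSIWYG workflow: the author writes logical
# structure; separate rendering steps decide what it looks like.
document = [
    ("title",     "Quarterly Report"),
    ("section",   "Results"),
    ("paragraph", "Revenue grew in every region."),
    ("emphasis",  "Costs, however, grew faster."),
]

def render_html(doc):
    # One possible rendering of the logical structure.
    tags = {"title": "h1", "section": "h2", "paragraph": "p", "emphasis": "em"}
    return "\n".join(f"<{tags[kind]}>{text}</{tags[kind]}>" for kind, text in doc)

def render_plain(doc):
    # Another rendering of the very same source: plain text conventions.
    styles = {"title": str.upper,
              "section": lambda s: s + "\n" + "-" * len(s),
              "paragraph": str,
              "emphasis": lambda s: f"*{s}*"}
    return "\n\n".join(styles[kind](text) for kind, text in doc)

print(render_html(document))
print(render_plain(document))
```

The WYSIWYG model collapses these two steps into one: you only ever see a rendering, never the structure behind it.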
I don't know for sure whether it is really handier for the average person or simply has much better publicity, but WYSIWYG is currently widely recognized as the preferable way, making WYSIWYG-oriented operating systems the most popular. As I mentioned above, the default UIs of commercial Linux distributions followed the successful WYSIWYG styles. Promoted as cheaper alternative solutions, they claim to give the user the same level of usability and productivity as Windows while being similar enough to make user transitions nearly seamless.
So we have Linux positioned as a WYSIWYG-oriented operating system. MacOS X, in its turn, is actually a desktop environment based on the UNIX-like Darwin operating system. But the way OS X wraps the underlying OS makes me comfortable excluding it from the list of UNIXes. It would even be reasonable to say that if Apple decided to move OS X from Darwin to some totally different, non-UNIX-like platform, users wouldn't feel the difference. But Linux really is UNIX, and unlike MacOS X, Linux doesn't really try to hide the UNIX nature of the system, although the percentage of UNIX-styled software is decreasing dramatically.
Anyway, the idea of Linux as a UNIX-based Windows clone feels optimistic to everyone except the UNIX-way zealots. But if we take a closer look, we'll find that this model leads Linux to ruin.
"War for Linux 1/2"
National Computer Systems
Maggie Knack
NCS Signs Agreement with NASD RegulationSM
NCS to Deliver Electronic Exams and Continuing Education Internationally
Minneapolis, Minn., November 22, 1999 — National Computer Systems, Inc., (NCS) (NASDAQ: NLCS) announced today that it has been selected by National Association of Securities Dealers Regulation, Inc., (NASD Regulation) to be the exclusive delivery provider of computer-based NASD RegulationSM examinations and continuing education outside of the United States and Canada. NASD Regulation, an independent subsidiary of the National Association of Securities Dealers, Inc., (NASD®) is charged with regulating the securities industry and The Nasdaq Stock MarketSM. Terms were not disclosed.
Securities professionals who trade securities on any U.S. stock exchange on behalf of customers must register with the NASD. As part of the registration process, securities professionals must pass qualification examinations that demonstrate competence in the areas in which they will work. Registered representatives must also participate in periodic computer-based training to ensure that they adhere to continuing education requirements.
NASD Regulation is a pioneer in electronic testing, beginning the practice of delivering licensing exams electronically in 1979. “We are pleased that, through NCS’ unique Internet-based delivery system, we will have the opportunity to offer their advanced testing services to our international registered representatives. We anticipate that NCS’ customer-oriented approach to service will have a positive impact on the representatives’ learning and testing experience,” said Mary L. Schapiro, NASD Regulation president.
NCS will use the technology it currently employs to deliver hundreds of thousands of Information Technology exams for industry leaders such as Microsoft Corporation and Novell, Inc. Delivery of the NASD Regulation exams and continuing education is expected to begin next spring in London, Paris, Frankfurt, Seoul, Hong Kong and Tokyo.
“NCS is delighted to be signing this exclusive agreement with NASD Regulation for international computer-based testing,” commented David W. Smith, president of NCS’ Assessments and Testing business. “This agreement continues our momentum in the professional certification market. We look forward to a strong working relationship with NASD Regulation.”
NASD Regulation oversees all U.S. stockbrokers and brokerage firms. NASD Regulation, along with The Nasdaq Stock Market, Inc., are subsidiaries of the NASD, the largest securities industry self-regulatory organization in the United States.
NCS is a global information services company providing software, services, and systems for the collection, management, and interpretation of data. NCS (www.ncs.com) serves important segments of the education, testing, assessment, and complex data management markets. Headquartered in Minneapolis, NCS has 4,600 employees serving its customers from more than 30 locations worldwide.
NASD Regulation is a service mark of NASD Regulation, Inc. NASD is a registered service mark of the National Association of Securities Dealers, Inc. The Nasdaq Stock Market is a registered service mark of The Nasdaq Stock Market, Inc.
Public Scoping
Public scoping is the process of determining the scope of public concerns, desires, or issues by gathering initial input from a wide variety of sources. This process helps the BLM to better understand the nature and extent of issues and impacts to be addressed in the RMP and the methods by which they will be examined. The scoping period officially began with the publication in the Federal Register of the Notice of Intent to Prepare a Resource Management Plan on March 25, 2005. The comment period ended on May 24, 2005. During those 60 days, open houses were held on May 2, 2005 (Winnemucca, NV); May 3, 2005 (Lovelock, NV); May 4, 2005 (Gerlach, NV); and May 5, 2005 (Reno, NV). For more information please refer to the Scoping Summary Report. The public was also invited to comment on the Draft RMP/EIS during the public review period described below.
Development of Planning Issues and Criteria
During May and June of 2005, BLM resource specialists, along with Tetra Tech, Inc., considered comments received during the scoping period and developed a list of planning issues and criteria.
Data Collection and Analysis
From March 2005 to September 2006, baseline studies were conducted to gather information on the current condition of resources and management of those resources.
Formation of Alternatives
From June 2005 through December 2009, the Winnemucca District and Tetra Tech resource specialists, with the assistance of cooperating agencies and the Resource Advisory Council Subgroup, developed a set of alternatives that could address the resource management issues and criteria raised during the public scoping period. One of the alternatives was to continue with the current management as delineated in the Sonoma-Gerlach and Paradise-Denio MFPs. The range of alternatives was included in the Draft RMP and Environmental Impact Statement (EIS). After thorough analysis and much deliberation, the Winnemucca District Manager selected a preferred alternative. The Draft RMP/EIS, including the preferred alternative, was reviewed at all levels of the BLM: the District Office, Nevada State Office, Washington Office and the Solicitor's Office.

Public Review of Draft RMP/EIS
The BLM released the Draft RMP/EIS to the public for a 90-day review period on June 25, 2010. The comment period was extended until October 25, 2010. During this period, public comments were received through a variety of avenues. Open house meetings were held on Monday, July 26, in Winnemucca; Tuesday, July 27, in Lovelock; Wednesday, July 28, in Gerlach; and Thursday, July 29, in Reno. Comments received during this period were gathered, organized, analyzed and used during the development of the Proposed RMP/Final EIS.
Development of Proposed RMP/Final EIS
Based on consideration of public comments, and in consultation with nine cooperating agencies, a Resource Advisory Council Subgroup, and tribal governments, the WD prepared the Proposed RMP/Final EIS by modifying alternatives; supplementing, improving, and modifying the analysis; and making factual clarifications to maps, figures and tables. Like the draft version, the Proposed RMP/Final EIS received extensive review at the District, State and Washington office levels. The BLM published its Notice of Availability of the Proposed RMP/Final EIS on September 6, 2013. The Environmental Protection Agency also published its notice in the Federal Register (EPA Notice), which initiates a 30-day public protest period until October 7, 2013. A 60-day Governor's consistency review period has also been initiated.

Record of Decision and Final RMP
Upon resolution of all land use plan protests, the BLM will issue an Approved RMP and Record of Decision. The RMP will be implemented and monitored to ensure goals and objectives outlined in the document are being met. The RMP will serve as the Winnemucca District's land use planning guide for future management actions.
Announcing The Capri Connection - The Final Chapter to the Capri Trilogy
Loc: In the Naughty Corner What could be better than strolling around a beautiful tropical island to get rid of the winter blues? You can check out the campaign here. Check out the official Capri website here. Here are the reviews for A Quiet Weekend in Capri and Anacapri.
Quote: The Capri Connection, the 3rd game of the trilogy including A Quiet Weekend in Capri and Anacapri the Dream by Silvio & Gey Savarese, has been launched on Kickstarter! Go and take a look at www.caprisaga.com on the News page. Like the previous two games, The Capri Connection is a PC point-and-click, first-person Adventure Game based on photos and ambient animations, where the setting is a real place and most of the characters are real people, often performing in the game as they do in their real life. The Capri Connection will be loosely linked to the previous games and can be played independently. Different from the other two games of the trilogy, the setting will be extended from Capri to other gorgeous places nearby, such as the Amalfi Coast, Naples and other islands of the bay of Naples. As in the previous games, The Capri Connection will have the same user interface and will be available in Adventure Game Mode, Walking Mode and in 3 language options: English, Italian and Italian with English subtitles. The Capri Connection will run on Windows operating systems, such as XP, Vista, Windows 7 and Windows 8. The Capri Connection is produced by the authors and funded via Kickstarter, a powerful tool for funding independent developers' projects. The authors encourage adventure gamers all over the world to help in the funding of this project by pledging here on Kickstarter and selecting the Pledge/Reward they prefer. Help continue the trilogy by supporting The Capri Connection, an old-style Adventure Game where the plot must remain a mystery until the first player tries to unfold it! Happy Gaming! Ana _________________________
Re: Announcing The Capri Connection - The Final Chapter to the Capri Trilogy
Thanks, Sheriff. I, for one, really hope they are successful. I couldn't get into the first game at its original release; it wasn't until a few years ago, when I gave the updated versions on The Mysteries of Capri DVD a try, that I really enjoyed the vacation on the Isle of Capri. Enjoyed both games but favor the second as the better of the two. Here's hoping that The Capri Connection will be even better.
Thanks, Ana. I really hope this project succeeds and that this game will be made.
I loved Anacapri the Dream. Some of the puzzles were insanely difficult, but it was so gloriously beautiful to travel through! Thanks, Ana.
Thank you, BrownEyedTigre
Good News Ana!
OH MY GOODNESS please tell me this is true and not that I'm reading this while I'm sleeping Oh My Goodness!!! I'm so HAPPY! yippee can't wait Thank You Thank You Thank You!!!!!! Thank You Ana, Bets, Sylvio and Guy
Just a reminder, it will only come to light if the campaign succeeds. If you really want the game to be developed you will have to back the Kickstarter campaign.
For those following, Gey Savarese, the co-developer, along with his son Silvio, has posted this over at Adventure Point in response to some concerns about their Kickstarter campaign and the DRM: http://adventurepoint.forumotion.com/t1298-want-a-third-capri-game#15871
This campaign ends on April 30 but they still have a long way to go. Maybe some people have forgotten about it because they chose a 60-day period for the campaign. I bucked my tendency to procrastinate and pledged early on this one. Hope to see them make their goal... A Quiet Weekend in Capri was a very enjoyable game.
I point, therefore I click.
velorond
I really hope they get to do this game! I played Anacapri a while back and it was such a nice and relaxing experience.
Playing the Capri saga games has been my only vacation for years. I hope the 3rd game will get made regardless of Kickstarter success - as only 15 hours remain.
Karen
What's the latest on Capri? Will it play on Win 7 & 8?
The new Capri funding did not go through. Please post in Glitches regarding the older ones and compatibility.
Ana
MacsPlus merges and moves

May 2012
For business and personal reasons, MacsPlus merged with Capitol Mac, located in the Inner Harbor of Baltimore. Capitol Mac was one choice of several partners. They make service calls, have expertise, and have authorization for Apple service and sales, but most of all they were chosen for the integrity of their management. I found them to be straightforward in their dealings. That should translate into greater customer satisfaction. MacsPlus founder Bruce Greene had never participated in a merger before and is glad to have Capitol Mac taking over clients for the Baltimore metropolitan area. "I still wish that I could go on the service calls personally," says Bruce, "but the personal side of my life brought me back to Minnesota."

July 2012
For those who must deal with the Microsoft OS, I have tried the current beta version of Windows 8. It is quite different from any previous interface, and I am not impressed (except for their ability to not 'get it'). More later.

Aug 2012
Tech Toads, a Maple Grove, Minnesota computer store, has been sending occasional Macintosh referrals my way. Only a few of them actually call me, so Tech Toads and I have agreed in principle that they should take the Macs in and I will go to their store to work on them. We have agreed upon percentages, and look forward to satisfying more customers.
Microsoft Details Investments in Business Process Management Functionality By Editorial Staff On Jun 13, 2006
Manufacturing, retail customers to benefit from extended supply chain functionality via RFID, EDI capabilities in upcoming release
Nashville — June 13, 2006 — At the U Connect Conference for supply-chain management, Microsoft Corp. announced deeper investments in business process management by disclosing plans to ship Microsoft BizTalk Server 2006 R2. Microsoft said the capabilities in the new release of BizTalk Server should enable customers to extend the reach of their business processes.

Scheduled to be available to customers in the first half of 2007, BizTalk Server 2006 R2 will add native support for electronic data interchange (EDI) and AS2, as well as a new set of services for developing and managing radio frequency identification (RFID) solutions. BizTalk Server 2006 R2 will broaden its support for platform technologies such as the 2007 Microsoft Office system and Windows Vista, including core WinFX technologies such as Windows Workflow Foundation and Windows Communication Foundation. Furthermore, BizTalk Server 2006 R2 will provide an adapter framework on top of Windows Communication Foundation to help customers build customized adapters to better support interoperability between applications. Together, the new capabilities are expected to enable customers to increase the reach of their existing business processes to address a host of current and emerging business problems, particularly in the area of end-to-end supply chain and retail operations.

"Our customers are increasingly innovating with processes that reach well outside typical business applications — whether connecting processes with a key supply chain partner or integrating real-time events from a manufacturing device on a plant floor," said Burley Kawasaki, group product manager for BizTalk Server. "In each of these cases, extending core processes outside of the organization is often too costly and too hard to realize, leading to missed opportunities due to a lack of real-time awareness into the processes. The new capabilities in BizTalk Server 2006 R2 will further empower customers to drive innovation by extending their business process management solutions to the right device, the right information and the right trading partner."

Kawasaki said the new supply chain capabilities in BizTalk Server 2006 R2 are designed to drive greater efficiencies in connecting a company's core business processes to those of its external supply chains. In addition to the existing XML-based business-to-business support in BizTalk Server 2006, native support for EDI and the AS2 protocol in BizTalk Server 2006 R2 will enable users to connect their supply chains to key suppliers and trading partners.

Ken Vollmer, principal analyst in the Application Development and Infrastructure research group at Forrester Research Inc., commented, "For many years, larger firms in the retail, manufacturing, healthcare, insurance and other sectors have been able to take advantage of the productivity improvements that the automated exchange of EDI transactions provides, and usage continues to expand among these organizations as they reach out to more trading partners and use an expanded list of EDI transaction sets. Many tier-two and tier-three companies have realized that they can achieve the same benefits of improved value chain coordination that their larger trading partners have, and they are establishing their own hubs for exchanging EDI transactions with their downstream trading partners."*

Microsoft said BizTalk Server 2006 R2 includes new features that provide native support for messaging protocols such as EDIFACT and X12 and the secure communication protocol, EDIINT AS2.
Microsoft is encouraging broader partner and customer feedback through participation in the BizTalk Server 2006 R2 technology adoption program (TAP).

"Enterprises today continue to use EDI as a means to more securely and reliably connect partners to critical supply chain business processes. These investments are fundamental to how they do business," said Jean-Yves Martineau, chief technology officer and co-founder of Cactus Commerce. "By building our Trading Partner Integration and Global Data Synchronization Solutions on top of the supply chain capabilities in BizTalk Server 2006 R2, we will help customers connect more efficiently with their partners and supply chains through the use of EDI and AS2 and emerging technologies such as RFID."

BizTalk RFID is a set of technologies with open APIs and tools to build vertical solutions and configure intelligent RFID-driven processes. It includes the following:
— Device abstraction and management capabilities to help customers manage and monitor devices in a uniform manner. This "plug and play" architecture allows customers to leverage their investments in standard or nonstandard devices by providing a uniform way of managing their device infrastructure.
— An event processing engine that enables customers to create business rules and manage the choreography of event pipelines for RFID events.
— Tight integration with Microsoft Visual Studio and open APIs that allow developers to quickly integrate RFID events with existing business applications, or create their own custom solutions.
— Support for industry standards and an extensible architecture with advanced deployment and monitoring tools that IT professionals can use to meet service-level commitments and reduce long-term costs.
* Trends 2006: Electronic Data Interchange, Forrester Research Inc., October 2005
THE CONCEPT BEHIND LEVEL-5
As a child, I was always playing games. Even after I grew up, I never forgot how fun and elating those experiences were. The Game & Watch series were games that stood out most in my memory. They were low-tech by today’s standards, but I remember being in awe at the fact that I could hold a “world” in the palm of my hand. Perhaps it was then that I saw a true future and potential in games.
When establishing LEVEL-5, I was set on creating games that would bring the children of today the same excitement I felt as a child. This single desire is what inspired us to start this company.
With a corporate stance that left no room for compromise, we have grown today to set even more ambitious goals. During phase 1, our entire team succeeded in working together to gain trust in an industry where we were still considered a start-up developer. In phase 2, we were recognized by players through the success of prominent titles such as DRAGON QUEST VIII. In 2007, our company entered phase 3 with the release of Professor Layton and the Curious Village. Professor Layton was the first game we handled from development through to its release. With this game, LEVEL-5 transitioned from a developer to a publisher in Japan. This transformation exponentially increased our interest in all facets of the game industry. Once we pass phase 3, we will most likely be competing with the top brands of the industry in phase 4.
Making Fukuoka the “Hollywood of the gaming industry” through the Game Factory Friendship, as well as expanding our efforts into movie and anime production are just a few of the challenges we are pursuing. However, LEVEL-5′s main arena will remain games. That is why my fellow staff and I are going to continue the path my heart has followed since day one: the journey to becoming the “number one game brand”. I want us to be known as “the world’s best,” not for the company’s capital or scale, but for the entertainment value of our products.
President and CEO,
Akihiro Hino
Copyright © LEVEL-5 International America, Inc. All rights reserved.
Stephen W. Warren
Executive in Charge for Information and Technology
Stephen Warren joined the Department of Veterans Affairs in May 2007 as the first Principal Deputy Assistant Secretary in the Office of Information and Technology (PDAS/IT) and serves as the Deputy Chief Information Officer for the Department.
As the PDAS, Stephen is the Chief Operating Officer of the $3.3 billion, 8,000-employee IT organization, overseeing its day-to-day activities to ensure VA employees have the IT tools and services needed to support our Nation's Veterans. He successfully led the integration of VA's vast IT network into one of the largest consolidated IT organizations in the world. Prior to assuming his current role, Warren served as Chief Information Officer (CIO) for the Federal Trade Commission and previously as CIO for the Office of Environmental Management, Department of Energy. Beginning in 1982, Warren served nine years of active duty in the U.S. Air Force, where he was involved in a broad range of activities, including research in support of the Strategic Defense Initiative (SDI), support for nuclear treaty monitoring efforts, and service in Korea as a transportation squadron section commander.
Warren has received a number of awards and honors, to include: Federal Computer Week Federal 100 award winner, 2004 and 2012; Presidential Rank Award of Distinguished Executive, 2008; Government Information Security Leadership Award (GISLA), 2006; Service to America Social Services Medal, 2004; AFFIRM (Association for Federal Information Resources Management) Leadership Award for Innovative Applications, 2004; and American Council for Technology Intergovernmental Solutions Awards, 2004. Warren has been widely published on subjects involving nuclear facilities, radioactivity, and related issues. He holds a Master's in Systems Management from the Florida Institute of Technology and a Bachelor's in Nuclear Engineering from the University of Michigan.
Toolbox Software — Toolbox Article
From the April/May 2007 Review of Construction Accounting Software
Toolbox Software has been offering software to construction and construction-related companies for over 10 years. Originally designed by a construction-specific CPA firm, Toolbox offers a non-modular approach, which is most suitable for mid-sized companies. Real-time transaction processing, easy flexibility and numerous customizing options are found throughout Toolbox, including multi-company processing, where users can choose to utilize the same database or create a new one. The program offers easy tracking of liens and release notices, allows users to enter and maintain a history of change orders, provides for the tracking of completion percentages and permits unlimited phase and cost code levels. An unlimited number of user-defined cost types can be set up for new jobs, and indirect job costs can be allocated at any time, even retroactively.
All fixed asset information can be tracked, including cost attached per job. Depreciation data can be maintained for every piece of equipment owned, and recording serial numbers within the program will help to protect against loss. Revenues can be tracked using a variety of criteria, including by salesperson, by region, by project manager or by estimator. An unlimited number of costs can be attached to inventory and non-inventory items. Drilldown capability is provided in most reports in the system, covering all modules. Dashboard drilldown capability is provided in the Job Cost and GL modules. As well, reporting options have been enhanced with upgraded financial reporting capability.

The Toolbox job entry screen is comprehensive, with tabs for entering job description, scales, job budget, scheduled value, cost plus, status, purchase order and subcontractor information, liens, an area to attach files, and notes. Tabs can be customized or eliminated as needed. The Plant module offers unlimited material cost and billing rate scales. Work orders can be assigned to a specific technician and tracked by status or type. Documents can be e-mailed, faxed or saved in PDF format from most screens.
Toolbox has introduced a number of new enhancements with the latest version, including the addition of a series of .NET grids that provide users with exceptional customization abilities for improved workflow. This version also introduces Scheduling, multi-location inventory management, and improved database synchronization. The vendor also noted that it will soon add the ability to drilldown to source transactions and provide automatic updates and expanded Help screens, among other things.
LEARNING CURVE – 4.5 Stars
Toolbox provided a Demo CD, a new version update and detailed instructions, which translated to a painless installation and setup. ODBC compliance makes Toolbox extremely user friendly and allows users to customize the invoice, payroll, purchase order and subcontract entry screens.
The main screen of Toolbox features a drop-down menu bar that provides access to all system features. At the left-hand side of the screen is a vertical bar containing icons that offer one-click access to frequently used functions. Navigation through Toolbox is easy, with data-entry screens laid out in a logical sequence. Users can change the order in which fields appear in the data-entry screens to accommodate the user's preference and allow for smoother data entry. All data-entry screens also contain drop-down lists for easy item lookup. The Help function offers limited assistance, but a user manual is available to assist new users with questions. A series of tabs is available in both the AP and AR data-entry screens that provide access to previously entered transactions such as AP invoices and customer billings. Tabs that are not utilized can be eliminated if desired. Additional system preferences can also be set up using the Utilities function.
MODULES & ADD-ONS – 4.5 Stars
Toolbox includes GL, AP, AR, Job Cost, Payroll, Document Control, Scheduling, Inventory, Plant, Fixed Assets and Estimating. Using a relational database eliminates the need to purchase only certain modules and allows users to have access to the complete Toolbox program upon purchase.
PRODUCTIVITY TOOLS – 4.5 Stars
Toolbox offers numerous productivity tools, including the GL Miscellaneous Names feature, which allows users to record additional contact information when processing cash purchases. A list of active customers can be maintained via the AR function, placing historical customers on the inactive list rather than eliminating them from the database. As well, the Job Cost module offers Estimating System Templates that work with third-party applications. Also of note is the Implementation feature, which assists with all aspects of system implementation, from understanding construction accounting to customizing screen layouts and cost code libraries.
IMPORT/EXPORT & INTEGRATION – 5 Stars
Toolbox is a completely integrated system that eliminates time-consuming batch processing by using one central database for easy data access. The program is ODBC-compliant, which allows for easy export of data to third-party applications such as Microsoft Word and Excel. System utilities included with Toolbox allow for the import of payroll information, pricing lists, and customer and vendor information from third-party applications, as well.
REPORTING – 5 Stars
Toolbox offers a comprehensive list of reports, and all reports utilize the Crystal Reports data engine, which expands customization capability quite nicely. The Custom Reports and Forms function allows for the customization of both system reports and forms. As noted earlier, reports can be e-mailed, faxed, saved in PDF format or exported to third-party applications such as Microsoft Word and Excel. Reports can be customized by modifying, adding, or eliminating columns, both in appearance and location.
SUPPORT & TRAINING – 4.5 Stars
System support is offered via telephone, directly from the Toolbox website, or by fax. Various training options are also offered and are listed on the vendor's website. Cost for training varies, with classes ranging in price from $300 up to $1,000.
RELATIVE VALUE – 4.5 Stars
Toolbox's Job Cost and Accounting Software is an easily navigated product that contains a wealth of features with a friendly user interface. Especially designed for midsize construction companies, Toolbox starts at $6,100 for a single-user system.
This is an excellent choice for those looking for a combination of strength, extreme flexibility and ease of use.
2007 Overall Rating – 4.5 Stars
Glossary
Technical terms about computer viruses and antivirus software.
A
ActiveX: This technology is used, among other things, to improve the functionality of web pages (adding animations, video, 3D browsing, etc). ActiveX controls are small programs that are inserted in these pages. Unfortunately, as they are programs, they can also be targets for viruses.
Address Book: A file with WAB extension. This is used to store information about other users such as e-mail addresses etc.
Administrator: A person or program responsible for managing and monitoring an IT system or network, assigning permissions etc.
Administrator rights: These rights allow certain people to carry out actions or operations on networked computers.
ADSL: This is a kind of technology that allows data to be sent at very high speed across an Internet connection. It requires a special ADSL modem.
Adware: Programs that display advertising using any means: pop-ups, banners, changes to the browser home page or search page, etc. Adware can be installed with or without the user's consent and awareness, and the same applies to the user's knowledge of its functions.
Algorithm: A process or set of rules for calculating or problem-solving.
Alias: Although each virus has a specific name, very often it is more widely-known by a nickname that describes a particular feature or characteristic of the virus. In these cases, we talk about the virus ‘alias’. For example, the virus CIH is also known by the alias Chernobyl.
ANSI (American National Standards Institute): This is a voluntary organization that sets standards, particularly for computer programming.
Anti-Debug / Anti-debugger: These are techniques used by viruses to avoid being detected.
Antivirus / Antivirus Program: These are programs that scan the memory, disk drives and other parts of a computer for viruses.
API (Application Program Interface): This is a function used by programs to interact with operating systems and other programs.
Armouring: This is a technique used by viruses to hide and avoid detection by the antivirus.
ASCII (American Standard Code for Information Interchange): This is a standard code for representing characters (letters, numbers, punctuation marks, etc.) as numbers.
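As a small illustration (a Python sketch added here for clarity, not part of the original glossary), the mapping between characters and their numeric codes can be seen directly:

    # Illustrative sketch: converting characters to their ASCII codes and back.
    text = "Virus"
    codes = [ord(c) for c in text]          # [86, 105, 114, 117, 115]
    print(codes)
    print("".join(chr(n) for n in codes))   # prints "Virus" again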
ASP (Active Server Page): These are particular types of web pages that allow a site to be personalized according to user profiles. This acronym can also refer to Application Service Provider.
Attributes: These are particular characteristics associated to a file or directory.
Autoencryption: The way in which a virus codifies (or encrypts) part or all of itself, making it more difficult to analyze or detect.
AutoSignature: This is normally a short text including details like name, address etc. that can be automatically added to new e-mail messages.
B
Backdoor: This is a program that enters the computer and creates a backdoor through which it is possible to control the affected system without the user realizing.
Banker Trojan: A malicious program which, using different techniques, steals confidential information from the customers of online banks and/or payment platforms.
Banner: An advert displayed on a web page, promoting a product or service that may or may not be related to the host web page and which in any event links directly to the site of the advertiser.
Batch files / BAT files: Files with a BAT extension that allow operations to be automated.
BBS (Bulletin Board System): A system or service on the Internet that allows subscribed users to read and respond to messages written by other users (e.g. in a forum or newsgroup).
BHO (Browser Helper Object): A plugin that runs automatically along with the Internet browser, adding to its functionality. Some are used for malicious ends, such as monitoring the web pages viewed by users.
BIOS (Basic Input / Output System): A group of programs that enable the computer to be started up (part of the boot system).
Bit: This is the smallest unit of digital information with which computers operate.
Boot / Master Boot Record (MBR) : Also known as the Boot sector, this is the area or sector of a disk that contains information about the disk itself and its properties for starting up the computer.
Boot disk / System disk: Disk (floppy disk, CD-ROM or hard disk) that makes it possible to start up the computer.
Boot virus: A virus that specifically affects the boot sector of both hard disks and floppy disks.
Bot: A contraction of the word ‘robot’. This is a program that allows a system to be controlled remotely without either the knowledge or consent of the user.
Bot herder: A person or group that controls the botnet. They are also known as ‘bot master’ or ‘zombie master’.
Botnet: A network or group of zombie computers controlled by the owner of the bots. The owner of the botnets sends instructions to the zombies. These commands can include updating the bot, downloading a new threat, displaying advertising or launching denial of service attacks.
Browser: A browser is the program that lets users view Internet pages. The most common browsers are: Internet Explorer, Netscape Navigator, Opera, etc.
Buffer: This is an intermediary memory space used to temporarily save information transferred between two units or devices (or between components in the same system).
Bug: This is a fault or error in a program.
Bus: Communication channel between different components in a computer (communicating data signals, addresses, control signals, etc).
Byte: This is a unit of measurement of digital information. One byte is equal to 8 bits.
C
Cache: This is a small, fast section of the computer's memory used to hold data that is accessed frequently, so that it can be retrieved more quickly.
Category / Type (of virus): As there are many different types of viruses, they are grouped in categories according to certain typical characteristics.
Cavity: Technique used by certain viruses and worms to make them more difficult to find. By using this technique, the size of the infected file doesn’t change (they only occupy cavities in the file affected).
Chat / Chat IRC / Chat ICQ: These are real-time text conversations over the Internet.
Client: IT system (computer) that requests certain services and resources from another computer (server), to which it is connected across a network.
Cluster: Various consecutive sectors of a disk.
CMOS (Complementary Metal Oxide Semiconductor): This is a section of the computer’s memory in which the information and programs needed to start up the system are kept (BIOS).
Code: Content of virus files -virus code, written in a certain programming language-. Can also refer to systems for representing or encrypting information. In its strictest sense, it can be defined as a set of rules or a combination of symbols that have a given value within an established system.
Common name: The name by which a virus is generally known.
Companion / Companion virus / Spawning: This is a type of virus that doesn’t insert itself in programs, but attaches itself to them instead.
Compressed / Compress / Compression / Decompress: Files, or groups of files, are compressed into another file so that they take up less space.
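To make this concrete, here is a brief illustrative Python sketch (added for clarity, not part of the original glossary); the file and folder names are placeholders:

    # Illustrative sketch: compressing a file into a ZIP archive and decompressing it again.
    import zipfile

    with zipfile.ZipFile("backup.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write("report.txt")           # add a file to the archive (takes up less space)

    with zipfile.ZipFile("backup.zip") as zf:
        zf.extractall("restored")        # decompress the contents into the "restored" folder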
Cookie: This is a text file which is sometimes sent to a user visiting a web page to register the visit to the page and record certain information regarding the visit.
Country of origin: This generally refers to the country where the first incidence of a virus was recorded.
Cracker: Someone who tries to break into (restricted) computer systems.
CRC (CRC number or code): A unique numeric code attached to files that acts as the file's ID number.
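A minimal Python sketch of the idea (added for illustration, not part of the original glossary; the file name is a placeholder): if the CRC computed later differs from the recorded one, the file has been modified.

    # Illustrative sketch: computing a CRC32 checksum for a file and using it
    # to detect whether the file has changed.
    import zlib

    def file_crc32(path):
        crc = 0
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                crc = zlib.crc32(chunk, crc)
        return crc & 0xFFFFFFFF

    recorded = file_crc32("program.exe")
    # ... some time later ...
    if file_crc32("program.exe") != recorded:
        print("The file no longer matches its recorded CRC")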
Crimeware: All programs, messages or documents used directly or indirectly to fraudulently obtain financial gain to the detriment of the affected user or third parties.
CVP - Content Vectoring Protocol: Protocol developed in 1996 by Check Point which allows antivirus protection to be integrated into a firewall server.
Cylinder: Section of a disk that can be read in a single operation.
D
Damage level: This is a value that indicates the level of the negative effects that a virus could have on an infected computer. It is one of the factors used to calculate the Threat level.
Database: A collection of data files and the programs used to administer and organize them. Examples of database systems include: Access, Oracle, SQL, Paradox, dBase, etc.
DDoS / Distributed Denial of Service: This is a Denial of Service (DoS) attack where multiple computers attack a single server at the same time. Compromised computers would be left vulnerable, allowing the attacker to control them to carry out this action.
Debugger: A tool for examining and stepping through the execution of a program, used to analyze its code and behavior.
Deleted items: A folder in e-mail programs that contains messages which have been deleted (they have not been eliminated completely from the computer). After deleting a message containing a virus, it is advisable to delete it from this folder as well.
Detection updated on: The latest date when the detection of a malware was updated in the Virus Signature File.
Dialer: This is a program that is often used to maliciously redirect Internet connections. When used in this way, it disconnects the legitimate telephone connection used to hook up to the Internet and re-connects via a premium rate number. Often, the first indication a user has of this activity is an extremely expensive phone bill.
Direct action: This is a specific type of virus.
Directory / Folder: Divisions or sections used to structure and organize information contained on a disk. The terms folder and directory really refer to the same thing. They can contain files or other sub-directories.
Disinfection: The action that an antivirus takes when it detects a virus and eliminates it.
Distribution level: This is a value that indicates the extent to which a virus has spread or the speed at which it is spreading. It is one of the factors used to calculate the Threat level.
DNS (Domain Name System): System that enables communication between computers connected across a network or the Internet. It means that computers can be located, by assigning comprehensible names to their IP addresses. DNS servers are the computers that handle (resolve) these names and associate them with their corresponding IP addresses.
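For illustration, a short Python sketch of name resolution (added for clarity, not part of the original glossary):

    # Illustrative sketch: asking DNS to resolve a host name into an IP address.
    import socket

    ip = socket.gethostbyname("www.example.com")
    print(ip)   # the IP address currently associated with that name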
DoS / Denial of Service: This is a type of attack, sometimes caused by viruses, that prevents users from accessing certain services (in the operating system, web servers etc.).
Download: This is the process of obtaining files from the Internet (from Web pages or FTP sites set up specifically for that purpose).
Driver / Controller: A program, known as a controller, used to control devices connected to a computer (normally peripherals like printers, CD-ROM drives, etc).
Dropper: This is an executable file that contains various types of virus.
Dynamic Link Library (DLL): A special type of file with the extension DLL.
E
EICAR: European Institute of Computer Anti-Virus Research. An organisation which has created a test to evaluate the performance of antivirus programs, known as the EICAR test.
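As an illustrative sketch (not part of the original glossary), the test is normally used by writing the standard EICAR test string to a harmless file and checking that the antivirus reacts; the exact 68-character string should be verified at eicar.org before relying on it:

    # Illustrative sketch: creating the EICAR test file to check that the
    # resident antivirus protection is working, without using real malware.
    EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
             r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

    with open("eicar_test.txt", "w") as f:
        f.write(EICAR)   # a working antivirus should detect or quarantine this file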
ELF -files- (Executable and Linking Format): These are executable files (programs) belonging to the Unix/Linux operating system.
Emergency Disk / Rescue disk: A floppy disk that allows the computer to be scanned for viruses without having to use the antivirus installed in the system, but by using what is known as the “command line antivirus”.
Encryption / Self-encryption: This is a technique used by some viruses to disguise themselves and therefore avoid detection by antivirus applications.
EPO (Entry Point Obscuring): A technique for infecting programs through which a virus tries to hide its entry point in order to avoid detection. Instead of taking control and carrying out its actions as soon as the program is used or run, the virus allows it to work correctly for a while before the virus goes into action.
Exceptions: This is a technique used by antivirus programs to detect viruses.
Exploit: This can be a technique or a program that takes advantage of a vulnerability or security hole in a certain communication protocol, operating system, or other IT utility or application.
Extension: Files have a name and an extension, separated by a dot: NAME.EXTENSION. A file can have any NAME, but the EXTENSION (if it exists) has a maximum of three characters. This extension indicates the type of file (text, Word document, image, sound, database, program, etc.).
F
Family / Group: Some viruses may have similar names and characteristics. These viruses are grouped into families or groups. Members of the group are known as variants of the family or the original virus (the first to appear).
FAT (File Allocation Table): This is a section of a disk that defines the structure and organization of the disk itself. It also contains the ‘addresses’ for all the files stored on that disk.
File / Document: Unit for storing information (text, document, images, spreadsheet etc.) on a disk or other storage device. A file is identified by a name, followed by a dot and then its extension (indicating the type of file).
Firewall: This is a barrier that can protect information in a system or network when there is a connection to another network, for example, the Internet.
FireWire: This is a high-speed communication channel, used to connect computers and peripherals to other computers.
First Appeared on…: The date when a particular virus was first discovered.
First detected on: The date when the detection of a certain malware was first included in the Virus Signature File.
Flooding: Programs that repeatedly send a large message or text to a computer through messaging systems like MSN Messenger in order to saturate, collapse or flood the system.
Format: The process of defining the structure of a disk, which removes any information that was previously stored on it.
Freeware: All software legally distributed free of charge.
FTP (File Transfer Protocol): A mechanism that allows files to be transferred through a TCP/IP connection.
G
Gateway: A computer that allows communication between different types of platforms, networks, computers or programs.
GDI (Graphics Device Interface): A system that allows the Windows operating system to display presentations on-screen or in print.
Groupware: A system that allows users in a local network (LAN) to use resources like shared programs; access to Internet, intranet or other areas; e-mail; firewalls and proxies, etc.
H
Hacker: Someone who accesses a computer illegally or without authorisation.
Hacking tool: Program that can be used by a hacker to carry out actions that cause problems for the user of the affected computer (allowing the hacker to control the affected computer, steal confidential information, scan communication ports, etc).
Hardware: Term referring to all physical elements in an IT system (screen, keyboard, mouse, memory, hard disks, microprocessor, etc).
Header (of a file): This is the part of a file in which information about the file itself and its location is kept.
Heuristic scan: This term, which refers to problem solving by trial and error, is used in the computer world to refer to a technique used for detecting unknown viruses.
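A deliberately simplified Python sketch of the idea follows (added for illustration, not part of the original glossary); real heuristic engines use far more sophisticated rules, and the indicator list and threshold here are invented:

    # Illustrative sketch: a toy heuristic that scores a file by counting
    # suspicious indicators instead of matching an exact virus signature.
    SUSPICIOUS_MARKERS = [b"CreateRemoteThread", b"keybd_event", b"URLDownloadToFile"]

    def heuristic_score(path):
        with open(path, "rb") as f:
            data = f.read()
        return sum(1 for marker in SUSPICIOUS_MARKERS if marker in data)

    if heuristic_score("sample.exe") >= 2:   # threshold chosen arbitrarily
        print("File looks suspicious - flag it for further analysis")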
Hijacker: Any program that changes the browser settings, to make the home page or the default search page, etc. different from the one set by the user.
Hoax: This is not a virus, but a trick message warning of a virus that doesn’t actually exist.
Host: This refers to any computer that acts as a source of information.
HTTP (Hyper Text Transfer Protocol): This is a communication system that allows web pages to be viewed through a browser.
I
Identity Theft: Obtaining confidential user information, such as passwords for accessing services, in order that unauthorized individuals can impersonate the affected user.
IFS (Installable File System): System used to handle inbound/outbound information transfers between a group of devices or files.
IIS (Internet Information Server): This is a Microsoft server (Internet Information Server), designed for publishing and maintaining web pages and portals.
IMAP (Internet Message Access Protocol): This is a system or protocol which allows access to e-mail messages.
In circulation: A virus is said to be in circulation, when cases of it are actually being detected somewhere in the world.
In The Wild: This is an official list drawn up every month of the viruses reported causing incidents.
Inbox: This is a folder in e-mail programs which contains received messages.
Infection: This refers to the process of a virus entering a computer or certain areas of a computer or files.
Interface: The system through which users can interact with the computer and the software installed on it. At the same time, this software (programs) communicates via an interface system with the computer’s hardware.
Interruption: A signal through which a momentary pause in the activities of the microprocessor is brought about.
Interruption vector: This is a technique used by a computer to handle the interruption requests to the microprocessor. This provides the memory address to which the service should be provided.
IP (Internet Protocol) / TCP-IP: An IP address is a code that identifies each computer. The TCP/IP protocol is the system, used in the Internet, that interconnects computers and prevents address conflicts.
IRC (Chat IRC): These are written conversations over the Internet in which files can also be transferred.
ISDN (Integrated Services Digital Network): A type of connection for digitally transmitting information (data, images, sound etc).
ISP (Internet Service Provider): A company that offers access to the Internet and other related services.
J
Java: This is a programming language that allows the creation of platform independent programs, i.e., they can be run on any operating system or hardware (multi-platform language).
Java Applets: These are small programs that can be included in web pages to improve the functionality of the page.
JavaScript: A programming language that offers dynamic characteristics (e.g. variable data depending on how and when someone accesses, user interaction, customized features, etc.) for HTML web pages.
Joke: This is not a virus, but a trick that aims to make users believe they have been infected by a virus.
K
Kernel: This is the central module of an operating system.
Keylogger: A program that collects and saves a list of all keystrokes made by a user. This program could then publish the list, allowing third parties to access the data (the information that the user has entered through the keyboard: passwords, document texts, emails, key combinations, etc.).
L
LAN (Local Area Network): A network of interconnected computers in a reasonably small geographical area (generally in the same city or town or even building).
Link / Hyperlink: These are parts of a web page, e-mail or document (text, images, buttons, etc.), that when clicked on, take the user directly to another web page or section of the document.
Link virus: This is a type of virus that modifies the address where a file is stored, replacing it with the address of the virus (instead of the original file). As a result, when the affected file is used, the virus activates. After the computer has been infected, the original file will be unusable.
Logic bomb: This is a program that appears quite inoffensive, but which can carry out damaging actions on a computer, just like any other virus.
Loop: A set of commands or instructions carried out by a program repeatedly until a certain condition is met.
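For illustration (a Python sketch added for clarity, not part of the original glossary):

    # Illustrative sketch: a loop that repeats its instructions until a condition is met.
    count = 0
    while count < 3:              # the condition checked on every pass
        print("iteration", count)
        count += 1                # without this the loop would never end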
M
Macro: A macro is a series of instructions defined so that a program, say Word, Excel, PowerPoint, or Access, carries out certain operations. As they are programs, they can be affected by viruses. Viruses that use macros to infect are known as macro viruses.
Macro virus: A virus that affects macros in Word documents, Excel spreadsheets, PowerPoint presentations, etc.
Malware: This term (from MALicious softWARE) is used to refer to all programs that contain malicious code, whether it is a virus, Trojan or worm.
Map: This is the action of assigning a shared network disk a letter in a computer, just as if it were another drive in the computer itself.
MAPI: Messaging Application Program Interface. A system used to enable programs to send and receive e-mail via a certain messaging system.
Mask: This is a 32-bit number that identifies the network a given IP address belongs to. This allows the TCP/IP communication protocol to know if an IP address of a computer belongs to one network or another.
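A short Python sketch of how a mask is used (added for illustration, not part of the original glossary; the addresses are placeholders):

    # Illustrative sketch: using a network mask to decide whether an IP address
    # belongs to a given network.
    import ipaddress

    net = ipaddress.ip_network("192.168.1.0/255.255.255.0")
    print(ipaddress.ip_address("192.168.1.42") in net)   # True - same network
    print(ipaddress.ip_address("10.0.0.5") in net)       # False - different network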
Means of infection: A fundamental characteristic of a virus. This is the way in which a virus infects a computer.
Means of transmission: A fundamental characteristic of a virus. This is the way in which a virus spreads from one computer to another.
Microprocessor / Processor: This is the integrated electronic heart of a computer or IT system e.g. Pentium (I, II, III, IV,...), 486, 386, etc.
MIME (Multipurpose Internet Mail Extensions): This is the set of specifications that allows text and files with different character sets to be exchanged over the Internet (e.g. between computers in different languages).
Modem: A peripheral device, also known as MOdulator DEModulator, used to transmit electronic signals (analogical and digital). It is designed to enable communication between computers or other types of IT resources. It is most often used for connecting computers to the Internet.
Module: In IT parlance, this is a set or group of macros in a Word document or Excel spreadsheet, etc.
MS-DOS (Disk Operating System): This operating system, which predates Windows, involves the writing of commands for all operations that the user wants to carry out.
MSDE (Microsoft Desktop Engine): A server for storing data, which is compatible with SQL Server 2000.
MTA (Message Transfer Agent): This is an organized mail system that receives messages and distributes them to the recipients. MTAs also transfer messages to other mail servers. Exchange, sendmail, qmail and Postfix, for example, are MTAs.
Multipartite: This is a characteristic of a particular type of sophisticated virus, which infects computers by using a combination of techniques used by other viruses.
Mutex (Mutual Exclusion Object): Some viruses can use a mutex to control access to resources (examples: programs or even other viruses) and prevent more than one process from simultaneously accessing the same resource. By doing this, they make it difficult for antiviruses to detect them. These viruses can ‘carry’ other malicious code in the same way that other types, such as polymorphic viruses, do.
N
Network: Group of computers or other IT devices interconnected via a cable, telephone line, electromagnetic waves (satellite, microwaves etc), in order to communicate and share resources. The Internet is a vast network of other sub-networks with millions of computers connected.
Newsgroup: An Internet service through which various people can connect to discuss or exchange information about specific subjects.
Nuke (attack): A nuke attack is aimed at causing the network connection to fail. A computer that has been nuked may block.
Nuker: Person or program that launches a nuke attack, causing a computer to block or the network connection to fail.
O
OLE (Object Linking and Embedding): A standard for embedding and attaching images, video clips, MIDI, animations, etc in files (documents, databases, spreadsheets, etc). It also allows ActiveX controls to be embedded.
Online registration: System for subscribing or registering via the Internet as a user of a product or services (in this case, a program and associated services).
Operating system (OS): A set of programs that enables a computer to be used.
Overwrite: This is the action that certain programs or viruses take when they write over a file, permanently erasing the content.
P
P2P (Peer to peer): A program -or network connection- used to offer services via the Internet (usually file sharing), which viruses and other types of threats can use to spread. Some examples of this type of program are KaZaA, Emule, eDonkey, etc.
Packaging: An operation in which a group of files (or just one) are put into another file, thus occupying less space. Packaging is similar to file compression, but is the usual way of referring to this in Unix/Linux environments. The difference between packaging and compression are the tools used. For example, a tool called tar is normally used for packaging , while zip or gzip -WinZip- are used for compressing.
Parameter: A variable piece of data indicating how a program should behave in any given situation.
Partition: A division of a computer’s hard disk which enables the operating system to identify it as if it were a separate disk. Each partition of a hard disk can have a different operating system.
Partition table: An area of a disk containing information about the sections or partitions, that the disk is divided into.
Password: This is a sequence of characters used to restrict access to a certain file, program or other area, so that only those who know the password can enter.
Password stealer: A program that obtains and saves confidential data, such as user passwords (using keyloggers or other means). This program can publish the list, allowing third-parties to use the data to the detriment of the affected user.
Payload: The effects of a virus.
PDA (Personal Digital Assistant): A pocket-sized, portable computer (also called palmtops). Like other computers, they have their own operating system, have programs installed and can exchange information with other computers, the Internet, etc. Well-known brands include Palm, PocketPC, etc.
PE (Portable Executable): PE refers to the executable file format used by Windows programs (files such as EXE and DLL).
Permanent protection: This is the process that some antivirus programs carry out of continually scanning any files that are used in any operations (albeit by the user or the operating system.) Also known as sentinel or resident.
Phishing: Phishing involves massive sending of emails that appear to come from reliable sources and that try to get users to reveal confidential banking information. The most typical example of phishing is the sending of emails that appear to come from an online bank in order to get users to enter their details in a spoof web page.
Platform: Refers to an operating system, in a specific environment and under certain conditions (types of programs installed, etc.).
Plugin: A program that adds new functionality to an existing system.
Polymorphic / Polymorphism: A technique used by viruses to encrypt their signature in a different way every time and even the instructions for carrying out the encryption.
POP (Post Office Protocol): This is a protocol for receiving and sending e-mails.
Pop-up menu: List of options that is displayed when clicking on a certain item or area of a window in a program with the secondary mouse button (usually the right). These options are shortcuts to certain functions of a program.
Pop-up windows: A window that suddenly appears, normally when a user selects an option with the mouse or clicks on a special function key.
Port / Communication port: Point through which a computer transfers information (inbound / outbound) via TCP/IP.
Potentially Unwanted Program (PUP): Program that is installed without express permission from the user and carries out actions or has characteristics that can reduce user control of privacy, confidentiality, use of computer resources, etc.
Prepending: This is a technique used by viruses for infecting files by adding their code to the beginning of the file. By doing this, these viruses ensure that they are activated when an infected file is used.
Preview Pane: A feature in e-mail programs that allows the content of the message to be viewed without having to open the e-mail.
Privacy policy: This is the document that sets out the procedures, rules, and data security practices of a company to guarantee the integrity, confidentiality and availability of data collected from clients and other interested parties in accordance with applicable legislation, IT security needs and business objectives.
Proactive protection: Ability to protect the computer against unknown malware by analyzing its behavior only, and therefore not needing a virus signature file periodically updated.
Process killer: A program that ends actions or processes that are running (active) on a computer, which could pose a threat.
Program: Elements that allow operations to be performed. A program is normally a file with an EXE or COM extension.
Programming language: Set of instructions, orders, commands and rules that are used to create programs. Computers understand electronic signals (values 0 or 1). Languages allow the programmer to specify what a program must do without having to write long strings of zeros and ones, but using words (instructions) that are more easily understood by people.
Protocol: A system of rules and specifications that enables and governs the communication between two computers or IT devices (data transfer).
Proxy: A proxy server acts as a middle-man between an internal network, such as an Intranet, and the connection to the Internet. In this way, one connection can be shared by various users to connect to an Internet server.
Q
Quick Launch bar: The area next to the Windows Start button or menu, which contains shortcut icons to certain items and programs: e-mail, Internet, antivirus, etc.
R
RAM (Random Access Memory): This is a computer's main memory, in which files or programs are stored when they are in use.
Recycle bin: This is a section or folder on the hard disk where deleted files are stored (provided they haven’t been permanently deleted).
Redirect: Access one address via another.
Remote control: The action of gaining access to a user’s computers (with or without the user’s consent) from a computer in a different location. This access could pose a threat if it is not done correctly or for legitimate purposes.
Rename: Action whereby a file, directory or other element of a system is given a new name.
Replica: Among other things, the action by which a virus propagates or makes copies of itself, with the aim of furthering the spread of the virus.
Resident / Resident virus: A program or file is referred to as resident when it is stored in the computer’s memory, continuously monitoring operations carried out on the system.
Restart: Action whereby the computer is temporarily stopped then immediately starts again.
Ring: A system governing privilege levels in a microprocessor, controlling the operations that can be performed and its protection. There are various levels: Ring0 (administrator), Ring1 and Ring2 (administrator with less privileges), Ring3 (user).
ROM (Read Only Memory): This is a type of memory which under normal circumstances cannot be written on, and therefore its content is permanent.
Root directory: This is the main directory or folder on a disk or drive.
Rootkit: A program designed to hide objects such as processes, files or Windows registry entries (often including its own). This type of software is not malicious in itself, but is used by hackers to cover their tracks in previously compromised systems. There are types of malware that use rootkits to hide their presence on the system.
Routine: Invariable sequence of instructions, that make up part of a program and can be used repeatedly.
S
Scam: Any illegal plot or fraud in which a person or group of persons are tricked into giving money, under false promises of economic gain (trips, vacations, lottery prizes, etc.).
Scanning -ports, IP addresses-: The action of identifying the communications ports and/or IP addresses of a computer and getting information about their status. This action can sometimes be considered an attack or threat.
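As an illustration of the basic operation involved (a Python sketch added for clarity, not part of the original glossary), the example below checks a few common ports on your own machine only; scanning systems you do not own may be treated as an attack:

    # Illustrative sketch: checking which common ports are open on the local machine,
    # which is the same basic operation a port scan performs.
    import socket

    for port in (21, 22, 25, 80, 443):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        is_open = s.connect_ex(("127.0.0.1", port)) == 0   # 0 means the connection succeeded
        s.close()
        print(port, "open" if is_open else "closed")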
SCR files: These files, which have the extension SCR, could be Windows screensavers or files written in Script language.
Screensaver: This is a program that displays pictures or animations on the screen. These programs were originally created to prevent images from burning onto the screen when the computer wasn’t used for a while.
Script / Script virus: The term script refers to files or sections of code written in programming languages like Visual Basic Script (VBScript), JavaScript, etc.
Sector: This is a section or area of a disk.
Security patch: Set of additional files applied to a software program or application to resolve certain problems, vulnerabilities or flaws.
Security risk: This covers anything that can have negative consequences for the user of the computer (for example, a program for creating viruses or Trojans).
Sent items: A folder in e-mail programs which contains copies of the messages sent out.
Server: IT system (computer) that offers certain services and resources (communication, applications, files, etc.) to other computers (known as clients), which are connected to it across a network.
Service: The suite of features offered by one computer or system to others that are connected to it.
Services applet: An applet in Windows XP/2000/NT, which configures and monitors system services.
Shareware: Evaluation versions of a software product that allow users to try out a product for a period of time before buying it. Shareware versions are normally free or significantly cheaper than complete versions.
Signature / Identifier: This is like the virus passport number. A sequence of characters (numbers, letters, etc.) that identify the virus.
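To show how such identifiers are typically used, here is a simplified Python sketch (added for illustration, not part of the original glossary); the patterns are invented placeholders, not real signatures:

    # Illustrative sketch: the core of signature-based detection - searching a file
    # for byte sequences known to identify particular malware.
    SIGNATURES = {
        "Example.Virus.A": bytes.fromhex("deadbeef0102"),
        "Example.Worm.B": b"FAKE-WORM-MARKER",
    }

    def scan(path):
        with open(path, "rb") as f:
            data = f.read()
        return [name for name, pattern in SIGNATURES.items() if pattern in data]

    print(scan("suspect.bin"))   # names of any matching signatures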
SMTP (Simple Mail Transfer Protocol): This is a protocol used on the Internet exclusively for sending e-mail messages.
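For illustration, a minimal Python sketch of sending a message via SMTP (added for clarity, not part of the original glossary; the server name, port and addresses are placeholders):

    # Illustrative sketch: handing a message to an SMTP server for delivery.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "Test message"
    msg.set_content("Sent using the SMTP protocol.")

    with smtplib.SMTP("mail.example.com", 25) as server:
        server.send_message(msg)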
Software: Files, programs, applications and operating systems that enable users to operate computers or other IT systems. These are the elements that make the hardware work.
Spam: Unsolicited e-mail, normally containing advertising. These messages, usually mass-mailings, can be highly annoying and waste both time and resources.
Spammer: A program that allows the mass-mailing of unsolicited, commercial e-mail messages. It can also be used to mass-mail threats like worms and Trojans.
Spear Phishing: This attack uses phishing techniques but is aimed at a specific target. The creator of this type of attack will never use spam to obtain a massive avalanche of personal user data. The fact that it is targeted and not massive implies careful preparation in order to make it more credible and the use of more sophisticated social engineering techniques.
Spyware: Programs that collect information about users' browsing activity, preferences and interests. The data collected is sent to the creator of the application or to third parties, and can be stored in a way that it can be recovered at another time. Spyware can be installed with or without the user's consent and awareness, and the same applies to the user's knowledge of what data is collected and how it is used.
SQL (Structured Query Language): A standard programming language aimed at enabling the administration and communication of databases. It is widely used in the Internet (e.g. Microsoft SQL Server, MySQL, etc).
Statistics: A sample of malware has statistics whenever its infection percentage is among the 50 most active threats.
Status bar: A section that appears at the bottom of the screen in some Windows programs with information about the status of the program or the files that are in use at the time.
Stealth: A technique used by viruses to infect computers unnoticed by users or antivirus applications.
String: A sequence of characters (letters, numbers, punctuation marks etc.).
Sub-type: Each of the sub-groups into which a type is divided. In this case, a group of viruses or threats within the same category or type, with certain characteristics in common.
Symptoms of infection: These are the actions or effects that a virus could have when it infects a computer, including its trigger conditions.
System services: Applications which normally run independently when a system is started up and which close, also independently, on shutting down the system. System services carry out fundamental tasks such as running the SQL server or the Plug&Play detector.
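For readers who want to see which services are running on their own machine, here is a small sketch using the third-party psutil package (an assumption; it is not part of Windows or of this glossary) on a Windows system.

```python
import psutil  # third-party package; the API below is Windows-only

# List installed Windows services and highlight the ones currently running.
for service in psutil.win_service_iter():
    if service.status() == "running":
        print(f"{service.name():<30} {service.display_name()}")
```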
Targeted attack: Attacks aimed specifically at a person, company or group and which are normally perpetrated silently and imperceptibly. These are not massive attacks as their aim is not to reach as many computers as possible. The danger lies precisely in the customized nature of the attack, which is designed especially to trick potential victims.
Task list: A list of all programs and processes currently active (normally in the Windows operating system).
Technical name: The real name of a virus, which also defines its class or family.
Template / Global template: This is a file that defines a set of initial characteristics that a document should have before starting to work with it.
Threat level: This is a calculation of the danger that a particular virus represents to users.
Title bar: A bar at the top of a window. The title bar contains the name of the file or program.
Track: A ring on a disk where data can be written.
Trackware: All programs that monitor the actions of users on the Internet (pages visited, banners clicked on, etc.) and create a profile that can be used by advertisers.
Trigger: This is the condition which causes the virus to activate or to release its payload.
Trojan: Strictly speaking, a Trojan is not a virus, although it is often thought of as one. Trojans are programs that enter computers appearing to be harmless, install themselves and carry out actions that compromise user confidentiality.
TSR (Terminate and Stay Resident): A characteristic that allows certain programs to stay in memory after having run.
Tunneling: A technique used by some viruses to foil antivirus protection.
Updates: Antiviruses are constantly becoming more powerful and adapting to the new technologies used by viruses and virus writers. If they are not to become obsolete, they must be able to detect the new viruses that are constantly appearing. To do this, they rely on what is called a Virus Signature File, which must be updated regularly.
UPX: This is a file compression tool (Ultimate Packer for eXecutables) which also allows programs compressed with this tool to be run without having to be decompressed.
URL (Uniform Resource Locator): Address through which to access Internet pages (or other computers).
Vaccination: An antivirus technique that stores information about files so that possible infections can be detected when a change to a file is noticed.
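The technique described above is essentially file-integrity checking: store a fingerprint of each file, then flag any file whose fingerprint changes. Here is a minimal sketch of that idea; the file paths and the baseline location are hypothetical.

```python
import hashlib
import json
from pathlib import Path

BASELINE = Path("baseline_hashes.json")  # hypothetical storage location

def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_baseline(paths):
    """'Vaccinate': store a known-good hash for each monitored file."""
    BASELINE.write_text(json.dumps({p: file_hash(Path(p)) for p in paths}))

def check_files():
    """Report any monitored file whose contents changed since the baseline."""
    for name, old_digest in json.loads(BASELINE.read_text()).items():
        if file_hash(Path(name)) != old_digest:
            print(f"Possible infection or tampering: {name} has changed")

# Example usage with a made-up path:
# record_baseline([r"C:\Windows\notepad.exe"])
# check_files()
```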
Variant: A variant is a modified version of an original virus, which may vary from the original in terms of means of infection and the effects that it has.
Virus: Viruses are programs that can enter computers or IT systems in a number of ways, causing effects that range from simply annoying to highly-destructive and irreparable.
Virus constructor: A malicious program intended to create new viruses without requiring any programming skills, as it has an interface that allows the user to choose the characteristics of the created malware: type, payload, target files, encryption, polymorphism, etc.
Virus Signature File: This file enables the antivirus to detect viruses.
Volume: This is a partition of a hard disk, or a reference to a complete hard disk. This term is used in many networks where there are shared disks.
Vulnerability: Flaws or security holes in a program or IT system, which are often used by viruses as a means of infection.
WAN (Wide Area Network): A network of interconnected computers over a large geographical area, connected via telephone, radio or satellite.
Windows desktop: This is the main area of Windows that appears when you start up the computer. From here you can access all tools, utilities and programs installed on the computer, via shortcut icons, options in the Windows Start menu, the Windows taskbar, etc.
Windows Explorer: Program or application available in Windows to administer the files available on the computer. It is very useful for getting an organized view of all directories.
Windows Registry: This is a file that stores all configuration and installation information of programs installed, including information about the Windows operating system.
Windows Registry Key: These are sections of the Windows Registry that store information regarding the system’s settings and configuration.
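As an illustration, the sketch below reads one well-known key, the current user's Run key, whose values are programs launched automatically at logon and which malware frequently abuses. It uses Python's standard winreg module (Windows only) and only reads, never writes.

```python
import winreg  # standard library, Windows only

# Values under this key are started automatically at logon, which is why
# both malware and security tools pay close attention to it. Read-only here.
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    index = 0
    while True:
        try:
            name, value, _type = winreg.EnumValue(key, index)
        except OSError:  # raised when there are no more values to enumerate
            break
        print(f"{name} -> {value}")
        index += 1
```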
Windows System Tray: Area in the Windows taskbar (usually in the bottom right corner of the screen), which contains the system clock, icons for changing system settings, viewing the status of the antivirus protection, etc.
Windows taskbar: This is a bar that appears at the bottom of the screen in Windows. The bar contains the Start button, the clock, icons of all programs resident in memory at that moment and shortcuts that give direct access to certain programs.
WINS (Windows Internet Name Service): A service for determining names associated with computers in a network and allowing access to them. A computer contains a database with IP addresses (e.g. 125.15.0.32) and the common names assigned to each computer in the network (e.g. SERVER1).
Workstation: One of the computers connected to a local network that uses the services and resources in the network. A workstation does not normally provide services to other machines in the network in the same way a server does.
Worm: This is similar to a virus, but it differs in that all it does is make copies of itself (or part of itself).
Write access / permission: These rights or permissions allow a user or a program to write to a disk or other type of information storage unit.
Write-protected: This is a technique used to allow files on a disk or other storage device to be read but to prevent users from writing on them.
WSH (Windows Scripting Host): The system that enables you to batch process files and allows access to Windows functions via programming languages such as Visual Basic Script and JavaScript (script languages).
XOR (Exclusive OR): An operation used by many viruses to encrypt their content.
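The appeal of XOR for this purpose is that applying the same key twice restores the original data, so a single routine both encodes and decodes. The generic sketch below is my own illustration and is not taken from any particular threat.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR every byte of data with a repeating key.

    Because (x ^ k) ^ k == x, running this twice with the same key
    returns the original bytes: one routine encodes and decodes.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plain = b"attack at dawn"
key = b"\x5a\xc3"                      # arbitrary example key
scrambled = xor_bytes(plain, key)      # unreadable without the key
assert xor_bytes(scrambled, key) == plain
```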
Zip: A compressed file format used by applications such as WinZip.
Zombie: A computer controlled through the use of bots.
Zoo (virus): Those viruses that are not in circulation and that only exist in places like laboratories, where they are used for researching the techniques and effects of viruses.
'Control-Alt-Hack' game lets players try their hand at computer security
Do you have what it takes to be an ethical hacker? A new card game developed by computer scientists gives players a taste of life as a modern computer-security professional.
Players assume the roles of characters with their own special skills. Game play involves completing missions by rolling the dice, using skills and occasionally pulling something out of a bag of tricks.
Credit: Image courtesy of University of Washington
Do you have what it takes to be an ethical hacker? Can you step into the shoes of a professional paid to outsmart supposedly locked-down systems?
Now you can at least try, no matter what your background, with a new card game developed by University of Washington computer scientists.
"Control-Alt-Hack" gives teenage and young-adult players a taste of what it means to be a computer-security professional defending against an ever-expanding range of digital threats. The game's creators will present it this week in Las Vegas at Black Hat 2012, an annual information-security meeting.
"Hopefully players will come away thinking differently about computer security," said creator Yoshi Kohno, a UW associate professor of computer science and engineering.
The target audience is 15- to 30-year-olds with some knowledge of computer science, though not necessarily of computer security. The game could supplement a high school or introductory college-level computer science course, Kohno said, or it could appeal to information technology professionals who may not follow the evolution of computer security.
In the game, players work for Hackers, Inc., a small company that performs security audits and consultations for a fee. Three to six players take turns choosing a card that presents a hacking challenge that ranges in difficulty and level of seriousness.
In one mission, a player on a business trip gets bored and hacks the hotel minibar to disrupt its radio-tag payment system, then tells the manager. (A real project being presented at Black Hat this year exposes a security hole in hotel keycard systems.)
"We went out of our way to incorporate humor," said co-creator Tamara Denning, a UW doctoral student in computer science and engineering. "We wanted it to be based in reality, but more importantly we want it to be fun for the players."
This is not an educational game that tries to teach something specific, Denning said, but a game that's mainly designed to be fun and contains some real content as a side benefit. The team decided on an old-fashioned tabletop card game to make it social and encourage interaction.
Some scenarios incorporate research from Kohno's Security and Privacy Research Lab, such as security threats to cars, toy robots and implanted medical devices. The missions also touch other hot topics in computer security, such as botnets that use hundreds of hijacked computers to send spam, and vulnerabilities in online medical records.
Characters have various skills they can deploy. In addition to the predictable "software wizardry," skills include "lock picking" (for instance, breaking into a locked server room) and "social engineering" (like tricking somebody into revealing a password).
Graduate students who are current or former members of the UW lab served as loose models for many of the game's characters. Cards depict the characters doing hobbies, such as motorcycling and rock climbing, that their real-life models enjoy.
"We wanted to dispel people's stereotypes about what it means to be a computer scientist," Denning said.
The UW group licensed the game's mechanics from award-winning game designer Steve Jackson of Austin, Texas. They hired an artist to draw the characters and a Seattle firm to design the graphics. Adam Shostack, a security professional who helped develop a card game at Microsoft in 2010, is a collaborator and co-author.
Intel Corp. funded the game as a way to promote a broader awareness of computer-security issues among future computer scientists and current technology professionals. Additional funding came from the National Science Foundation and the Association for Computing Machinery's Special Interest Group on Computer Science Education.
Educators in the continental U.S. can apply to get a free copy of the game while supplies last. It's scheduled to go on sale in the fall for a retail price of about $30.
Source: University of Washington. "'Control-Alt-Hack' game lets players try their hand at computer security." ScienceDaily, 24 July 2012. www.sciencedaily.com/releases/2012/07/120724161014.htm
Powerset Promises Natural Language Search
Posted on July 18, 2007 by Terri Wells
Sick of using keywords? Powerset says it has a better way to search – with natural language. We've seen this attempted before. Why does Powerset think it can succeed? At this point in development, Powerset isn't even open to beta testers. It's accepting sign-ups for its PowerLabs, where Powerset testers will try out the technology on a limited number of web sites, like Wikipedia and the New York Times. According to the various accounts I've read, the closed beta testing is supposed to start in September.
You might have an interesting time signing up for it. I tried multiple times without apparent success, getting a 502 proxy error every time. The next day, however, I had a number of email messages in my inbox from Powerset asking me to confirm my email address. I did, and I received a welcome message (about which more later). But since there's quite a bit we don't know about Powerset, let's start with what we do know. What is "natural language"? It's what lets humans understand each other, and why computers have such a hard time understanding what we say. It's the ability to extract actual meaning from sentences. It means that someone or something receiving a query for "politicians who died of disease" would recognize that state governors, prime ministers, and presidents are politicians, and pneumonia, cancer, and diabetes are diseases.
The idea of getting computers to understand natural language has been around since 1950, when Professor Alan Turing described his famous Turing test in a paper. We have made a certain amount of progress since then, thanks to improvements in technology. But getting computers to connect concepts has proven so tricky that today’s most successful search engines use a statistical approach instead.
The most prominent search engine to try a natural language approach was Ask Jeeves. It boasted that users could ask questions rather than resort to using keywords. Unfortunately, it didn’t work very well. Ask Jeeves has since become Ask and is trying to reinvent itself to become more competitive. I was delighted to see recently, when I reviewed Ask3D, that its technology has improved tremendously. But it does not seem to be taking a natural language approach these days.
Powerset thinks they have a handle on natural language. Their search engine is actually supposed to learn and get better as more people use it. We won’t know whether it’s real or all hype until September at the earliest. In the meantime, though, it’s instructive to take a look at where this technology is coming from.
Xerox’s Palo Alto Research Center (PARC) has long been known for inventing things that other companies end up commercializing, earning it the title of “lab of missed opportunities.” These include the graphical user interface and the Ethernet networking technology. But in a deal that took a year and a half to negotiate, it licensed its natural language technology to Powerset. Fernando Pereira, chairman of the department of computer language and information science at the University of Pennsylvania, noted that the PARC natural language technology is among the “most comprehensive in existence.” But is it good enough for search? “The question of whether this technology is adequate to any application, whether search or anything else, is an empirical question that has to be tested,” he explained.
The PARC technology has 30 years of research backing it up. PARC researchers have been working with Powerset researchers for more than a year to build the prototype search engine. Indeed, Ron Kaplan, leader of PARC’s natural language research group for several years, joined Powerset as chief technology officer. Kaplan had been approached by Google, but turned them down. Yes, you read that right. Why would Kaplan turn down an established player like Google to work at Powerset? He doesn’t think Google takes natural language search seriously enough. “Deep analysis is not what they’re doing,” he explained in an interview with VentureBeat. “Their orientation is toward shallow relevance, and they do it well.” But Powerset is different; it “is much deeper, much more exciting. It really is the whole kit and caboodle.”
Powerset has also hired a number of engineers away from Yahoo. One name from Yahoo that stands out is Tim Converse, an expert on web spam; another is Chad Walters, who worked for Yahoo as a search architect. The company also claims its employees have worked for Altavista, Apple, Ask, BBN, Digital, IDEO, IBM, Microsoft, NASA, Promptu, SRI, Tellme and Whizbang! Labs.
Before I go into what that technology can do, and what Powerset envisions itself becoming, it’s worth noting that those patents are pretty ironclad. According to Powerset COO Steve Newcomb, it includes provisions that prevent any other company – such as Google – from getting access to the technology even if the other company acquires Xerox or PARC. That should be enough to give Google pause. But is the technology really enough to make the search giant start shaking in its boots?
In the welcome email I received when I signed up for Powerset’s PowerLabs, I saw a link to a short video about Powerset. For those who like this kind of irony, it’s hosted on YouTube, which is owned by Google. The one-minute video consisted of product manager Mark Johnson explaining how members of PowerLabs will get to “brainstorm ideas, write requirements, and test out the product…You’ll be able to run searches on the Powerset engine and see what our cool capabilities are, and you’ll also be able to give feedback on the results which will help to train Powerset and change the way the results come back in the future…” My chief problem with the video was that it consisted of a talking head. Why did Johnson not see fit to include a demonstration of the technology? In the Powerset blog, there are several entries that focus on how it returns results that are very different from what Google returns. Some entries even talk about why natural language is so difficult for computers to comprehend (kudos to Marti Hearst, a Powerset consultant and a professor at the Berkeley School of Information, for writing such engaging posts). So why not bring some of that out in the video?
I’ll have to assume that it’s little more than a teaser. Powerset has given demos of its technology; at least one observer has commented on the fact that these demos are always powered by someone at the company, and never seem to accept outside suggestions. Still, they have returned decent results. For example, a search on “who won an academy award in 2001?” returns Halle Berry, with a photo, a list of films, awards, and a description. Powerset and others have made much of the point that Google doesn’t return as good a result for this kind of query. Or does it? I tried the query, without quotes, in Google. I found this link on the first page of Google’s results. It’s actually better than the result that Powerset returned if I want to know all of the Academy Award winners for 2001 – which would make sense given the nature of the question. And here we actually find a disagreement – Julia Roberts supposedly took the Best Actress title for Erin Brockovich. I ended up going to the actual Academy Awards web site to clear up the discrepancy; Julia Roberts received her Oscar in 2001 for her work in 2000. Likewise Berry received hers in 2002 for Monster’s Ball, released in 2001. Even the best technology can’t read your mind.
One thing I can say for certain: Powerset isn't afraid of a challenge. They're running the site on Ruby on Rails. It's a nimble framework; we've devoted a whole category to it on Dev Shed, as a matter of fact. But no one seems to know whether it can handle the kind of traffic that a popular search engine will inevitably attract. Powerset is a small company given what it is setting out to do. One recent article mentioned that it boasts 66 engineers. About 10 of these use Ruby on a daily basis, according to Powerset project leader Kevin Clark, so the decision to use Ruby made good sense. Also, the entire organization uses Ruby internally. Clark notes that "a substantial part of our infrastructure is being written in Ruby or being accessed through Ruby services. Our scientists use Ruby to interact with our core language technology…Frankly, we as an organization use Ruby a whole heck of a lot." As to the scaling issues, Clark is not worried. While Twitter has been held up as an example of a company whose Ruby on Rails technology did not scale well, Clark actually talked with Twitter's lead developer Blaine Cook to find out where Twitter's problems came from. He discovered that Twitter ran into architectural problems that had nothing to do with RoR. In fact, according to Cook, Ruby on Rails quickly became part of the solution: "thanks to architectural changes that Ruby and Rails happily accommodated, Twitter is now 10000% faster than it was in January."
Unfortunately, we won’t really know how well it all works until September at the earliest, when private beta testers get to play with the technology. The rest of the world won’t get to look at it until the end of this year. I’m not going to bet that Powerset is the next Google killer, but I’m glad to see someone taking a different approach to the challenge of search. Google+ Comments Related Threads
Jon Loucks | Monday, October 28, 2013
Last month I wrote about implementing Theros on Magic Online. This month I return with another insight into Magic Online design. There's a lot that goes on behind the scenes of Magic Online, and I like being able to illuminate a little of that process for you, readers. It's good to be back!
In today's article, I'm going to talk about context menus, the subject of my most recent design document. I'm also going to talk about the future of context menus, at least as much as I can; it's important to note that in future design much can change, and sometimes concepts and plans are set aside because more important ones come along. Part of what I want to demonstrate is that the design of digital features often extends far beyond what is finally represented on screen.
Let's start at the beginning…
Context Menus and You!
When you click on a card in the duel scene, a menu often appears. (The duel scene is what we call the window where you actually play Magic.) It's called a "context menu" because the menu changes based on the context of the click, like the mouse button, location, game state, etc.
There are a few different types of context menus, and each requires its own tailored treatment. I'm going to walk through the various types of context menus used in Magic Online today and talk about their purpose and design. Most context menus a player uses are spawned from clicking on a card, but there are other types of context menus I'll touch on. Left-click: Action
A left mouse click is how a player takes action on a card. Sometimes this happens immediately, without the need of a context menu.
Here are a few examples: A player left-clicks a land in their hand to play it.A player left-clicks on Lightning Strike to begin casting it, then chooses a target with a left-click, and then left-clicks on their lands to pay the cost.While a player is declaring attackers, they left-click on a creature to move it into the red zone, and then left-click it again to move it out of the red zone.While a Mind Rot is resolving, the player left-clicks on a card in their hand to discard it.
Sometimes a card will have multiple actions it can take. In that case, a context menu is created with the list of available actions for that card. From there, the player can left-click on an option in the context menu to use it. Examples:
• A player left-clicks on Reaper of the Wilds, and then left-clicks either "{1}{B}: Reaper of the Wilds gains deathtouch until end of turn" or "{1}{G}: Reaper of the Wilds gains hexproof until end of turn."
• A player left-clicks on a Planeswalker, then left-clicks on the ability they want to use.
• A player left-clicks on a Thassa's Emissary in their hand, and then left-clicks on either "cast" or "bestow."
There's a little more detail to it than that, which I'll get to later in the article. For now, that serves as a good overview.
Right-click: Investigate
A right-click is how a player investigates a card. In my last article, I talked about levels of information. Top-level information, the most important information, is shown on the battlefield. Not all information can be shown on the battlefield, so second-level information is placed one right-click away. When a player right-clicks on a card, the full oracle text of the card is presented in simple text, along with gained blue text, such as Coordinated Assault giving first strike, and the same list of actions that a left-click creates.
If you want to know information about a card, a right-click should get you there. This is usually true, but there are still a few areas where we can better meet this goal. I'll be talking through a few of these areas today.
Activated Abilities
Let's compare the way that activated abilities are shown in the current client to the Wide Beta Client.
The current client context menu of abilities is a lot busier than the beta client version. Long abilities, and long lists of abilities, take up a lot more space in the current client. For example, Primal Command's ability text in the current client can't even completely fit on most screens!
However, there are good things about the current client's abilities lists that can still be carried over. For example, the lists have a pretty clear mouseover state, while the beta client is still a little too subtle. Similarly, the current client clearly communicates the break in clickable areas, and there's no dead space in between the options. The beta client could be better on that front.
With these ideas in mind, James Sooy created a mockup of what the beta client context menu could look like:
A quick disclaimer: This is a very early mockup. James made this in an afternoon after a design discussion, and literally handed it to me a few days before I wrote this article. There are certainly more iterations to come, but I love the direction we're heading in. Also, yes, that's not technically what City of Brass does, but that doesn't really matter in early mockups.
You can see a few improvements that mirror some of the good things about the current client, namely the clear mouseover state. There's also a line separating the abilities, though in a much less intrusive way than the current client.
There are a few other improvements you might notice, the big one being the bulleted list of options. Compare that to the current way we show a list of options:
See how the activation cost is repeated for each color option? That's incredibly redundant. With the bulleted list, we can better communicate a single activation cost with multiple options. Not only does this really clean up mana abilities, but we can also apply this technology to modal activated abilities (abilities where the player chooses a mode), like Bow of Nylea, showing each option under the same cost header.
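To make the grouping idea concrete, here is a rough sketch of a context-menu model that collapses options sharing an activation cost under a single header. This is my own illustration, not Magic Online code, and every class and field name in it is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MenuEntry:
    """One activation cost plus the option(s) it unlocks."""
    cost: str                                  # e.g. "{T}" or "{1}{G}, {T}"
    options: list = field(default_factory=list)

def build_menu(abilities):
    """Collapse abilities that share a cost into one bulleted entry."""
    entries = {}
    for cost, effect in abilities:
        entries.setdefault(cost, MenuEntry(cost)).options.append(effect)
    return list(entries.values())

# A five-color land's mana options all share the same "{T}" cost, so they
# render as one header with five bulleted choices instead of five rows.
five_color_land = [("{T}", "Add {%s}." % color) for color in "WUBRG"]
for entry in build_menu(five_color_land):
    print(entry.cost)
    for option in entry.options:
        print("  •", option)
```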
There's another area where we're not entirely consistent. Maybe you can see it in this picture:
What if I add a Riftstone Portal to my graveyard?
Ahh! As you can see, the first two mana abilities, which are Horizon Canopy abilities, show no activated ability cost, and according to Horizon Canopy's text, tapping it for mana costs an additional 1 life. So those first two could potentially kill you if your life total is low! That's another improvement you can see in James's mockup—the life cost of the ability is shown.
Similarly, check out Karplusan Forest:
As far as the context menu is concerned, it looks like there's no difference between the three abilities. While they all have the same activation cost, the {R} and {G} mana abilities have the additional effect of dealing 1 damage to you—we should tell you! This all speaks to another goal of activated abilities in context menus: Show the full text of the costs and abilities.
Not only does this mean showing the full cost and effect, but it also means removing abbreviations from the context menu. The current client didn't have as much space on the screen to show ability text with the bulky presentation, so abbreviations helped keep abilities contained. However, we have confidence in the beta client's ability to present full ability text, without abbreviations. This will lead to a much cleaner and consistent presentation, with less guesswork needed from the player.
For example, here is an image I printed out and pinned to my desk to constantly remind myself of the terrors of inconsistency:
That said, there are two areas where abbreviations can be useful. The first is mana abilities. Adding mana to your mana pool is such a frequent occurrence that we can abbreviate "Add [color symbol(s)] to your mana pool" as "Add [color symbol(s)]." Second, abilities where the player chooses a mode (which we call modal abilities) can lose some of their "and/or" text and punctuation, and be presented as a full ability. For example, Bow of Nylea could be written as:
{1}{G}, {T}: Choose one—
• Put a +1/+1 counter on target creature.
• Bow of Nylea deals 2 damage to target creature with flying.
• You gain 3 life.
• Put up to four target cards from your graveyard on the bottom of your library in any order.
There's one more thing I want to talk about from James's mockup. Did you notice that the last two abilities were a little faded? That's an effort to show abilities that aren't able to be activated right now. For example, let's take a look at Bow of Nylea again:
Remember what I was saying earlier about the context menu showing all information about a card? Some of the Bow's abilities are missing here! When a card has an activated ability that can't be activated (usually due to a lack of targets) it currently isn't shown on the context menu at all. This can be deceiving, especially when the card is being granted an ability by something else, like a Lightning Prowess. Instead, we want to show the full list of abilities that a card has, even the ones that can't be activated—those would be shown grayed-out and sorted to the bottom of the ability list.
To Menu, or Not to Menu?
We can also improve consistency on when a context menu appears, which will also serve to protect players from accidental clicks. I'll give you a few examples.
Right now, if you control a Torch Fiend but there's no artifact to target, a left-click doesn't do anything. Once an artifact is on the battlefield, the Torch Fiend gains a thin pinline border (indicating that it's clickable) and clicking on it starts to activate the ability. The prompt box simply says "Choose target artifact."
If we were to implement features like the ones above, here's how Torch Fiend could work. If you control a Torch Fiend and there's no artifact to target, it still doesn't have a clickable pinline. However, left-clicking on it would still bring up a context menu with Torch Fiend's ability, but that ability would just be grayed-out. Once an artifact is on the battlefield, Torch Fiend gains the clickable pinline. However, instead of just starting to activate the ability, left-clicking here would bring up the context menu. This requires a second click from the player to start activating the ability, but it's a much clearer process to the user. It's a little jarring to click on a permanent and then be told to select a target, especially if you click on a card with multiple abilities, only one of which is available.
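Here is a small sketch of that "always show the ability, gray it out when it has no legal target" behavior. Again, this is my own illustration rather than the client's actual code, and the target check is a stand-in for whatever the real rules engine does.

```python
from dataclasses import dataclass

@dataclass
class Ability:
    text: str
    needs_artifact_target: bool = False

@dataclass
class MenuItem:
    text: str
    enabled: bool

def context_menu(abilities, artifacts_on_battlefield):
    """List every ability; disable those with no legal target and sort
    the disabled ones to the bottom instead of hiding them entirely."""
    items = []
    for ability in abilities:
        usable = (not ability.needs_artifact_target
                  or artifacts_on_battlefield > 0)
        items.append(MenuItem(ability.text, enabled=usable))
    return sorted(items, key=lambda item: not item.enabled)

torch_fiend = [Ability("{R}, Sacrifice Torch Fiend: Destroy target artifact.",
                       needs_artifact_target=True)]
for item in context_menu(torch_fiend, artifacts_on_battlefield=0):
    suffix = "" if item.enabled else " (grayed out)"
    print(item.text + suffix)
```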
There's one exception to this "always context menu" rule: casting a spell. Technically, every card in a player's hand has a "cast" option. (Or a "play" option, if it's a land.) You can see this option if you right-click on a card. However, clicking on a card in your hand to cast it happens so often, and there's no need for us to make players click twice, once on the card, and once on the "cast" option. Clicking on a card in hand and jumping straight to casting is a fairly natural process, so there's no need to add a second step in there. This is a case where we keep current functionality.
To go one level deeper, there's still an exception to that exception, which Magic Online already uses: zero-mana spells. If you start casting a 0-mana spell, like Accorder's Shield, you'll be presented with the cast option. This is to prevent players from accidentally casting a spell, since there's no intermediate cost payment step to alert them that they've started to cast a spell.
The Full Card Monty
I mentioned that a right-click context menu should give the player all information about a card. Here's an image of where we're at right now:
Let's dissect this context menu, including information it could display in the future.
Card title: Fairly self-explanatory.
Mana cost: Also fairly self-explanatory. I could see a far-future feature where altered costs are reflected here, in addition to the original cost, but that's just a nugget of a design idea that hasn't been mined yet.
Type line: In my last article, I talked about how we changed the type line display slightly with the release of Theros Gods, replacing the glaring red strikethrough with gray text, which you'll also see reflected on the context menu.
Counters: Here's something that isn't currently being reflected in the context menu. It can be hard to tell the difference between types of counters on permanents. It's rare that a permanent has more than one counter type on it (especially with +1/+1 and -1/-1 counters eating each other) but it can happen. An easy way to communicate counters on permanents is with the context menu, listing exact type and number of each counter in blue text.
Rules Text: The full rules text of the card, including blue text. This is a good place to note if the permanent is "summoning sick," which can help in those situations where you're not sure which land to animate with something like Koth of the Hammer.
Power / Toughness: Not only do we want to show the creature's current power and toughness, but we should be showing the creature's original power and toughness as well. This is in line with a design rule that I outlined in a previous article: don't hide oracle text. One way to present this is to show the current power and toughness in blue, and the original power and toughness next to it in parenthetical gray text.
Some cards have multiple "forms" they can be in. For example, check out the context menu for a double-faced card:
Here, we list the entire additional forms of cards, separated by a horizontal line. These would cover the same information as the current form of the card, though only present the oracle text—no blue text necessary here. Here's a full list of the additional forms that would want this treatment:
• The front side of a face-down card, like a morph.
• The other side of an Innistrad double-faced card.
• The other side of a Kamigawa flip card.
• The original version of a card that is currently a copy of a different card, like Clone.
• The other levels of a Rise of the Eldrazi level-up card.
We currently treat each of these a little differently. Going forward, we could increase our consistency by making sure each of these forms is represented completely. For example, you'll see a few pieces of information missing from the context menu of a morph:
Lastly, at the bottom of the right-click context menu you'll find that card's full list of activated abilities.
Loucks, Stock, and Two Smoking Barrels
I've rambled on about context menus long enough for one day. There are a few context menus on non-cards, like abilities on the stack or the battlefield itself, but I'll leave those for another article.
I want to reiterate a disclaimer for this article. The above ideas are not necessarily a promise of things to come. Instead, they serve to illustrate the design direction the team is working towards. We're always trying new ideas, iterating on them, and re-prioritizing the things we implement. Consider this a behind-the-scenes look into that process, not a preview of the end result. And as I've said before, any feature we implement means another feature that we don't implement. It's all about priority.
As always, I encourage you to send me feedback via the email link below, or to @JonLoucks on Twitter.
Thanks for reading, and may you appreciate the value of context.
-Jon | 计算机 |