What's New/9.2
Based on FreeBSD 9.2-RELEASE, which adds this [list of features] (no URL yet).
Uses ZFSv5000 (feature flags), the latest open source version of ZFS. This version includes the LZ4 compression algorithm.
PC-BSD® is only available on 64-bit systems and the graphical installer will format the selected drive(s) or partition as ZFS. This means that images are no longer provided for 32-bit systems and that the graphical installer no longer provides an option to format with UFS.
GRUB is used to provide the graphical boot menu. It provides support for multiple boot environments, serial consoles, GPT booting, UEFI, graphics, and faster loading of kernel modules. During installation, most other existing operating systems will automatically be added to the boot menu.
The system has changed from the traditional ports system to pkgng and all of the PC-BSD® utilities that deal with installing or updating software use pkgng. This means that you can safely install non-PBI software from the command line and that a system upgrade will no longer delete non-PBI software.
The pkgng repository used by the software installed with the operating system is updated on or about the 5th and 20th of each month and a new freebsd-update patch is released on the 1st of each month.
The PC-BSD® utilities that deal with installing software or updates use aria2[1] which greatly increases download speed over slow links. aria2 achieves this by downloading a file from multiple sources over multiple protocols in order to utilize the maximum download bandwidth. The pc-pkg command has been added as a wrapper script to pkg. Use pc-pkg if you wish to increase your download speed when installing or upgrading pkgng packages.
PC-BSD® uses a Content Delivery Network (CDN) service for its network backbone. This means that users no longer have to pick a mirror close to their geographical location in order to get decent download speeds when downloading PC-BSD, updates, or software. It will also prevent failed updates as it removes the possibility of a mirror being out of date or offline. The source code repository for PC-BSD® has changed to GitHub[2]. Instructions for obtaining the source code using git can be found on our trac site[3].
The installer provides a built-in status tip bar, instead of tooltips, to display text about the moused-over widget.
If a non-English language is selected during installation, the post-installation configuration screens will automatically be displayed in the selected language. Additionally, if the user installed the KDE window manager, KDE-L10N will be installed.
The initial installation screen provides an option to load a saved installation configuration file from a FAT-formatted USB stick.
The installer provides an option to install a Desktop or a Server. If you select to Install a Server, it will install TrueOS®, a command-line version of FreeBSD which adds the command-line versions of the PC-BSD® utilities.
The installer provides an option to restore or clone the operating system from a remote snapshot created with Life Preserver. A network configuration icon is included to configure the connection to the server containing the remote snapshot.
The Advanced Mode screen provides configurable options to force 4K sector size, install GRUB, and set the ZFS pool name.
The installation summary screen provides an option to save configuration of the current installation selections to a FAT-formatted USB stick so that it can be re-used at a later time.
The PEFS encryption system has replaced the GELI encryption system. PEFS offers several benefits over GELI. Rather than encrypting the entire disk(s), which may expose too much known cryptographic data, it can be used on a per-user basis to encrypt that user's home directory. When the user logs in, their home directory is automatically decrypted and it is again encrypted when the user logs out. PEFS supports hardware acceleration. It can also be used to encrypt other directories using the command line; read man pefs for examples.
The encryption option has been removed from the installer and has been replaced by an "Encrypt user files" checkbox in the post-installation Create a User Screen for the primary login account and in the User Manager utility for creating additional user accounts. If you choose to use PEFS, it is very important to select a good password that you will not forget. At this time, the password cannot be easily changed as it is associated with the encryption key. A future version of PC-BSD® will provide a utility for managing encryption keys. In the meantime, this forum post provides a workaround if you need to change the password of a user that is using PEFS.
It is possible to easily convert a FreeBSD system to PC-BSD® 9.2.
When administrative access is needed, the user will be prompted for their own password. This means that users do not have to know the root password. Any user which is a member of the wheel group will have the ability to gain administrative access. By default, the only user in this group is the user account that you create during post-installation configuration. If additional users need this ability, use the Groups tab of User Manager to add them to the wheel group.
AppCafe® has been re-designed with a cleaner code base. New features include the ability to perform actions on multiple applications, save downloaded .pbi files to a specified directory, downgrade installed software if an earlier version is available as a PBI, the ability to import and export PBI lists, and improved search ability. EasyPBI has been revamped as version 2, making it even easier to create PBIs.
A graphical Package Manager utility has been added to Control Panel.
A graphical Boot Manager utility for managing boot environments and the GRUB configuration has been added to Control Panel.
A graphical PC-BSD® Bug Reporting utility has been added to Control Panel.
Many improvements have been made to Warden®, including the ability to create jails by hostname instead of by IP address; jail IP addresses can be changed after jail creation; vimage can be enabled or disabled on a per-jail basis; IPv4 or IPv6 addressing can be enabled or disabled; aliases can be added on a per-jail basis; and jail sysctls can be easily enabled on a per-jail basis.
The ability to use an external DHCP server has been added to the Thin Client script, and the ports collection is no longer a requirement for using this script.
I don't speak html particularly well, so I'll keep this brief.
WSO has provided alumni websites and mail forwarding for a long time - I don't know how far back it goes. But this winter, we were faced with the prospect of continuing to provide public-facing ssh accounts for thousands of alumni, almost none of whom we knew, in an era where university networks are sought after by attackers. So we decided that this was a service that had to be cut. Given that the server hosting these ssh accounts is also known to have the personal records of former students scattered all over the file system, we have decided that further access to the machine is out of the question. WSO has never guaranteed the reliability of its services, and the data on that machine has long been one hard drive crash away from disappearing. As for mail forwarding, we can hold onto those for a little while - we'll map all the .forward files to their contents. But in the fall, we will begin backward notification to sites that email our alumni accounts, passing the last record we have for the account instead of letting the mail through.
I joined WSO last fall, and quickly became aware that WSO had a lot of technical problems. Much of the software was outdated, and had not been updated in several years. This left the site open to a variety of known vulnerabilities. Encryption was not being applied for mail or web services. The personal information of current and former students was not being responsibly handled.
Transforming into a Social CRM Enterprise
Being successful at social media often requires organizational change management. Here are some tips to follow.
By Kelly Liyakasa (from the June 2012 issue of CRM magazine)
It's no secret that the power of social media lies in the connections it creates. More companies, however, are also realizing that good social CRM connections can improve their bottom line as well.

Starwood Hotels & Resorts Worldwide is a case in point. The company's Starwood Preferred Guest loyalty program provides various ways for hotel guests to share their travel experiences and photos with family, friends, and other hotel guests on social media networks. As an added benefit, guests who check into one of its hotels with their Facebook or Foursquare account can receive "Starpoints," which can be applied toward free breakfasts and room upgrades.

Such social media efforts have helped to double the loyalty program's membership over the past five years. But, perhaps, more importantly, members are spending 60 percent more than they did five years ago.

These results would understandably whet any customer strategist's appetite. And, in fact, the promise of social CRM has done just that: Gartner is expecting social CRM sales to exceed $1 billion by the end of this year, up from $820 million last year.

However, whether your organization's goal for social media is to improve sales, marketing, and/or customer service efforts, one thing is clear—success does not happen overnight. To fully harness the power of social media to engage, interact with, and sell to their target customers, organizations must be willing to embrace collective change. And that takes proper planning and patience. As with any disruptive technology, one of the biggest bottlenecks to success is changing employees' attitudes and behaviors regarding innovation. In Community Roundtable's "State of Community Management 2011" report, 28 percent of survey respondents said organizational culture was the greatest barrier to social technology usage.

"Enterprises are large and their cultures are well-defined," observes Paul Greenberg, president of The 56 Group and author of CRM at the Speed of Light. "They are legacy cultures—cultures of habit. And breaking habits, when the world is changing as dramatically as it is in such a short period of time, is not easy."
With the proper oversight, there are ways to overcome this hurdle. Read on for some tips for creating a social media culture within your organization.

Know the Risks

In "Social Media and Its Associated Risks," a report issued by global audit, tax, and advisory firm Grant Thornton and the Financial Executives Research Foundation, a staggering 42 percent of the 141 public and private company executives surveyed admitted that no one in their organization monitors compliance with social media policies. About 21 percent said marketing and public relations were tasked with the job, and about 7 percent pinned it on IT. When it came to overall responsibility for social media, 54 percent of respondents said marketing and public relations were in charge. About 19 percent report not even using social media, and about 11 percent said that no specific group takes the lead on social media management. Some 7 percent look to business development and sales teams to take charge.

The reality is that legalities and corporate governance are key elements in an enterprise social media strategy. Government and financial services are strong adopters of social media, and both face stringent regulatory processes that add complications to social interactions. In the Grant Thornton report, 61 percent of survey respondents said their company does not have a fraud management plan. Some of the identified risks in developing a compliance or fraud management plan include negative comments about a company, out-of-date information, disclosure of proprietary information, exposure of personally identifiable information, and fraud, the report indicated.

Any company with a high volume of proprietary assets or information would naturally be more vigilant about sharing sensitive information in a public setting. For instance, global pharmaceutical company Allergan, the maker of BOTOX, plays in a highly regulated industry, where being social may not be as second nature as it is to a Starwood Hotels or Starbucks.

"We have a Facebook site, but it's not product-named because you would need a bunch of permissions," noted Heidi Shurtz, senior manager of customer relationship marketing for Allergan, at Loyalty Expo in Orlando, Fla. "We're proposing to do a closed-loop community to keep it social."

Like Allergan, all companies flirting with deploying a plan for social must define the parameters for social media usage, monitoring, and management. But because governance is so fragmented, a company will typically have a view of social media use, risks, and management that might differ from that of other companies, the Grant Thornton report stated. In other words, there is no one right way to do it. Survey respondents were asked whether they had clearly defined policies regarding social media at their companies. Some 23 percent reported that they did have defined social media policies in place. About 35 percent said they did not have a social media policy in place, but were developing one. But more than four in 10 respondents (41 percent) did not have a social media policy and did not have plans to develop one.

"A company might not need a social media policy where another policy covers aspects of social media…for example, many companies have an electronic communications policy to address appropriate uses of the company's computer system and to reduce employee expectations of privacy and a company's risk," said Melissa Krasnow, corporate partner and certified information privacy professional at Dorsey & Whitney LLP, in the report.
Krasnow added that a social media policy should be consistent with other organizational policies, such as an e-communications policy, employee handbook, and insider trading and disclosure policy.

Lay the Groundwork

It's often been said that you need to be the change you wish to see. To be a social enterprise, it's not enough to simply sign up for a LinkedIn account and call it a day.

It's estimated that the average enterprise-class company has 178 corporate-owned social media accounts, according to Altimeter Group's report, "A Strategy for Managing Social Media Proliferation." But simply starting a social account for every organizational department is mere child's play, according to Joel Rubinson, professor of marketing and social media strategy at New York University's Leonard N. Stern School of Business.

"I hear so many executives talk about having someone else tweet for them," he maintains. "They don't blog. They don't read blogs. They don't know what Google Analytics would look like on their blog. How in the world can you ever understand social media? That's like saying you read a book about France but you've never been there. It doesn't work."

Experts agree that organizations must determine why social CRM is important to them. To do this, organizations must get one stakeholder from each department that will either participate in or be affected by the social CRM efforts. This group needs to create a list of ways in which social CRM can help each department and customers. Then a mutually agreed-upon list of goals for each department, the organization, and customers must be compiled.

Wherever a social CRM strategy is deployed, communication will be key to its success. According to Bill Band, a vice president and principal analyst at Forrester Research, change management must address the question "What's in it for me?" People have to know why change is happening in the first place to rally behind it. It all starts with identifying your company's purpose for using social media in the first place.

"If relationship development is a purpose, we should be creating and executing social media around relationships," says Erick Mott, vice president of the Global Community Practice at Ektron, a Web content management solutions company. "If marketing, awareness, [or] brand equity is a purpose, then we should use social media for that. If product innovation is a purpose, you find out how you get feedback. And finally, if customer and partner support is a purpose, which it should be, you need to find out how to be responsive in real time to customer complaints."

For any company that wants to have some semblance of order when creating and executing a social strategy, it's important to first understand the structure of your organization before you try to alter it. By their very nature, sales, marketing, and customer service representatives are service-based, but ask a back-office HR or finance department to get onboard with a social strategy and there may be less inclination to embrace it. Inherently, customer-facing operatives will undergo change first. Greenberg points out that "you're seeing a realignment of roles where [customer service reps (CSRs)] are now becoming community managers."

When job descriptions change, compensation must naturally follow.

"A lot of social technologies are taking root, and if a company discovers [it needs] to reach customers by setting up customer service communities or having customer service agents interact with these communities that promote self service or respond to the Twitter stream or sentiment analysis…that changes the job of the CSR," Band says.

In the case of sales operations, it can be difficult enough for a company to lure salespeople into using their Salesforce.com automation system. "Now, you may need to set up a collaboration portal to reach your customer in a social portal or use another technology to do research on prospects to find out what they say on Facebook or LinkedIn, which suddenly means the salesperson's job is going to be different," Band adds. Enterprises will need to train accordingly.

The same holds true for customer care. "Companies need to set up a dedicated social customer care team instead of saying, 'This is one more thing for my [live] agent to handle,'" adds Christine Crandell, founder and president of global marketing consultancy New Business Strategies. "A typical call center is focused on first-time conflict resolution. It's very different in the social media world," she says, where customer interaction is more fluid and progressive. Therefore, these agents must have the skill sets needed to communicate with customers in a public channel.

Lose Control

Because it's all too easy for prospects and customers to take their business elsewhere, employees and managers must understand that they are no longer in control of customer relationships. "Part of the beauty of social media is actually letting go," Crandell notes.

But relinquishing control of anything—especially brand reputation—can be daunting. Doing so, however, will make organizations much more attuned to customer needs because they will have to actively listen and respond. "And what's gained from social networks is that fresh, sort of user-generated content that comes from it," said Jake Wengroff, Frost & Sullivan's global director of social media strategy and research, at the 2012 Sales Management 2.0 Conference in Philadelphia.

This means organizations must be prepared to take the good comments with the bad—yes, bad comments are bound to surface. The important thing is not to have a knee-jerk reaction. "You might not be fond of a comment written about you, but the way to address it is head-on in the channel where it arose, publicly," Crandell advises.

Enterprises that hit the "delete" button may find solace in the temporary fix, but the long-term results can be devastating to their reputation.

Getting Top Brass Involved

Because social media is rooted in the new, the disruptive, and the personal, it's understandable why cultures of heritage still err on the side of caution. But organizations that are implementing social strategies are seeing the payoff in real results.

Domino's Pizza is no stranger to a public image crisis in the social sphere. (Read "Don't Let a Crisis Destroy Your Image" in CRM's November 2011 issue.) In 2010, CEO J. Patrick Doyle publicly acknowledged Domino's digital misfortunes when snapshots of sliding cheese and other imperfections came back to haunt the company on its very own social channels.

What commenced was the Pizza Turnaround initiative. Doyle admitted the product could stand to be improved, and customers became empowered to act.
More recently, Domino's launched Facebook-based Think Oven, a consumer-based ideation hub of sorts that asked customers for their input on everything from pizza toppings to employee uniforms. "While companies used to say that it was about controlling employees, margin, and customers, they're now saying that it's about partnership and enablement," Crandell notes.

As Domino's Pizza has proven, it's helpful when top brass stands behind social practices. But what happens if there is a changing of the guard? This shouldn't spell disaster, according to Greenberg. The reality is, "cultures survive the CEO because they're legacy cultures, and CEOs are typically the product of a culture," he says.

And, even if there is a setback at the top—or anywhere else in the organization—don't get discouraged. Organizational change takes time and persistence. The shift to a social enterprise calls for an amendment of employee behaviors, processes, and technologies. "We aren't seeing a massive cultural shift at the workplace, nor is it being done through large-scale change management efforts," Greenberg maintains. "What you have, instead, are little pockets of action that, as time goes on, will cause change."

10 Social CRM Deployment Tips

1. Know the risks.
2. Identify the reason(s) for deployment.
3. Create a list of mutually agreed-upon goals.
4. Communicate the value.
5. Understand your organization's structure before altering it.
6. Compensate accordingly.
7. Train for the necessary skill sets.
8. Listen and respond to customers.
9. Get support from the top brass.
10. Stay focused; don't let setbacks derail your plan.

Associate Editor Kelly Liyakasa can be reached at kliyakasa@infotoday.com.
Access 2000 for Windows For Dummies Quick Reference
Alison Barrows ISBN: 978-0-7645-0445-7
About the Author

Alison Barrows has taken the long route to ...For Dummies writing, most recently thinking she would have a career in economics. A serious computer user since high school, she found herself irresistibly drawn into technical support, training, and documentation with help from friends who are ...For Dummies authors. Since finding herself in a career as a writer, Alison has authored or coauthored seven books for IDG Books including Dummies 101: WordPerfect 8 and Excel 97 Secrets. During her career in technical writing and training, she has designed and written software courses and taught hundreds of computer users how to make computers work for them. In addition to writing books, Alison teaches custom computer courses and writes technical documentation and training material. Alison has a master's degree in Public Policy from the Kennedy School at Harvard University and a B.A. from Wellesley College. In real life, she loves to sing, watch Star Trek (the newer versions), cook, and dabble in yoga, rock climbing, and Ultimate Frisbee. She currently lives in Boylston, Massachusetts, with her husband, Matt, and the newest member of the family, their Portuguese Water Dog puppy Jake.
Table 2.2: Overall Summary of Project Performance (1994-99) excluding TCP

Projects were found to have:
- Addressed a genuine development problem: A problem of major importance (85%); A significant problem (98%)
- Effects in terms of use expected to be made of outputs: At least 80% of outputs expected to be used as foreseen (50%); At least 60% expected to be used as foreseen (89%)
- Expectations of sustainable impact: Considerable impact (45%); Some or more sustainable impact (86%)
- Cost-effectiveness for sustainable effects: The most cost-effective approach (73%)

35. Projects were examined for impact against different types of objectives, i.e. a) policy, planning and legislative improvements; b) strengthening national institutional capacity; c) uptake of technical improvements; d) expansion of pilot activities; and e) follow-up investment. Follow-up investment was the weakest, whereas uptake of technical improvements and institution-building was the strongest. As might be expected, the risk of no impact was greatest in projects involving uptake of policy planning and legislative improvements, replication of pilot activities and investment follow-up. In such projects, achievement of impact usually depends on positive decisions at higher echelons, beyond where the project may have been working (if there is no follow-up, there can be little impact).

Potential for Improvement in Project Performance

36. Project design: Missions reported in 33% of cases that design was the aspect of the project where there was the greatest need for improvement. Fifty-one percent of projects were considered of too short duration and analysis showed lack of realism on duration was co-related with reduced effects and impact. In projects performing below the optimum in terms of effects and impact, analysis showed formulators were particularly over-optimistic as to the use that could be made of project outputs. Projects also tended to have unrealistic expectations of the capabilities and resources of national institutions. Further, there was inadequate attention to risks and prerequisites for project success.

37. Among factors having the greatest negative impact on project cost-effectiveness, scheduling was found to have been the most important factor in 21% of cases. There tended to be over-expectation of what inputs governments could reasonably be expected to provide to the project. Other shortcomings were less pervasive, but significant proportions of projects could have been considerably improved with respect to clarity of immediate objectives and targets. There was also a need for better focus in 39% of cases. Failure to adequately specify beneficiaries was found to be linked to sub-optimal performance in terms of effects and impact.

38. Missions found that 20-40% of projects would have been more cost-effective, had there been: more reliance on national training; more use of the private sector, NGOs, national experts and short-term staff; and a greater reliance on government capacities. However, there were opposite cases where for instance, heavy use of NGOs, short-term staff or government capacity, was considered counter-productive.

39. Project implementation and management: Projects which performed sub-optimally in terms of effects and impact suffered particularly from inadequate management, with internal management being the weakest point, but FAO supervision also being an area for improvement.
Both government and FAO procedures were found to have constrained performance and insufficient delegation of authority was identified as a problem in 30% of cases. In what can be both a design and implementation problem, the involvement of beneficiaries could have been improved in 18% of cases. 40 . Capacity-building : Evaluation of institution-strengthening projects indicates that the focus has been shifting increasingly from establishing or enlarging government departments to strengthening capacity of existing institutions for new functions, such as community forestry or for environmental planning. Missions were not generally optimistic about the sustainability of results of earlier institutional expansion projects due to shortages in government resources. More recent projects have been designed to develop people's self-reliance and ownership through participatory and group approaches. For both types of institution-building, missions generally concluded that the project duration was too short and a further phase was essential to consolidate results. This points to: a need for donors to accept realistic durations in their commitments to institution-building; and for projects to be designed in such a way that even if they are terminated at the end of the project life with no extension, sustainable results are achieved. 41 . Policy support : Projects attempting to strengthen policy and planning directly were most influential when they identified the major issues and supported national dialogue between the community and the political level. Most policy outputs were not from projects specialised in policy but from projects carrying out institutional development and development support. Thus, a project for area development might have an influence on land tenure policy, and one on community forestry might influence the approach to extension. 42 . People's participation : Projects where people's participation was important were on occasion criticised for giving excessive attention to establishing new participatory groups, while failing to successfully initiate other improvements through the groups. There were also criticisms that groups were not necessarily the best development tool in some situations and small-scale private initiative could have been encouraged as an alternative. As with production projects, unless people saw a definite benefit to themselves from the groups (usually economic), progress was limited. People's participation projects were sometimes found to be too short and to be spread over too wide an area. This tends to indicate that participatory projects should initially work intensively in a small area with a few groups, thus leaving a lasting capacity in the groups themselves and in the support personnel who have worked with the groups. 43 . Institution-building, people's participation and policy projects: The qualitative analysis found linkages needed to be strengthened. There was a particular danger of projects in people's participation working as separate entities rather than forging sustainable local partnerships with NGOs and government agencies. Similarly, missions found training to be among the project's most valuable outputs for capacity-building. There appeared to be an underlying assumption that individuals with increased skills would contribute to development, even if the specific endeavour for which they were trained was not sustainable. 44 . 
Development of production and land management : For projects designed for the development of production and improved land management, there were frequent findings that insufficient attention had been given to economic and marketing aspects. 45 . Gender : Several missions noted a failure of projects to specifically target women, even when they were the main actors, and others pointed to success when packages and extension training were specifically designed for women. In some situations, the use of female staff to reach women had been advantageous, even in societies where there were no particular taboos on men communicating with women. 46 . Regional projects : Some qualitative conclusions emerged with respect to regional projects, in particular: many regional projects had established networks for technical exchange but there was little confidence that these networks would be viable once the projects came to an end. The only network that was found definitely sustainable was for an association of commercial businesses in the seed sector. This does not invalidate networking as a way of cost-effectively achieving outreach, but it does indicate that undue energy should not go into trying to establish enduring technical exchange networks; and in Asia there was a lack of integration in the activities of projects supporting various aspects of policy and planning development for forestry, and these and other regional projects could have been more closely coordinated with FAO's Regular Programme activities and priorities. Overall Conclusions and Recommendations 47 . The present report, covering the 1994-99 period, shows considerable improvement overall in project design, implementation and production of outputs from the last examination (1985-91). It is recommended that given the persistent weaknesses in project design, further efforts should be pursued in the context of decentralized arrangements for operational activities, including in particular: (a) preparation of updated guidelines in project formulation and design with particular attention to the areas needing improvements; (b) further training of FAO staff, especially those in the decentralized offices; and (c) strengthening the existing mechanisms for reviewing and appraising project proposals, both at the decentralized offices and at Headquarters. 48 . It is suggested that for the future, synthesis reporting on results of evaluations focus on particular programme areas and cross-cutting themes in line with the priorities of the FAO Strategic Framework 2000-2015, or possibly on the work in particular regions. Programmatic evaluations will also systematically assess the Regular Programme and related field activities. III. THEMATIC EVALUATIONS OF TCP PROJECTS 49 . Apiculture and sericulture (22 projects) : The majority of projects dealt with significant development problems, usually relating to disease control (varroa in bees and pebrine disease in silkworms). However, some were not well justified on technical grounds or would not have met TCP criteria for approval had conditions on the ground been better known. Most of the projects were found to be stand-alone efforts and not part of any larger government or donor-funded programme. 50 . Performance of international consultants, with a few exceptions, was good. Nonetheless, it was noted that economies could have been achieved in some projects (particularly in seri-culture) which used several experts with similar qualifications. 
Also, national consultants used in some countries tended to mirror the expertise of international consultants (although some were lacking in technical qualifications). Technical backstopping from FAO for the sericulture projects was found to be generally better than for apiculture. A serious problem was the late or non-production of terminal statements or letters of completion. 51 . The number of unsatisfactory projects was felt to be too high. The best apiculture projects were those that dealt with disease outbreak (varroa) in countries where apiculture is economically significant. However, several apiculture projects were deemed unsuccessful because there was little scope for spreading their results among beekeepers in the country or their technical justification was poor. Because of much lower costs of entry, apiculture is likely to be of more interest than sericulture to most small farmers. It was found, however, that projects should be made more relevant to the majority of beekeepers in countries, rather than aiming at the most modern production methods. A consequence of this approach is that training did not have the desired impact at the producer level. Sericulture could be a relevant topic for TCP assistance, provided there is a plan for development of the sector and the role of the TCP project is clearly established. The evaluation found that the chances of a sustainable impact of sericulture projects were much reduced in countries with a limited tradition of sericulture and where there were no marketing opportunities. 52 . Legislation (31 projects ) : The projects reviewed were found to be highly relevant to development problems in the recipient countries and the assistance received from FAO was greatly appreciated. Project design tended to be rather general in terms of description of problems to be tackled and the approach taken to implementation, but there were limited prospects for improving the amount of information made available. Any deficiencies in project design were generally remedied by the initial mission to the country, when issues were identified in greater depth and project implementation strategies decided. 53 . The quality of the international consultants and Legal Office (LEG) staff (who provided advice in 13 of the projects reviewed) was generally very high. The fact that FAO has a great deal of international expertise in various aspects of natural resources legislation was found to be its primary comparative strength and the main reason for TCP project requests. Technical backstopping of legal components of projects has been generally excellent, with active inter-change between LEG officers, consultants and national counterpart staff. In some cases, there has been continued post-project follow-up by individual LEG officers as legislation has worked its way through national processes. This was greatly appreciated by the concerned governments. 54 . While most projects were relevant and well implemented, eventual project impact was found to be less satisfactory. In most countries where follow-up and impact were not satisfactory, the reason could be found in particular national circumstances such as economic upheaval or the failure to advance drafted legislation, which had been found acceptable by the individual department concerned at the time of project implementation. In order to ensure a complete package of assistance, the evaluation recommended that implementation regulations should normally be prepared in projects along with laws. 55 . 
The two most important factors for likely project success were a high degree of stakeholder involvement in project implementation and a favourable policy framework for achieving the desired legislative reforms. General Lessons for the TCP (based on three completed thematic evaluations) 7 56 . Project follow-up requires improvement. In most cases (but not all), once project implementation ends, contact is lost with the government counterpart body and governments do not report to FAO on follow-up action taken, despite a requirement to do so in the TCP procedures. FAO management has now agreed to hold a meeting at or near the end of the implementation period of all TCP projects to decide on follow-up action. The meeting will be called by the FAOR (where there is one) and be attended by the government counterpart agency and other interested parties, including bilateral or multilateral financing agencies that could be interested in assisting with project follow-up. This meeting will form an integral part of project reporting and closure. 57 . Evaluation missions have made numerous observations on project formulation. The criticisms in the last two evaluations were rather different. In legislation, project implementation strategies tended to be described in terms that were considered too general, while in apiculture and sericulture there was an evident disconnect in some projects between the described and actual situation in the field. This called into question the appropriateness of the concerned projects. The evaluations examined whether investment of additional resources in formulation missions would be cost-effective. In the case of legislation projects, the conclusion was quite clear that project design deficiencies are usually corrected with the first consultancy mission and the present policy of TCP not to fund formulation missions was generally endorsed. 58 . Design of the apiculture/sericulture projects presented a more fundamental problem. While the Legal Office has staff resources to cover all technical areas and languages of FAO, this was not the case for apiculture and sericulture. For most of the period under review, there was one Headquarters officer in the Agro-Industries and Post-Harvest Management Service (AGSI) who looked after apiculture and sericulture projects. This raises priority and capacity questions as to the degree that FAO can execute activities in specialized areas where it does not have a sufficiently high level of technical support. Comments of the Programme Committee (Report of the 83rd Session, May 2000 8 ) i. The Committee considered that this report provided a useful synthesis of assessments by independent project evaluation missions and those by the Evaluation Service on selected TCP projects. ii. The Committee noted with concern that, despite the overall trend for improvement in the key aspects of field project performance, project design remained the weakest aspect. While noting some improvement in the percentage rated �good�, it expressed concern that weakness in project design had been persistently highlighted in similar evaluation syntheses over a number of years, and queried what corrective actions were being taken. The Committee recognized that the problem was complex because projects dealt with complex development issues over a wide range of sectors under differing conditions and because any corrective measures involved several units within the Organization. 
It nevertheless stressed the urgent need for concerted action in order to ensure the quality of project formulation, particularly in the context of changing procedures and arrangements for the Field Programme. iii. In this connection, the Committee endorsed the lines of action regarding updating the project formulation guidelines, further training of FAO staff and strengthening the project review and appraisal mechanisms. In particular, it requested that progress being made in implementing these recommendations be reported to the Committee at its session in May 2001. It also underlined the importance of a greater specificity in evaluation recommendations, which should be targeted in terms of the responsible unit concerned, nature and timing of suggested actions. More broadly, the Committee highlighted the particular importance of introducing a set of criteria based on the priorities under the Strategic Framework in planning and selecting projects in future. iv. Regarding the nature of future syntheses of project evaluations, the Committee agreed that these should focus on selected programmes and thematic topics in relation to the priorities of the Strategic Framework. v. On the synthesis of thematic evaluations of TCP projects, the Committee appreciated that these exercises were undertaken with the initiative of the TC Department in order to enhance its ability to manage the TCP Programme, as well as in the interests of greater transparency and accountability. It considered that the synthesis brought out the strengths and weaknesses in TCP projects dealing with apiculture and sericulture on the one hand and legislative support on the other, and that it also pointed to a set of useful issues and lessons. However, some members requested that future reporting on TCP project evaluations include more details, including assessments of project performance by regions. vi. The Committee generally endorsed the issues and related recommendations. In particular: it noted the key role played by the beneficiary governments in the implementation of TCP projects, and agreed that the governments concerned and the FAO Secretariat including respective country representatives should, as recommended, take more pro-active measures to ensure appropriate follow-up action; and it endorsed the suggestion that the selection and approval of TCP projects should take into account the Organization's capacity to provide adequate technical support as well as its overall priorities for particular sectors concerned. (Report of the 85th Session, May 2001 9 ) vii. The Committee welcomed this progress report in response to its earlier request. It took note of the measures aimed at improving the formulation and appraisal process, including the preparation of updated guidelines, a web-based formulation tool-kit training of staff in formulation techniques and strengthening existing mechanisms for project review and appraisal. viii. The Committee recognized the importance of a clear division of responsibilities over the various phases of the project cycle, and looked forward to the contribution of a new service within the Field Operations Division in monitoring the project cycle, preparing guidelines and procedures and ensuring quality of project documents. The reduced operational units that would remain in the Regional Offices, would also have a key role, including review and operational clearance of project documents, whereas the in-country appraisal by FAORs of all new projects would be made more rigorous. ix. 
The Committee agreed that it would need to return to this question at a future session once the appropriate mechanisms were in place and the reorganization of the TC Department completed. x. The Committee considered that improving the quality of project design was important in enhancing the Organization's competitiveness for declining technical cooperation resources as well as its role in providing support to member countries in meeting their agriculture development needs. 6. While the results for the Near East and Europe are considerably below those for other regions, they are based on a much lower sample of projects. 7. In addition to the two TCP thematic evaluations summarised here, the analysis draws also on findings from the 1997 evaluation of food quality control projects. 8. PC 83/REP, paras 35-40. 9. PC 85/REP, paras 48-51. | 计算机 |
Opteamix Leadership Team

Tony Hadzi
Chief Executive Officer and Founder
Tony brings with him extensive experience in the technical and executive leadership functions in the IT Services Industry since 1980. Prior to starting OPTEAMIX, Tony was the President of North American Operations and Global Delivery and Corporate Executive Vice President for CIBER Inc., (a $1.1 Billion Global IT Services and Consulting Company headquartered in Greenwood Village, Colorado). During his 11 year tenure at CIBER, Tony was key to re-engineering CIBER’s global delivery practices and go to market approach on a global basis.
Prior to CIBER, Tony was founder and CEO for Q DATA’s business in the USA (An $800 Million Global IT services, Consulting and Software Product Reseller Company in South Africa – Listed on the Johannesburg Stock Exchange – now Business Connexion). Tony was one of the initial members of the startup IT services company Data Trust, which later formed the Q DATA Ltd. Group of Companies. Tony served on the board of Q Data Consulting and two other Q DATA companies in Johannesburg, South Africa.
Tony started his IT career as a Systems Analyst after qualifying in Computer Sciences from the University of Witwatersrand in Johannesburg.
Raghurama Kote
Raghurama Kote (known as Kote) is a dynamic personality who is a firm believer in translating business vision into measurable objectives. Kote's keen sense of business acumen is an outcome of his leadership roles as President of CIBER India, Founder and CEO of Iteamic, and as a Senior Partner at Ivega Corporation. Prior to joining Opteamix, Raghurama Kote was the President of CIBER India, following the acquisition of his company Iteamic by NYSE-listed CIBER. At CIBER, he was part of a seven-member Executive Committee which constituted CXOs and three business unit leaders. As President and Member of the Board of CIBER India, he was responsible for the leadership, strategic direction and operational management of CIBER's Global Solutions Center in India with over 1350 people.
Kote is deeply passionate about providing basic needs of health, education and shelter for the underprivileged in India. He is the founder of "Right to Live" www.righttolive.org a social technology platform governed by transparency online wherein the less privileged would connect with financial donors, hospitals, doctors, government, charitable organizations and corporations.
Kote holds a Bachelor's degree in Engineering with specialization in Computer Science and Engineering, and has 20 years of experience in India and overseas in the IT Services Industry. Simplicity, integrity and the ability to carry his team along with him have helped Kote build strong client relationships, create a transparent work environment and mentor successful leaders. He brings with him a wide range of experience in leadership and global delivery. He is well known for his people management skills, beliefs and philosophy of treating each person as he would like to be treated.
Niyogi Krishnappa
Partner – Client Services
Niyogi is an energetic personality with 18 years of experience in Software Services and knows how to build an organization and demonstrate growth. Prior to Opteamix, Niyogi served as Senior Vice President – Client Services at CIBER following the acquisition of Iteamic Private Limited, a company he co-founded in 2003 and for which he served as a Senior Partner. At Iteamic, Niyogi was responsible for Software Delivery, Client Services and Business Development.
Under Niyogi's leadership Iteamic met and exceeded its milestones for people development, client satisfaction and financial growth. Iteamic received CMMI Level 3 and ISO 27001 certifications and recognition for its growth and success from two of India’s leading IT industry trade bodies – NASSCOM and STPI.
Niyogi is an expert in delivering value to clients using Global Delivery Model in software development and maintenance. He has been successful in servicing multinationals in North America, UK & Switzerland.
Shrinivas Ramanujan
Vice President - Delivery
Shrinivas Ramanujan (Rama) is responsible for end to end service delivery at Opteamix. Rama brings over 20 years of experience in program management, business process development, software development and client management.
Prior to Opteamix, Rama was a Delivery Director with CIBER (a $1.1 Billion Global IT Services and Consulting Company headquartered in Greenwood Village, Colorado). At CIBER he played a pivotal role in managing CIBER's US West Coast clients and in establishing the Agile methodology as part of Service Delivery. Prior to this Rama has worked with companies such as Wipro, Siemens, Hexaware Technologies and Aptech.
Rama is a Certified Scrum Master, Certified Scrum Practitioner and Agile Practice Lead. He holds a Bachelor's degree in Engineering from National Institute of Technology (Raipur, India) and a MBA from TAPMI – TA Pai Management Institute (Manipal, India).
Review: Mac OS X Leopard
The fifth major update to Mac OS X, Leopard, contains such a mountain of features - more than 300 by Apple's count - that it's difficult to boil this $129 operating system release down to a few easy bullet points. Leopard is, at once, a major alteration to the Mac interface, a sweeping update to numerous included productivity programs, a serious attempt to improve Mac OS security, and a vast collection of tweaks and fixes scattered throughout every nook and cranny of the operating system.

As with every OS X update since version 10.1, there's no single feature in Leopard that will force Mac users to upgrade immediately. Instead, it's the sheer deluge of new features that's likely to persuade most active Mac users to upgrade, especially since this is the longest gap between OS X upgrades - two and a half years - since the product was introduced. Sure, some items on Apple's list of 300 features might seem inconsequential, but if even a handful of them hit you where you live, that will be more than enough motivation for you to upgrade.

A New Look

Apple trumpets the interface changes in Leopard as "stunning" and "eye-opening," but in reality the changes are a mixed bag.

First, the good stuff: After years of experimenting with different looks for windows, sidebars, and other interface elements, Apple seems to have settled on a fairly consistent interface. The color scheme is largely monochromatic: shades of gray with slight gradients. Apple has improved the contrast between the frontmost window and the rest of them by increasing the top window's drop shadow and dramatically lightening the color of inactive windows. The Leopard Finder's new sidebar, clearly modeled after the iTunes Source List, is better organized and more usable than its Tiger counterpart. When it comes to folders containing lots of documents, Stacks is not as useful.

Unfortunately, some of the changes are not as successful. The Mac's trademark menu bar, which spans the top of the screen, has been made semi-transparent. When the desktop is set to display an image with both light and dark areas, the see-through menu bar is visually striking. Unfortunately, that aesthetic choice comes at too steep a price: the areas of light and dark behind the menu bar can severely decrease the readability of menu items.

Apple has modified the Dock, OS X's built-in program launcher, so that the Dock's icons appear to sit on a reflective glass tray when the Dock is positioned on the bottom of the screen. (Someone must've pointed out to Apple that the metaphor broke down when the Dock is placed on the sides of the screen; in those orientations, the Dock's background is a simple half-transparent gray.) A pleasant glowing light appears next to the icons of currently-running programs, although the light is a bit too subtle when the Dock is positioned at the bottom of the screen.

Unfortunately, the Dock's new Stacks feature is a mess, replacing a utilitarian approach to stashing folders in the Dock (click to open the folder, click and hold to see a list of the folder's contents) with a snazzy but generally less useful pop-up window featuring a stack or grid of icons. A potential feature touted during earlier demonstrations of Leopard - the ability to drag an arbitrary collection of items into the dock to make a temporary stack - apparently didn't make it to the final version.
2014-23/2157/en_head.json.gz/11827 | Book Review OpenGL Programming Guide (7th Edition)
by Martin Ecker
Martin Ecker (1665145) writes "Statistics
Title: OpenGL Programming Guide (Seventh Edition) – The Official Guide to Learning OpenGL, Versions 3.0 and 3.1
Authors: Dave Shreiner, The Khronos OpenGL ARB Working Group
Pages: 883
Rating: 8/10
Publisher: Addison-Wesley Professional http://www.informit.com/opengl
ISBN-10: 0-321-55262-8
ISBN-13: 978-0-321-55262-4
Price: $59.99 US
Summary: The Red Book remains the authoritative guide to OpenGL.
The Red Book, also known as the OpenGL Programming Guide published by Addison-Wesley Professional, returns in its seventh edition, now covering OpenGL up to and including version 3.1. The Red Book, so called because of its deep red cover, is the most well-known, authoritative introduction to the OpenGL graphics API. In this review I want to take you on a whirlwind tour through the pages of this book to see what it has to offer.
The Red Book is aimed at the beginning to intermediate graphics programmer who is not yet familiar with OpenGL. It assumes a basic background in computer graphics theory and working knowledge of the C programming language. Like the previous edition of the book, the seventh edition is incredibly comprehensive and thorough. It contains explanations of pretty much every feature OpenGL has to offer, even the more obscure and rarely used ones. This is good in the sense that it's a fairly complete book, but it can also be somewhat overwhelming for a beginner when confronted with a book that weighs in at almost 900 pages. However, the good news is that the material is presented in a logical progression and even a novice will get up to speed with the basics of OpenGL after reading only the first few chapters of the book. Some of the early chapters in the book contain a few more advanced sections mostly explaining new features that got introduced with OpenGL 3.1. These sections are conveniently marked as advanced and can probably be skipped on a first read-through of the material.
The first chapter gives a brief introduction to the basic concepts of OpenGL and describes the rendering pipeline model used in the API. GLUT, a cross-platform library that allows easily creating OpenGL applications, is also briefly discussed together with a program that shows GLUT in action. The following chapters proceed to explain the basic geometric primitives, such as lines and polygons, supported by OpenGL and how to render them in different positions and from different viewpoints using the various OpenGL matrix stacks. Also the basics of using colors, lighting, framebuffer blending, and fog are discussed.
Chapter seven contains a description of display lists, a unique feature of OpenGL - deprecated as of OpenGL 3.1 - that allows OpenGL API calls to be stored for efficient multiple uses later on in a program. Chapter eight then moves on to discuss what an image is for OpenGL, and most notably covers pixel buffer objects, a somewhat recent addition to OpenGL. The discussion of images in chapter eight brings us straight to chapter nine on texture mapping, one of the largest and arguably most important chapters in the book. Everything you need to know about textures is discussed, from specifying texture images in uncompressed and compressed form to applying textures to triangles using the various kinds of supported texture filters.
Also depth textures and their application in the form of shadow maps and – new in the seventh edition – floating-point textures and texture arrays added in OpenGL 3.0 are presented.
In chapter ten the authors discuss the buffers that make up the framebuffer, such as the color buffer, depth buffer, and stencil buffer. This chapter summarizes some of the things already presented in the earlier chapters and then describes the various framebuffer operations in more detail. Chapters eleven and twelve are on the tools provided by GLU, the GL utility library, in particular tessellators, quadrics, evaluators, and NURBS. GLU is nowadays rarely ever used in production code, so these chapters mostly demonstrate just how complete the Red Book is in its coverage of OpenGL. This also applies to chapter thirteen on selection and feedback, which are rarely used features, mostly because of the lack of hardware acceleration in today's GPUs.
Finally, chapter fourteen is a collection of topics that didn't fit into the other chapters, such as error handling and the OpenGL extension mechanism. Additionally, this chapter presents various higher level techniques and tricks, for example how to implement a simple fade effect, how to render antialiased text, and some examples of using the stencil buffer. The final chapter of the book is a discussion of the OpenGL Shading Language (GLSL, for short). In the seventh edition this chapter has been updated to versions 1.30 and 1.40 of GLSL, as required by OpenGL 3.0 and 3.1, respectively. Even though the OpenGL API functions required to use GLSL are presented, this is only a rough overview of how programmable shaders are used in OpenGL. For a more detailed description of GLSL the reader is referred to the book "OpenGL Shading Language, Third Edition", also called the Orange Book.
The book closes with quite a few appendices on the order of operations in the OpenGL rendering pipeline, the state variables that can be queried, the interaction of OpenGL with the operating system-specific windowing systems, a brief discussion of homogeneous coordinates as used in OpenGL, and some programming tips. Also a reference of the built-in GLSL variables and functions is included.
The book contains a large number of images and diagrams, all of them in black and white except for 32 color plates in the middle of the book. The illustrations are of high quality and generally help make the explained concepts and techniques easier to understand. Most of the color plates depict spheres, teapots, and other simple geometric objects, so they aren't overly eye-catching but do serve their purpose of showing what can be achieved with OpenGL.
With OpenGL 3.1 deprecating many older API features of OpenGL in favor of more modern alternatives, the seventh edition of the Red Book seems to have a bit of a split personality at times. If you're only interested in functionality not deprecated in 3.1 you can skip entire chapters, such as the chapter on display lists or fixed-function lighting. Of course, the knowledge of matrix stacks and how to use transformations is still relevant, but the corresponding OpenGL functions have been deprecated in favor of doing all the transformation math in the vertex shader or, what most people have been doing anyway, using your own matrix structures/classes on the CPU. The situation is similar for many of the other deprecated features (such as fixed-function lighting, color index mode, immediate mode, ...) that are still described in the book.
I think the time is right to combine the Red Book with the Orange Book, removing any discussion of deprecated features, to have a book that focuses solely on the modern approach to graphics programming, which is mostly based on shaders. I can only hope such an OpenGL 3.1-only focused book will see the light of day soon.
All in all, the Red Book remains the definitive guide to OpenGL. Apart from being a good introduction, it also contains many interesting tips and tricks that make the experienced OpenGL programmer come back to it often. If you've read through the Red Book and the Orange Book in their entirety you pretty much know everything there is to know about OpenGL.
About the review author:
The author has been involved in real-time graphics programming for more than 10 years and works as a professional game developer for High Moon Studios http://www.highmoonstudios.com/ in sunny California."
2014-23/2157/en_head.json.gz/13033
Walking around the historical center of Chicago, or what locals call "the Loop," you'll find your gaze drawn inexorably upward. All around you are some of the most striking and diverse architectural styles in America. Just as your eye catches an elaborate Art Deco detail in one direction, you'll spot an ultra-modern tower in the other. Old mingles with new in what amounts to a living museum of buildings from the second half of the 1800's to the present.
Chicago doesn't have mountains like Denver or beaches like Miami. It has buildings. Sure, as the nation's third largest city, Chicago serves up world-class attractions like museums, an active arts scene and championship sports teams -- as well as homegrown favorites like deep-dish pizza, Oprah and the Blues Brothers. But it's in architecture that the Windy City really shines.
This is in fact the birthplace of the modern building. The world's first skyscraper, the Home Insurance Building, was built here in 1885, and while that building was taken down in the 1930's, Chicago is today home to three of the tallest buildings in the world: the Willis Tower (formerly known as the Sears Tower, it stands at 1,450 feet), AON Center (1,136 feet) and John Hancock building (1,127 feet). Head up to the Willis or Hancock observation decks and you'll look down over a city shaped by architectural innovators like Daniel Burnham, Louis Sullivan, Mies van der Rohe and Frank Lloyd Wright.
Chicago did not start off on such a grand scale. The city was founded at the mouth of the Chicago River in 1779 by Jean Baptiste Point du Sable, a fur trader believed to be from Haiti. But by 1848, with the completion of the 100-mile Illinois and Michigan Canal con | 计算机 |
2014-23/2157/en_head.json.gz/13264 | Basic Structure of a Digital Computer [Part One]
Introduction
This Essay is meant as a preparation for the next Essay, which is about the ontological status of ARTIFICIAL LIFE (a-life), mainly that brand of a-life that is computer-generated. For an assessment of the ontological status of the creations of a-life it is paramount to know something about the general and basic structure of a computer, in our case a digital computer. Only after possessing some insights into the structure of these interesting machines can we rationally speak about the role and nature of the substrate of artificial-life creatures in order to assess the reality-status of those creatures.
For this purpose we do not need a description of all the details and design features of modern computers. Just a general lay-out of the very basics is necessary. I will devote much attention to the general workings of Boolean logic (hardware) circuits (but only a very simple example of them will be treated of here [Part Two of this Essay]). Thereby it is important to realize that the computer-hardware is a physical device that obeys the laws of physics. Further one must realize that a computer can simulate phenomena of the outside world, and although these simulations are not the same as the phenomena which are being simulated, those simulations are something in their own right. What they are (in their own right) depends on their general and detailed structure and/or behavior. In the next Essay (about artificial life) these subjects will be addressed fully. In Part One of this Essay I will treat of the general CONCEPT of a digital computer, in terms of Turing machines, a concept already worked out in the 1930s.
There are two main types of computers, analog computers and digital computers. Digital computers are discrete machines having access to a finite number of internal states only, while analog machines have access, in principle, to an infinite number of internal states and could therefore be expected to outperform digital machines. Examples of this enhanced ability of analog computers would include solving halting-problems (i.e. being able, in principle, to determine in advance whether any program, fed into the computer, will or will not yield a definite result in a finite number of steps), and generating non-computable numbers. For a digital computer one cannot write a test-program that could determine for any other program, to be run on such a computer, whether that program (to be tested) will, when run, give a definite result after a finite number of steps. Also a digital computer cannot compute certain numbers (so-called non-computable numbers).
Further we have serial machines and parallel machines. While a serial machine can only perform one computational step after another, a parallel machine can execute more than one such step simultaneously. Parallel machines accordingly consist of more than one processor, which operate in harmony. Many processes in Nature are in fact proceeding in a parallel fashion, and can therefore adequately be simulated by such machines. But it is possible to simulate such a parallel machine on a serial machine.
In our discussions and explanations we will confine ourselves mainly to DIGITAL SERIAL COMPUTING MACHINES, but the principle (concept) expounded covers parallel machines as well.
The Digital (serial) Computer
The main components of a digital computer are :
Input devices (keyboard, mouse, etc).
Memory board.
Central Processing Unit (processor).
Output devices (video terminal, printer, etc.).
Besides these main components we find slow secondary storage devices, such as floppy disks and hard disks. These can contain data and programs that can be used as input. They also can receive output. Further there may be one or more control units that check and regulate information-flow (information-traffic).
Figure 1. Von Neumann's computer architecture -- the layout of a typical serial machine. Except with respect to input and output devices, the information-flow is in a back-and-forth fashion.
To program such a computer, in order that it will solve a certain problem or generate some desired result, the programmer first writes an algorithm, which is a solution of the problem in the form of a sequence of steps, written in ordinary language. This algorithm is then coded into a suitable programming language that will enable the computer to 'understand' and successfully execute the corresponding instructions. Usually, this involves one of the 'higher' programming languages, so called because they are reasonably close to human language. But because the Central Processing Unit (CPU) is composed of a set of Boolean logic circuits, it is capable only of performing elementary arithmetic operations such as addition, subtraction, multiplication, and so on. Thus the original programming language fed into the computer must first be converted by means of an interpreter (which translates one line of code and executes it, then translates the next line, etc.) or a compiler (which translates the whole program and then executes it) into machine-readable assembly language (instructions coded in this assembly language are then directly translated into machine-code, which consists of the electronic equivalents of 0's and 1's). Only then is the machine able to execute the fed-in instructions.
The input is coded up in memory, which is a grid of electronic on-off switches. The processor, which is a chip of integrated circuitry, alters what is in the memory, resulting in a different on-off pattern of switches, and then the output decodes and displays the new contents of the memory. So the actual computation consists of the processor's activities on the memory. Accordingly the processor and the memory stand in mutual contact with each other.
Turing Machines
Now exactly what is it that the processor has done? [See among other sources, RUCKER, Mind Tools, 1988]
The processor is able to read the symbol stored in whatever location it is looking at. That is, it can tell if the switch is set to ON or OFF.
The processor is able to change the contents of the memory location it is currently scanning. That is, it can change the position of the switch it is observing.
The processor is able to move its attention to a new memory location. The processor is able to change its internal state. That is, it can change its predilections about what to do next.
In fact this is a concept of what a computer does, outlined by TURING in 1936. A machine, stripped to the bare bones of this concept, is nowadays called a Turing Machine.
A Turing machine [See, among others, PETERSON, 1988, The Mathematical Tourist, pp. 194 ] consists of a Head that can scan a Tape, and that can be in one out of a finite number of internal states. Such a Tape, which can be interpreted as memory, consists of a linear arrangement of cells. Each cell can itself be in one of a finite number of cell-states. Because we are aiming at electronic computers which have a memory board consisting of on-off switches, we consider cells which can be in only one out of two states, ON or OFF, which can be represented by the cell being BLACK or WHITE respectively. The Head of the Turing Machine can READ the cell (state) on which it is currently placed (we can conceive of the Head being moved along the Tape, or the Tape being moved over the Head), i.e. it can determine if that cell is BLACK or WHITE. If the cell is BLACK it can erase the black, or leave it BLACK. If the cell is WHITE it can leave it like that or make it a BLACK cell. After this the Head can move one cell to the left or one cell to the right. When this is accomplished the machine can either stay in the same internal state, or change its state into another. After it has performed a certain number of such tasks, the machine will turn itself off.
An action table stipulates what a Turing Machine will do for each possible and relevant combination of cell-state (BLACK or WHITE) and internal state [ In a real (i.e. physical) computer this table is in fact one or another Boolean function, fed into the machine as a code, and will then be physically represented by an electrical circuit that gives a certain output depending on its input (for example numbers -- ultimately in the form of an on-off pattern of memory elements)]. The first part of the instruction specifies what the machine should write, if anything, depending on which cell-state (BLACK or WHITE) it encounters. The second part specifies whether the machine is to shift one cell to the left or to the right along the tape. The third part determines whether the machine stays in the same internal state or shifts to another state, which usually has a different set of instructions. Suppose [PETERSON, pp. 195] a Turing Machine must add two integers (this accordingly is a special Turing Machine -- out of many possible -- that specializes in executing this particular task, the adding of any two whole numbers). We can represent such a whole number by a consecutive series of BLACK cells, for example the number 3 can be represented by three consecutive BLACK cells on the tape, and the number 4 can be represented by four such cells.
If we now want to ADD these two numbers, we write them both on the tape, with a WHITE cell in between. When we think of the cell-states as being either OFF (= WHITE) or ON (= BLACK), then our INPUT will be written on the tape as follows :
...00011101111000... In order to ADD these numbers the machine fills in the blank cell, giving :
...00011111111000... ,
and then goes to the end of the string (of 1's) and erases the last 1 in the row, which results in the correct answer, namely a consecutive series of seven 1's :
...00011111110000... An action table is needed to instruct the machine how to perform this addition (The foregoing was just an algorithm, a recipe, that still must be translated into an executable program). The table's first column gives the machine's possible internal states, and the first row lists all the cell-states being used (in our case the cell-states BLACK and WHITE).
Cell-state encountered: BLACK or WHITE
(Internal) State 0: if BLACK, move right, get in state 0; if WHITE, put BLACK, move right, get in state 1.
(Internal) State 1: if BLACK, move right, get in state 1; if WHITE, move left, get in state 2.
(Internal) State 2: if BLACK, erase, stop.
Each combination of (internal) State and Cell-state specifies what, if anything, needs to be done to a cell, in which direction to move after the action, and the (internal) state of the machine, that is, which set of instructions it will follow for its next move.
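For readers who like to think of such a table as a data structure, the same action table can be pictured as a simple lookup from a (internal state, cell-state) pair to an action. The short sketch below is written in Python purely for illustration; the tuple layout (symbol to write, head movement, next state) and the use of "X" and "B" for BLACK and WHITE are choices made here, not something prescribed by the essay.

```python
# The action table of the adding machine, written as a Python dictionary.
# Key:   (internal state, symbol read)   where "X" = BLACK cell, "B" = WHITE cell
# Value: (symbol to write, head move, next state)
#        head move is "R" (right), "L" (left) or None (do not move);
#        next state None means "stop".
ACTION_TABLE = {
    (0, "X"): ("X", "R", 0),      # State 0, BLACK: move right, stay in state 0
    (0, "B"): ("X", "R", 1),      # State 0, WHITE: put BLACK, move right, go to state 1
    (1, "X"): ("X", "R", 1),      # State 1, BLACK: move right, stay in state 1
    (1, "B"): ("B", "L", 2),      # State 1, WHITE: move left, go to state 2
    (2, "X"): ("B", None, None),  # State 2, BLACK: erase (make WHITE) and stop
}
```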
The above action-table can now be written down in a more compact way by coding the details as follows : BLACK = X, WHITE = B, move right = R, move left = L, for example 1XR2 means :
IF the internal state of the machine is 1, and the cell encountered is BLACK, THEN the Head of the machine must move to the next cell on the right, and the machine must enter internal state 2. Two other examples of instructions are, say, 2BX2 or 2BL3; they signify the following :
2BX2 : IF the internal state is 2, and the encountered cell is WHITE, THEN the Head must make that cell BLACK, remain in state 2.
2BL3 : IF the internal state is 2, and the encountered cell is WHITE, THEN the machine must move to the next cell to the left, and go into (internal) state 3.
So we can now write down the above action-table (i.e. the program) more compact :
(1) 0XR0
(2) 1XR1
(3) 2XB stop
(4) 0BXR1
(5) 1BL2
Each one of these five instructions implies, among other things,
a (new) internal state and a final settling of the machine on some cell which is BLACK (= X) or WHITE (= B)
If this (new) internal state is, say, 1, and the machine has settled on a, say, BLACK (= X) cell, then, in the action-table, an instruction must be looked up that begins with 1X. This is instruction (2) of the action-table : 1XR1. If no such entry is to be found in the action-table, or if there are more than one such entry, then (it is stipulated that) the machine will turn itself off, (it is so stipulated) because no definite course could then be followed. The input pattern (in our example) is BLACK, BLACK, BLACK, WHITE, BLACK, BLACK, BLACK, BLACK, or, equivalently, X X X B X X X X. With the above action-table we can now compute the sum, i.e. 3 + 4 = 7. To begin with, the machine is set in state 0 and is placed at the BLACK (= X) cell farthest to the left. After execution of (1) [ See the above table ] it finds itself in state 0 and has moved one cell to the right, this new cell is again BLACK (= X). So it must execute (1) once more, resulting in having shifted one more cell to the right which is also BLACK (= X). Again it must execute (1), and this results in encountering the WHITE (= B) cell and still being in internal state 0. So next it must execute (4), which means it must make that cell BLACK, move one cell to the right and go into internal state 1. By doing so it encounters a BLACK cell, so it must execute (2), resulting in moving one cell to the right and remaining in state 1. It thereby finds again a BLACK cell, so it must again execute (2), resulting in finding another BLACK cell to the right. Again it must execute (2), resulting in encountering the last BLACK cell to the right. Once again it must execute (2), now resulting in finding a WHITE cell to the right. This implies that the machine must now execute (5). According to that instruction it must go one cell to the left and go into internal state 2. There it finds a BLACK cell. This implies that it must now execute (3), which means that the Head must make the encountered cell WHITE, and then turn itself off.
The result is a string of seven consecutive BLACK cells : X X X X X X X. With this the calculation is completed.
The figure below pictures the computational steps of this calculation :
Figure 2. At each step, a Turing Machine may move one space to the left or right. By following a simple set of rules, this particular machine can add two whole numbers. The strategy of this particular machine is : Starting with separate groups of -- as in the example -- three and four BLACK cells, and ending up with one group of -- as in the example -- seven BLACK cells.
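To see the whole procedure carried out mechanically, the compact five-instruction program above can be fed to a small software simulator. The sketch below is a self-contained Python illustration, not part of the original essay: the way the instruction strings are parsed, the list used for the tape, and the step limit are all assumptions made for this example.

```python
def parse(instr):
    """Split an instruction such as '0BXR1' or '2XB stop' into
    (condition, action): condition = (state, symbol read),
    action = (symbol to write or None, move or None, next state or 'stop')."""
    state, read, rest = int(instr[0]), instr[1], instr[2:].strip()
    write = move = None
    if rest and rest[0] in "XB":          # optional symbol to write
        write, rest = rest[0], rest[1:]
    if rest and rest[0] in "RL":          # optional head movement
        move, rest = rest[0], rest[1:]
    nxt = "stop" if rest.strip() == "stop" else int(rest)
    return (state, read), (write, move, nxt)

def run(program, tape, state=0, head=None, max_steps=1000):
    """Run a Turing machine program on a tape given as a list of 'X'/'B' cells.
    The head starts on the left-most BLACK cell unless told otherwise."""
    rules = dict(parse(i) for i in program)
    if head is None:
        head = tape.index("X")
    for _ in range(max_steps):
        # extend the (potentially infinite) tape with WHITE cells as needed
        if head < 0:
            tape.insert(0, "B"); head = 0
        if head >= len(tape):
            tape.append("B")
        key = (state, tape[head])
        if key not in rules:              # no entry: the machine turns itself off
            return tape
        write, move, nxt = rules[key]
        if write is not None:
            tape[head] = write
        if move == "R":
            head += 1
        elif move == "L":
            head -= 1
        if nxt == "stop":
            return tape
        state = nxt
    raise RuntimeError("no halt within max_steps")

program = ["0XR0", "1XR1", "2XB stop", "0BXR1", "1BL2"]
tape = list("BBBXXXBXXXXBBB")             # the numbers 3 and 4, as in the example
print("".join(run(program, tape)))        # prints BBBXXXXXXXBBBB
```

Running it prints the tape BBBXXXXXXXBBBB, i.e. one unbroken row of seven BLACK cells, in agreement with the hand trace above.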
The same action-table can generate the sum of any two whole numbers, no matter what their size, as long as it is finite. But adding two numbers such as 49985 and 51664, by itself, would require a tape with at least 100000 cells. To be capable of adding any two numbers, the tape would have to be infinitely long, which does however not mean an actually infinite tape, but only a potentially infinite tape, which means that whatever the length of the tape already is, we can always add some tape if necessary.
Similar tables can be worked out for subtraction and for practically any other mathematical operation. The sole condition is that the number of internal states of the machine, and the number of different cell-states listed in the action-table, is finite, which ensures that a routine, mechanical process can do the job.
For every calculation a digital computer can perform there is a corresponding Turing machine which can do that same calculation. Let us consider some more of such Turing Machines. In expounding them we will use the above notation :
The natural numbers 0, 1, 2, 3, ... will be used to describe the internal state of the machine. The total of possible internal states the machine can be in must always be finite. The size of the number of such possible internal states is an indication of the complexity of the machine, i.e. the complexity of the possible behavior of the machine (analogous to the complexity of a digital computer's processor design). Here we will denote the state 0 as a STOP sign: if and when the machine enters state 0, it will stop moving and turn itself off.
A BLACK cell will be denoted by X, a WHITE cell by B. Move (the Head of the machine) to the next cell on the right will be denoted by R. Move (the Head of the machine) to the next cell on the left will be denoted by L.
The input is one or more BLACK cells. At the start (i.e. when the machine is switched on) the machine is placed at the left-most BLACK cell, and enters (internal) state 1.
Further we will describe the instructions using only strings of four of the above defined symbols, so a typical instruction could read :
2XB3, which means : IF the machine is currently in state 2 and reads a BLACK cell, THEN it must erase this black, i.e. it must make the cell WHITE, and must enter state 3.
Or (a typical instruction) could read :
1BL1, which means : IF the machine is currently in state 1, and reads a WHITE cell, THEN it must move one cell to the left, and remain in state 1. So the first two symbols together constitute a condition to be satisfied, and the last two symbols together constitute an action, that must be taken if and when that condition is indeed satisfied. If and when the machine reaches an instruction that tells it to enter state 0, it turns itself off. Not every machine (i.e. not every program or action-table) reaches an instruction that leads to 0. Some machines go into various sorts of endless behavior loops, and so do not yield a definite result. It is not at all unusual for a Turing machine to run forever. TURING's theorem says that there is NO general method that could determine in advance whether a particular machine (i.e. a particular program for such a machine) will run forever -- and consequently not yielding any definite result in a finite amount of time -- or that it will calculate a result and turn itself off. Here is an example of a Turing machine's action table that results into a loop and never gives any output at all :
1XL2
2BR1
At the start the machine enters state 1 and reads the left-most BLACK cell. So it must execute the first instruction, which means it must go one cell to the left and enter state 2. There it reads a WHITE cell, so it must execute the second instruction, which means it must go one cell to the right and enter state 1. But then it is back on the BLACK cell, in state 1, so the first instruction applies again, and so on. The machine keeps shuttling back and forth between these two cells forever: it never reaches a stop instruction, never turns itself off, and never yields any output.
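The endless behavior can be made concrete by simulating this two-instruction program with an artificial cap on the number of steps. Again this is an illustrative Python sketch with invented details (the cap of ten steps is arbitrary); note that cutting the simulation off after some number of steps decides nothing in general, which is exactly the content of TURING's theorem mentioned above.

```python
# The looping program: 1XL2 (state 1, BLACK cell: move left,  enter state 2)
#                      2BR1 (state 2, WHITE cell: move right, enter state 1)
RULES = {(1, "X"): ("L", 2),
         (2, "B"): ("R", 1)}

tape = ["B", "X"]         # one BLACK input cell with a WHITE cell to its left
head, state = 1, 1        # start on the left-most BLACK cell, in state 1
for step in range(10):    # arbitrary cap; left to itself the machine never halts
    move, state = RULES[(state, tape[head])]
    head += 1 if move == "R" else -1
    print("step", step + 1, "-> state", state, ", head at cell", head)
print("... and so on forever; no instruction ever sends the machine to a stop.")
```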
2014-23/2157/en_head.json.gz/13540 | / Disclaimer Using data from this database Information presented on this website is considered public information unless otherwise noted and may be downloaded and used. However, rather than copying and distributing information found here, we encourage users to access the database directly, since this is the original data source in MOST cases. We cannot guarantee that repackaged data are unmodified. Use of appropriate credit is requested; citations should reference the date of access. We strongly encourage when using others' data to notify them - they may have helpful information regarding the dataset - and to offer co-authorship if appropriate. While we make every effort to provide accurate and complete information, data may be updated, and we welcome suggestions on how to improve our pages and to correct errors. Most of the content in this database is information created and maintained by other state, federal, and private organizations. USGS does not control and cannot guarantee the relevance, timeliness, or accuracy of these data. We have made every effort to store descriptions of survey protocols and details on data collection to help users make the most informed decisions when using data. Always use data with awareness of the intent of original data collection as specified in the Study Design description. This database is maintained on a USGS server. For site security purposes and to ensure that this service remains available to all users, this government computer system employs software programs to monitor and manage security. Unauthorized attempts to upload information or change information on this website are strictly prohibited and may be punishable under the Computer Fraud and Abuse Act of 1986 and the National Information Infrastructure Protection Act. Information may also be used for authorized law enforcement investigations. Last modified July 2005. Bird Point Count Database, version 2.0. http://www.pwrc.usgs.gov/Point U.S. Department of the Interior, U.S. Geological Survey Patuxent Wildlife Research Center | | 计算机 |
2014-23/2157/en_head.json.gz/15478 | Tips for navigating...
Tips for navigating WYDOT's new website
WYDOT unveiled a new version of its Web site April 30, allowing the department to provide more information to the public in an easier to use format.
For those familiar with the previous format, all the pages previously contained in the left-hand navigation are now available as links under the tabs across the top of the page.
If you are more comfortable with the old style of navigation, click on the three bars next to “Navigate” in the upper right-hand corner of the site and it will open a list of pages as it appeared in the left-hand navigation on the previous site.
Clicking on the site map icon immediately to the right of the “Navigate” bars will open a list of all pages on the site.
Once you're on one of the pages inside the site, all pages available under that page appear in the list under the yellow "Navigate" bar on the right side of the page.
2014-23/2157/en_head.json.gz/15525 | This issue in pdf
ERCIM News No. 57, April 2004
R&D and TECHNOLOGY TRANSFER
MarineXML: Towards Global Standards for Marine Data Interoperability
by Brian Matthews
In a partnership with international agencies, the European 'Marine XML' project will demonstrate that XML can be used to support marine observation systems. Our management of the marine environment and marine risks is restricted by the lack of interoperability between the huge diversity of data formats, proprietary data management systems, numerical models, and visualisation tools. Different studies, instruments, programs, and data centres collect, process, analyse and archive data on the marine environment in such different ways that exchanging and comparing information between them to build a unified picture of the world's seas and oceans becomes a difficult task. Consequently, opportunities to better monitor and manage the marine environment are missed.
Exchanging marine information through an XML layer.
The aim of EU Marine XML project is to demonstrate that the Extensible Mark-up Language (XML) technology from the World-Wide Web Consortium (W3C) can be used to improve data interoperability for the marine community, and specifically in support of marine observing systems, whilst not rendering investment in existing systems obsolete. MarineXML is a partnership with international agencies, such as the International Oceanographic Commission (ICES-IOC) and the Global Ocean Observing System (EuroGOOS), and government departments and organisations responsible for data standards clustered around the North Sea (UK, Belgium, the Netherlands, Germany and Norway). Their participation will ensure that the research meets the needs of key stakeholders with interests in global ocean observing systems.
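As a purely hypothetical illustration of what exchanging marine information through an XML layer can look like in practice, the short Python sketch below wraps a single observation in XML and reads it back with a standard parser. The element and attribute names are invented for this example and are not taken from the MarineXML project or from any draft MML specification.

```python
import xml.etree.ElementTree as ET

# Build a tiny, invented "observation" record. Real marine data standards are far
# richer; the point is only that one common markup lets different systems exchange
# the same measurement without sharing a proprietary file format.
obs = ET.Element("observation", id="NS-2004-0412")
ET.SubElement(obs, "platform").text = "North Sea buoy 42"
ET.SubElement(obs, "parameter", name="sea_surface_temperature", units="degC").text = "9.7"
ET.SubElement(obs, "position", lat="54.12", lon="3.45")
ET.SubElement(obs, "time").text = "2004-04-12T06:00:00Z"

xml_text = ET.tostring(obs, encoding="unicode")
print(xml_text)

# Any other programme can now parse the same record back with a standard XML parser.
parsed = ET.fromstring(xml_text)
print(parsed.find("parameter").get("name"), parsed.find("parameter").text)
```

Because both sides only need a standard XML parser, the producing and consuming systems can keep their own internal data formats.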
The objectives of the project are:
to produce a prototype marine data ontology framework for interoperability
to produce working demonstrations of the data interoperability framework
to develop a prototype 'Marine Markup Language' (MML)
to advance the standardisation of a Marine Mark-up Language.
The project is demonstrating that XML technology can be used to develop a framework that improves the interoperability of data for the marine community. The MarineXML Project will not result in the creation of a full MML specification but the project is addressing the underlying framework issues of interoperability between existing and emerging standards. It will provide a technical basis for the development of a full specification, and look to standardisation post-project through the IOC/ICES working group on XML for marine applications.
Please contact: Brian Matthews, CCLRC E-mail: b.m.matthews@rl.ac.uk
2014-23/2157/en_head.json.gz/15808 | UltraDNS Named 2006 Codie Award Finalist
The UltraDNS DNS Shield(TM) Selected for "Best Enterprise Security Solution" Category
BRISBANE, Calif.--Jan. 25, 2006--UltraDNS(TM) Corporation, the world's ...
BRISBANE, Calif.--Jan. 25, 2006--UltraDNS(TM) Corporation, the world's leading provider of Managed DNS services today announced its revolutionary DNS Shield has been selected by the Software & Information Industry Association (SIIA) as a finalist for the 21st Annual Codie Awards in the Best Enterprise Security Solution category.
UltraDNS recently launched the DNS Shield in partnership with major Internet Service Providers (ISPs) including AOL and Earthlink, to provide the highest level of reliability and security to millions of Internet users. The DNS Shield is deployed within the core of the network of each participating partner, creating a hardened, secure, and robust Internet infrastructure for the millions of enterprise domains that UltraDNS powers. The finalists in this category provide the best overall security solutions for enterprises or large networks.
"UltraDNS is proud to have been selected as a finalist for this prestigious award. This recognition from the SIIA is not only a significant achievement for UltraDNS, but is also an acknowledgment of the increasingly important role DNS plays in supporting both ecommerce and business operations," said Ben Petro, president and CEO, UltraDNS. "The DNS Shield was created out of the need for a superior approach to the security of this critical component of the Internet."
Established in 1986, the Codie Awards remains the standard-bearer for celebrating outstanding achievement and vision in the software, digital content and education technology industries. UltraDNS was chosen as one of the finalists from more than 1,026 nominations submitted by over 500 companies in all categories which exceeded the 2005 record of participating companies.
"The 21st Annual Codie Awards continue the tradition of honoring the best of the software, information and education technology industries," said Ken Wasch, SIIA President. "When one considers the number of outstanding companies that competed this year, being named a Codie Awards Finalist is a significant achievement for UltraDNS."
The winners of the 2006 Codie Awards will be presented at a gala on May 16, 2006.
About UltraDNS
UltraDNS Corporation is the world's leading Managed DNS Service Provider, delivering superior security, reliability and performance to organizations that rely on DNS for their critical business processes, applications and services. With the growth in e-commerce and the emergence of advanced DNS-based communication and supply chain management services, organizations can no longer rely on traditional approaches to DNS. UltraDNS provides a range of global and local DNS solutions -- both managed services and custom infrastructure -- built on its unique Directory Services Platform and proprietary, patented technologies. Through its thousands of enterprise, service provider and TLD infrastructure customers, UltraDNS powers the resolution of over 15 million domains around the globe. UltraDNS has offices in California, Virginia, Arizona, Chicago and the UK. For more information please visit www.ultradns.com . Article Tools | 计算机 |
2014-23/2157/en_head.json.gz/15851
Eugene Kaspersky
Chief Executive Officer and Chairman
Eugene began his career in cybersecurity accidentally when his computer became infected with the ‘Cascade’ virus in 1989. Eugene’s specialized education in cryptography helped him analyze the encrypted virus, understand its behavior, and develop a removal tool for it. After successfully removing the virus, Eugene’s curiosity and passion for computer technology drove him to start analyzing more malicious programs and developing disinfection modules for them. This exotic collection of antivirus modules would eventually become the foundation for Kaspersky Lab’s antivirus database. Today the database is one of the most comprehensive and complete collections in cybersecurity, protecting systems from more than 100 million malicious programs.
In 1990 Eugene started gathering a team of like-minded enthusiast researchers to create the AVP Toolkit Pro antivirus program, which was recognized by the University of Hamburg in 1994 as the most effective antivirus software in the world.
Eugene and his colleagues then decided to establish their own independent company. In 1997 Kaspersky Lab was founded, with Eugene heading the company’s antivirus research. In 2007 he was named Kaspersky Lab’s CEO.
Today Kaspersky Lab is the world’s largest privately-held vendor of endpoint protection, operating in more than 200 countries and territories worldwide. The company employs approximately 3,000 professionals and IT security specialists in dedicated regional offices across 30 countries and its cybersecurity technologies protect over 300 million users worldwide.
Eugene has earned a number of international awards for his technological, scientific and entrepreneurial achievements. He was voted the World’s Most Powerful Security Exec by SYS-CON Media in 2011, awarded an Honorary Doctorate of Science from Plymouth University in 2012, and named one of Foreign Policy Magazine’s 2012 Top Global Thinkers for his contribution to IT security awareness on a global scale.
Garry Kondakov
Chief Business Officer
Garry was appointed to the position of Chief Business Officer of Kaspersky Lab in January 2014. In his current role, Garry has responsibility for the company's marketing, business development, sales and customer service functions on all territories.
Earlier in 2011 he was appointed Kaspersky Lab’s Vice-President, Emerging Markets, overseeing the company’s operations in Latin America, Eastern Europe, the Middle East and Africa. From January 2008 until March 2011, Garry held the position of Managing Director of Kaspersky Lab Eastern Europe, Middle East and Africa (EEMEA). Prior to that, he was Kaspersky Lab’s Managing Director for Russia, the CIS and the Baltic States.
Garry graduated from Moscow State University with a degree in physics. He also completed a post-graduate course at the University’s Faculty of Computational Mathematics and Cybernetics and has an MSc in Management from the London Business School (LBS).
Andrey Tikhonov
Andrey was appointed Kaspersky Lab’s Chief Operating Officer in January 2012. In his position he is responsible for global administrative functions – finance, HR, IT, and work processes.
Prior to his current role Andrey held a number of senior management positions at Kaspersky Lab. In March 2009 he was appointed Chief Information Officer after five years as the company’s Technical Director. Before that, he was head of the Novell development department from 2002 following a successful period as a project manager.
Andrey has been working in the IT industry since 1989, when he started his career in a research institute of Russian Ministry of Defense, rising to the rank of lieutenant-colonel. Andrey graduated with distinction from a military academy in Kiev.
Nikita Shvetsov
Acting Chief Technology Officer
Nikita was appointed acting Chief Technology Officer at Kaspersky Lab in April 2014. In this role he is responsible for devising the overall technology and product strategy for the company, driving all product and services development as well as engineering and research activities, and overseeing the whole Kaspersky R&D organization as a main source of company’s expertise and intellectual potential.
Nikita joined Kaspersky Lab in 2004 as a virus analyst and in 2006 became a senior developer working on emulators for the heuristic engine. In 2009 Nikita was appointed Director of the Anti-Malware Research Unit, taking charge of developing and implementing strategies to protect users from malware, as well as coordinating work with independent test labs. Prior to his move to the acting CTO role, Nikita served as Vice President of Threat Research and Deputy CTO (Research). In this role he led efforts to work with Kaspersky’s virus labs around the globe and defined and drove the strategic direction of Kaspersky Lab’s threat prevention technologies such as anti-malware, Automatic Exploit Prevention, content filtering, DLP. Nikita’s leadership saw Kaspersky Lab twice win the prestigious ‘Product of the Year’ award from the highly-respected AV-Comparatives independent test lab. The company also ranked first in a TOP3 metric measuring consistent performance in independent tests throughout 2013, according to statistics gathered from the most authoritative test organizations.
Nikita has a degree in Computing Science from the Moscow State University of Electronics and Mathematics.
Marina Alekseeva
Chief Human Resources Officer
Marina Alekseeva was appointed Chief Human Resources Officer at Kaspersky Lab in August 2012. She is responsible for global HR functions such as Recruitment, Training and Development, Compensation and Benefits, Administration.
Marina joined Kaspersky Lab in January 2008 as the Deputy to the HR Director.
Marina has a more than 10 year experience in the field of HR, including IT industry. Before joining Kaspersky Lab she worked for T-Systems CIS (Deutsche Telekom AG) and UniCredit bank, rising from HR manager to the senior managerial positions.
Marina graduated from the North-West Russian Presidential Academy of National Economy and Public Administration, majored in People Management and Organizational Psychology as her second higher education.
Alexey De-Monderik
Corporate Advisor
Alexey holds the position of Corporate Advisor of Kaspersky Lab.
Alexey joined Eugene Kaspersky at the KAMI Information Technologies Center in 1991, and later co-founded Kaspersky Lab in 1997. Early on in the project he was responsible for product development. Later on his focus of responsibility shifted to the AV engine, and then to technology development.
Alexey graduated from the Moscow Aviation Institute in 1988, and then worked several years as a computer engineer in one of the Soviet rocket science institutes.
2014-23/2157/en_head.json.gz/15972 | Component Directory Lockdown – New in Firefox 3.6
in news & announcements / on November 16, 2009 at 10:35 pm / We hate crashes. When Firefox crashes, we try to get you back on your feet as quickly as possible, but we’d much rather you not crash in the first place. In Firefox 3.6, we are changing the way that some third party software hooks into Firefox which should eliminate a good chunk of those crashes without sacrificing our extensibility in any way. In the process, we’ll also be giving you greater control over the code that runs in your browser.
Firefox is built around the idea of extensibility – it’s part of our soul. Users can install extensions that modify the way their browser looks, the way it works, or the things it’s capable of doing. Our add-ons community is an amazing part of the Mozilla ecosystem, one we work hard to grow and improve.
In addition to the standard mechanism for extending the browser via add-ons and plugins, though, there has historically been another way to do it. Third-party applications installed on your machine would sometimes try extend Firefox by just adding their own code directly to the “components” directory, where much of Firefox’s own code is stored.
There are no special abilities that come from doing things this way, but there are some significant disadvantages. For one thing, components installed in this way aren’t user-visible, meaning that users can’t manage them through the add-ons manager, or disable them if they’re encountering difficulties. What’s worse, components dropped blindly into Firefox in this way don’t carry version information with them, which means that when users upgrade Firefox and these components become incompatible, there’s no way to tell Firefox to disable them. This can lead to all kinds of unfortunate behaviour: lost functionality, performance woes, and outright crashing – often immediately on startup.
In Firefox 3.6 (including upcoming beta refreshes), we’re closing this door. Third party applications can still extend Firefox via add-ons and plugins the way they always could, but the components directory will be for Firefox only.
What Does This Mean For Me?
If you’re a Firefox user, this should be 100% positive. You don’t have to change anything, your regular add-ons should continue to work properly – you just might notice fewer crashes or odd bugs. If you do notice that something has stopped working, particularly a third party addition to Firefox, you might want to contact the producer of that addition to ensure they know about the change.
If you’re a Firefox component developer, this shouldn’t be a big change, either. If you’re already packaging your additions as an XPI, installed as an add-on it’s business as usual. If you have been dropping components directly, though, you’ll need to change to an XPI-based approach. Our migration document on the Mozilla Developer Connection outlines the changes you’ll need to make, and should be pretty straightforward. The good news is that once you’ve done this, your add-on will actually be visible to users and will support proper version information so that our shared users are guaranteed a more positive experience.
If you haven’t downloaded the new Firefox beta yet, and want to give it a spin, you can find a copy here.
This post was originally published at the Mozilla Developer Center, and made available under the Creative Commons: Attribution-Share Alike license.
2014-23/2157/en_head.json.gz/16211 | What is New York State's Policy on Web Site Accessibility for Persons with Disabilities?
New York recognizes the importance of making its digital government services available to the largest possible audience. The purpose of this policy and the accompanying standards is to make New York State agency web-based intranet and internet information and applications accessible to persons with disabilities. This policy replaces and supersedes OFT's earlier policies, 96-13 and 99-3, regarding accessibility. This new policy and accompanying standards had extensive review by OFT's IT Accessibility Steering Committee made up of accessibility experts from nine state agencies, including the Office of the Advocate for Persons with Disabilities, and from the Center for Technology in Government. Additional accessibility workgroups, established by this steering committee and the NYS Information Resource Forum, also provided review and feedback, as well as a final review by the State CIO Council before the issuance of this policy and standards.
More Info Accessibility for NYS Web Sites
The policy describing accessibility requirements for NYS Web Sites. URL: http://www.oft.state.ny.us/policy/p04-002/index.htm | 计算机 |
2014-23/2157/en_head.json.gz/16515 | Sign Up for Emails / Login
Mobile ApplicationThe Supercuts mobile application makes use of the Google Maps API, which is covered under the Google Privacy Policy. Scope of this Website Privacy PolicyThe purpose of this Website Privacy Policy is to let you know how we handle the information we receive from you through this website including through any optimized version of this website via a wireless device. As used in this Website Privacy Policy, terms such as “we” or “our” and “Company” refer to Regis Corporation and its subsidiaries and affiliates but does not include Regis Corporation’s franchisees. This site is intended for a United States audience. If you access this site from outside the U.S., you acknowledge, agree, and consent that any information you provide, including any personal information, will be transferred to and processed by a computer server located within the U.S., and subject to U.S. laws and regulations. Further, if you access this site from outside of the United States, you acknowledge and agree that you are responsible for compliance with any applicable local or national laws, rules or regulations applicable to such use.By using this website, you agree to be bound by this Privacy Policy as well as our Terms of Use. We may update and change this Privacy Policy and our Terms of Use and such updated versions will be posted on this website. It is your responsibility to review this Privacy Policy and our Terms of Use from time to time. By continuing to use this website, you consent to any updated version of this Privacy Policy and the Terms of Use posted on the website. Other websites operated by Company or affiliated with Company may have similar or different terms of use and privacy policies governing their use as may be posted on those websites. Note that websites operated by our franchisees are not governed by this Privacy Policy or our Terms. Furthermore, our website may provide links to other websites. We encourage you to review the terms of use and privacy policies on the websites that you visit and exercise good judgment when sharing your personal information online. This Website Privacy Policy does not apply to information shared with us through other means outside of our website. For example, if you share information over the telephone or in person at Regis salons, it is not covered by this Website Privacy Policy. Subject to applicable laws, we may use such information or share it with affiliates or any third parties without restriction. If you participate in certain activities or programs like entering a contest or participating in a survey, the collection and use of your personal information through those programs may be governed by other specific terms for such programs as posted on applicable portions of the website or other applicable websites. Some programs may be operated by our third party vendors on our behalf and governed by privacy policies of those third party vendors as described in those programs. Non-personal Information and Cookies“Non-personal information” means information that does not permit us to specifically identify you by your full name or similar unique identifying information such as an address or telephone number. Like many other websites, this website uses “cookie” technology and similar technology to gather non-personal information from our website visitors such as which pages are used and how often they are used, and to enable certain features on this website. 
You may disable these cookies and similar items by adjusting your browser preferences on your computer at any time; however, this may limit your ability to take advantage of all the features on this website. Keep in mind that cookies are not used to collect any personal information and do not tell us who you are. We may also collect other forms of non-personal information such as what web browsers are used to read our website and what websites are referring traffic or linking to our website. Aggregate and de-identified data regarding website users is also considered non-personal information. We do not limit the ways we may use or share non-personal information. Personal Information provided by you“Personal information” means information that specifically identifies you as an individual, such as your full name, telephone number, e-mail address, postal address, or certain account numbers. This website may include web pages or registration forms that give you the opportunity to provide us with personal information about yourself. You do not have to provide us with personal information if you do not want to; however, that may limit your ability to use certain functions of this website or to request certain services or information. We may combine personal information that you provide us through this website with other personal information held by the Company, including with affiliates or our vendors. For example, if you have purchased a product or service from us, we may combine personal information you provide through this website with information regarding your receipt of the product or service and information we receive from affiliated entities and other sources.We may use personal information for a number of purposes such as:• To respond to an e-mail or particular request from you. • To personalize the website for you. • To process an order as requested by you. • To provide you with information that we believe may be useful to you, such as information about products or services provided by us or other businesses. • To comply with applicable laws, regulations, and legal process. • To protect our rights, the rights of affiliates or related third parties, or take appropriate legal action, such as to enforce our website terms of use. • To keep a record of our transactions and communications. • As otherwise necessary or useful for us to conduct our business, so long as such use is permitted by law. You understand and specifically agree that we may use personal information to contact you through any contact information you provide through this website, including any email address, telephone number, cell phone number, text message number, or fax number. Sharing of Personal InformationWe share and give access to personal information to our employees and agents in the course of operating our businesses. For example, if you sent us an e-mail asking a question, we would provide your e-mail address to one of our employees or agents, along with your question, in order for that person to reply to your e-mail. We may also share personal information with other affiliates or business units within the Company.We also share and give access to personal information to our franchisees, affiliates and other companies that collaborate with us or perform services on our behalf. For example, we may hire outside companies to help us send and manage e-mail, manage survey programs, or host or operate our website or mobile applications. 
We may hire or collaborate with outside companies to conduct campaigns and promotions. In such cases, we may share your personal information with them so that we or such companies can send you promotional communications about products or services. We may also share your personal information with third parties who offer products or services that may interest you. If you choose to opt-out of receiving such promotional e-mail communications from us or other companies, you can do so by following the opt-out or unsubscribe instructions in such communications. If we share or give access to personal information to outside companies, we require them to use the personal information for the limited purposes for which we shared the information. If you believe we or any company associated with the Company has misused any of your information, please contact us immediately and report such misuse. We may share personal information if all or part of the Company is sold, merged, dissolved, acquired, or in a similar transaction. We may share personal information in response to a court order, subpoena, search warrant, law or regulation. We may cooperate with law enforcement authorities in investigating and prosecuting website visitors who violate our rules, or engage in behavior that is harmful to other visitors, or is illegal. If you submit information or a posting to a chat room, bulletin board, or similar “chat” related portion of this website, the information you submit along with your screen name will be visible to all visitors, and such visitors may share it with others. Therefore, please be thoughtful in what you write and understand that this information may become public.

Reviewing My Information
Portions of this website may permit you to create and view a personal profile and related personal information. If this function is available, we will include a link on the website with a heading such as “My Profile” or similar words. Clicking on the link will take you to a page through which you may review, edit and delete your visitor profile and related personal information.

Website and Information Security
We use appropriate physical, electronic and procedural methods designed to protect the security and integrity of information submitted through this website. Due to the nature of the Internet and online communications, however, we cannot guarantee that any information transmitted online will remain absolutely confidential, and we are not liable for the illegal acts of third parties such as criminal hackers.

Our Online Communication Practices
Most e-mail, including any e-mail functionality on our site, does not provide a completely secure and confidential means of communication. It is possible that your e-mail communication may be accessed or viewed inappropriately by another Internet user while in transit to us. If you wish to keep your information completely private, you should not use e-mail. We may send e-mail communications to you regarding new products or services or other topics. The Company may send electronic newsletters, notifications of account status, and other communications such as information marketing other products or services offered by us, on a periodic basis. To opt-out of any specific electronic communication you're receiving, click on the opt-out button associated with the specific communication or follow the instructions to unsubscribe from such lists.

Children’s Online Privacy
We will not intentionally collect any personal information from children under the age of 13 through this website. If you think that we have collected personal information from a child under the age of 13 through this website, please contact us via e-mail or mail to the contact information provided below in Contact Us.

California Privacy Rights
Residents of California may request a list of third parties to which Company has disclosed their personal information for direct marketing purposes by such third party during the preceding calendar year. If you are a California resident and would like to request a list, please send a request that includes your full name and address and indicates “California Privacy Notice Request” and send your request via e-mail to the contact information provided below in Contact Us.

Contact Us
To contact us regarding this Website Privacy Policy and our related privacy practices, please contact us via e-mail at departmentcs@regiscorp.com; or by phone at 1-877-857-2070, or by mail to:
Regis Corporation
Attn: Department CS
7201 Metro Blvd
Minneapolis, MN 55439

Copyright Notice
© 2013 Regis Corporation. All Rights Reserved.

Effective Date
The Effective Date of this Privacy Policy is September 25, 2013.
The PTO received 240,090 utility, plant and reissue (UPR) applications in FY 1998. Increases in the number of applications in communications and information processing technologies led the 9 percent growth from last year. We issued 140,574 patents, a 25 percent increase over FY 1997. Some of this increase was the result of a successful effort to reduce the backlog in the publications area.
For FY 1999, we expect UPR applications to increase another 8 percent to around 259,000, with the high technology areas leading this growth once again. We expect another large increase in the number of patents issued in FY 1999, partly as a result of a one-time process change which provides for parallel processing of allowed applications.
At the end of FY 1998, cycle time 1 averaged 16.9 months, with 32 percent of applications processed in 12 months or less. Our target processing time for FY 1998 applications was 16.7 months, with 33 percent of applications processed in 12 months or less. FY 1997 cycle time was 16.0 months.
Our customer survey showed that overall satisfaction with our performance was essentially unchanged at 52 percent, from 50 percent in 1996, but short of our FY 1998 goal of 57 percent. It showed, however, that our performance has improved in 23 of 33 operational areas. The areas of greatest improvement included application procedures, examination quality and staff courtesy.
The mission of the patent business area is to help our customers get patents; its performance goal is to grant patents to inventors for their discoveries. These were established to help us direct our efforts toward providing our customers with high-quality service, one of the PTO's two strategic goals.
To help achieve this mission and performance goal, we established the following five business goals.
Goal 1: Reduce the PTO processing time to patent original inventions to 12 months in 2003.
By 2003, we will reduce PTO processing time, or cycle time, for original UPR patents to 12 months from the time we receive an application to the time when we issue the patent or the inventor abandons the application. We committed to achieving this goal following our designation as a High-Impact Agency (HIA).
Although we are working toward processing all patents within 12 months, we expected cycle time to rise slightly in FY 1998 as a result of the increasing number of patent applications and the shortage of trained examiners. Actual cycle time increased only slightly more than anticipated.
The number of patent applications received increased 26 percent from 1993 through 1997, but the number of patent examiners increased only 17 percent over the same period. Backlogs increased, as did the complexity of patent applications. To help attack the problems of unacceptable cycle time, we hired 732 patent examiners this year. By the end of FY 1999 these new examiners will be working at full capacity and we expect to process 75 percent of inventions in 12 months or less, with an average cycle time of 10.9 months. We plan to hire at least 700 more examiners during FY 1999.
In another major step toward achieving our long-term goal, we reduced the backlog of patent applications in the pre-examination area. Filing receipts were mailed to customers and applications delivered to the examining corps in an average of 29 days, an improvement of over three and a half months from the beginning of the year.
In addition, we are improving our efficiency by making direct contact early with customers to anticipate and resolve potential problems with their applications. We also plan to improve our patents publication process to reduce the time between the allowance of an application and the granting of the patent from 4.3 months at the end of FY 1998 to one month by the end of FY 1999.
Goal 2: Establish industry sectors.
During 1998, the Patent Examining Corps realigned the 16 Examining Groups into six Technology Centers. These Technology Centers are technology-specific groups that parallel technology areas in private industry; they focus resources to meet specific customer requirements; and they allow us to take advantage of new management concepts and techniques. The Technology Center structure enables us to offer customers improved service and responsiveness. It allows us to customize resources and services, and helps us streamline processes. It also allows greater partnering with industry in developing technology-specific training, and enables us to better anticipate and respond to specific industry needs.
Goal 3: Receive applications and publish patents electronically.
The PTO will complete systems testing and begin full electronic processing of patent applications in FY 2003. This goal represents another of our HIA commitments.
In August 1998, the PTO began capturing bibliographic data from incoming patent applications by electronically scanning the documents. We also began sending an electronic acknowledgment of receipt to the applicant.
To achieve full electronic filing, the PTO must develop electronic systems to receive applications, process them, and publish issued patents. In FY 1999, working with volunteer applicants, we plan to offer Internet-based application filing. We also plan to work with vendors of intellectual property (IP) portfolio management packages to incorporate electronic filing capability into their products. We expect to begin receiving electronically filed U.S. applications by FY 2002.
In September 1999, we will establish within one of the technology centers a prototype system for processing applications filed electronically. We will build on this and other prototype systems to develop and establish a fully electronic processing system in one of the technology centers early in FY 2002. The entire patent examining corps will adopt this system during FY 2003.
To complement these systems, we also will develop a new publication system that will be deployed incrementally through FY 2003. This system will vastly improve the quality and timeliness of our patent publications, and significantly reduce our publication costs.
Electronic access for applicants
In another effort to make the patent system more open to customers, we plan to introduce the patent application information retrieval (PAIR) system in June 1999. PAIR will allow patent applicants and their designated representatives restricted Internet access to patent application information without compromising the confidentiality or security of other data. We will make PAIR available to a limited number of users during a pilot period, and extend it to other users when the necessary infrastructure is in place.
Patent application location monitoring (PALM)
PALM performs the PTO's workflow tracking and status reporting for patent applications processing, and is critical to the PTO's day-to-day operations. The current system is beset by problems, however: it is dependent on institutional knowledge to keep operating properly; it is costly and difficult to maintain; and it cannot take advantage of the advances in technology available in an open system environment.
We have begun work to replace PALM during FY 2000 with a product that will operate in an open system environment. This will make future modifications and enhancements to patent applications processing easier and cheaper. We have also begun to modify the existing PALM system to ensure that it will be Y2K-compliant. We should complete this work in March 1999.
To help improve the speed and quality of patent searches, in August 1998 we introduced browser-based software to permit examiners to search the text and images of U.S. and foreign patents. At the same time, PTO examiners were given access to the Derwent World Patents Index. We also expanded the prior-art search file to include approximately 46,000 full images from the European Patent Office and the Japanese Patent Office.
In April 1998, PTO examiners obtained access to the Elsevier Scientific Electronic Journals, a source of non-patent technical literature. We will add other electronic information sources, including IBM Technical Disclosure Bulletins and UMI ProQuest, as availability and budgets permit.
Back-file capture
In May 1998, we began electronically capturing the U.S. patents text file back to 1960. By mid-December 1998, we had captured 99 percent of U.S. patents issued between 1960 and 1970. We will add selected art dating back to 1920 to the electronic archive, and we will ultimately extend our text archive back to 1790. This information will be available to our trilateral partners as well.
Biotechnology patent sequence submission (PatentIn)
In June 1998, we began using the Windows version of the PatentIn program. We also continued to develop a version of the PatentIn software that can be used by Internet browsers. The browser will be integrated with the PTO's electronic filing system in 1999, to provide customers with an efficient method of submitting sequence listings that meet international patent application filing standards for gene and protein sequence information.
Goal 4: Exceed our customers' quality expectations through the competencies and empowerment of our employees.
We use a variety of methods to obtain information about our customer service, including surveys, focus sessions, roundtable discussions, and town hall meetings. In FY 1998 for example, we conducted a postal survey of more than 6,000 patent customers. The survey showed that overall satisfaction with our performance was essentially unchanged. A similar survey is planned for FY 1999. We use our client feedback to determine our customers' expectations of our service and to identify areas where we must improve to meet these expectations.
We give the highest priority to customer needs, and encourage all our employees to recognize their responsibility to the customer. In FY 1998 we established an in-house program to publicly recognize good service to customers, and identified 108 employees for recognition. In FY 1999, all patent employees will receive customer service training to emphasize their responsibility to customers.
Researching customer expectations
In FY 1999, each major patent operation center will establish a customer service function or office to address customer needs and to register the compliments and complaints that will help us to refine our service. Work on the first customer service offices is already underway in the six technology centers.
Each technology center will conduct a series of focus sessions and roundtable discussions to identify weak areas in our customer service, as will other offices in the patent business area. We will use the results of these meetings to develop our customer service improvement strategy.
We are also conducting an in-house assessment of our customer service. Telecare, a telephone measurement project, will involve placing about 1,700 calls to PTO employees by the end of FY 1999 and measuring the responses to these calls. We will use the information gathered by this project to improve communication with our customers. Another project, the In-Process Review, will analyze about 2,700 office actions for compliance with the examination-related customer service standards for clear and complete office actions and thorough searches.
At least one quality assurance specialist (QAS) is employed in each technology center to assist with the In-Process Review. The Office of Patent Quality Review will perform a second review of a statistically valid sample of those actions reviewed by the QAS. The Center for Quality Services, which administers customer surveys and focus sessions, will conduct customer interviews to review those same actions. We will use the results of the review to develop training programs and any necessary procedural changes.
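As a rough illustration of why a sample of this size can be treated as statistically valid: for a simple random sample, the worst-case 95-percent margin of error on a compliance rate shrinks with the square root of the sample size. The short calculation below uses the 2,700 figure quoted above and assumes simple random sampling, which the review procedure described here does not spell out.

# Back-of-the-envelope margin of error for a sampled compliance rate.
# Assumes simple random sampling; n is the figure quoted above.
import math

n = 2700                       # office actions reviewed
p = 0.5                        # worst-case proportion (maximizes the margin)
margin_95 = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {margin_95:.1%}")   # roughly +/- 1.9%

In other words, a compliance percentage estimated from a sample of that size would typically land within about two percentage points of the figure for all office actions.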
Reviewing working procedures
The Patent Legal Administration has proposed a number of significant changes to the PTO's regulations to improve application processing. The proposed changes affect a wide variety of PTO processes, and are designed to reduce cycle time, align fees with work performed, and help prepare for the electronic processing of applications and publishing of patents.
On March 30, 1998, we inaugurated a Patent Working Lab to test new working procedures.
The goals of the lab are to:
improve patent application processing;
train technical support personnel to perform higher-level functions and thereby enable the examiners to focus on legal and technical work;
encourage teamwork and collaboration between team members;
improve examination quality;
improve customer and employee satisfaction.
We will assess the results of the lab from our customers' and employees' perspectives, as well as the agency's.
After six months of operation, the participants are enthusiastic about working in the lab environment. Employee satisfaction has improved significantly since the lab opened, and examiners and support staff are finding new ways to help each other.
On October 1, the lab entered the second phase of operation. This phase will test procedural changes incorporated into the program as a result of the experience gained from the first months of lab operation.
Goal 5: Assess fees commensurate with resource use and customer efficiency.
To encourage innovation and ensure low-cost access to the patent system, the fee to file a patent application is set much lower than the actual cost of processing the application. We recover our costs through fees charged to maintain issued patents. The General Accounting Office, in past consultations, has pointed out that this does not align well with the actual costs of processing a patent, which tend to be high during the examination of the application and low after the patent is issued. The PTO has a responsibility to ensure that the patents and trademarks system encourages the broadest participation, however, and the current fee structure was designed with that end in mind. We will work with Congress to establish a fee schedule, aligned with costs, that encourages maximum participation in the patents and trademarks system. This goal also represents the third of our HIA commitments within the patent business area.
We began a project last year to determine if we could realign our fee structure to better match our costs while maintaining or raising customer participation. Teams of PTO employees traveled around the country to eight different cities, meeting with a cross-section of PTO customers and soliciting their views on possible improvements to our fee structure. A summary of their input has been placed on the PTO's Internet site. We are still examining the results of those meetings, and will discuss our conclusions with our customers before recommending any changes. In addition, we are conducting an activity-based costing project to obtain accurate cost data to accompany our customers' input.
1 In 1995, Congress changed the length of a patent's term from 17 years from the date the patent was issued to 20 years after the earliest effective filing date claimed by the applicant. As a result, the time the PTO spends processing an application now directly affects the length of the patent term. In response, we have changed our measure from traditional pendency to processing time, or cycle time. This allows us to measure the time we spend processing an invention, and excludes the time expended by the applicant, such as the time spent by the applicant in making a reply. Also, the pendency method tracked individual patent applications and separately tracked continuations, or second applications filed to continue the prosecution of a parent application; cycle time tracks inventions regardless of the number of applications filed on that invention.
How Do I Send Email Safely to a Group?
Everyone has seen an email that has been forwarded and forwarded so many times that you have to scroll down several pages to find the content. As you scrolled down, you no doubt saw dozens of names and email addresses of people who were included in earlier threads and who now are having their information shared with whoever receives the email next. This is not only inconsiderate, but opens these people up to spam, scams, and email from people they might not have cared to correspond with.
Whenever you want to send or forward email to a group of people who don’t know each other, the proper etiquette is to put everyone’s email addresses on the Bcc: (or Blind Carbon Copy) line. This has the advantage of making your message look as if it were sent to each person individually. But more importantly, it keeps each of your contact’s email address private; no one else on the list, and no spammer, can see it.
In the past, criminals were just after e-mail addresses to spam or sell to other spammers. But now criminals also use the information about who your friends, and their friends, are to map your social network. Then, they send tailored spam, scams or phishes to you and a few others in your social network because it makes the spam seem more legitimate. Consumers are far more likely to fall for a scam if a friend or family member is also on the "To:" line.
How to find the Bcc: Line
Every e-mail program has a Bcc: option. Search in your e-mail program's Help if you can't find it readily.
Example: In Windows Live Hotmail, to display the Bcc: (and Cc:) line, click Show Cc & Bcc in an e-mail message. This will make the fields appear underneath the To: line.
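If you send mail to a group from a script rather than a mail client, the same principle applies at the protocol level: Bcc works by listing the hidden recipients only in the delivery list (the SMTP envelope), never in the message headers. The short Python sketch below illustrates this; the server name, credentials, and addresses are placeholders, not real accounts or a recommended provider.

# Minimal sketch: one message to a group without exposing anyone's address.
# The server, login, and addresses below are placeholders.
import smtplib
from email.message import EmailMessage

bcc_list = ["friend1@example.com", "friend2@example.com", "friend3@example.com"]

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "me@example.com"   # the only visible recipient is yourself
msg["Subject"] = "Group update"
msg.set_content("Hello everyone - please see the update below.")
# No "Bcc" header is ever added, so recipients cannot see each other's addresses.

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()          # encrypt the connection
    server.login("me@example.com", "app-password")   # placeholder credentials
    # The hidden recipients go only in the delivery list, not in the headers:
    server.send_message(msg, to_addrs=["me@example.com"] + bcc_list)

Because the delivery list is passed separately from the headers, each person receives a copy addressed only to the sender, which is exactly what the Bcc field does for you in a mail program.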
Tip: To protect your email from being bounced around and exposed to people you don’t know, you may want to include a message like the one below as part of your signature field at the bottom of your e-mail messages:
Note: To help protect my privacy, please do not expose my e-mail address to others. If you’re sending e-mail to a group that includes people I do not know, please put my e-mail address on the Bcc: line.
While a note like this can't guarantee that your email won't be shared with others, it will help raise awareness of the issue and should significantly reduce the frequency of your exposure.
Thread: The Gaming Paradise
Black 2 or White 2?? I know last time I made the mistake of buying Pokemon Black instead of White. It had White Forest and more Triple Battles!!
My NHL 13 Ultimate Team name? The Paul Heyman Guys.
im never able to think up names usually
although my FIFA ultimate team is named Evolution after the stable
Started Sleeping Dogs on the PS3 last night. Cool game.
Any Doom 3 fans here? I never got a chance to play it when it first came out, so I'm giving the BFG Edition a go tomorrow. Sort of hoping for it to deliver some of the scares that I didn't get with Resident Evil 6.
Doom 3's okay. I think the biggest thing working against it was the fact that it was nothing like the first two. Doom 3 is of decent length but by the halfway mark you should start to get tired of it as it's pretty much the same thing from start to finish without change... I mean even down to the same-ish environment for 90% of the game.
I hope they still have the option to play as originally intended, where you can't have a flashlight and a gun out at the same time. Some people hated this but it really wasn't a big deal... everything's a frigging corridor so even if you can't see what you're shooting at, it's kind of hard to miss. Plus, I found that most of the time there was sufficient light to deal with things... maybe not a lot of it but I thought people massively over-exaggerated being unable to see what they're shooting at.
Scares... unless you have the volume up a decent amount and you're maybe playing it in the dark, I don't think it's scary enough. As you play the game you'll subconsciously learn the general routine of monster spawning and scares to the point where the game is no longer scary, but highly systematic. If you want a scary game, play Amnesia... there's a hell of a lot less action(you don't get weapons, at all), but the atmosphere and tension for me was just ridiculous.
I've only put two hours into it, but you seem to be pretty spot-on in most regards. I'll add to it that the shooting just doesn't feel particularly impactful or tight (which is something that I think id Software were really able to remedy with Rage, despite that game's other short-comings).
I'm limiting my play-time to late nights to add to the creep factor, but, yeah, nothing has really been particularly frightening. Just the occasional, semi-effective jump scare here and there. That said, I still think it has RE6 trumped in the horror department, so it's not a total bust, I guess.
I think the thing with the shooting is that the monster physics are too floaty and the gun sound effects are shockingly weak(the shotgun in particular). One of the things I did like was how the imp fireballs light otherwise dark or black rooms up. At the time it was cool looking as dynamic lighting was a relatively new thing.
Yeah, I noticed the thing with the impotent sound effects, too. You sort of don't realize how much heft they add to the overall gun-play experience until they're gone.
Owwww!! what happened to "Name your game and START A DAMN DISCUSSION!!!!!"?!?!
Anyway, I'm looking for a new FPS for my PS3. I love the story that is Modern Warfare thus far, and I'm curious as to how it ends. In other words, I'm considering buying Modern Warfare 3. Anyone have it? Is it any good? Worth the invest?
Free-to-play C&C canned
Written by David Stellmack, Wednesday, 30 October 2013 08:58
Fan dissatisfaction with alpha contributed
Electronic Arts has decided to close Victory Games. The LA-based studio was developing the new free-to-play Command & Conquer, which was in alpha testing. To say that the alpha testing wasn't going well appears to be an understatement at best. Feedback from this alpha testing led to the decision to kill the game off and close the studio. While that feedback was perhaps not the sole cause, according to our sources it certainly didn't help the situation.
The cancellation of this free-to-play C&C marks the fifth time that EA has elected to kill a C&C title while it was still in development. EA claims that the feedback from the alpha was clear: the company isn't making the game people want to play. EA also claims that it is committed and determined to get it right and will look at the best way to get the C&C franchise back on track.
Victory Games had been working on the new C&C title since February of 2011. The studio was created by EA to focus solely on strategy games, with C&C being its singular focus. EA says that it is working to find homes for the talented developers at Victory in other studios within the EA family.
Broken Sword - The Sleeping Dragon (PC)
Genre: Simulation / Strategy
Broken Sword: The Sleeping Dragon (PC CD)
Broken Sword: The Sleeping Dragon (PS2)
Broken Sword: The Sleeping Dragon (PC)
Broken Sword: The Sleeping Dragon (Xbox)
Broken Sword - Trilogy (Shadow of the Templars + Smoking Mirror + Sleeping Dragon + bonus Beneath a Steel Sky)
Broken Sword - The Sleeping Dragon
Bigbmxdave
Average point and click game
Broken Sword: The Sleeping Dragon is the 3rd installment of the Broken Sword saga from Revolution Software. It is also the first to be made in 3D, which adds a few interesting features to the gameplay. It does make the game a bit prettier to look at, but this also means it is harder to find certain items in levels. Several times during the game I found myself backtracking over old scenes many times over just to find the one random item that was hidden in a part of the level I hadn't moved the camera to. The game also suffers from the "clueless" camera angles that a lot of new 3D games suffer from, in which the player cannot choose what to look at but instead must rely on the computer algorithm that tries to guess the correct angle to display. This often means the player misses key items such as doors or other characters. The storyline is basic and the wit and humour are a bit lacking when compared to the first two games. If you have played the first two and liked them then you would probably enjoy this game. Otherwise there are a variety of other older point and click games that you would be better off playing.
Hilm
Could be better... Whilst Broken Sword has great plots, with plenty of twists and turns, it is somewhat lacking in gameplay. The camera angles are weird, I dislike the character movement, and whilst the voice acting etc. is good, it doesn't really improve the gameplay. If you are into games with decent plots and gameplay doesn't really bother you, then get this game; however, if you like games with decent gameplay then I wouldn't advise this game, as it's lacking in many gameplay aspects.
A good addition to the series but the new style doesn't always work. Broken Sword: The Sleeping Dragon is the third in the highly successful Broken Sword series of games. Unlike the previous two, however, this is no 2D adventure game - rather it is a 3D, adventure-based game. It still features the two stars of the previous games, French journalist Nicole Collard and American tourist (now patent lawyer) George Stobbart. In the first two games they defeated a modern branch of the Knights Templar and an ancient cult. This time, it's the future of the whole world that hangs in the balance
The Plot
An ancient manuscript deciphered for the first time
the murder of a young hacker in Paris just before he is to be interviewed by Nico
a trail of clues that lead to the most astonishing myths and legends coming true
and a battle for the most powerful force in the world, and indeed the world itself. (Yeah, typical Broken Sword fodder! :-D)
First Impressions
The first thing that hits you is the graphics - they are breathtaking. After a little while getting used to the interface and the fact that the gameplay style is rather different to what I expected, this looked like turning into a superb game.
Interface
The interface uses the arrow keys for movement, the AWDS keys for interaction (depending on the object you're looking at the options will change - for instance, for a door you'll have the option to look at it, listen to what's happening on the other side, or open it), along with shift to run, control to crouch / move stealthily, and spacebar to bring up your inventory. Some have criticised this interface but I found that it worked rather well, and it was more satisfying to have several options as to what to do with each object than to just have a single "interact" button. What's visible to you onscreen shows up as a hotspot; when there's more than one, you can scroll through them with the page up / page down keys. The main problem with the interface wasn't actually with the controls themselves, but with the camera. It changed of its own accord as you moved, which was fine normally and no doubt added an almost cinematic quality to the game; however, it did often mean your perspective shifted without you asking it to, so the direction key you were holding down was suddenly the wrong way. This did get very annoying in a few places. Just as in the second game, you get to play the part of both George and Nico at different times. Though the interface is the same, the way you solve certain puzzles does slightly depend on who you're using - for instance, George is stronger (so can push crates around etc.) while Nico is more acrobatic.
Difficulty
The traditional adventure elements of the game were, for the most part, quite easy. The inventory was slightly limited but didn't seem terribly so; however, this isn't a traditional adventure game. Interspersed with the traditional problem solving are some scenes of trying to get past guards or arranging handy nearby crates, Sokoban-like, to gain access to a new area. These again were pretty easy on the whole, though there were a couple of more challenging ones. You won't have too many problems completing the game. (It took us a little over 12 hours of playing time.)
Aesthetics
As mentioned, the graphics are beautiful, even the in-game graphics being nearly as good as the DivX video sequences and cut scenes. There were one or two glitches in the graphics near the end of the game but nothing terrible. Some of the best graphics I've ever seen grace this game. The voice acting is, as you would expect, excellent. (Sadly some of the dialogue is pretty terrible though!) The sound effects are also superb and lend considerably to the game's atmosphere. The music I found a little disappointing, and the fact that it mostly seemed to make its presence felt only when something important had happened felt a bit cheesy in the end. Not bad, but could have been better (and certainly nothing like as good as the excellent music in the second game).
Will You Still Be Playing it in 6 Months' Time?
I very much doubt it - the game's too easy for that, though it's still of a respectable length.
Is it Worth the Money?
Since we bought it second-hand on Amazon Marketplace for £3 + P&P, I would definitely say yes.
Had I paid full price for it when it was released, however, I would have been somewhat disappointed.
Other Thoughts
What Revolution Software have done here is to create a hybrid game, a cross between a traditional adventure and a puzzle game, with a touch of Tomb Raider thrown in for good measure (an idea which seems to be reinforced by Nico's outfit when they're in hotter climes!). The real question any fans of the series will want answered about the new 3D element and new gameplay style is: does it work? The answer is yes - to an extent. The graphics are wonderful and many of the locations (even some of the same characters) from the first game have been lovingly recreated, and I can't see anyone having a problem with them at all. The gameplay and puzzles are quite similar for the most part, though I suspect that most fans of adventure games will find the other gameplay elements more of a distraction or something to be got past than really enjoyed. The plot is developed in true Broken Sword style - complex and intelligent - until the very end, where it all becomes a bit farcical to be honest. It's a good game, but for those of us who loved the first two games, perhaps not as good as it should have been. Some fans may even feel betrayed because of the new hybrid game style, but I think overall it was an idea well worth experimenting with, and implemented quite effectively. Suffers slightly from being too short, as do both the earlier games.
*****************
Final Ratings
Graphics: 94% - some of the best graphics I've ever seen, but occasional glitches towards the end of the game slightly pull back this rating.
Sound: 86% - great voice acting as ever and great ambient sound effects, slightly disappointing music however.
Playability: 75% - the interface takes a little getting used to and the changing camera angle is annoying.
Longevity: 73% - a little too short, or perhaps a little too easy. Only one really tough puzzle in the whole game.
Replay Value: 67% - you might play it again just for the graphics and for something a little different from other games.
Value For Money: 81% - if you can get it cheap it's well worth it.
Overall Rating: 79% - a good game that'll keep you happy for a while, but not a great one. Overall probably the weakest game in the Broken Sword series, but not by much and still well worth playing.
On completion of the game you get access to a few special features, but they're not that great in all honesty. (Concept art gallery etc.)
System Specs (Minimum)
OS: Windows 98/ME/2000/XP
CPU: Pentium III 750 MHz
RAM: 128 MB
8x CD-ROM Drive
DirectX 8.1
Graphics card: GeForce2 64 MB or equivalent
Spare HDD space: 1 GB
Keyboard and Mouse / Gamepad
(Recommended)
CPU: 1.2 GHz Pentium III processor or equivalent
Graphics Card: GeForce4 Ti 4200 or equivalent
Sound Card with 5.1 Surround Sound support
Rating: 12+ (a little bad language and a bit of gore in some scenes)
Amazon have it new for £9.99 (the Marketplace ones that are on at the moment aren't very cheap), and Play.com are sold out (same price). Second-hand may be your best bet if you don't want to use Amazon.
An ancient conspiracy, a broken code, an unsolved murder - the ultimate adventure. Broken Sword: The Sleeping Dragon represents the next generation in adventure gaming. Once more, George and Nico must travel the world, fighting through the steaming jungles of Congo, the eerie castles in Prague, the chic back-streets of Paris and the historic village of Glastonbury, wrestling danger and piecing together the clues that will unravel the secrets of the Sleeping Dragon and save mankind from the threat of global catastrophe.
Parallel Processing Enables Rapid Computation of X-ray Absorption
A popular computer code for X-ray absorption spectroscopy (XAS) now runs 20-30 times faster, thanks to a cooperative effort of the Information Technology Laboratory (ITL) and the Materials Science and Engineering Laboratory (MSEL).
XAS is widely used to study the atomic-scale structure of materials, and is currently employed by hundreds of research groups in a variety of fields, including ceramics, superconductors, semiconductors, catalysis, metallurgy, geophysics, and structural biology. Analysis of XAS relies heavily on ab-initio computer calculations to model x-ray absorption in new materials. These calculations are computationally intensive, taking days or weeks to complete in many cases. As XAS becomes more widely used in the study of new materials, particularly in combinatorial materials processing, it is crucial to speed up these calculations.
One of the most commonly used codes for such analyses is FEFF. Developed at the University of Washington, FEFF is an automated program for ab initio multiple scattering calculations of X-ray Absorption Fine Structure (XAFS) and X-ray Absorption Near-Edge Structure (XANES) spectra for clusters of atoms. The code yields scattering amplitudes and phases used in many modern XAFS analysis codes. Feff has a user base of over 400 research groups, including a number of industrial users, such as Dow, DuPont, Boeing, Chevron, Kodak, and General Electric.
To achieve faster speeds in FEFF, James Sims of the ITL Mathematical and Computational Sciences Division worked with Charles Bouldin of the MSEL Ceramics Division to develop a parallel version, FeffMPI. In modifying the code to run on the NIST parallel processing clusters using a message-passing approach, they gained a factor of 20-30 improvement in speed over the single processor code. Combining parallelization with improved matrix algorithms may allow the software to run 100 times or more faster than current single processor codes. The latter work is in process.
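The announcement doesn't describe how FeffMPI divides up the work, but the usual pattern for codes like this is straightforward: the calculation at each energy point (or for each scattering path) is independent of the others, so the points can be dealt out across processors and the partial results gathered at the end. The Python/mpi4py sketch below only illustrates that general pattern under those assumptions; the real FeffMPI is a Fortran code, its internal routines differ, and compute_absorption here is a stand-in for whatever single-point calculation the program actually performs.

# Illustrative sketch of message-passing work division (not FeffMPI's actual code).
# Run with, for example:  mpiexec -n 16 python xas_parallel_sketch.py
from mpi4py import MPI
import numpy as np

def compute_absorption(energy):
    """Stand-in for the expensive single-energy-point calculation."""
    return np.sin(energy) ** 2   # placeholder physics, not a real cross section

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

energies = np.linspace(0.0, 100.0, 2000)   # full energy grid
my_points = energies[rank::size]           # each rank takes every size-th point

local = np.array([compute_absorption(e) for e in my_points])

# Gather the partial results on rank 0 and reassemble them in grid order.
gathered = comm.gather((rank, local), root=0)
if rank == 0:
    spectrum = np.empty_like(energies)
    for r, chunk in gathered:
        spectrum[r::size] = chunk
    print("assembled spectrum with", spectrum.size, "points")

When the points really are independent and cost roughly the same, the speedup scales with the number of processors, which is why figures like the 20-30 times quoted above are achievable on a modest cluster.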
The parallel version of the XAS code is portable, and is now also operating on parallel processing clusters at the University of Washington and at DoE's National Energy Research Scientific Computing Center (NERSC). A speedup of 30 makes it possible for researchers to do calculations they only dreamed about before. One NERSC researcher has reported doing a calculation in 18 minutes using FeffMPI on the NERSC IBM SP2 cluster that would have taken 10 hours before. In 10 hours this researcher can now do a run that would have taken months before, and hence would not have been even attempted.
Contact: James S. Sims
See also: Feff
Last Updated: March 4, 2009

This Privacy Policy discloses the privacy practices of this website (the "Site"). Specifically, it outlines the types of information that we gather about you while you are using the Site, and the ways in which we use and share this information. This Privacy Policy does not apply to any information you may provide to us, or that we may collect, offline and/or through other means (for example, at a live event, via telephone, or through the mail). Please read this Privacy Policy carefully. By visiting and using the Site, you agree that your use of our Site, and any dispute over our online privacy practices, is governed by this Privacy Policy and our Terms of Service. Because the Web is an evolving medium, we may need to change our Privacy Policy at some point in the future, in which case we'll post the revised Privacy Policy on this website and update the "Last Updated" date to reflect the date of the changes. By continuing to use the Site after we post any such changes, you accept the Privacy Policy as modified.

Your California Privacy Rights
California Civil Code Section 1798.83, also known as the "Shine The Light" law, permits our customers who are California residents to request and obtain from us once a year, free of charge, information about the personal information (if any) we disclosed to third parties for direct marketing purposes in the preceding calendar year. If applicable, this information would include a list of the categories of personal information that was shared and the names and addresses of all third parties with which we shared information in the immediately preceding calendar year. If you are a California resident and would like to make such a request, please submit your request in writing to:
Kim Jaske
Online Privacy Coordinator
Gannett Law Department
7950 Jones Branch Drive
McLean, VA 22107

How We Collect and Use Information
We may collect and store information, including personally-identifiable information (such as your name, postal address or email address) or other information, that you voluntarily supply to us while on our Site. Some examples of this type of information include information that you electronically submit when you contact us with questions, information that you post on blogs, discussion forums or other community posting and social networking areas on our Site, and information that you electronically submit when you complete an online registration form to access and use certain features of our Site. We also may ask for information (including a credit card number and other financial information) from those users who make purchases or have payment transactions on our Site. We also collect and store non-personally identifiable information that is generated automatically as you navigate through the Site. For example, we may collect information about your computer's connection to the Internet, which allows us, among other things, to improve the delivery of our web pages to you and to measure traffic on the Site. We also may use a standard feature found in browser software called a "cookie" to enhance your experience with the Site. Cookies are small files that your web browser places on your hard drive for record-keeping purposes. By showing how and when visitors use the Site, cookies help us deliver advertisements, identify how many unique users visit us, and track user trends and patterns.
They also prevent you from having to re-enter your preferences on certain areas of the Site where you may have entered preference information before. This Site also may use web beacons (single-pixel graphic files also known as "transparent GIFs") to access cookies and to count users who visit the Site or open HTML-formatted email messages. The information we collect may be collected directly by us, or it may be collected by a third-party website hosting provider, or another third-party service provider, on our behalf. We use the information we collect from you while you are using the Site in a variety of ways, including, for example, to process your registration request, provide you with services and communications that you have requested, send you email updates and other communications, customize features and advertising that appear on the Site, deliver our Site content to you, measure Site traffic, measure user interests and traffic patterns, and improve the Site and the services and features offered via the Site. In addition, we may use any information submitted by or collected from you via the Site for any purpose related to the Site, including to contact you for customer service purposes, to inform you of important changes or additions to our Site or the services offered over our Site, and to send you administrative notices and any other communications that we believe may be of interest to you. We also may provide your information to our affiliates or to third parties, including our third party service providers, contractors and advertisers, for purposes related to Site administration and operation. For example, if you use a credit or debit card to complete a transaction on our Site, we may share your personal information and credit card number with a credit card processing and/or a fulfillment company in order to complete your transaction, or such service provider(s) may collect that information from you directly, on our behalf. We also reserve the right to use, and to disclose to third parties, all of the information collected from and about you while you are using the Site in any way and for any purpose, such as to enable us or a third party to provide you with information about products and services that may be of interest to you. In some cases we will use and/or share only non-personally identifiable information, but in other cases we may use and share personally identifiable information. If you do not wish your personally identifiable information to be used for these purposes, you must send a letter to the Online Privacy Coordinator whose address is listed at the end of this Privacy Policy requesting to be taken off any lists of personally identifiable information that may be used for these purposes or that may be given or sold to third parties. Our Site also includes links to other websites and provides access to products and services offered by third parties, whose privacy policies we do not control. When you access another website or purchase products or services from a third-party, use of any information you provide is governed by the privacy policy of the operator of the site you are visiting or the provider of such products or services. We also make some content, products and services available through our Site through cooperative relationships with third-party providers, where the brands of our provider partner appear on the Site in connection with such content, products and/or services. 
We may share with our provider partner any information you provide, or that is collected, in the course of visiting any pages that are made available in cooperation with our provider partner. In some cases, the provider partner may collect information from you directly, in which case the privacy policy of our provider partner may apply to the provider partner's use of your information. The privacy policy of our provider partners may differ from ours. If you have any questions regarding the privacy policy of one of our provider partners, you should contact the provider partner directly for more information.

We are an affiliate of the CareerBuilder online careers service. Through our cooperative relationship with CareerBuilder, we are able to provide you with access to the CareerBuilder products and services through a co-branded CareerBuilder site. When you provide information through the co-branded CareerBuilder site, we may use the information consistent with this Privacy Policy, and CareerBuilder may use the information consistent with its own privacy policy (available at www.careerbuilder.com). Likewise, we are an affiliate of the Topix online news service, which enables us to provide you with access to Topix products and services through a co-branded Topix site. When you provide information through the co-branded Topix site, we may use the information consistent with this Privacy Policy, and Topix may use the information consistent with its own privacy policy (available at www.topix.com).

Please be aware that we may occasionally release information about our visitors if required to do so by law or if, in our business judgment, such disclosure is reasonably necessary: (a) to comply with legal process; (b) to enforce our Terms of Service; or (c) to protect the rights, property, or personal safety of our Site, us, our affiliates, our officers, directors, employees, representatives, our licensors, other users, and/or the public. Please also note that as our business grows, we may buy or sell various assets. In the unlikely event that we sell some or all of our assets, or our Site is acquired by another company, information about our Site users may be among the transferred assets.

Data Collected in Connection with Ad Serving and Targeting
We may use cookies, web beacons and similar technologies, and/or third-party ad serving software, to collect non-personally identifiable information about site users and site activity, and we may use this information to, among other things, serve targeted advertisements on this site. The information collected allows us to analyze how users use the site and to track user interests, trends and patterns, thus allowing us to deliver more relevant advertisements to users. We also may use third-party service providers to target and serve some of the advertisements you see on the pages of our Site, and these providers likewise may use their own cookies, web beacons and similar technologies to collect non-personally identifiable information from our Site. These service providers may use that information, sometimes in conjunction with similar non-personally identifiable information gathered through other websites, to deliver advertisements on this Site, and on other websites that participate in our service providers' advertising networks, that are tailored to match the perceived interests of consumers. This information also may be used to help measure and research an advertisement's effectiveness, or for other purposes. The data collected in connection with the ad serving and ad targeting on our Site does not identify you personally and does not include your name, address, email address or telephone number, but it may include the IP address of your computer. The use and collection of information by third-party advertising service providers is governed by the relevant third party's privacy policy and is not covered by our privacy policy. If you would like more information about the information collection practices of a particular service provider, or if you would like more information on how to opt out of a service provider's information collection practices, please click here.

Information You Post to Blogs, Discussion Forums and Other Community Posting or Social Networking Areas
Please keep in mind that whenever you voluntarily make your personal information or other private information available for viewing by third parties online - for example on blogs, discussion forums, or other community posting or social networking areas of our Site - that information can be seen, collected and used by others besides us. We cannot be responsible for any unauthorized third-party use of such information.

Children's Privacy Statement
This children's privacy statement explains our practices with respect to the online collection and use of personal information from children under the age of 13, and provides important information regarding their rights under federal law with respect to such information. This Site is not directed to children under the age of 13 and we do NOT knowingly collect personally identifiable information from children under the age of 13 as part of the Site. We screen users who wish to provide personal information in order to prevent users under the age of 13 from providing such information. If we become aware that we have inadvertently received personally identifiable information from a user under the age of 13 as part of the Site, we will delete such information from our records. Because we do not collect any personally identifiable information from children under the age of 13 via the Site, we also do NOT knowingly distribute such information to third parties. We do NOT knowingly allow children under the age of 13 to publicly post or otherwise distribute personally identifiable contact information through the Site. Because we do not collect any personally identifiable information from children under the age of 13 via the Site, we do NOT condition the participation of a child under 13 in the Site's online activities on providing personally identifiable information.

How To Make Changes to Your Information
If you are a registered member of our Site, you can make changes to your account information by logging in to the Site and using the account tools available via the Site. If you have subscribed to one or more of our email newsletters, you also may change your subscriber information, modify your subscriptions, and/or unsubscribe from these newsletters at any time by logging in to your account. If you have any questions about modifying your account or preference information, please visit the "Customer Service" pages of our Site.

Storage of Information
All information gathered on our Site is stored within a database to which only we and our hosting services provider are provided access. However, as effective as the reasonable security measures implemented by us may be, no physical or electronic security system is impenetrable.
We cannot guarantee the security of our Site's servers or databases, nor can we guarantee that information you supply will not be intercepted while being transmitted to us over the Internet.

Questions Regarding Privacy
If you have any questions about this Privacy Policy, our privacy practices, or your dealings with us, you can contact:
Kim Jaske
Online Privacy Coordinator
Gannett Law Department
7950 Jones Branch Drive
McLean, VA 22107
Spelunker
From StrategyWiki, the video game walkthrough and strategy guide wiki
Japanese title: スペランカー
Developer: Micro Graphic Image
Publishers: Brøderbund Software, Ariolasoft, Irem
Systems: Atari 8-bit, Commodore 64, Arcade, NES, MSX, Wii
Followed by: Spelunker II (Arcade), Spelunker II: Yuushahe no Chousen (Famicom)
Spelunker is one of the most maligned video games in history, right along with the Atari 2600 game E.T. The Extra-Terrestrial. But the fact is, Tim Martin's Spelunker did not start out deserving such a bad rap. It actually began life as a rather groundbreaking achievement. When it was released for the Atari 8-bit computer systems, the game's map was considerably larger than those of other contemporary action games. It was also distinguished as one of the few games that ever went from computer to arcade, instead of the far more common opposite. However, the version that stands out most in everyone's minds, possibly for the exposure it received on a system that heralded the comeback of video games in America, was the horribly converted NES game.
As an NES title, Spelunker sported some of the most unfair and unmerciful controls ever seen. While the original versions, including the arcade game, never suffered from this problem, the NES version punished everything but the most deliberate of inputs. The fact that the player would die after falling just a few pixels, coupled with the excruciating difficulty of jumping off a rope or ladder, resulted in a catastrophe of unplayability. With no room for error, players lost a vast number of lives simply trying to get through the first level, let alone the entire game. No continues and no level select meant that the player had to start over from the beginning every single time. To this day, numerous English and Japanese web sites pay homage to the unrelenting poorness of the NES version's controls.
Nevertheless, if the poor controls of that one conversion could be overlooked, a rather enjoyable exploration game could be found. And that, perhaps, is one of the biggest reasons why the controls are lamented to such an extent: there is quite a fun game buried underneath. After Micro Graphic Image published the original Atari version, Brøderbund Software bought the rights to the game and took over distribution of the original version, as well as developing an identical version for the Commodore 64/128. In 1985, Irem thought it would make a great arcade game, so they bought the rights to convert it and distribute it in Japan. They are ultimately the ones responsible for the failed NES port as well as an MSX conversion (which contains many of the same faults). Irem concocted a sequel to the arcade game, which played a lot like the first, but in new locations. A sequel was also developed for the Famicom, which had a selection of players to choose from, and differed from the original quite drastically.
How to play →
Walkthrough →
Gallery
Broderbund box
Commodore 64 box
MSX box
Famicom flyer
Famicom box
NES box
Putting Theory into Practice
(December 2004, posted Mon Dec 13, 2004) Screen-process research brings rewards to the production floor
By Dr. John Anderson
Last year, Screen Printing featured a pair of articles in which Dr. Anderson presented new insights about the behavior of tensioned mesh and suggested methods to improve stretching consistency. In this edition, he reveals additional findings about optimal squeegee and ink characteristics based on data from a major screen-printing study he and other researchers conducted in the United Kingdom.

Following the publication of my two-part series on mesh-tension loss in the March and April 2003 issues of Screen Printing, I received a number of comments and questions. Here, I'll address those comments and offer clarification. However, the main focus of this article will be two additional parts of the research study in which I participated at the Welsh Center for Printing and Coating, University of Wales--Swansea, United Kingdom. These areas concern squeegee forces and deflection and ink properties. Both areas were tested in a series of experiments and engineered simulations to evaluate the actions and functions of the screen-printing process.

More on mesh-tension loss

After last year's articles on mesh-tension loss, several readers asked if the principle of fiber realignment held true with mesh on which warp and weft threads are fixed with heat and pressure at the points where they cross--the common manufacturing method used by mesh suppliers today. The quick answer is that the research covered in this and the previous articles was carried out between 1993-96, and the mesh fabrics used in the trials were not fixed in this way. Our findings with unfixed mesh demonstrated that friction between overlapping threads causes bowing of the threads and subsequent tension loss as the threads slowly realign over time.

What's interesting to note, however, is that this research was prompted by an accidental discovery. Four meshes had been measured every day for 40 days after they were stretched. These meshes had shown stable tension levels for the previous 32 days. Then two of the tensioned screens were transported by car 200 miles to another location. They were measured immediately after arrival and were found to have lost 20% of their average tension values. But when the screens were returned to the original shop (again by car) and measured, no more tension loss was recorded, and the screens were stable at this lower tension level. Their tensions were then compared to those of the two screens left behind, which served as a control. Both control screens were found to have the same higher tension level as before. The sudden tension loss with the seemingly stable meshes needed an explanation, and research showed that it was fiber realignment.

How the fixing of the mesh contact points together actually affects tension loss and fiber realignment is a matter that requires further research. But I speculate that fixing the threads would reduce the fiber-realignment effect and increase tension stability. Keep in mind, however, that the stretching method would continue to play a role. For example, a multiple-clamp stretching system with gaps between the clamps would create stretching variations as shown in Figure 1, which would result in misalignment of threads, cause tension inconsistencies, and, inevitably, lead to tension loss.

Squeegee forces and deflection

A key skill for a screen printer is properly setting up the squeegee, both in terms of angle and the vertical forces applied during the print stroke.
Many screen-printing operations spend a great deal of time adjusting squeegee parameters without fully understanding the consequences of these adjustments. Accepted squeegee- setup practices are typically based on folklore and tricks developed decades earlier that are still used despite the fact that squeegees, screen meshes, and inks have all changed substantially. A key part of the research conducted at Swansea involved reengineering a cylinder press and a flatbed press with load-sensing instruments to measure and display the vertical forces applied on both sides of the squeegee blade. Displays were placed next to the press so that they were always visible to the press operator, who then used them to consistently set the static vertical forces on the squeegee. The measurement of these static vertical forces was supported by high-speed data recording of the dynamic vertical forces on the squeegee and other press components during printing operations. In the trials, more than 100,000 readings/sec were recorded, providing very detailed profiles of the forces and changes experienced by the squeegee. These changes and forces were found to occur in stages, and, as Figure 2 shows, they demonstrate that the forces on the squeegee are rarely stable. The constant changes the squeegee encounters during printing make even and consistent ink transfer an elusive target. As part of the study, we also ran experiments to evaluate squeegee deflection during printing by comparing set contact angle to the real contact angle during printing. The experiments considered not only deflection of the entire blade (macrodeflection), but also microdeflection of the blade edge where it makes contact with the screen. In these experiments, we used an array of sensors behind the rear edge of the blade to measure macrodeflection, then applied these measurements to a computer simulation to calculate microdeflection at the blade edge. One finding of these trials was that the squeegee's measured profile perpendicular to the print stroke did not match the profile expected. As Figure 3 shows, we anticipated that the lack of support at the ends of the squeegee would cause the ends to bow away from the direction of the print stroke. In reality, the lack of stencil openings at the squeegee ends and the presence of ink allowed the ends to slide easily with little friction or deflection. However, in the center of the screen, greater friction created by the open image areas of the mesh and the presence of more ink caused the center of the blade to deflect. This bowing results in variations in the squeegee contact angle and edge profile across the blade, which, in turn, causes variations in the amount of ink transferred across the print width. Figures 4A-4C depict the macrodeflection and microdeflection experienced by the squeegee blade under three conditions. The first two (Figures 4A and 4B) consider a soft squeegee (low durometer), while the third (Figure 4C) is a control showing the effects with a steel blade. Blade height and initial angle were identical for all three. Figure 4A shows vertical forces experienced by the blade and its resulting deflection. This is comparable to the deflection experienced during squeegee setup. Macrodeflection of the squeegee is measurable, but not large. However, the microdeflection chart indicates that the contact angle at the squeegee edge is significantly different than the initial set angle. 
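To give a feel for the gap between set angle and true contact angle described above, here is a rough back-of-the-envelope calculation (in Python) that treats the blade's free height as a small-deflection cantilever loaded by the horizontal drag force. It is only an illustration: the force, blade dimensions, and modulus below are invented placeholder values, not measurements from the Swansea study, and real blade behavior is more complex than simple beam theory.

```python
import math

def blade_deflection(force_n, free_height_m, width_m, thickness_m, youngs_modulus_pa):
    """Small-deflection cantilever estimate for a squeegee blade section.

    The free height of the blade is treated as a cantilever loaded at the tip
    by the horizontal drag force. Returns tip deflection (m) and tip slope (degrees).
    """
    second_moment = width_m * thickness_m ** 3 / 12.0  # rectangular cross-section
    deflection_m = force_n * free_height_m ** 3 / (3.0 * youngs_modulus_pa * second_moment)
    slope_rad = force_n * free_height_m ** 2 / (2.0 * youngs_modulus_pa * second_moment)
    return deflection_m, math.degrees(slope_rad)

# Illustrative placeholder values only -- not figures from the study.
set_angle_deg = 75.0      # angle dialled in at setup
drag_force_n = 40.0       # horizontal friction/drag on this blade section
free_height_m = 0.025     # 25 mm of unsupported blade
width_m = 0.10            # 100 mm wide section of blade considered
thickness_m = 0.009       # 9 mm thick polyurethane
modulus_pa = 30e6         # soft polyurethane, order-of-magnitude guess

tip_m, slope_deg = blade_deflection(drag_force_n, free_height_m, width_m,
                                    thickness_m, modulus_pa)
print(f"tip deflection ~{tip_m * 1000:.1f} mm; "
      f"true contact angle ~{set_angle_deg - slope_deg:.0f} degrees (set to {set_angle_deg:.0f})")
```

Even with these toy numbers, a modest drag force tilts the working edge several degrees away from the angle dialled in at setup, which is the same kind of divergence between set and true contact angle that the instrumented trials recorded.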
In Figure 4B, the squeegee experiences the same vertical force as in Figure 4A, plus a horizontal force representing friction and drag from contact with the screen during the print stroke. Here, the macrodeflection image illustrates a large deflection of the squeegee from its initial set angle. The real impact of extra horizontal force is illustrated in the microdeflection image of the squeegee edge, which shows substantial distortion. Such distortion and higher stress at the edge can cause accelerated squeegee wear and lead to a rounded edge profile.

With the steel squeegee blade represented in Figure 4C, the forces are too low to cause any deflection, so the real contact angle during printing is unchanged from the set contact angle.

These results illustrate the effect of squeegee angle and its true value at the contact point. The true contact angle is highly important for the ink-transfer process because it controls or influences the contact time, ink-flow pattern, and amount of ink transferred. These experiments suggest that the significant time and effort shops put forth in changing the squeegee angle between jobs has a minimal impact on printing performance. The combination of vertical and horizontal forces during the print stroke and the actual squeegee angle they result in have a much greater effect than the initial angle setting. Our research at Swansea confirmed that correct ink transfer is achieved by balancing a number of dynamic squeegee and ink properties with variables such as screen coverage area, vertical forces, friction, and stroke speed. The range of successful combinations becomes smaller as the squeegee becomes harder. This is why softer squeegees are easier to set up, but provide less consistent results during printing.

Controlling squeegee deflection

One technique commonly used to control deflection and contact angle during printing is to use layered, multidurometer squeegees. Such blades combine softer outer layers with a harder, more rigid center layer to help maintain the correct contact angle without risking damage to the squeegee edge or screen. They also help simplify setup time and accommodate a larger range of press settings. Another common solution often suggested by press manufacturers is to use a reinforced squeegee. Most commonly, such reinforcement takes the form of a stiff backing plate to support the squeegee.

Our experiments included printing trials to assess how the height of the backing plate relative to the squeegee might affect deflection. All the tests used identical squeegees backed with a steel plate. The results of three of the tests are shown in Figures 5A-5C. On the first squeegee, the plate was positioned flush with the bottom of the squeegee (Figure 5A). On the second, the plate ended 3 mm above the bottom of the squeegee (Figure 5B). And on the last, the backer was affixed 10 mm above the bottom of the squeegee (Figure 5C).

The diagrams show only minor differences in squeegee deflection when the backing plate is offset 0 mm or 3 mm. However, note that the edge profile with a 0-mm plate height is somewhat flat. With a 3-mm backing, the squeegee edge is slightly rounder and may be more resistant to wear. At a 10-mm backing height, the deflection of the blade from its set angle becomes noticeable. This suggests that leaving 10 mm of squeegee unsupported will not provide the control necessary to maintain a particular contact angle while printing.
These results indicate that a support plate should extend almost the entire free height of the squeegee blade. Keeping the backing plate just a little above the squeegee bottom may extend the edge life of the blade. It also can reduce the likelihood of ink contamination from ink that gets trapped between the squeegee and support blade or mesh damage from the support plate accidentally making contact with the screen. Overall, the deflection experiments suggest that minor changes in initial squeegee-angle setup may have little true effect on the real squeegee angle during printing unless a stiff or reinforced squeegee system is used to control macrodeflection of the blade, and hence, microdeflection at the blade edge.

Optimizing squeegee setup

During the experiments conducted at Swansea, the squeegee confirmed its status as one of the most significant, yet least controlled, elements of the screen-printing process. However, the data collected showed that by removing the individual printer's magic touch as the basis for squeegee setup and replacing it with measurable, repeatable procedures, we significantly reduce the number of variables that come into play and simultaneously shorten setup times.

Adding instruments to the squeegee assembly to measure and control vertical squeegee forces during printing allows initial press setup to be achieved with standard, preset values and creates a common starting point irrespective of the type of job being printed or the person operating the press. Using a reinforcing support plate or a squeegee manufactured to resist deflection helps control the macro angle of the squeegee, resulting in less variation in the true contact angle at the squeegee edge. This will result in a much more sensitive press setup, because the tolerances for achieving the correct squeegee parameters become narrower. However, once an operator becomes familiar with the procedures, setups become easier and faster, on-press consistency becomes better, and print quality improves. If a shop employs instrumentation to measure squeegee deflection and reinforced squeegees to control this deflection, it can quickly establish an effective routine for setting up presses by the numbers--thereby ensuring accurate results with any job.

Ink measurement and stabilization

The other area investigated during the research program was ink properties and how they can be identified, measured, and stabilized. One key finding concerns the flow properties of screen-printing inks and how these properties are measured and controlled. The primary flow property in which screen printers are interested is viscosity, which relates to an ink's resistance to flow. Ink viscosity is typically measured by ink manufacturers using highly sensitive, sophisticated, and expensive tools, such as cone and plate rheometers. These devices measure the resistance of the ink to the relative rotation of two non-flat plates at various speeds to arrive at a viscosity value. However, the way this value relates to an ink's shearing characteristics under a squeegee blade is somewhat questionable and unclear. In reality, a great deal of ink development is based on practical, on-press trials and feedback from the printers using the inks.

One factor influencing ink viscosity that printers can control is ink temperature. In general, the colder an ink is, the more viscous or resistant to flowing it becomes. So if a printer starts a production run with cold ink, the printed results will not be the same as when the ink is warm and flowing easily.
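To illustrate why a few degrees of ink temperature matter, the short sketch below uses an Arrhenius-type exponential model of viscosity versus temperature. The reference viscosity and temperature-sensitivity constant are made-up placeholder values for illustration only, not data for any real screen-printing ink.

```python
import math

def viscosity_pa_s(temp_c, ref_visc_pa_s=12.0, ref_temp_c=25.0, activation_k=4000.0):
    """Arrhenius-style estimate: viscosity rises exponentially as ink cools.

    ref_visc_pa_s is the viscosity at ref_temp_c; activation_k (in kelvin)
    sets how strongly the ink responds to temperature. All values here are
    illustrative placeholders, not measurements for a real ink.
    """
    t_k = temp_c + 273.15
    ref_k = ref_temp_c + 273.15
    return ref_visc_pa_s * math.exp(activation_k * (1.0 / t_k - 1.0 / ref_k))

for temp in (15, 20, 25, 30):
    print(f"{temp:2d} C -> ~{viscosity_pa_s(temp):.1f} Pa.s")
```

With these toy constants, letting the ink warm from 15°C to 25°C cuts its resistance to flow by roughly a third, which is why a run started with refrigerator-cold ink prints differently than one started with warm ink.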
This principle can be tested in any shop with two identical containers of ink--one at a warm room temperature and the other stored in a refrigerator. In some cases, the differences felt just by stirring the inks will be enough to demonstrate the impact of temperature. Alternately, both inks can be used on press with identical setups to print the same image, and the resulting prints can be compared. The differences will be obvious.

During the course of our experiments at Swansea, we devised a simple method for controlling the ink so that it would always be delivered to the press at a consistent temperature level. The solution was a water-bath system for containers of ink. The bath consisted of a large plastic container filled to a particular water level. The bath was equipped with a large aquarium water heater, as shown in Figure 6. The water heater, which was self-regulated to within ±0.5°C, allowed us to accurately control the water temperature and, hence, the ink temperature to match the range recommended by the ink manufacturer. The bath was left on continuously, and ink containers were placed in the water at least one hour before the ink was to be used. The ink temperature also was measured before use to ensure it had reached the correct temperature. In this manner, we were able to deliver inks that had the correct flow properties starting from the beginning of the print run, which ensured consistent and predictable printed results.

Consistency through control

Control is critical for achieving consistency in screen printing, but real control can only be gained by measuring the process and recording the results to arrive at meaningful benchmarks and targets. The studies outlined here demonstrate how measuring several simple elements of the production process can significantly reduce press variability and improve printing quality. The excessive time that many shops spend in setting squeegee angle, force, and speed on their presses through trial and error often has no significant impact on the final print. By adopting standard setup parameters, however, shops create a repeatable starting point that is supported with proven values and tolerances. As a result, setup effectiveness increases while setup time decreases.

Keep in mind that the use of standard press-setup parameters does not imply that the modern printer requires fewer skills. The standard setups simply allow printers to focus their skills on real production issues, such as enhancing efficiency and profitability, rather than wasting valuable time on basic press-setup functions. Screen printing is fighting a battle with digital printing on one side and offset printing on the other. The result has been compression of several screen-printing market segments. To ensure their survival, screen printers must increase the consistency and repeatability of the process while reducing production costs. Standardizing all elements of press setup and production--squeegees, inks, screens, press-operating procedures, etc.--is the most effective way for screen shops to remain competitive and profitable.

Author's note: The screen-printing research study referenced in this article was completed in 1997. However, research continues at the Welsh Center for Printing and Coating (WCPC), University of Wales--Swansea, United Kingdom. Among other projects that the Center is involved with are the development of digitally imaged disposable screens, creation of a high-speed screen-printing system, and more.
For details on these and other projects, please contact Dr. Tim Claypole of WCPC by e-mail at t.c.claypole@swan.ac.uk or visit www.swan.ac.uk/printing.

About the author

Dr. John Anderson is a printing-industry consultant, engineer, and trainer based in Pittsburgh, PA. He specializes in simulated press-operation training for flexography, gravure, offset lithography, and screen printing. In 2003, he won the Academy of Screen Printing Technology's annual Swormstedt award, which recognizes authors of groundbreaking technical articles related to screen printing. He can be reached at fcaassociates@aol.com.
Guidelines for In-Game Advertising
The Interactive Advertising Bureau (IAB) released guidelines for in-game advertising measurement. If you've considered purchasing advertising in this format, the IAB's recommendations might come in handy: for as long as in-game ads have been around, everyone from media buyers to game publishers has struggled with ways to measure engagement and the time users spend playing. The guidelines define an ad impression within the game environment as one that needs to be visible on the screen for at least 10 seconds before it is counted, establish a common methodology for counting impressions (which hopefully will make them easier to buy and sell), and provide some important definitions to help everyone understand and quantify the value. "We worked with key industry stakeholders to develop a single methodology for in-game advertising measurement," said Jeremy Fain, vice president of IAB's Industry Services. "We are confident that consumers' increasing appetite for in-game experiences coupled with the widespread adoption of these guidelines by game publishers and ad servers will pave the way for more marketing innovation and spending in the games platform."
“Establishing a standard methodology for measurement within an in-game environment eases the transactional process for marketers, and will allow for continued growth,” said David Gunzerath, SVP, Associate Director, Media Rating Council (MRC). “These guidelines will provide the framework for auditing ad impression measurement thus enhancing accountability within in-game advertising.”
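To make the 10-second rule concrete, here is a minimal sketch of how a game or ad server might count qualified impressions from visibility logs. The event format and the way exposures are aggregated are assumptions for illustration; this is not the IAB's reference implementation.

```python
MIN_VISIBLE_SECONDS = 10.0  # IAB guideline: ad must be on screen at least 10 seconds

def count_impressions(visibility_events):
    """Count qualified impressions from (ad_id, seconds_visible) exposure events.

    Each event is one continuous on-screen exposure of an in-game ad placement.
    Only exposures meeting the minimum visibility time count as impressions.
    """
    qualified = {}
    for ad_id, seconds_visible in visibility_events:
        if seconds_visible >= MIN_VISIBLE_SECONDS:
            qualified[ad_id] = qualified.get(ad_id, 0) + 1
    return qualified

# Example: three exposures of a billboard placement, one of a poster.
events = [("billboard_03", 12.4), ("billboard_03", 4.0),
          ("poster_17", 27.9), ("billboard_03", 10.0)]
print(count_impressions(events))   # {'billboard_03': 2, 'poster_17': 1}
```

The point of a shared counting rule like this is that buyer and seller agree on which exposures count before money changes hands.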
John AlanR 09-28-2009 3:16 PM
Is this article in reference to video games which require an internet connection?
It's generally wise to remember that if the audience is already paying to play a game, then they might be quite annoyed with distracting/gratuitous advertising. However, if the audience does not have to pay for the game, then advertising is more acceptable.
Ask a Game Design Expert
In requirement 4a, can you give me an example of how to change the rules of a game?
Sure – let's take chess as an example. One of the standard ways to change the rules is to limit the amount of time a player may take for their moves – this is commonly known as 'speed chess.' If a player runs out of time, they lose, no matter what their position is on the board or which player still has the most pieces. Another variation, standard in Shogi, is that instead of moving a piece, you can put a captured piece back on the board as your own (if you did this with standard chess pieces, you would have to somehow mark the piece to show that it was operating as a different color). Finally, you could change the movement patterns of different pieces – like giving the bishop the ability to move one square side to side and front to back, or the rook the ability to move one square diagonally, as sketched below.
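Here is a small illustrative sketch (in Python, using a simplified empty-board model) of how that last kind of rule change plays out for the bishop; the move counts at the end show how much extra mobility the house rule adds.

```python
def standard_bishop_moves(col, row):
    """Squares a bishop can reach on an empty 8x8 board (0-7 coordinates)."""
    moves = set()
    for dc, dr in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        c, r = col + dc, row + dr
        while 0 <= c < 8 and 0 <= r < 8:
            moves.add((c, r))
            c, r = c + dc, r + dr
    return moves

def house_rule_bishop_moves(col, row):
    """Variant: the bishop may also step one square orthogonally."""
    moves = standard_bishop_moves(col, row)
    for dc, dr in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        c, r = col + dc, row + dr
        if 0 <= c < 8 and 0 <= r < 8:
            moves.add((c, r))
    return moves

# From d4 (col 3, row 3): 13 reachable squares normally, 17 with the house rule.
print(len(standard_bishop_moves(3, 3)), len(house_rule_bishop_moves(3, 3)))
```

Writing a rule change down this precisely is also a useful Scout exercise: it forces you to say exactly what the new rule allows and forbids before you playtest it.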
MEET THE EXPERTSDavid Radue is an Eagle Scout and co-leader of the Game Design merit badge development team. He is also the co-founder of the Salem Boardgames Group in Salem, MA, and has had a lifelong passion for games of all kinds. David is a mechanical engineer at MIT Lincoln Laboratory where he has worked on projects ranging from missile defense radars to laser communication systems.
David Mullich has been a Scouter for eight years and is the father of an Eagle Scout. He designed and programmed his first professional videogames for the Apple II computer while still in college and went on to become a game producer at such companies as Activision, 3DO, Spin Master and The Walt Disney Company. David has spoken about game development at the annual Game Developers Conference, volunteered as a game industry mentor at the USC GamePipe Laboratory, and serves on the Los Angeles Film School’s Game Production Department Advisory Committee.
Tom Miller is an Eagle Scout and has been a Scouter for many years. He remembers playing games of Yahtzee at family gatherings when he was in Cub Scouts, but his real passion for gaming began in high school, when he joined a club that played games from Avalon Hill such as Afrika Korps and Midway. Now as a father with three teenage sons (all Scouts), Tom continues to enjoy playing games with his family, be it board games like Settlers of Catan or electronic games like World of Warcraft.
All video games are first-person experiences, says Vlambeer
According to Vlambeer's Rami Ismail, all games are first-person experiences, regardless of the genre or mode.
Speaking during a lecture this past weekend at IndieCade East in New York City, Ismail noted that when players speak about their gameplay experiences, they speak from the first-person point of view. "I failed the jump" and "I saved the princess" are how we speak, as though we are the characters we control.
"The way you discuss, the way you play, the way you think about games is in terms of ‘I,'" Ismail said. "'I learned or failed or succeeded.' That direct link is something no other medium can offer. Books and movies will always have distance between the media and the person [reading or watching]. You don't read books with ‘I' as being about you."
Ismail said that games allow developers to shape worlds in which they explain certain things or feelings, and allow people to learn something through them. Players can be taught whatever the developer wishes without having to be explicitly told, as the game is the mechanism that should lead them to conclusions. The interaction between a game and its audience produces real emotions like guilt, pride and empathy.
Ismail discussed Yeti Hunter, first released at Vlambeer's show floor booth at the 2012 Game Developers Conference. In Yeti Hunter, players creep through an endless snowy forest in search of a yeti. Ismail said the game was inspired by real-life yeti hunters who stalk the creature without definitive proof of its existence, hoping to be the one to capture it on film. He said that as more people played Yeti Hunter in the hours following its release, more realized that there was likely no yeti in the game at all.
But then one person posted a screenshot of the elusive yeti to Twitter. According to Ismail, the most common comment was that the screencap must have been altered using Photoshop.
"The way you discuss, the way you play, the way you think about games is in terms of ‘I.'"
"What we created was a game, but it was also an emotionally-invested experience, " Ismail said.
Vlambeer's GlitchHiker was the developer's second attempt to create an emotional gaming experience. In GlitchHiker, the health of the game's coding structure was dependent on how well people played it. Successful players would earn more "lives" for the game, while users who played badly or continuously failed would cause it to lose lives. The game ran through an online server structure and was created at the 2010 Global Game Jam, though it lasted only nine hours after launch.
"As the number of lives went down, the game would start to glitch," Ismail explained. "The game got ‘sicker.' But people that did well wanted to play more because they felt this weird sense of responsibility for the game. The people that played badly kept trying, and some would continue to do terribly. One player played for eight hours straight, scoring more lives. One girl walked away from the game visibly upset when she found out she had wasted three of its lives. Nine hours after it was released, a drunk Canadian failed in three seconds and the game died."
Ismail and his partner, Vlambeer designer Jan Willem Nijman, initially wanted to build a game that included a world that would die, but came to the conclusion that the creation of an in-game world was entirely unnecessary. Making players directly responsible for the game's "health" rather than that of a fictional world would nurture emotional investment.
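As a rough illustration of that shared-responsibility idea, here is a toy model of a pooled-lives game in Python. GlitchHiker's actual server logic isn't described in the talk, so the starting lives, scoring threshold, and glitch scaling below are invented purely for illustration.

```python
class SharedGamePool:
    """Toy model of a game whose 'health' is pooled across all players."""

    def __init__(self, starting_lives=10):
        self.lives = starting_lives

    def glitch_level(self):
        """0.0 = healthy, 1.0 = fully glitched out (about to die)."""
        return max(0.0, 1.0 - self.lives / 10.0)

    def report_session(self, score, good_score_threshold=1000):
        """A good session earns the game a life; a failed one costs it."""
        if self.lives <= 0:
            return "the game has died"
        if score >= good_score_threshold:
            self.lives += 1
        else:
            self.lives -= 1
        return f"lives remaining: {self.lives}, glitch level: {self.glitch_level():.2f}"

pool = SharedGamePool()
for session_score in (2500, 300, 150, 90):   # one good run, three bad ones
    print(pool.report_session(session_score))
```

Even in this stripped-down form, the design choice is visible: every individual session changes a state that all players share, which is where the sense of collective responsibility comes from.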
"We don't need to connect the player to a world in the game if we can connect the player directly to the game," he explained. "Instead of making people feel empathy for this abstract thing in a game, they were feeling something for the game itself."
Ismail said that what makes video games special to him is the way games that successfully create that first-person experience build bonds between the developer and the player. Both benefit from giving agency to players and granting them "as much responsibility as [developers] can for whatever they do."
"You make sure whatever it is that you're making them do or trying to make them feel, you make sure that it works."
"It's that word ‘I,' that personal connection to our players and the way we require a dialogue between players and creators, that's what makes games special to me," he said. "Games can be anything. The one thing games will always have is that unique bond between creator and player, that interactivity."
As for developers seeking to create that perfect storm of connectivity and emotional investment with players, Ismail said there is one important thing to remember.
"You just don't waste their time," he said. "You make sure whatever it is that you're making them do or trying to make them feel, you make sure that it works and you're not making something that is completely useless to them and is just fun for you. Vlambeer's golden rule: if we make something and we think it's wasting the players' time, we just drop it.
"Games are being played by all sorts of people in all sorts of ways," he said. "Everyone is a gamer, whether you're playing Solitaire or Angry Birds. Games are now an important part of culture."
Original URL: http://www.psxextreme.com/scripts/previews3/preview.asp?prevID=155
Lunar: Silver Star Harmony
Scheduled release date: Q1 2010

As most of you know, I was a gigantic role-playing enthusiast back in the days of the PS1; in fact, I played that genre almost exclusively, with a few exceptions here and there. I've moved on to appreciate most all other genres but I continue to have a soft spot for the old classics, which is why I was interested to learn more about the upcoming remake. Game Arts and XSEED will let PSP owners revisit one of the most beloved RPGs ever: it's Lunar: Silver Star Harmony, which is an updated version of Silver Star Story Complete that arrived on the PS1. We've already seen a few remakes of this game but the latest promises to be the most "complete" adventure yet, and if you enjoyed the Star Ocean remakes, you'll want to jump on board once again. And speaking of the latter remakes, you might recognize the visual overhaul found in the latest; the revamped sprites and smoother lines are very similar to what we found in the PSP versions of the Star Ocean titles.
However, I should clarify that certain graphical aspects of the game will remain the same. While the character avatars and designs have been redesigned and will now stand out more – the remodeling looks more like modern-day sprites, the likes of which you might find in the Atelier Iris series – the anime cut-scenes won’t be altered. You may see some slight refinement but they didn’t actually change the overall look. Now, if you’re unfamiliar with the game, this is a turn-based, old-school RPG that features an excellent storyline, great characters, and a style of exploring and gameplay that will resonate with veteran RPG aficionados. Although it was on the PS1, it looked and felt a bit more like an SNES role-playing quest, with the exception of the FMV cut-scenes. And perhaps the one element that stands out most of all is the sound and music; so much of the game relies on music (it’s actually part of the plot), and the songs remain some of the more memorable in history.
During the game, you will see the enemies on screen but touching one brings you into a separate turn-based battle screen, with your foes on the left. Standard exploring brings you through forests, caves, and dungeons, and of course, you will find plenty of new towns and allies (Jessica, Kyle, Nash, Mia, etc.). You will play as Alex, a young boy who idolizes the great Dragonmaster Dyne, a hero of heroes that helped to save the world from impending disaster. Not surprisingly and sticking with the theme of most old-fashioned RPGs, Alex soon finds himself in a very similar position, and he strikes out from his small town with a few childhood friends in tow…along with a weird white flying cat-like creature named Nall. As you progress, you will learn that your friend Luna can not only sing, but her voice actually possesses special powers and the evil at work needs it to fulfill a very dark plan.
The upgrades will also include some “additional gameplay features,” although we haven’t quite learned what they are just yet. If I had to guess, I’d say they were adding new abilities for characters and I know for a fact that more story elements will be included. Therefore, if you factor in the new and improved graphics, one has to conclude that Silver Star Harmony will be the best edition yet. If any of you still have the soundtrack from the PS1 version, you’ll remember just how amazing it was and for more good news, I’m here to tell you that the PSP version will boast a remastered set of tracks. All in all, this is one RPG that no fan should miss and if you haven’t yet had the pleasure, you’ll definitely want to pick this up when it arrives early next year. Really, this is old-school role-playing at its very finest and it’s guaranteed to put a wistful smile on your face.
12/7/2009 Ben Dutka
DistroWatch Weekly, Issue 503, 15 April 2013
Welcome to this year's 15th issue of DistroWatch Weekly! One interesting open-source software phenomenon is the availability of source code for all applications. For commercial Linux companies, like Red Hat, this has interesting implications, such as the possibility to be "cloned" by third parties. Over the years CentOS and Scientific Linux have emerged as the most popular free (as in "gratis") rebuilds of Red Hat Enterprise Linux (RHEL). Today's feature story is an overview and comparison of the two projects' most recent releases, both based on RHEL 6.4. In the news section, the PCLinuxOS developers release their first-ever variant for 64-bit computer systems, Lucas Nussbaum is elected as the new Debian Project Leader, Ubuntu readies the upcoming release with a host of new features but with shorter support, and Fedora delays the alpha release of version 19 over two installer bugs. Also in this issue, the developers of Cinnarch ponder their distro's future - without the much-loved Cinnamon desktop user interface. Finally, in a follow-up to our last week's article on ZFS and Btrfs file systems, a reader wants to know how the two compare with the more established Linux file system - the ext4. We wish you all a great Monday and, as always, happy reading!
Reviews: Bring in the clones - CentOS and Scientific Linux
News: 64-bit PCLinuxOS, Debian Project Leader elections, Ubuntu 13.04 features, Fedora 19 delay, Cinnarch dilemma
Questions and answers: Advantages and benefits of ZFS and Btrfs over ext4
Released last week: PCLinuxOS 2013.04, Fuduntu 2013.2, Manjaro Linux 0.8.5
Upcoming releases: DEFT Linux 8, Mageia 3 RC
New additions: REMnux
New distributions: Mnix
Feature Story (by Jesse Smith)
Bring in the clones - CentOS and Scientific Linux
In March 2013 two projects, CentOS and Scientific Linux, released updates to their respective distributions. Both projects provide clones of Enterprise Linux free of cost. As such both projects are important to the Linux ecosystem as they provide a means for users to take advantage of stable, high quality software without the high cost associated with enterprise quality products. While both projects released clones of Enterprise Linux 6.4 and while both projects maintain binary compatibility with their upstream software provider, these projects do carry subtle differences. They may be binary compatible with each other, but each project takes a slightly different approach in their presentation and configuration. With this in mind I would like to talk about what it is like to set up both CentOS and Scientific Linux.
Website & focus
Let's examine CentOS first. The CentOS team released version 6.4 of their distribution on March 9, 2013. The website indicates their distribution is designed to be binary compatible with their upstream vendor and very few changes are made to the upstream packages. Artwork and branding from upstream is swapped out for CentOS specific images and text, a few minor configuration changes are introduced, but otherwise CentOS maintains high fidelity with upstream. The project maintains detailed release notes and serves up both 32-bit and 64-bit builds of the CentOS distribution. The distribution is available in three editions. There is a minimal install ISO (300 MB), a net-install option (189MB) and a torrent file which will enable users to download two "Everything" DVDs which contain all of the distribution's packages. Some of the project's mirrors (though not all) additionally supply copies of the "Everything" ISO images directly for users who do not wish to download via BitTorrent. I opted to download the first of the two "Everything" DVDs as only the first disc is required for performing an installation. I find that I like the CentOS website, it's clean, easy to navigate and provides plenty of documentation along with helpful user forums.
Where CentOS seems intent on maintaining a distribution as close to upstream as possible, Scientific Linux has a slightly different mission statement. While still compatible with upstream, Scientific has an additional mission which is to provide a common base distribution that can be used between multiple scientific laboratories. This allows different labs to start with a common platform and build tools on top of the distribution and these tools can then be shared with other labs. Scientific Linux 6.4 was released on March 28, 2013. The distribution is available in both 32-bit and 64-bit builds. The download flavours include an installation DVD (3.4GB), a rescue & net-install CD (159MB) and two "Everything" DVD images which total 4.6GB in size. Previous releases of Scientific included a live disc but at the time of writing a live disc for 6.4 has not been uploaded to the project's mirrors. Again, as with CentOS, I opted to download the first of the two "Everything" DVD images for my trial. The Scientific website strikes me as being less complete compared with the CentOS website. The project provides downloads and documentation, but doesn't have a community forum and feels more like a jumping off point to other sites and documentation rather than a one-stop location for all our distribution needs.
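Whichever flavour you download, it is worth checking the ISO against the checksum file published on the project's mirrors before burning it; the short Python sketch below streams the image through SHA-256. The file name and expected digest are placeholders -- substitute the real values from the mirror you used.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MB chunks so a 4 GB ISO doesn't need to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as iso:
        for chunk in iter(lambda: iso.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file name and digest -- take the real digest from the mirror's
# checksum file before trusting the image.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of("CentOS-6.4-x86_64-bin-DVD1.iso")
print("OK" if actual == expected else f"MISMATCH: {actual}")
```

A corrupted download caught here saves a wasted DVD and a confusing installation failure later on.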
Installation & Initial Impressions
Both Scientific Linux and CentOS use the Anaconda graphical installer. After offering to perform a media check against our installation disc to confirm our download wasn't corrupted, the venerable installer walks us through selecting our time zone, placing a password on the root user's account and partitioning the hard drive. I like Anaconda's partition manager, which has a fairly straightforward interface and allows users to work with LVM volumes, RAID configurations and regular partitions. We also have the ability to enable encryption to protect our partitions. Not many file systems are supported -- we are limited to using ext2, ext3 and ext4 -- and Btrfs has not yet made an appearance in Enterprise Linux. The last screen of the installer asks us to select a role for our operating system. Available roles include Desktop, Minimum Desktop, Web Server, Virtual Host, Software Developer Workstation, Web Developer Workstation and Minimum. One of the few differences between the two distributions is that the Scientific installer defaults us to the Desktop role while CentOS defaults to the Minimum role. In both cases I decided to run with the defaults offered to see where they would take me. Both distributions allow us to further customize which packages will be installed, which gives us additional flexibility. One option the Scientific installer gives us, which is not offered by the CentOS installer, is the ability to enable third-party software repositories during the initial install process. These third-party repositories contain multimedia codecs, Flash and other commonly used third-party packages.
GameMaker: Studio
For other uses, see Gamemaker (disambiguation).
Developer(s): Mark Overmars
Preview release: v1.99.65 Early Access Version
Written in: Delphi (for the GM Studio IDE); runners for games are built with appropriate languages for each target device
Operating system: IDE for Microsoft Windows, Mac OS X (only for GameMaker for Mac)
Genre: Game creation system
GameMaker: Studio (originally named Animo and later Game Maker) is a proprietary game creation system created by Mark Overmars in the Delphi programming language.[1]
GameMaker accommodates the creation of cross-platform video games using drag and drop or a scripting language known as Game Maker Language, which can be used to develop more advanced games that could not be created just by using the drag and drop features. GameMaker was designed to allow novice computer programmers to be able to make computer games without much programming knowledge by use of these actions.
1 Development history
2 Design and uses
2.2 Scripting
2.3 Engine compatibility
2.4 Export modules
3.1 Reverse engineering
3.2 Logo controversy
3.3 Digital rights management
3.4 2013 April Fools' Day joke
3.5 Hacking scandals
Originally titled Animo, the program was first released in 1999,[2] and began as a program for creating 2D animations. The name was later changed to GameMaker (written without a space to avoid IP conflicts with the 1991 software Game-Maker).[3] While Animo had a built-in scripting language, it was not as complex as in more recent versions, and neither it nor the next few versions of GameMaker had DirectX support, a separate runner to run games independently from the IDE, syntax highlighting, or the ability to compile games into executable files.[2]
Design and uses
System requirements:
Operating system: Windows 7/8[4]
Memory: 4096 MB[5]
Graphics hardware: 128 MB graphics
Display: Screen resolution of 1024×600
Network: Broadband internet connection required at all times
GameMaker is designed to allow its users to easily develop video games without having to learn a complex programming language such as C++ or Java through its proprietary drag and drop system, in the hopes of users unfamiliar with traditional programming creating games by visually organizing icons on the screen.[6] These icons represent actions that would occur in a game, such as movement, basic drawing, and simple control structures. It is also possible to create custom "action libraries" using the Library Maker.
GameMaker primarily runs games that use 2D graphics, while allowing limited use of 3D graphics.[7] The program has no way of choosing which graphics API the runner uses for rendering on a given platform: it has always used Direct3D on Windows since version 6.0, and OpenGL on non-Windows platforms since version 7.0. The program only supports its built-in custom "d3d" mesh format, which is not compatible with the DirectX mesh format, so a converter is necessary to use more popular or standard 3D formats such as .3ds and .obj in a 3D project. It also supports the ability to create particle effects such as rain, snow and clouds, though not natively in 3D except through the use of a Dynamic Link Library.
The latest iteration of the software uses a new extension mechanism which is incompatible with extensions written for older versions of the program, especially those built on top of another single extension known as "GM API". Versions 8.1 and lower had a variety of DLLs and wrappers to existing programming APIs and libraries that extended GameMaker with things such as socket support and MySQL connectivity.
Game Maker Language (GML) is the primary scripting language used in GameMaker; it is interpreted in a manner similar to Java's just-in-time compilation, which is usually significantly slower than compiled languages such as C++ or Delphi.
The MNS Vision
As the pioneer in implementing Electronic Data Interchange (EDI) in Mauritius, our mission is to provide quality value-added network services connecting businesses and government.
With a team of dedicated and service-oriented professionals, our solutions help our customers improve their services and competitive edge.
In this challenging era of Information and Communications Technology, we invite you to join us in our vision of bringing Mauritius...
Towards an Information Age...
Following the World Bank's 1992 recommendation to increase the use of IT, studies were initiated in November 1993 to examine the feasibility of implementing an electronic network that facilitates the existing trade documentation process. After detailed system studies and intensive industry discussions, it was evident that a Value-Added Network (VAN) operator needed to be set up to operate the electronic network. This network would first operate a system modeled after the successful TradeNet System in Singapore, and eventually implement other systems. This VAN would be a tripartite joint venture company involving public and private sector representatives and a foreign technical partner. It would also operate autonomously and would have to be self-sustainable in the long run. After carefully examining the costs and benefits of such a proposition, the decision to set up Mauritius Network Services was made.
Mauritius Network Services was incorporated in April 1994.
What comes after blogging?
By Dave Winer on Friday, June 22, 2012 at 2:21 PM.
I've had a few brief conversations in the last few days with people who have Manila servers, who are trying out the World Outline software. This got me thinking about how World Outline relates to Manila, and to blogging software like WordPress, Tumblr, etc.

There's a podcast that goes with this post. It's only 20 minutes, and it's full of ideas, I think, that are worthwhile if you spend time thinking about social media. It goes into much more detail than I'll go into here.

First, most people reading this probably don't know about Manila. Manila was an early blogging platform, but it started out as a full newsroom server app, with discussion software, editorial roles, tons of features that you don't see in blogging software. In a sense, blogging as an activity developed as Manila developed. It was launched in late 1999, and kept developing through early 2002.

Why was blogging so appealing? Why did it work so well? There are lots of ways to pick this up. One is that it gave us a structure to hang our writing on. Time. That allowed the software to be simpler, and for the design process for the software to be simpler. Not the look of the site, but the structure of the site and the software features built to support that structure.

But time isn't the only structure we can hang our writing on. That has become more and more apparent as our blogs get overloaded and we get so many of them that we can't keep track of them. Our ideas are out there swimming in a vague space and we have little sense of its shape or dimension. I know that was true for me. How many blogs do I have spread across how many servers? I have no idea. There's a moment when you have something to write, and get confused about where it goes. That's a huge recurring question. And there is no answer sometimes. So you create yet another place to put stuff. Another place you will never remember.

I wanted to figure out what comes next. We seemed to have the problem of a page licked, but we didn't have an organization that worked. Time wasn't enough of a structure. But I have a great tool for editing structure, the outliner. Somehow that must apply to this problem, or so it seemed.

That's the new level of functionality we've arrived at. It's a milestone as big as the one we reached in the late 90s and early 2000s with calendar structures. Now we have a tool for editing general web structures. And it works. I guess that's what I have to demonstrate with the next screencast. How to use the World Outline to manage a very large base of content. This will be of interest primarily to people who manage large bases of content. Librarians, lawyers, researchers, writers, scientists, teachers. And guess who loved Manila? Largely those people.

All that's explained, I hope, in the podcast. I hope you listen to it. And in the next screencast I hope to show you how I narrate my work with the outline.
Welcome to the new version of The Lounge!
Welcome to the shiny new version of The Lounge!
Those of you who have been with me since the beginning - 15 years now, eh? - have been, uh, treated to several major makeovers in Lounge software. Along the way we've gone from being a small, shoestring operation to what you see now: quite likely the largest independent help board for Microsoft software, a life support system for the hapless seeking help. Today marks another big day in the evolution of The Lounge. We're switching the guts of the system over to vBulletin, a board software system that's widely regarded as the best in the industry. There are two big reasons for the shift. First, going with vBulletin will give us lots of room to grow. Second, we now have at our beck and call a sizable pool of tech support people who know the inner workings of vBulletin.
You may have noticed the changes in the user interface. I, personally, think the new version is tremendous - much easier to figure out, and easier to use particularly for folks who spend a lot of time here. There were also many changes made under the covers.
That's just the beginning. Since we now have a bunch of support people, all sorts of improvements are possible. Stay tuned.
Got an idea? A suggestion? A... gripe? You came to the right place. Post your feedback right here, and let's see what we can do to turn The Lounge into the best board in the biz.
If you're trying to sign in, and the new Lounge won't accept your username/password, you've hit a snag in the upgrade.
Apologies all over. But we knew a small percentage of folks would get stung.
The problem originates with our new board software, vBulletin. It allowed us to bring across almost all of the username/password combos from the old Lounge. But if you're one of the unlucky few, you won't be able to sign in using your old password.
The solution? Click the "Forgot Password" link. You'll be prompted for your username - which is still there, trust me. But then you'll get an email which allows you to set your password.
Sorry. It's a technical snag that didn't have a good solution. So we opted to run a few of you through the Lost Password maze. When you get into the new Lounge, I hope you think it's worth the bump!
andyfboyd
We are aware of an iss | 计算机 |
Tomb Raider: Definitive Edition
Posted by Stephen Riach on Tuesday, January 28, 2014
Last year's Tomb Raider reboot was an outstanding game. It took a few cues from Uncharted to help modernize the franchise, which was fitting since Uncharted took more than a few from the original Tomb Raider series as well. Tomb Raider felt like a natural evolution of things, with an increased emphasis on logical reasoning behind what was going on. It provided a relatively short but thrilling single-player experience, with a multi-player mode thrown in for good measure that fell far short of the benchmark set by the Uncharted series. It featured some incredible graphics that resulted in a lot of breath-taking views, and the visible build-up of dirt and wounds on Lara in the gritty re-imagining added a lot.
When the Definitive Edition was first announced, the redone graphics received the bulk of the attention – with much of that being negative due to the existing PC version already trouncing the console releases. Fast-forward to the game’s release, and that’s still the biggest deal. As someone whose PC can’t handle Tomb Raider maxed out, this redone version was quite exciting since even grainy videos of the TressFX hair-flowing system were amazing, and this version was set to redo everything. Well, there’s a downside to that. While Lara’s new face bothered me in pre-release screens, it didn’t bother me at all while playing through it again. What did bug me was the change in skin tone and changes to her model as the game wears on. Before, dirt and debris buildup along with wounds added a lot to the reality of things. Now, those changes are there, but far less pronounced.
The change in skin tone takes her from a light tanned color to a nearly doll-like peach. Her skin looks too perfect, with no real signs of flaws. Throughout the adventure, you get bits of the build-up that was there before but the effects seem to been toned down a lot. Whether it’s due to criticism of Lara taking too much abuse or just a re-imagining of how she should be portrayed, it hurts to not have all of the damage evident on her face. She’s still a tortured soul and you can see that on her, but it’s not on pronounced and that hurts the immersion. Similarly, the wound damage isn’t as graphic as one would expect given next-gen hardware.
It may seem like an odd, and perhaps morbid thing to find interesting, but the depth of her wounds added to the reality of things. It’s not just a thing with her game though – the Yuke’s UFC games last gen featured the same kind of sliced-open flesh and you could see into the wounds as well. While the last gen version of Tomb Raider didn’t feature a lot of depth to the wounds, that was easy to handwave due to hardware and you could just use your imagination to fill the blanks. Now, in the version that is supposed to be the definitive one, the wounds still don’t have much depth to them. It hurts the reality of things. Similarly, the lack of entry wounds in some of the non-canon death animations do the same thing.
The redone graphics greatly benefit the environments and some minor aspects of the side characters that add a lot to the overall presentation. The entire game was redone with new graphics resulting in more realistic textures on trees and additional details being added to the world – like birds flying above and more lighting being added to liven the world up a bit. Lighting is a lot more pronounced now, so when lightning strikes, you’ll see more extreme color saturation during a strike. Similarly, fire now lights up Lara and the environment far more realistically than it did before. The clothing on some side characters looks much better, while faces are hit or miss. Most cutscenes feature the redone graphics, but on the rare occasion that doesn’t happen, it sticks out a bit like the in-game engine transitions in the God of War Collection on the PS3. It’s striking in a bad way, but doesn’t take you out of things. The revamped hair on everyone is amazing. Lara’s moving around realistically in wind will shock you, while the variety of hairstyles on display is impressive for everyone, and those with shorter hair – like the main antagonist and the eldest member of Lara’s crew, feature incredibly-realistic hair that adds a lot.
While the graphics warrant the most attention of anything in the Definitive Edition, there are some smaller additions that haven’t gotten much attention. The addition of all of the multi-player DLC is nice, but not really essential since that mode isn’t very good. The inclusion of a bunch of skin packs is nice, but also not essential. The pre-release comic and art book are nice in digital form, but not essential to the experience. New next gen-only features include the ability to talk and change weapons on the fly, or bring up the map at any time on both the Xbox One and PS4. PS4 owners get the exclusive ability to use the track pad for lighting the torch and can use Sixaxis controls to move Lara around in a parachute. The PS4 version is lauded as having a smoother overall framerate, while the Xbox One version sticks to a solid 30 FPS and doesn’t waver. This means that animations aren’t going to look as robust as they could, and seems a bit disappointing for something called a definitive edition. Control-wise, the PS4 pad’s going to be a bit more comfy for every function as the Xbox One pad is a little harder to use due to the iffy bumper controls for attaching rope lines and making use of survival skills. If you’ve got both next-gen systems, the PS4 version seems like the one to check out.
Audio-wise, nothing’s been changed and the voice work is as equally great now as it was before. Lara and the other major characters are acted very well, while some of the lesser protagonists are acted about as well as they can be given their limited characterization that rarely rises above stereotype level. The cast did a good job with what they were given, it’s just that some weren’t given much of anything to work with. The best way to approach this, and the inevitable next-gen revamps of past-gen hits, is how you would approach things like HD trilogies or remakes of PS2-era games. If you absolutely loved it, but can’t justify spending full-price on it again – a justifiable view – then simply wait for a price drop. HD trilogies tended to freefall in price, and while that won’t likely happen for this game for at least a little while since it’s on brand-new hardware, it will probably hit $40 soon. It’s a shame that each console version has some flaws though, since neither runs at 60 FPS all the time – so you know that while it’s called definitive, it could still be better, and it seems like the only way that can happen is via a PC version. Given what a mixed bag this definitive edition is, and its high cost with nothing really new being added, it’s tough to recommend a purchase at full price. Even though it’s an incredible game, it’s still one that can be beaten over a quick rental period, with multi-player that isn’t deep enough to sustain the experience for much longer.
Reviewed By: Jeremy Peeples
This review is based on a digital copy of Tomb Raider: Definitive Edition for the Xbox One provided by Square Enix.
Features How I Got My Start in the Game Industry
How I Got My Start in the Game Industry [06.19.07] - Alistair Wallis
There's plenty of useful information out there on how to get your start in the games industry -- much of it on this very web site. There's a lot of choices out there though; a lot of possible paths you can take. Internships, placements, modding, schooling, and so on: it's a long list. Add to that the numerous unconventional ways of entering the industry and, well, it can sometimes be a bit confusing about exactly how you should go about it all. So, sometimes it can be helpful -- or, at the very least, interesting and inspiring -- to hear first hand from industry veterans about their own experiences starting their careers. We spoke to four notable figures from the games industry to get their stories, and found that even if the exact opportunities have changed, a number of things, like the rewards for passion and determination, have not. Philip Oliver Philip Oliver is best known as one half of U.K. Spectrum design duo the Oliver Twins. Along with his brother Andrew, the Oliver Twins produced some of the most popular games for the system in the 1980s, and even launched their own franchise, the Dizzy series of platformers. In 1990, they set up their own business, Interactive Studios, more recently known as Blitz Games. Philip officially became the company's CEO in 2001, and the company has remained one of the UK's top independent developers, with exceptional sales from last year's Burger King titles, Sneak King, Big Bumpin' and Pocketbike Racer. The brothers' introduction to computers started in 1980, playing games like Zork and Night Mission on a friend's father's Apple IIe. With Space Invaders appearing in arcades at around the same time, Oliver comments that the 12 year olds were "hooked" on video games. The Olivers' parents bought them a Binatone Pong console for Christmas the next year, but this was soon forgotten shortly after, in 1982, when their older brother bought a recently released Spectrum ZX81 and put it "under the family TV." It wasn't long before the twins had started learning how to program their own games in the BASIC language, thanks to instructions that came with the computer. "Most were derivatives of Pong," laughs Oliver sheepishly. Not that it mattered, though. The biggest motivator for programming their own games was simply the fact that "games cost money", and the 14 year old twins were somewhat lacking in that, though Philip does admit that there was a degree of "wanting to show off" involved as well. Later that year, they upgraded to a Dragon 32 -- a generally unsuccessful home computer that lacked the graphical power to even be able to display lower case letters easily. While the brothers attempted to write games for it, they all turned out "very slow." Fortunately, a further upgrade the next year to a BBC Micro Model B meant that they were not only working with solid hardware, but also that they were able to play a number of inspiring titles like "Snapper, Defender, Missile Command, Scramble, Revs, Repton and, of course, [David Braben's classic space sim] Elite." The Olivers continued their work with BASIC, pushing each other to try and make better games, often by "not letting each other go to bed." "I think the fact that we were brothers made a massive difference," Oliver says. "We stuck at it longer, we pushed each other harder and we learned from each other." The twins believed that by using their own "ideas and perseverance" they could "write great games". 
Even at that point though - staying up until all hours of the night, working on their own games, which were steadily becoming more and more proficient -- Philip never "projected ahead" for a possible career in the industry. "Industry?" he laughs. "At that point in time nobody really believed there was an industry; just a passing fad for a few nerdy hobbyists. Our view was that if we could get paid to for our hobby then we'd see how long we could avoid getting real -- dull -- jobs." So, with that goal in mind, the duo started sending copies of their games to local publishers like Europress and QuickSilva, though this was mostly unsuccessful. Their big break actually came when they entered a game making competition held by UK kids program The Saturday Show and won first prize: "a Commodore 64 monitor." More importantly though, they gained a great deal of exposure for their efforts and their game, Strategy, was later picked up by the leading publisher at the time, Acornsoft, and was released as Gambit. This lead to more work, starting with a regular freelance position producing games for the cover cassette on Model B Computing magazine. Oliver explains that twins got the job as a result of a phone call to someone at the magazine's publisher, Acorn User, who "didn't want to publish" their games, but thought Oliver was "worth talking to." "He agreed to send us some free games in return for us writing reviews of them which they he subsequently published," he explains. "It seemed a great deal for school kids and lead onto lots of reviews and then [producing] lots of mini-games for Model B Computing." Still though, neither Philip nor Andrew felt that they "were 'in the industry,'" just that they were doing a few "gigs" for "pitiful" money. "But each game was a little better than the last," Oliver says, "and we slowly learnt the skills required to make good games -- efficiently." With their reputation beginning to develop, the twins continued to shop their games around, entering into publishing agreements with numerous companies. But it was their signing with fellow British siblings Richard and David Darling, who were in the process of setting up Codemasters, in September of 1985 that really cemented their place in the industry. Their first title for the publisher, Super Robin Hood, hit number one on the Amstrad CPC charts, and netted the duo over £10,000 in royalties. "Up until then," Oliver muses, "we were amateurs just trying to get in. After this we knew we had the skills to create games that would sell and that gave us the confidence to start thinking of game development as a career."
Super Robin Hood, a classic. Looking at how it would be to start their career now, Oliver notes that things would be quite different. "Starting an actual business would be very tough now," he considers. "Getting a job in the industry would be much easier if you have the talent. There's more companies, with lots of vacancies and the skills required are obvious and it's easy to contact them. The internet makes finding the skills required and the companies hiring so easy." "Trust me," he laughs, "finding the smallest bit of information back in the early eighties was so tough!" Next:
Dave Perry | 计算机 |
2014-23/2157/en_head.json.gz/24555 | libbyclark's blog
Linux Video of the Week: Matthew Garrett Argues for Better Security in 2014
By Libby Clark - January 10, 2014 - 1:16pm In his keynote talk at LinuxConf Australia this week, Linux kernel developer Matthew Garrett argues that the software industry can help improve security at every level of the stack – and that it's possible to do so without sacrificing user freedom. Original article
Amahi's Open Source Home Server Software Goes Mobile
By Libby Clark - January 7, 2014 - 1:40pm Following a recent trend in network attached storage (NAS), Amahi, the open source home server software based on Linux, has added remote network access using its new mobile app for iOS and a forthcoming Android app.
The apps are the latest evolution in Amahi's open source business model as the company grapples with how to scale up. Users will pay a subscription fee for streaming from their home server remotely over the internet – to, say, watch movies from outside their home network – via the free app. Original article
The Most Popular Linux Stories of 2013 on Linux.com
By Libby Clark - December 30, 2013 - 6:00am Original article
A Summer Spent on the Linux Kernel Virtual File System
By Libby Clark - December 20, 2013 - 6:30am Calvin Owens has learned a lot about bug hunting and fixing just by following the discussion among developers on the Linux kernel mailing list. He's even contributed a few small driver fixes over the past year. But his first real deep dive into kernel development came during his Google Summer of Code internship with The Linux Foundation this year. Original article
The People Who Support Linux: Snowden Revelations Spur Engineer's Open Source Donation
By Libby Clark - December 18, 2013 - 9:19am Shocked and chastened by Edward Snowden's revelation this year of a vast NSA surveillance program, Antonius Kies resolved to better support free and open source software development. Thus Kies, a Linux desktop user and an engineer at Graz University of Technology in Austria, recently joined The Linux Foundation as an individual member. Original article
The Top 10 Best Linux Videos of 2013
By Libby Clark - December 12, 2013 - 10:00am This list of the best Linux videos of 2013 combines some of the most watched Linux Foundation videos of the year, along with a selection of the most inspiring, compelling, or just plain fun videos produced by others in the Linux community. In choosing videos for inclusion, we avoided purely promotional videos in favor of those that celebrate big milestones, seek to educate, or communicate a broader message about the values and mission of Linux and open source software. Original article
The People Who Support Linux: PhD Student Powers Big Data with Linux
By Libby Clark - December 9, 2013 - 3:36pm Open source technologies are powering the current trend toward big data and Michiel Van Herwegen, a PhD student in analytical CRM (customer relationship management) at Ghent University in Belgium, has a front row seat. Original article
A Summer Spent on OpenPrinting with the Linux Foundation
By Libby Clark - December 6, 2013 - 9:25am This past summer marked Moscow-based developer Anton Kirilenko's third Google Summer of Code internship with The Linux Foundation. That's three summers, three different projects and mentors, and three totally different experiences with Linux and open source software. Original article
The People Who Support Linux: Starting Over as a Linux SysAdmin
By Libby Clark - November 27, 2013 - 2:28pm James Hazelwood, a former engineer in the manufacturing industry, is going back to school to pursue a new career in Linux and IT. The Somerset, England resident spent 15 years in manufacturing and 5 years in tech support for fire alarm systems. But after being made redundant for the third time, he decided to retrain in system administration and is studying for a computing degree at Plymouth University, Hazelwood says. Original article
The People Who Support Linux: A Desktop Lover and Photo Editor
By Libby Clark - November 25, 2013 - 3:31pm Over the years, Lance Spaulding has worked with a medical company, a non-profit foundation, a credit card company, a start-up, a small e-commerce business, and now a large defense contractor. But at least one thing hasn't changed in that time: he's a devoted Linux desktop user and tinkerer. Original article
Home › Blogs › libbyclark's blog | 计算机 |
2014-23/2157/en_head.json.gz/25054 | Home › MalwareEspionage Campaign Targeting Israel Expands to Other Countries
By Steve Ragan on November 16, 2012 Tweet
The Xtreme RAT malware, which has been at the center of several reports of cyber attacks on Israel has expanded, researchers have discovered. This news follows a recent report from Norman ASA, who reported that the attack campaign has been going on now for more than a year. In October, Trend Micro reported on collected samples of malware linked to several system infections on computers used by the Israeli police. The malware was determined to include a backdoor using the Xtreme Remote Access Trojan (RAT) or Xtreme RAT. Roni Bachar, from Israeli security firm Avnet, told the Times of Israel the pattern of the attack and the type of virus used were very similar to other cases of attacks, which were found to have been sponsored by governments. “At this point, I think we can be fairly certain that it was sponsored by a nation-state, most likely Iran,” he added. In Addition, security firm F-Secure reported around the same time that the same malware was being used to target Syrian protesters. Earlier this week, Norman ASA explained that the older files uncovered by the company used bait documents and videos written in Arabic that were aimed primarily at a Palestinian audience, while the newer files were targeted more towards Israelis. The older files date back to October 2011, offering a solid link to recent events and demonstrating that the attackers have tossed a wide, yet demographically focused net, when it comes to their victims. "The bait files seem to focus on a number of areas (military, political and religious) which hints at a broad targeting, not only a specific sector," Snorre Fagerland, principal security researcher at Norman told SecurityWeek. "The malware is off-the-shelf, cheap stuff. No zero-days were seen, though there are a few tricks used; such as using special Unicode characters to reverse text direction, and thus hide the executable file extension," he said. On Thursday, Trend Micro reported that they’ve discovered additional targets, and that not only has the attack lasted for more than a year – the potential victim list is much larger than previously thought. “While the vast majority of the emails were sent to the Government of Israel...a significant amount were also sent to the U.S. Government,” wrote Nart Villeneuve, a Senior Threat Researcher at Trend. Included in the U.S. target list were email accounts at “state.gov,” “senate.gov,” and “house.gov,” and “usaid.gov.” The target list also included the governments of the UK, Turkey, Slovenia, Macedonia, New Zealand, and Latvia. In addition, the BBC (bbc.co.uk) and the Office of the Quartet Representative (quartetrep.org) were also targeted, he added. “It is important to note that while we discovered that these targets were sent this email, we have no information about how many received or potentially opened the malicious attachment. Based on our investigation, the malware was signed with an invalid certificate. When executed, it opens a decoy document and installs Xtreme RAT on the targets’ systems,” Villeneuve wrote. “This campaign it seems is far from over and whatever specific motivations the attackers may have, considering the various targets seen scattered in various states, is still a mystery.”
Steve Ragan is a security reporter and contributor for SecurityWeek. Prior to joining the journalism world in 2005, he spent 15 years as a freelance IT contractor focused on endpoint security and security training.Previous Columns by Steve Ragan:Anonymous Claims Attack on IP Surveillance Firm Brickcom, Leaks Customer DataWorkers Don't Trust Employers with Personal Data: SurveyRoot SSH Key Compromised in Emergency Alerting SystemsMorningstar Data Breach Impacted 184,000 Clients Microsoft to Patch Seven Flaws in July's Patch Tuesday sponsored links | 计算机 |
2014-23/2157/en_head.json.gz/25451 | (5) The Fast and Furious: ATI Radeon X1900 XTX Review
The famous Radeon X1800 product series by ATI Technologies was delayed significantly from its original date and when it hit the market, its architectural advantages over the competing GeForce 7800-series were not obvious for the consumer. While ATI’s Radeon X1800 XT was significantly faster compared to the GeForce 7800 GTX in a variety of benchmarks, it was left in the dust by the GeForce 7800 GTX 512, a product that was never available widely, but that created “the right” effect for the whole 7800 lineup during the holiday season.On the 24th of January, 2006, ATI tries to recapture leadership in both performance and availability with the Radeon X1900-series: more than 5000 units will be available for purchase on the first day, 60% of which are XTX flavours, with more than 50 000 graphics cards already made. Perhaps, the numbers are not really great, but we should keep in mind that we are talking about offerings that will cost $549 (Radeon X1900 XT 512MB), $599 (Radeon X1900 XT CrossFire Edition 512MB) and $649 (Radeon X1900 XTX 512MB), demand for which is unlikely to count in hundreds of thousands, especially right after the New Year parties. The new Radeon X1900-series not only offers instant availability, but also improved performance amid moderate increase in the size of the die and transistor count. The newbie features 48 pixel shader processors, three times more than the Radeon X1800 XT and two times more than the GeForce 7800 GTX. But there is a question whether modern games truly need extremely high pixel shader performance. Read on to find out whether and by what margin the Radeon X1900 XTX can beat Nvidia’s latest GeForce 7800 GTX 512 in the broadest set of benchmarks available over the Internet. Table of contents: | 计算机 |
2014-23/2157/en_head.json.gz/26807 | Sun hails open-source meta data repository
Sun Microsystems has announced that it has contributed meta data repository modules to the NetBeans open source development project as part of the Object Management Group's Model Driven Architecture (MDA) effort.
The repository can, for example, assist in the development of Web services applications by enabling developers to quickly locate objects to be fitted with SOAP interfaces, according to Sun. CORBA and other infrastructure standards also can be supported.
The NetBeans open source project is a Java-based effort that is being positioned as a compliant solution for the MDA specification. The OMG MDA is designed to protect software investments by providing a framework in which application infrastructure components, such as databases and programming languages, can be changed without requiring enterprises to change their underlying application architecture.
A meta data repository, which can hold information about programming objects so they can be reused, is critical to supporting the MDA, said Drew Engstrom, product line manager for Sun's ONE Studio tools. "When an organisation is doing object-based development, typically you end up with a library of hundreds of objects," Engstrom added.
The meta data repository is expected to be included in an upcoming version of the NetBeans integrated development environment (IDE), to be known as build 4.0, in six to eight months, according to Engstrom. The MDA implements OMG's Meta Object Facility, an abstract language for describing meta models, and integrates it into the NetBeans IDE. Sun donated NetBeans to the open source community two years ago, Engstrom said.
"This is the first time I've seen Model Driven Architecture tools that are available in open source," said Richard Soley, OMG chairman. This will mean developers can have the source code free, he added.
Other companies that have submitted MDAs include IBM, with WebSphere, and Rational Software, with XDE, Soley said.
NetBeans provides a common platform for Java tools and supports the Linux, Solaris and Windows operating systems, according to Sun.
Linuxworld: Sun boosts NetBeans
Using the NetBeans IDE for RESTful Web Services
MDA From a Developer's Perspective
Integrating applications
Data quality management tips and best practices
Enterprise Security of Microsoft SQL Server 2008 Improves Over Other Versions
JavaOne: JBoss on SOA middleware, Java EE and data services
Data grids for storage | 计算机 |
2014-23/2157/en_head.json.gz/28102 | Position Paper Regarding Web Services
Ken Laskey, SAIC
Web services are often discussed in the context of B2B solutions supporting commerce over the Internet. Finding items of interest, negotiating price and delivery, and providing security for and certainty of the completion of transactions are among the many challenges which Web-based applications must execute. However, Internet commerce is not the only domain which has these types of challenges. Consider a collaborative design team distributed at work locations around the world. The design team must chose among a number of possible design alternatives, system budgets for resources such as power, weight, and heat load may be traded between teams with different allocations, and configuration control must be maintained for reference designs and modified baselines. Thus, the needs of a collaborative design team are analogous, if not in many cases identical, to their commerce brethren.
It is the contention of this paper that a Web service environment which supports the requirements for electronic commerce will also support technical collaboration among distributed teams. In the following, several conceptual elements are laid out describing how such an environment would function, and indicating the type of services which would be available from a Web service infrastructure.
Collaboration for a distributed design team would use a virtual repository to "store" all design data and design tools. The actual physical location of any such resource would be with the individual(s) or organization responsible for the resource's creation, maintenance, and update. Access to the resource would occur transparently, as discussed below. Metadata would characterize each resource through information which would act as discriminators to users. Search mechanisms would be available to support a user specifying target values for the metadata categories and relative importance in finding an optimum meeting of requirements. For example, a designer might need a motor which supplies torque in a given range, but a more important constraints is that the size must fit within a certain package allocation. Indeed, the search would be against the virtual repository and go across information gathered from multiple sources. The search utility used could also be discovered through a similar search which has culled through published search algorithms to identify the best utility for the user's needs.
In a collaborative environment, it would be valuable to know that you and other members of the team are using consistent information accessed from a single, if virtual, source. In addition, it would be important to know when information which led to a design decision changed so that the decision could be reevaluated. The collaborative environment could establish subscriptions when information is used from the virtual repository and the user would be automatically notified when the resource was changed. This would apply not only to data but to tools used for design as well.
Given that data could be stored anywhere in any format, the collaborative environment would also need to support transparent access from any data format. This could be negotiated by publishing the data format as an element of the resource metadata or by a metadata element providing the means to invoke a utility, i.e. a utility API, which could read, extract, or otherwise access information from a data store. The latter would be ideally suited for proprietary formats because the utility and the metadata describing the API could be created by the format's owner and data access could be enabled without the details of the format ever being published.
Security would obviously be of paramount importance. Design information is often proprietary and access would be restricted to the design team or other approved users. In addition, security must restrict change access to the party responsible for the resource, while allowing read access to all users who need the resource for their work. This again applies to both data and tools used by the team. In addition, for an approved change to become part of the design environment, several steps, including loading the change, updating documentation, and notifying subscribed users must all be accomplished for the "transaction" to be accepted.
Web services can be envisioned which support many of these functions. Discovering and invoking the services would be a function of the supporting infrastructure and the interoperability of the infrastructure components would depend on the standards and protocols on which these were built. | 计算机 |
2014-23/2157/en_head.json.gz/28223 | (5) The Fast and Furious: ATI Radeon X1900 XTX Review. Page 6
The snapshot shows that the die surface of the R580 is somewhat larger than the R520, and the former is a square whereas the latter is a rectangle. The R580 in its turn is smaller than the less complex NVIDIA G70 chip thanks to 0.09-micron tech process.Nvidia G70, ATI R580, ATI R520 visual processing units. Click to enlargeSuch a small area increase is explained by the rather small increase in the amount of transistors, from 321 to 384 million. The R580 has 48 pixel shader processors on board as opposed to the R520’s 16 and it means the pixel processors don’t require too many transistors. Easy to calculate, 48 pixel processors are comprised of less than 90 million transistors – a quarter of the total amount. The rest of the transistors make up the caches, texture units, ring-bus memory controller, ultra-threading dispatch processor, etc.As for the marking, the text “ENG SAMPLE” speaks for itself – this is an engineering sample of the R580 chip. It is dated the 45th week of the last year, i.e. early November. ATI Technologies said they had received the first batch of commercial wafers from TSMC at the end of November and that they already had working samples of the R580 even before the official announcement of its predecessor, RADEON X1800 (R520).The GPU die is protected against damage with a traditional metal frame. Since we are dealing with a RADEON X1900 XT, the graphics processor is clocked at the same frequency as on the RADEON X1800 XT, i.e. at 625MHz. The graphics core frequency of the RADEON X1900 XTX is 650MHz.Like the RADEON X1800 XT, the RADEON X1900 XT CrossFire Edition uses Samsung K4J52324QC-BJ12 GDDR3 memory. These chips are 512Mbit each, so eight such chips suffice for a total of 512MB of graphics memory with 256-bit access. The access time of the chips is 1.25 nanoseconds; they are rated to work at 2.0V voltage and at 800 (1600) MHz frequency. The memory of the RADEON X1900 XT works at a lower frequency than on the RADEON X1800 XT: 725 (1450) MHz against 750 (1500) MHz.The memory frequency of the higher-performing RADEON X1900 XTX is 775 (1550) MHz and Samsung’s 1.1ns K4J52324QC-BJ11 chips are employed. These chips can theoretically be clocked at 1800MHz, but ATI decided to keep to more conservative settings. It is quite possible that ATI’s partners will come up with “updated” versions of RADEON X1900 XTX with higher chip/memory frequency at some moment in the future. Such graphics cards may appear along with the G71 chip (GeForce 7900), an updated version of the G70 processor, or earlier as “extreme” versions of the card.The cooling system of the RADEON X1900 XT/XTX is the same as the RADEON X1900 XT’s: a blower is pumping air from inside the PC case and into the thin-ribbed copper heatsink with a massive sole and is then exhausting it to the outside. The heatsink contacts the GPU die through a layer of dark-gray thermal paste; the memory chips give their heat away to the cooler’s aluminum sole via elastic rubber-like thermal pads. This cooling system is quite efficient and quiet even at the lowest speed of the fan. When the fan speed increases, the card becomes noticeably louder, the plastic casing acting as a resonator. Table of contents: | 计算机 |
2014-23/2157/en_head.json.gz/28996 | > Chris and Trish Meyer
Timing Video to Audio
Wherein Mr. Video asks Ms. Audio: "What's my motivation in this scene?"
By Chris and Trish Meyer | March 13, 1998
Music Basics
In the case where the audio (typically, music) has already been supplied for a project, the first task is "spotting" the audio file to find the most interesting moments in it. The way to do this is by looking for peaks in the waveform in an audio file's "clip" window, listening to the sound around this point to verify what is actually going on at this moment in time, and setting a marker (a vertical line with a flag on top) for each interesting event. We tend to use simple, unnumbered markers (set in After Effects by using the asterisk key on an extended keyboard's numeric keypad; M is the magic key in Final Cut Pro) to denote at least the start of each musical bar or measure, and annotated markers for major sections of the music, such as chorus, guitar solo, et cetera. Sometimes we will even mark every major beat in a piece of music. The times of these markers are then transferred to some form of animation or "hit" sheet, along with a description of what each one was. We refer to this when setting up the timing of keyframes for our animations.When spotting music, we will place a marker at the start of each measure or even each beat. We often add a second layer (the orange track here) to hold additional markers and comments that describe the sections of the music, as well as master markers along the timeline (the numbered markers here) to quickly navigate between these sections.
By now, you are probably saying "You've explained how to find peaks, which are probably great clues to drum hits in the music, but what the heck is a beat, bar, or measure?" Most music is divided into divisions of time known as bars or measures. A basic rhythmic cycle of music tends to occur one per measure. Most popular music today is written in "4/4" time. The bottom number defines what the basic unit of time - a beat - is; in this case, it's a quarter note. The top number in this time signature defines that four of them make up a bar or measure of the music. If the music is not in 4/4 time, the next most common case is that it's in 3/4 time (three beats to a measure). A waltz is the most common example of music in 3/4 time (dum-da-da-dum-da-da). If you tend to clap your hands or tap your toes along with music, you will find you usually tap or clap in sync with each beat. Think about disco music for a second: That constantly thumping bass drum is pounding out each beat. Not all music, of course, is this obvious, but it usually isn't too hard to figure out the beat. The beat that seems to make you want to clap or tap the loudest is usually the "downbeat" or the first beat per measure. This is the most important beat in a measure of music to match visual cues to. Try listening to some rhythmic songs, and instead of clapping or tapping, count "1, 2, 3, 4" in time with the beats - with the 1 being the downbeat - to start to develop a feel for this.Sometimes you will be provided ahead of time with a numeric value for the tempo of a piece of music. If so, this will greatly aid you in making sure you're detecting bars and beats correctly. The most common unit of measure used for tempo is "beats per minute" (abbreviated as "bpm"). It means exactly what it says: This is how many beats occur in each minute. For example, if the tempo is 120 bpm, take the number of seconds in a minute (60), divide it by the bpm, and you now know the length or spacing of each beat in seconds - in this case, 0.5 seconds. To calculate what this means in frames, multiply this value by your frame rate. At 30 frames per second, 120 bpm works out to 15 frames per beat (60 ÷ 120 = 0.5; 0.5 x 30 = 15). Indeed, frames per beat - "fpb" for short - is the second most common unit of measure for tempo, although typically only musicians who create music specifically for video or film have ever heard of it. If you are working in a different visual frame rate, such as 24 frames per second (most film), just plug in that number in place of the 30 above. Using the example above, 60 seconds ÷ 120 bpm = 0.5 seconds x 24 = 12 fpb for film. For PAL, use 25; for NTSC video, use 29.97. There is a shortcut to this math: Assuming 30 fps and music in 4/4 time, the quickest path to fpb is to divide 1800 by the tempo in bpm (i.e. 1800 ÷ 120 bpm = 15 fpb). For film, PAL, and NTSC, the magic number is 1440, 1500, and 1798.2 respectively.To find out the duration of each measure, multiply the resulting frames per beat value by the number of beats per measure. If the music is in 4/4 time (four beats to a measure), multiply the answers above by 4: A tempo of 120 bpm now works out to 2.0 seconds, or 60 frames per measure at a frame rate of 30 fps. Common tempos for popular music range from 80 to 120 bpm, although it can range all over the place from a lazy jazz shuffle of 60 bpm to hyperkinetic rave music with a tempo of 160 bpm. 
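The arithmetic above is easy to script. Here is a small sketch (the function names are mine, not from the article) that reproduces the numbers in the text:

```python
# Tempo math from the text: beats per minute -> frames per beat/measure.
def frames_per_beat(bpm, fps):
    return 60.0 / bpm * fps          # seconds per beat times the frame rate

def frames_per_measure(bpm, fps, beats_per_bar=4):
    return frames_per_beat(bpm, fps) * beats_per_bar

print(frames_per_beat(120, 30))      # 15.0 frames per beat at 30 fps
print(frames_per_beat(120, 24))      # 12.0 frames per beat for film
print(frames_per_measure(120, 30))   # 60.0 frames per 4/4 measure
print(1800 / 110)                    # ~16.36 -- the 30 fps shortcut from the text
```

Keeping a helper like this handy makes it quick to fill out a hit sheet for any tempo a composer hands you.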
If you are having trouble spotting all the beats in a piece of music just by looking at the waveform, the frames per beat and frames per measure values should provide a general guide for how often you should be locating beats and the starts of measures. Don't be too alarmed if it wanders a frame or so on any given beat. This happens because tempos often work out to fractional numbers of frames per beat. A tempo of 110 bpm, for example, works out to 2.182 seconds per measure, or 0.545 seconds per beat; multiplied by a frame rate of 30, you get 16.364 fpb. In the case where a beat lands between whole frames, you just have to pick the nearest frame. Elsewhere on PVC, we've posted an article that includes a list of "magic tempos" that work out to simple integer numbers of frame per beat. If you are working with a musician, try to get them to use one of these tempos to make your life easier.Bars and beats are great timing references for when visual events should happen. When in doubt, cut, start, or stop a scene or effect at the beginning of a measure, and perform fades over one or two beats. For faster events, you can usually keep dividing the length of beats by multiples of two to find good sub-hit points; dividing them by threes can also give them an interesting feel (known in musical terms as "triplets"). This, of course, is not a hard and fast rule you should follow at all times - what looks and feels best should always take precedence over mathematical rules - but it can give you a helpful framework to start with.I have been focusing on music, but similar principals can be applied to sound effects. Again, look at the tallest peaks in a sound effect's waveform profile to help locate the important points, such as when a door slams or train engine passes by. Mark these down on your animation timing sheet just as you would downbeats - the only difference is they don't follow any mathematical logic of beats between the peaks.next page: tips, and a case study
Threads > 2012 > October
Thread started by Dave Winer on Tuesday, October 02, 2012.
Scripting.com redesign
I've been working up to this for a few months, now it's time to flip the switch and replace the old Scripting News home page with the new simplified and enriched version. Yes, it's got more features, but it's even cleaner than before.
1. The menu is at the top of the page, unchanged, as promised. I said it would be a fixture through the transition, so if you found your way around the various sites that make up Scripting News, the links are still be there after the transition. This helps smooth out what I find to be a jarring about website redesign. You know how to find everything and then all of a sudden -- where did it go?
2. There's a new About feature, very easy to find, in the upper-left corner, patterned after the approach taken in the Media Hackers site. And of course it's an outline, stored in my Dropbox folder, so it's reallly easy for me to edit. 3. The banner is now down the left edge instead of across the top. But continuity is maintained. It's still the same typeface, but it's even bigger than the one before. I think ones' name should be bold and big. 4. The biggest change is probably the most subtle. The text on the home page is rendered by the new outline renderer in op.render.viewOutline. This is the core rendering routine for the worldoutline software, but now it's so core it can be used in any application. So you see every bit of text exactly as it was meant to be rendered. 5. Comments are back. They of course, never really went away. I moved my editorial act, gradually over to the threads site, where it remains now. You can read the articles on the scripting.com home page, or on the threads site. The star at the bottom of each piece links to the page with comments, and the one with paragraph-level permalinks. If you want to point to one of my pieces you should point to the page on the threads site. It's the permanent view. The home page only shows the 25 most recent stories.
6. There is of course still an RSS feed, in exactly the same place. But there's also now a link in the head section of the HTML to a full outline of all the text in the new CMS, going back to March of this year when the threads site started. Nothing hard about this, because it's the outline where I do my writing. One big file for everything. Here's a snapshot of the Scripting News home page taken just before the transition. That's about it for now. Time for me to take a break, and when I come back, I'll fill in the links here and then flip the switch. Maybe tonight or early tomorrow. | 计算机 |
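As a sketch of what that discovery could look like from the reading side (the rel value, MIME type, and URL here are assumptions for illustration, not the actual markup Scripting News uses), a few lines of Python can pull such a link out of a page head:

```python
# Sketch: find an outline link advertised in a page's <head>.
# The rel value and URL below are placeholders, not the real site's markup.
from html.parser import HTMLParser

PAGE_HEAD = '<head><link rel="outline" type="text/x-opml" href="http://example.com/full.opml"></head>'

class OutlineLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.outline_href = None
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "outline":
            self.outline_href = a.get("href")

finder = OutlineLinkFinder()
finder.feed(PAGE_HEAD)
print(finder.outline_href)   # http://example.com/full.opml
```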
History What links here
Recent ChangesSpecial pages
Permanent linkBrowse properties Log in / create account
(OpenID)
CC Factsheet Please note: An account is needed only to edit the CC Wiki. If you need an account, please email webmaster at creativecommons org and we'll make you an account.
Creative Commons (CC) is a 501(c)(3) nonprofit corporation that develops legal and technical tools used by individuals, cultural, educational, and research institutions, governments, and companies worldwide to overcome barriers to sharing and innovation.
CC licenses and public domain tools are easy to understand and use, with 1) a human-readable deed that simplifies the terms of each license into a few universal icons and non-technical language, 2) lawyer-readable legal text, which has been vetted by a global team of legal experts, and 3) machine-readable code that enables search and discovery via search engines such as Google – lowering the transaction costs normally associated with seeking permission to use works by granting some rights in advance, consistent with the rules of copyright, and making the public domain more accessible.
CC tools constitute a globally-recognized framework, developed in consultation with legal experts and CC affiliate institutions in over 70 jurisdictions. Over 350 million CC-licensed works have been published by their authors on the Internet. The following are examples of CC uses in key sectors, followed by stories by creators leveraging the cultural and economic benefits of CC tools in Appendix A and descriptions of CC licenses and public domain tools in Appendix B. | 计算机 |
2014-23/2157/en_head.json.gz/29318 | Home > SaaS is cheaper than on-premise software, right? Maybe.
SaaS is cheaper than on-premise software, right? Maybe.
Is it less expensive to use Software as a Service SaaS than to purchase software for use on your premises? Research firm Gartner Group has issued a warning to CIOs not to assume that SaaS will in fact turn out cheaper in the long run. "In recent years there has been a great deal of hype around SaaS," stated Robert DeSisto, VP and distinguished analyst at Gartner, the information technology research and advisory company. "As a result, a great number of assumptions have been made by users, some positive, some negative, and some more accurate than others. The concern is that some companies are actually deploying SaaS solutions, based on these false assumptions."SaaS is cheaper during its first two years of use, Gartner finds, but the total cost of ownership over five years would be lower for on-premises software. It also warned that while most users will assume that they will be paying on a 'pay as you go' basis, there are still likely to be contractual considerations. In "the vast majority of cases," Gartner says that companies are pushed to sign predetermined contracts with fixed fees.In its report, Fact-Checking: The Five Most-Common SaaS Assumptions, Gartner also warned that SaaS is not necessarily faster to implement. While vendors quote 30 days as the normal implementation time, "some software can still take up to seven months to set up."Another assumption often made is that it is difficult or impossible to integrate SaaS with on-premises applications or data sources. Gartner advises that businesses need to remember that SaaS applications can be customized and are no longer only for basic functions and that data can be initially loaded to a SaaS application, then updated regularly or updated in real time using Web services. Gartner took the top-five assumptions that users make and provided a fact check on their accuracy.Assumption 1: SaaS is less expensive than on-premises software.Fact Check: True during the first two years but may not be for a five-year TCO. SaaS applications will have lower total cost of ownership (TCO) for the first two years because SaaS applications do not require large capital investment for licenses or support infrastructure. However, in the third year and beyond, an on-premises deployment can become less expensive from an accounting perspective as the capital assets used for the on-premises deployment depreciate.Assumption 2: SaaS is faster to implement than on-premises software.Fact Check: True for simple-requirement SaaS, which will be faster, but growing complexity and other factors are coming into play. There is a danger in applying the general rule of SaaS being faster to implement for a specific deployment. Vendors often quote time frames of 30 days to implement but neglect to say that SaaS deployments can take seven months or longer. As the complexity of the business process and integration increases, the gap advantage between SaaS and on-premises deployment times will narrow because a larger percentage of the deployment time is associated with customization, configuration, and integration, which are equally difficult with both delivery models.Assumption 3: SaaS is priced as a utility model.Fact Check: False in the vast majority of cases. Many SaaS vendors state that they are utility-based providers, similar to electric companies, claiming that you're only charged for what you use. However, for most SaaS deployments, this is false. In the vast majority of cases, a company must commit to a predetermined contract independent of actual use. 
In some cases, the application lends itself to metered use - for example, an e-commerce application may have pricing based on order transaction processes - but for the most part, utility examples are in the minorityAssumption 4: SaaS does not integrate with on-premises application and/or data sources.Fact Check: False. There are two primary methods of integrating SaaS offerings with on-premises applications and/or data sources. The first method is batch synchronization, which initially involves loading the SaaS application with data. Once this initial data load has been made, data can be incrementally synchronized on a scheduled basis. The second method is real-time integration using Web services. Another way to combine the two methods is by having a Web service trigger that is based on an event occurring in the SaaS service. Yet another method is emerging that involves integrating SaaS applications at the user-interface level through mashups.Assumption 5: SaaS is only for simple, basic requirements.Fact Check: False, but there are still limits. SaaS applications are highly configurable at the metadata level with many offering customization capabilities with platforms in the form of application platform as a service (APaas). There are industry examples in which complete custom applications have been built using SaaS APaas. However, some gaps remain for complex, end-to-end processes that require complex workflow or business process management capabilities. Source URL: http://www.accountingweb.com/topic/technology/saas-cheaper-premise-software-right-maybe | 计算机 |
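To see how the two-year versus five-year picture in Assumption 1 can arise, here is a toy cumulative-cost model. All figures are invented for illustration and are not Gartner's numbers.

```python
# Toy TCO comparison (all figures invented for illustration only).
def saas_cost(years, annual_subscription=50_000):
    return annual_subscription * years

def on_prem_cost(years, license_fee=100_000, annual_support=15_000):
    return license_fee + annual_support * years

for y in range(1, 6):
    print(y, saas_cost(y), on_prem_cost(y))
# With these assumed numbers, SaaS is cheaper in years 1-2; because the
# on-premises total grows more slowly after the up-front license, the
# cumulative picture favors on-premises from year 3 onward.
```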
Microsoft denies paying contractor to abandon Linux
Microsoft has denied paying a Nigerian contractor US$400,000 in a bid to battle Linux's movement into the government sector.Media reports alleged that Microsoft had proposed paying the sum to a government contractor under a joint marketing agreement last year in order to persuade the contractor to replace Linux OS with Windows OS on thousands of school laptops.Although a joint marketing agreement was drafted to document the best practices for using technology in education, it was never executed, said Thomas Hansen, regional manager for Microsoft West, East and Central Africa. It became clear, he added, that one customer wanted a Linux OS."As such, the joint marketing agreement became irrelevant; no such marketing agreement was ever agreed to, and no money was ever spent," he said.Apart from the fact that Linux is freely distributed, it's functionality, adaptability and robustness has made it the main alternative for proprietary Unix and Microsoft operating systems. Governments in Ghana, Namibia, Nigeria and South Africa have deployed Linux in departments and schools, but Hansen said that Microsoft has strong relationships with the governments in these countries."From our standpoint, those governments, and indeed every customer, should always decide which software solutions meet their needs most appropriately. We strongly believe that governments must carefully consider all costs of acquiring and using a PC, along with the benefits of widespread application availability, maintenance, and training," he said.Hansen emphasized that studies have shown that the Windows platform often costs the same as or less than Linux when the total cost of ownership is considered."Further, when the full range of user benefits are taken into account, such as the wide range of applications available, familiarity, and ease-of-use, Windows is often a much better overall value," he said. | 计算机 |
2014-23/2157/en_head.json.gz/31269 | Adobe guts mobile Flash player strategy
Adobe is reportedly planning to discontinue development of new mobile Flash …
Graphics software giant Adobe announced plans for layoffs yesterday ahead of a major restructuring. The company intends to cut approximately 750 members of its workforce and said that it would refocus its digital media business. It wasn’t immediately obvious how this streamlining effort would impact Adobe’s product line, but a report that was published late last night indicates that the company will gut its mobile Flash player strategy.
Adobe is reportedly going to stop developing new mobile ports of its Flash player browser plugin. Instead, the company’s mobile Flash development efforts will focus on AIR and tools for deploying Flash content as native applications. The move marks a significant change in direction for Adobe, which previously sought to deliver uniform support for Flash across desktop and mobile browsers.
Although Adobe will not be introducing its own Flash player plugin for additional platforms, the company will continue to support its existing implementations-including the ones for Android and the Blackberry tablet operating system-with updates that address bugs and security issues.
It’s not clear, however, whether the existing mobile browser plugins will be updated as new versions of Flash player are released. There are also some unanswered questions regarding the fate of the Open Screen program, though Adobe says that existing licensees will be able to continue developing their own Flash ports.
“Our future work with Flash on mobile devices will be focused on enabling Flash developers to package native apps with Adobe AIR for all the major app stores. We will no longer adapt Flash Player for mobile devices to new browser, OS version or device configurations,” the company said, according to a ZDNet report.
Adobe has struggled for years to make its Flash player viable on mobile devices. Although it has met with some success on Android smartphones, the quality of the Flash user experience varies between devices and is a lot less impressive on other mobile platforms.
The early attempts to bring Flash to mobile devices focused on a scaled-down implementation called Flash Lite that Adobe used to license to embedded software vendors. Flash Lite was hobbled by major limitations relative to the desktop version.
Adobe began work in 2008 on a project to bring native ARM support to the full desktop version of Flash, a move that eliminated the need for Flash Lite and made it possible for regular Flash content to work across desktop and mobile environments. Adobe also dropped the licensing fees and launched the Open Screen project to encourage support for Flash on a broader range of mobile products.
The goal of achieving ubiquitous support for Flash on handheld devices was thwarted, however, by Apple’s refusal to incorporate the plugin into its mobile Web browser. The decision launched a war of words between Apple and Adobe that eventually escalated to the point where former Apple CEO Steve Jobs issued an open letter to personally address the issue. Concerns about battery life, security, and stability have been cited as key reasons why Flash isn’t allowed on iOS.
In light of Flash’s extremely bad security track record and notoriously poor performance on Macs, Apple’s rejection wasn’t particularly surprising. The significant popularity of Apple’s devices has compelled most Internet video sites that have historically relied on Flash to also support the HTML5 video element with the H.264 codec. Today, HTML5 video is so widely supported that the lack of Flash on the iPhone is hardly noticed by users.
Other platforms have followed Apple’s lead in disallowing Flash, including Microsoft’s Windows Phone 7. Google, however, worked with Adobe to bring Flash to its Android mobile platform. As we wrote in our review of Android 2.2 last year when Flash support was introduced on Android handsets, the plugin worked acceptably well in many cases, but still had a number of minor problems.
Adobe wasn’t able to repeat that relative success on tablets, however. The Flash plugin wasn’t ready for the Motorola Xoom at launch and didn’t perform well on the Galaxy Tab 10.1. Flash was also disappointingly buggy on the Blackberry Playbook despite the fact that Adobe worked very closely with RIM on the product.
It’s clear that enabling the Flash plugin across the full spectrum of popular mobile platforms isn’t possible and that results are mixed on the platforms that Adobe has already committed to support. As the consensus in favor of HTML5 for rich Internet content on mobile platforms continues to solidify, it seems like Adobe has little to gain by continuing to pursue a derailed mobile Flash player strategy.
Transitioning the focus of mobile Flash towards native applications is a logical step that will better serve the interests of Adobe’s real constituents: the professionals who buy the company’s tools. It’s worth noting that Adobe has also recently taken some major steps to advance HTML5 on mobile devices-particularly with its acquisition of PhoneGap company Nitobi last month. | 计算机 |
This glossary was compiled (in 1994)with the assistance of
The Windows Internet Tour Guide by Michael Fraase (Ventana Press, 1994)Mosaic Quick Tour for Windows by Gareth Branwyn (Ventana Press, 1994)
Cybermarketing by Len Keeler (Amacom Books, to be released 1995)
Synonymous with hyperlinks, anchor refers to non-linear links among documents. Or more simply put, it's the word or phrase that can be clicked
to connect to another page or resource.
Anchor Color
You guessed it, the color on screen that represents the anchors. The reason so many are blue is that is the default color.
This color can be changed to any combination of red, blue or green.
Agents are search tools that automatically seek out relevant online information based on your specifications. Agent A.K.A.s include: intelligent agent, personal agents, knowbots or droids.
Personal notes you can attach to the documents you have saved
in Mosaic. The notes are available to you whenever the document is
viewed.
Derived from the word archive, Archie is a Net-based service that allows you to locate files that can be downloaded via FTP.
(pronounced "Ask-ee") An acronym for American Standard Code for Information Exchange, ASCII is an international standard in which numbers,
letters, punctuation marks, symbols and control codes are assigned numbers
from 0 to 27. Easily transferred over networks, ASCII is a plain, unadorned
text without style or font specifications.
Asychronous Connection
The type of connection a modem makes over a phone line, this connection is not synchronized by a mutual timing signal or
AU Sounds
This is an audio format used in Mosaic.
Authoring Software
This term refers to software that enables the creation of
multimedia or hypertext documents and presentations.
This term refers to an interactive representation of a human in a virtual reality environment.
The range of transmission frequencies a network can use. The greater the bandwidth the more information that can be transferred over
that network at one time.
A transmission method in which a network uses its entire
transmission range to send a single signal.
Baud
A unit of data transmission speed, or the maximum speed at which data
can be sent down a channel. Baud is often equivalent to bits per second.
This is an acronym for Bulletin Board System, a computer equipped with
software and telecommunications links that allow it to act as an information host for remote computer systems.
BinHex
A file conversion format that converts binary files to ASCII text
A contraction of binary digit, a bit is the smallest unit of information
that a computer can hold. The speed at which bits are transmitted or bit rate is usually expressed as bits per second or bps.
A transmission method in which the networks range of transmission
frequencies is divided into separate channels and each channel is used to
send a different signal. Broadband is often used to send different types of
signals simultaneously.
A type of software that allows you to navigate information
The number of bits used to represent a character.
Compact Disk-Read Only Memory: An optical disk from which information
may be read but not written.
CD-R or Compact Disk-Recordable: Refers to computer peripheral disk drives
that allow the user to record content on to a blank compact disk.
A computer that has access to services over a computer network. The
computer providing the services is a server.
Client-Server Architecture
An information-passing scheme that works as
follows: a client program, such as Mosaic, sends a request to a server. The server takes the request, disconnects from the client and processes the
request. When the request is processed, the server reconnects to the client
program and the information is transferred to the client. This architecture
differs from traditional Internet databases where the client connects to the server and runs the program from the remote site.
This is a general-purpose computer term that can refer to the
way you have your computer set up. It is also used to describe the total
combination of hardware components that make up a computer system and the
software settings that allow various hardware components of a computer system to communicate with one another.
The act of changing software or hardware actions by changing the settings.
CyberMall
A term commonly used to describe an electronic site shared by a
number of commercial interests.
A term coined by William Gibson, a science fiction writer, to
refer to a near-future computer network where users mentally travel through
matrices of data. The term is now used to describe the Internet and the
other computer networks.
Dial-up Connection: The most popular form of Net connection for the home user, this is a connection from your computer to a host computer over standard telephone lines.
Direct Connection
A permanent connection between your computer system and the Internet. This is sometimes referred to as a leased-line connection because the line is leased from the telephone company.
An acronym for Domain Name Server, DNS refers to a database of Internet names and addresses which translates the names to the official Internet
Protocol numbers and vice versa.
When used in reference to the World Wide Web, a document is any file containing text, media or hyperlinks that can be transferred from an HTTP server to a client program.
Document Window
This is the Mosaic program's scrollable window in which HTML
documents can be viewed.
To transfer to your computer a copy of a file that resides on another computer.
The abbreviation for Digital Services Unit, DSU replaces the modem in synchronous connections to the Internet.
The abbreviation for Electronic Data Interchange, EDI system allows linked computers to conduct business transactions such as ordering and
invoicing over telecommunications networks.
External Viewer
This is the program used for presenting graphics, audio and video in Mosaic. Programs that allow the viewing of GIF and JPEG files
and the hearing of AU files fall into this category.
This is the acronym for Frequently Asked Questions. A common feature on the Internet, FAQs are files of answers to commonly asked questions. Read
FAQs before wasting electrons asking obvious questions. Saves you from
receiving flames.
Firewall
This term refers to security measures designed to protect a networked system from unauthorized or unwelcome access.
File Transfer Protocol is a protocol that allows the transfer of files
from one computer to another. FTP is also the verb used to describe the act
of transferring files from one computer to another.
GIF
Pronounced "jiff" -- as in the peanut butter, this acronym stands for Graphic Interchange Format, a commonly used file compression format
developed by CompuServe for transferring graphics files to and from online services.
A menu-oriented tool used to locate online resources.
Gopherspace
A term used to describe the entire gopher network.
This term refers to software applications that facilitate shared work on documents and information.
An acronym for Graphical User Interface, this term refers to a software front-end meant to provide an attractive and easy to use interface between
a computer user and application. Macintosh operating system is pretty
GUI, DOS is not.
The document displayed when you first open Mosaic. Home Page also refers to the first document you come to at a Web site.
Hotlists
Lists of frequently used Web locations and URLs (Uniform Resource Locators).
A computer acting as an information or communications server.
An acronym for HyperText Markup Language, HTML is the language used to tag various parts of a Web document so browsing software will know how to display that document's links, text, graphics and attached media.
HTML Document
A document written in HyperText Markup Language.
The abbreviation for Hypertext Transfer Protocol, HTTP is used to link and transfer hypertext documents.
The hypertext concept extended to include linked multiple media.
This term describes the system that allows documents to be cross-
linked in such a way that the reader can explore related documents by clicking on a highlighted word or symbol.
The abbreviation for Internet Architecture Board, the IAB is the council
that makes decisions about Internet standards.
The abbreviation for Internet Engineering Task Force, IETF refers to a
subgroup of the Internet Architecture Board that focuses on solving
technical problems on the Internet.
Inline Images
These are the graphics contained within a Web document.
IP
The abbreviation for Internet Protocol, IP refers to the set of communication standards that control communications activity on the
Internet. An IP address is the number assigned to any Internet-connected computer.
The abbreviation for Integrated Services Digital Network, ISDN is a
telecommunications standard that uses digital transmission technology to
support voice, video and data communications applications over regular
telephone lines.
ISOC
This is the abbreviation for Internet Society, an organization formed to support a worldwide information network. ISOC is the sponsoring body of
the Internet Architecture Board.
The acronym for Joint Photographic Experts Group, JPEG is an image
compression format used to transfer color photographs and images over
computer networks. Along with GIF, it is the most common way photos are moved
over the Web.
These are the hypertext connections between Web pages. This is a
synonym for hotlinks or hyperlinks.
When used in reference to a World Wide Web file, this term designates
an object linked to another layer of information.
From the book "Snow Crash" by Neal Stephenson, this term describes
a virtual online representation of reality.
An acronym for Multipurpose Internet Mail Extensions, MIME is a messaging standard that allows Internet users to exchange e-mail messages enhanced with graphics, video and voice. MIME file types are also used in Mosaic.
This is the common name of a World Wide Web multimedia browser program developed at the National Center for Supercomputing Applications in
Urbana-Champaign, Ill. The official, copyrighted name of the program is NCSA Mosaic.
The acronym for Moving Pictures Expert Group, MPEG is an international
standard for video compression and desktop movie presentation. A special
viewing application is needed to run MPEG files on your computer.
NCSA
This is the abbreviation for National Center for Supercomputing
Applications at the University of Illinois in Urbana-Champaign.
The abbreviation for Network File System, NFS is a protocol suite
developed and licensed by Sun Microsystems that allows different makes of computers running different operating systems to share files and disk storage.
The abbreviation for Network Information Center, NIC is an organization
responsible for supplying information for component networks that comprise the Internet.
The abbreviation for Network Operations Center, NOC is the organization responsible for the day-to-day operations of the Internet's component networks.
A device attached to a network. A node uses the network as a means of
communication and has an address on the network.
NREN
The abbreviation for National Research and Education Network, NREN is
an effort to combine the networks operated by the U.S. government into a single high-speed network.
OSI Model
The Open Systems Interconnection (OSI) reference model for
describing network protocols was devised by the International Standards Organization. It divides protocols into seven layers to standardize and simplify definitions.
An acronym for Point of Presence, POP is a service provider's location
for connecting to users. Generally, POPs refer to the location where people
can dial into the provider's host computer. Most providers have several POPs to allow low-cost access via telephone lines.
This is an acronym for Plain Old Telephone Service.
The abbreviation for Point-to-Point Protocol, PPP is an Internet
connection where phone lines and a modem can be used to connect a computer to the Internet.
A set of standards that define how traffic and communications are
handled by a computer or network routers.
This is a digital video standard developed for Apple Macintosh
computers. Special viewing applications are needed to run QuickTime movies.
A communications device designed to transmit signals via the most efficient route possible.
This term refers to a program that helps users find information in text-oriented databases.
A computer system that manages and delivers information for client computers.
The abbreviation for Standard Generalized Markup Language, SGML is an international standard for the publication and delivery of electronic information.
This term refers to software that is available on public networks
and BBSs. Users are asked to remit a small amount to the software developer, but it's on the honor system.
The acronym for Serial Line Internet Protocol, SLIP refers to a method
of Internet connection that enables computers to use phone lines and a
modem to connect to the Internet without having to connect to a host.
This is a communication mechanism originally implemented on the BSD version of the UNIX operating system. Sockets are used as endpoints for
sending and receiving data between computers.
Synchronous Connection
An analog to analog or digital to digital connection that is able to perform two or more processes at the same time by means of
a mutual timing signal or clock.
High-speed data line connection. T-1 operates at 1.544 Mbps.
These are formatting codes used in HTML documents. Tags indicate how parts of a document will appear when displayed by browsing software.
TCP-IP
The basic protocols controlling applications on the Internet.
This is the acronym for Tagged Image File Format, a graphic file format
developed by Aldus and Microsoft. Mosaic supports the viewing of TIFF images.
Trumpet Winsock
A popular shareware TCP/IP protocol stack.
This is the abbreviation for Uniform Resource Locator, The addressing system used in the World Wide Web and other Internet resources. The URL contains information about the method of access, the server to be accessed and the path of any file to be accessed.
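For example, in the hypothetical address http://www.example.com/docs/guide.html, "http" names the method of access, "www.example.com" names the server, and "/docs/guide.html" is the path of the file to be retrieved.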
This is a search utility that helps find information on gopher servers. Veronica allows users to enter keywords to locate the gopher site holding the desired information.
The abbreviation for Wide Area Information Service, WAIS is a Net-wide system for looking up specific information in Internet databases.
WAIS gateway
This term refers to a computer that is used to translate WAIS data so it can be made available to an otherwise incompatible network or application. Mosaic must use a WAIS gateway.
This is the software that allows a user to access and view HTML documents. Examples of Web browsers include Mosaic, Cello and Lynx.
Web Document
An HTML document that is browsable on the Web.
This term refers to the person in charge of administrating a World
Wide Web site.
Web Node
This term is synonymous with Web site or Web server.
An HTML document that is accessible on the Web.
This term refers to the space created by the World Wide Web.
World Wide Web: Also known as WWW or W3, the World Wide Web is a hypertext-
based Internet service used for browsing Internet resources.
WEBster reader Sean Murphy suggests the following
enhancements: The term avatar should be credited to Neal Stephenson's "Snow Crash."
The term bandwidth also broadly includes throughput, meaning the amount of
data sent.
The term baud is a unit of speed in data transmission, as one bit per
second for binary signals. [After J. M. E. Baudot (died 1903).]
Eight bits is equivalent to a byte.
In an X-11 environment, the meanings of client and server are reversed.
The term cyberspace, coined by William Gibson, appeared in Gibson's book
"Neuromancer."
Gopher was first developed at the University of Minnesota.
Conquest of Elysium II
Available for all the major operating systems (Windows, Mac OS X, and Linux - sorry, Sinclair ZX81 users!), Conquest of Elysium II allows gamers to step back in time and discover a true strategy gem that emphasizes gameplay over glitz, and rewards players with a rich tapestry of carefully woven fun. Dominions players will instantly recognize the themes and options explored in the game, although at a smaller scale than they are used to.
Just as Dominions is about choices, so is Conquest of Elysium II. A game of warfare, players can choose from twenty different character classes to represent their faction in the gameworld. Each class differs in traits and powers, with some influencing the battlefield and others controlling vast wells of magic. There are five different societies to play in, akin to the different ages in Dominions, and the world map is generated anew for each game. Besides battling other opponents, neutral creatures are scattered throughout the world. There are over three hundred types of critters, and over one hundred special abilities possessed by them.
Conquest of Elysium II supports up to eight players, although unfortunately it does not support multiplayer outside of hotseat play. It is also not moddable, although with the random world generation, the score of character classes, various societies, and a Monster Manual's worth of denizens, Conquest of Elysium II is a game that can be discovered fresh for years.
Conquest of Elysium II at a Glance
Platform(s): PC, Mac
Developer: Illwinter Game Design
Publisher: Shrapnel Games
Terms of use and copyright of material on the website
All content and images on this site are protected by copyright. Copyright is either held by the Greek Orthodox Archdiocese or use of the content on this site has been authorized by copyright holders. Use of copyright material does not extend to any other party or entity. See our copyright page for a list of the sources of copyrighted material on our site as well as our Privacy Policy for more information.
Web sites are permitted to link to content on this site but are not permitted to transfer content or render content in a web-browser’s framed environment. Likewise, content may not be pulled from the web site of the Greek Orthodox Archdiocese's web site and rendered within a web page of another web site.
As such, no user, organization, or entity may use, remove, repurpose, or save on any retrieval system, web page, database, email listserver or other electronic means of dissemination the content from this site without the express written permission of the Greek Orthodox Archdiocese of America.
Parishes, organizations, and institutions of the Greek Orthodox Archdiocese; other Orthodox Churches; or Orthodox organizations and institutions may not use or copy content, images, graphics, or iconography from this site without the express written permission of the Archdiocese.
Content from this site may only be used in higher-education institutions as photocopies so long as the photocopies bear the URL from which the content was taken and explicitly state, “© Greek Orthodox Archdiocese of America. This material has been authorized for educational use only and may not be transferred or repurposed in any additional manner or medium.”
To obtain the proper permission to use content for this site, please contact the Archdiocese Department of Internet Ministries in writing at: internet@goarch.org.
Migration to the new version of Blogger, which is now out of beta, stirred a lot of controversy. The problem is that migration takes too long, or at least much longer than people might expect. Here's a balanced post from Google Groups that offers some suggestions:
Okay, so it's now been a little over 11 hours since I initiated the switch over to the new Blogger. Admittedly, I didn't think that it was going to be a couple minutes to transfer over my more than 2,000 posts, but never did I think that I was going to be out of commission for, well, 11 hours (and counting). Yes, I'm still being "moved." (...) Blogger employees do admit that it may take longer in some cases, but I have to laugh when that "may take longer" means hours, and lots of them at that. This lack of communication with its customers is classic Blogger and something that really doesn't surprise me. Wouldn't it have been nice for Blogger to tell us that there was the possibility that things would take hours and hours for some of the larger blogs to prepare us for the extended down time? It's such a simple thing, but it's something that Blogger, for some reason, has always struggled with in my years of using them.
All the new features that the new Blogger brings with it are great. But the thing that would make this place 100 X's better is open, honest communication mixed with better customer service. I realize that we're talking millions of blogs here, but I also realize we're talking about Google. With Google shares trading at $450+, you might think they could afford to increase Blogger's staffing just a tad. That would be a huge improvement.
ADN started with a clear revenue model. Twitter did not. ADN obtained $803,000 in funding from a user base of 12,000 to start the service. ADN's revenue model is built around getting paid by users of the service, while the model twitter arrived at, after several years and over $1.1B in funding, is built around getting paid by advertisers. ADN's revenue interests are aligned with those of users and third-party developers, while twitter's revenue interests are aligned with those of advertisers. I go into detail as to why I think the ADN approach is worth trying in this post: Freedom, Openness, and Paid Access Social Networks.
From the beginning, ADN has been clear that Alpha is intended to be the first service of many offered atop the ADN platform. Twitter initially used its API as a way to bring in third-party developers, but ultimately could find no way to monetize without winnowing down third-party access to the API and pushing users to official twitter apps. In stark contrast, ADN is trying a developer incentive program that is designed to further align the interests of ADN, third-party developers, and end users.
Third-party developers are creating a vibrant array of apps, services, and libraries, and some are already moving beyond Alpha, with apps like ChatView, Patter and (of course) LongPosts, that create new user experiences atop the ADN platform. In keeping with twitter's focus on user metrics, the lion's share of third-party twitter apps (excluding clients) seem to relate user activity analysis, rather than the creation of new types of user-to-user functionality.
One more note: Short messaging systems have been with us for decades. It is easy to fixate on the fact that Alpha looks a lot like twitter, but to me that is the least important point of comparison. It is also one that is likely to shift as ADN evolves and expands in directions that twitter (because of its business model) cannot follow.
History of Computer Languages - the classical decade, 1950s
Written by Harry Fairhead
In the first of a series of articles about the development of computing languages we look at the struggle to create the first high level languages.
If you are interested in the development of computer languages see the other parts of this series covering the 1960s, 70s, 80s ...
Computer Languages by Committee - the 1960s
The rise of people power - Computer languages in the 70's
Towards objects and functions - 1980s
The Pioneer Spirit
The ten years from 1950 saw the development of the first computer languages. From machine code to assembler was a natural step but to go beyond that took five years of work and the production of Fortran 1.
At the close of the 50s the programming world had the trinity of Fortran, Cobol and Algol and the history of computing languages had completed its most critical phase. The history of computing is usually told in terms of the hardware and as a result we tend to think of progress as how much smaller, faster and cheaper computers are. Computing isn't just applied electronics and there is another side to the coin. Computing is also about programming and the history of programming languages contains much of the real story of computing.
Although computer hardware has changed dramatically in a very short time its basic principles have remained the same. So much so that even Charles Babbage (1792-1871), the father of the computer, wouldn't have too much difficulty understanding an IBM PC. He might not understand transistors, chips, magnetic recording or TV monitors but he would recognise the same operational principle of a CPU working with a memory.
You could say that the only real difference between Babbage's computer and today's computer is the technology used to realise the design, and you would only be stretching the truth a little. However the same statement wouldn't be true of software. Babbage probably didn't even have a clear idea of programming as something separate from the design of his machine, a failing repeated by hardware designers ever since!
The first person you can pin the accolade of "programmer" onto is generally agreed to be Augusta Ada, Countess of Lovelace (1815-52). This unlikely sounding character helped Babbage perfect his design for a mechanical computer and in the process invented programming - but there are those who would argue that her role was much less.
Her contribution was recognised by the naming of a computer language, Ada, after her. By all accounts she was the archetypal programmer in more ways than one - she lost a great deal of money after dreaming up a crazy gambling algorithm! The most important observation is that Ada did seem to grasp the idea that it was the software and the abstract expression of algorithms that made the machine powerful and able to do almost anything.
The point of all of this is that programming and programming languages have a life of their own separate from the hardware used to implement them. Without hardware they would be nothing more than abstract mind games but once you have a computer, no matter how primitive the implementation, most of the difficult and interesting ideas are in the development of the software.
Put as simply as possible a program is a list of instructions and a computer is a machine that will obey that list of instructions. The nature of the machine that does the obeying isn't that complex but the nature of the instructions that it obeys and the language used to write them is.
Seeing the problem
If you can program, even a little bit, the idea of a programming language seems blindingly obvious.
This makes it very difficult to understand that there was a time when the idea of a programming language was far less than obvious and even considered undesirable! Today programmers needn't know anything about the underlying design of the machine. For them a program is constructed in a programming language and it is that language that is reality. How the machine executes this program is more or less irrelevant.
We have become sophisticated by moving away from the simplistic computer hardware. However this wasn't the case in the early days. Early programmers worked in terms of "machine code" and this was virtually only one step away from programming with a soldering iron! It also has to be kept in mind that most programmers of the time were the people who had a hand in building the machine or were trained in electronics. This confusion between electronic engineering and programming persisted until comparatively recently.
A machine code programmer sees the machine as a set of numbered memory locations and the operations that can be carried out are given numeric codes. The program and the data have to be stored in memory and the programmer has to keep track of where everything is.
For example, a computer might have two operation codes
01 x y
meaning add the contents of memory location x to the contents of memory location y, and
02 x y
meaning subtract the contents of memory location x from the contents of memory location y. A program would then be written as a long list of numbers as in 01 10 15 02 18 17 and so on. This program means add memory location 10 to 15 and then subtract memory location 18 from 17, but this is far from instantly obvious from a casual glance. To a programmer of the day however, it would have been like reading standard English.
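To make the idea concrete, here is a minimal sketch of a machine that obeys just these two operation codes. It is written in modern C++ purely as an illustration; the three-number instruction layout and the memory contents are assumptions chosen for the example, not a description of any real early computer.
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // A tiny memory of numbered locations, pre-loaded with made-up values.
    std::vector<int> memory(32, 0);
    memory[10] = 7; memory[15] = 5; memory[17] = 20; memory[18] = 3;

    // The program from the text: 01 10 15  02 18 17
    std::vector<int> program = {1, 10, 15, 2, 18, 17};

    for (std::size_t pc = 0; pc + 2 < program.size(); pc += 3) {
        int op = program[pc], x = program[pc + 1], y = program[pc + 2];
        if (op == 1) memory[y] += memory[x];   // 01: add location x to location y
        if (op == 2) memory[y] -= memory[x];   // 02: subtract location x from location y
    }

    // Location 15 now holds 5 + 7, location 17 holds 20 - 3.
    std::cout << memory[15] << " " << memory[17] << "\n";
}
The machine itself is trivial; all of the meaning lives in the list of numbers it is fed, which is exactly the point the early programmers understood so well.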
Because they produced programs every day this list of operation codes was fixed in their mind and it was second nature both to read and write such programs. At this early stage programming was the art of putting the machine instructions together to get the result you desired.
Machine Code Problems
Machine code had, and still has one huge advantage - because it is something that the machine understands directly it is efficient.
However its disadvantages are very serious and easily outweigh considerations of efficiency. It was difficult to train machine code programmers, it was far too easy to make mistakes and once made they were difficult to spot.
To understand machine code you have to know how the machine works and this required a certain level of sophistication and hardware knowledge. At first this didn't matter because the programmers were the people who built the machine and hence they found it all perfectly natural. However as soon as more programmers were needed the difficulties of explaining machine code to potential programmers who knew nothing about hardware became apparent and a real problem.
The problem with finding errors in machine code was simply to do with the lack of immediate meaning that the codes had. After you had been programming for a while your brain started to read 01 say as Add, but even then reading 01 10 24 as Add memory location 10 to memory location 24 didn't give any clue as to what memory locations 10 and 24 were being used for. Machine code is easy for machines to read but always difficult for humans to understand.
Assembler
The solution was to make use of short but meaningful alphabetic codes - mnemonics - for each operation. So instead of writing 01 10 15 a programmer would write ADD 10 15. Of course before the computer could make any sense of this it had to be translated back to machine code.
At first this was done by hand, but some unknown pioneer had the idea of getting the computer to do this tedious job - and the first assembler was born. An assembler is a program that reads mnemonics and converts them to their equivalent machine code.
It is difficult to say who invented the assembler, presumably because it was invented in a casual sort of way in a number of different places. An influential book of the time "The preparation of programs for a digital computer" by Wilkes, Wheeler and Gill (1951) is generally thought to be responsible for spreading the idea and the authors are also often credited with the first use of the term "assembler" to mean a program that assembles another program consisting of several sections into a single program. In time the term was restricted to cover only programs that translate readable symbols into much less readable numeric machine code.
The first assemblers did just this, but once the idea of getting the machine to translate a symbolic program into numeric operation codes had taken hold, it didn't take long to think up other jobs it could do to make programming easier.
The best and most important idea was the introduction of symbols used to represent the addresses of memory locations. For example using such symbolic addressing a programmer could write ADD sum total and leave the assembler to work out where in memory sum and total would be stored.
Believe it or not this is a very sophisticated idea and needs a lot of additional programming in an assembler to make it work. Essentially you have to set up a table, the symbol table, where the names are collected and assign memory addresses to each symbol as the program is translated.
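As a rough modern illustration of the idea (my own C++ sketch, not period code - the SUB mnemonic and the starting data address are assumptions), an assembler can hand out addresses simply by recording each new name in a table the first time it appears:
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> source = {"ADD sum total", "SUB tax total"};
    std::map<std::string, int> opcodes = {{"ADD", 1}, {"SUB", 2}};
    std::map<std::string, int> symbolTable;
    int nextFreeLocation = 10;                 // first data address, chosen arbitrarily
    std::vector<int> machineCode;

    for (const std::string& line : source) {
        std::istringstream in(line);
        std::string mnemonic, a, b;
        in >> mnemonic >> a >> b;
        machineCode.push_back(opcodes[mnemonic]);
        std::string operands[2] = {a, b};
        for (const std::string& name : operands) {
            if (symbolTable.count(name) == 0)
                symbolTable[name] = nextFreeLocation++;   // assign an address on first use
            machineCode.push_back(symbolTable[name]);
        }
    }

    for (int n : machineCode) std::cout << n << ' ';      // prints: 1 10 11 2 12 11
    std::cout << '\n';
}
The programmer never needs to know that sum ended up at location 10 and total at location 11; the table remembers it for them, which is precisely what made symbolic addressing such a step forward.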
This introduced three important ideas - the concept of what later became known as a symbolic variable or variable for short, the idea of a symbol table and perhaps most important of all the notion that programmers could use their own art to make programming easier. This was the start of the language explosion.
Last Updated ( Tuesday, 24 September 2013 )
Barbara H. Liskov
Revision as of 20:59, 6 January 2012 by Administrator1 (Talk | contribs)
Biography
A pioneer in object-oriented programming, Dr. Barbara H. Liskov is perhaps best known for her seminal work on data abstraction, a fundamental tool for organizing programs. Her research in the early 1970s led to the design and implementation of CLU, the first programming language to support data abstraction. Since 1975, every important programming language, including Java, has borrowed ideas from CLU. Dr. Liskov's other extraordinary contributions include the Venus operating system, the Argus distributed programming language and system and the Thor system for robust replicated storage of persistent objects. Argus was a groundbreaking high-level programming language used to support implementation of distributed programs that run on computers connected by a network, such as the Internet. Her recent efforts have concentrated on language-based security and on making Byzantine fault tolerance practical.
Dr. Liskov is the Ford Professor of Engineering at the Massachusetts Institute of Technology in Cambridge, Massachusetts, where she has taught since 1972. In 2001, she became the associate head for computer science in the electrical engineering and computer science department. In the 1960s, Professor Liskov held positions at the Mitre Corporation in Bedford, Massachusetts, Harvard University in Cambridge, Massachusetts, and Stanford University in Palo Alto, California.
A member of the IEEE and the U.S. National Academy of Engineering, Dr. Liskov is a Fellow of the American Academy of Arts and Sciences and of the Association for Computing Machinery. She has written three books including, 'Abstraction and Specification in Program Development,' as well as more than 100 technical papers.
Further Reading
Barbara Liskov Oral History
Retrieved from "http://www.ieeeghn.org/wiki/index.php/Barbara_H._Liskov"
Categories: Computer science | Object oriented programming
John V. Atanasoff
Revision as of 19:41, 22 January 2009 by Nbrewer (Talk | contribs)
Page created by Chayes, 4 September 2008
Contributors: Chayes x1, Nbrewer x5, Kwiggins x1, Administrator1 x10, M.geselowitz x9, Rnarayan x1
Last modified by Administrator1, 22 July 2014
John Vincent Atanasoff was born in the town of Hamilton, New York on October 4, 1903. After John's birth, the Atanasoff family moved a number of times as Ivan Atanasoff sought better employment in several different electrical engineering positions. They eventually settled in Brewster, Florida, where John completed grade school. The Atanasoff home in Brewster was the first house the family had lived in that was equipped with electricity. By age nine, John had taught himself how to repair faulty electric wiring and light fixtures on their back-porch. It was recognized early that John Atanasoff had both a passion and talent for mathematics. His youthful interest in baseball was quickly forgotten once his father showed him the logarithmic slide rule he had bought for facilitating engineering calculations. The slide rule completely captivated the nine-year-old boy, who spent hours studying the instructions and delighting in the fact that this mathematical tool consistently resulted in correct solutions to problems. Young John's obsession with the slide rule soon led to a series of discoveries on the logarithmic principles underlying slide rule operation and, subsequently, to a study of trigonometric functions. It was not long before the gifted youth had achieved substantial progress in his math studies. At this time John's mother introduced him to counting systems and number bases other than base ten, including an introduction to the binary system which would prove important in his later work. John Atanasoff completed his high school course in two years, with excellence in both science and mathematics. He had decided to become a theoretical physicist, and with that goal in mind, entered the University of Florida in Gainesville in 1921. Because the university curriculum did not offer degrees in physics, John began his undergraduate studies in the electrical engineering program. The knowledge of electronics and higher math that John acquired as an electrical engineering student would later prove fortuitous in helping to transform the theory of the computer into a working reality John Atanasoff graduated from the University of Florida in Gainesville in 1925, with a Bachelor of Science degree in electrical engineering. He received his Master's degree in mathematics from the Iowa State College in Ames, Iowa in 1926. After completing his graduate studies, Atanasoff accepted a position teaching physics and mathematics at Iowa State College. He was then accepted into the doctoral program at the University of Wisconsin, and received his doctoral degree (Ph. D.) in theoretical physics from Wisconsin in 1930. In his doctoral thesis, "The Dielectric Constant of Helium", Atanasoff was required to do many complicated and time consuming computations. Although he utilized the Monroe mechanical calculator, one of the best machines of the time, to assist in his tedious computations, the shortcomings of this machine were painfully obvious and motivated him to think about the possibility of developing a more sophisticated calculating machine. After receiving his Ph. D. in theoretical physics in July 1930, John returned to the staff of Iowa State College and began his work on developing a better and faster computing machine. In 1970 John Atanasoff was invited to Bulgaria by the Bulgarian Academy of Sciences, and the Bulgarian Government conferred to him the Cyrille and Methodius Order of Merit First Class. 
This was his first public recognition, and it was awarded to him three years before similar honors were conferred to him in the United States. The credit for this timely recognition of Atanasoff's achievement should be given to the Bulgarian academicians, Blaghovest Sendov, Ph.D. and Kyrille Boyannov, Ph.D., among others. During his lifetime, the highest honor and recognition awarded to John Vincent Atanasoff, the Father of the Computer, was the National Medal of Technology, conferred to him by George H. W. Bush in 1990.
Retrieved from "http://www.ieeeghn.org/wiki6/index.php/John_V._Atanasoff"
Categories: Computers and information processing | Computer classes | Calculators | Engineered materials & dielectrics | Dielectrics | Dielectric materials
Database Administrator
What do they do?
Key Facts & Information
A database administrator could...
Design a digital database of medical records that can be instantly transferred between clinics, unlike paper patient records.
Protect bank accounts from hackers by adding security features to a bank's financial database.
Make an inventory database for a chain of candy stores to help them keep the most popular candies in stock.
Create a database of DNA from people with multiple sclerosis to help researchers pinpoint the genes involved in the disease.
Databases are collections of similar records, like the products a company sells, information on all people with a driver's license for a state, or the medical records in a hospital. Database administrators have the important job of figuring out how to organize, access, store, search, cross-reference, and protect all those records. Their services are needed by law enforcement, government agencies, and every type of business imaginable. Management of large databases is also critical for scientific research, including understanding and developing cures for diseases.
Key Requirements
Logical, focused, detail-oriented, and able to communicate well and work in teams
Minimum Degree
Subjects to Study in High School
Biology, chemistry, algebra, geometry, algebra II, pre-calculus, English; if available, business, computer science
Median Salary
US Mean Annual Wage
Min Wage
Projected Job Growth (2010-2020)
Much Faster than Average (21% or more)
In Demand!
Read this article to learn about a day in the life of IBM database administrator Dwaine Snow.
Read this article to meet Phil Mcmillan, a database administrator for an insurance company.
In this interview, you'll meet database administrator Ryan Austin, who is also a database developer.
Computer and information systems managers Computer Programmer Computer support specialists Computer systems analysts Computer security specialists Mathematical technicians Computer operators Numerical tool and process control programmers Source: O*Net
Training, Other Qualifications
Rapidly changing technology requires highly skilled and educated employees. There is no single way to prepare for a job as a database administrator.
Some jobs may require only a 2-year degree. Most community colleges, and many other technical schools, offer an associate's degree in computer science or a related information technology field. Many of these programs are geared toward meeting the needs of local businesses. They are more occupation-specific than 4-year degree programs.
Many employers seek workers who have a bachelor's degree in computer science, information science, or management information systems (MIS). An MIS program usually is part of a university's business school. MIS programs differ quite a bit from computer science programs. MIS programs focus on business and management-oriented coursework and business computing courses. Now, more than ever, employers seek workers with a master's degree in business administration (MBA) and a concentration in information systems.
Despite employers' preference for those with technical degrees, people with degrees in a variety of majors find computer jobs. One factor affecting the needs of employers is changes in technology. Employers often scramble to find workers who know the latest new technologies. Many people take courses regularly to keep up with the changes in technology.
Jobseekers can improve their chances by working in internship or co-op programs at their schools. There are many internships where you can learn computer skills that employers are looking for.
Certification is a way to show a level of competence. Many employers regard these certifications as the industry standard. One way to acquire enough knowledge to get a database administrator job is to become certified in a specific type of database management. Voluntary certification also is available through various organizations associated with computer specialists.
Database administrators may advance into managerial positions. For example, a promotion to chief technology officer might be made on the basis of experience managing data and enforcing security.
Other Qualifications
Database administrators must be able to think logically and have good communication skills. Because they often deal with a number of tasks simultaneously, the ability to concentrate and pay close attention to detail also is important. Although database administrators sometimes work independently, they frequently work in teams on large projects. As a result, they must be able to communicate effectively with computer personnel, such as programmers and managers, as well as with users or other staff who may have no technical computer background.
Watch this video to meet Leland Chee, the keeper of a secret database called The Holocron, a digital encyclopedia of all the characters, planets, ships, and events in the Star Wars Universe.
The Internet and electronic commerce (e-commerce) generate lots of data. Computer databases that store information on customers, inventory, and projects are found in nearly every industry. Data must be stored, organized, and managed. Database administrators work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the system as needed. Database administrators often plan security measures. Data integrity, backup, and security are critical parts of the job.
Database administrators work in offices or labs. They usually work about 40 hours a week, but evening or weekend work may need to be done to meet deadlines. Telecommuting (working from home) is common for computer professionals.
Like other workers who spend long periods of time in front of a computer, database administrators can suffer eyestrain, back discomfort, and hand and wrist problems.
Test programs or databases, correct errors and make necessary modifications. Modify existing databases and database management systems or direct programmers and analysts to make changes. Plan, coordinate and implement security measures to safeguard information in computer files against accidental or unauthorized damage, modification or disclosure. Work as part of a project team to coordinate database development and determine project scope and limitations. Write and code logical and physical database descriptions and specify identifiers of database to management system or direct others in coding descriptions. Train users and answer questions. Specify users and user access levels for each segment of database. Approve, schedule, plan, and supervise the installation and testing of new products and improvements to computer systems such as the installation of new databases. Review project requests describing database user needs to estimate time and cost required to accomplish project. Develop standards and guidelines to guide the use and acquisition of software and to protect vulnerable information. Review procedures in database management system manuals for making changes to database. Develop methods for integrating different products so they work properly together such as customizing commercial databases to fit specific needs. Develop data model describing data elements and how they are used, following procedures and using pen, template or computer software. Select and enter codes to monitor database performance and to create production database. Establish and calculate optimum values for database parameters, using manuals and calculator. Revise company definition of data as defined in data dictionary. Review workflow charts developed by programmer analyst to understand tasks computer will perform, such as updating records. Identify and evaluate industry trends in database systems to serve as a source of information and advice for upper management. Source: BLS
Companies That Hire Database Administrators
Explore what you might do on the job with one of these projects...
Go Wild! Try a Wildcard
Ready, Set, Search! Race to the Right Answer
Do you have a specific question about a career as a Database Administrator that isn't answered on this page? Post your question on the Science Buddies Ask an Expert Forum.
Institute of Electrical and Electronics Engineers Computer Society: www.computer.org
Software & Information Industry Association: www.siia.net
O*Net Online. (2009). National Center for O*Net Development. Retrieved May 1, 2009, from http://online.onetcenter.org/
Margolis, D. (2007, September). A Day in the Life of a Database Administrator. Retrieved November 10, 2009, from http://www.certmag.com/read.php?in=3040
Technology-colleges.info. (n.d.). Careers in Database Administrator. Retrieved November 10, 2009, from http://www.technology-colleges.info/database_systems_administrator.html
Technology-colleges.info. (n.d.). Careers in Database Administrator/Database Developer. Retrieved November 10, 2009, from http://www.technology-colleges.info/database_administrator_developer.html
Wired.com. (2008, September 23). The Star Wars Holocron. Retrieved April 8, 2014, from http://archive.wired.com/entertainment/hollywood/magazine/16-09/ff_starwarscanon?currentPage=all
You can find this page online at: http://www.sciencebuddies.org/science-engineering-careers/math-computer-science/database-administrator?From=testb You may print and distribute up to 200 copies of this document annually, at no charge, for personal and classroom educational use. When printing this document, you may NOT modify it in any way. For any other use, please contact Science Buddies.
Why open source needs an attitude adjustment
Why must purists flame open source companies that charge customers for anything other than support?
Bill Snyder (InfoWorld) on 23 May, 2008 11:27
Recession be damned. The first quarter of the year saw a record US$203.7 million of venture capital flow to young open source companies. You'd think that would be a cause for celebration, but for too many members of the open source community money is, well, icky.
I pick that word deliberately, because the snarky elitists who want to keep open source pure -- and poor -- remind me of children. Case in point: MySQL. Not long before the database company was scooped up by Sun, at great profit to the founders and employees, there was a lot of nastiness about the decision to make a small set of features in WorkBench available to paying customers only.
Imagine that. Asking people to pay for something useful. "Frankly, there are people who call themselves part of the MySQL community that have never contributed a line of code or paid them a dime," says Matt Asay, vice president of business development at Alfresco, an open source content and document management company. Asay, a frequent blogger, notes that other companies, including SugarCRM and Zimbra, are also offering some closed extensions "as a way to get a decent percentage of customers to pay, and then reinvest the money in writing great code."
Now think about people who use open source products for free. On the one hand, the use of software by large numbers of people and large companies validates it and makes it seem like a safe choice. And that's a good thing.
"Unfortunately, this cuts the other way, as well: The more free-riders, the more encouraged would-be purchasers will be to free-ride as well. Why should you be the only sucker paying for what everyone else is using for free, and quite comfortably?" says Asay. "Ultimately, someone must pay for software in order to have it written. It doesn't grow on trees and it doesn't grow on communities, either," he adds.
The free-loving purists could kill open source
Charge for support? Of course; nobody I know thinks that's a bad idea. But what happens if those paying customers decide they don't need the support after a while, asks Open Sources blogger Savio Rodrigues. "I've spoken to many customers who are saying 'We bought support for two years and realized we just didn't use it as much as we thought. Also, with the source code being available, my software developers can support our use of product XYZ internally,'" he wrote in a recent post.
Rodrigues goes on to note that "for the majority of single-vendor backed OSS [open source software] products, there is virtually no cost savings versus developing closed-source software. To close the feature/function gap, OSS vendors need faster revenue growth to fund this development expense," he wrote. "The OSS vendor community needs leaders who will stand up to 'the community' and make the tough business decisions needed to ensure that OSS isn't relegated to a small revenue slice of the software industry pie."
Incorporation into the mainstream or militant separatism?
Daniel Lyons, AKA the fake Steve Jobs, made a fascinating point in Forbes earlier this year. He likened the growing acceptance of open source in the commercial world to the incorporation of parts of gay culture into the mainstream.
Elaborating on that theme, 451 Group analyst Matthew Aslett says: "The assimilation of any subculture or counterculture into the mainstream is a divisive moment -- signaling as it does both the success of the movement in reaching a wider audience, and the watering down of its principles by external forces. There are signs that an identity crisis is already impacting the free software and open source software movements."
That's a great point. Open source has tentacles of influence everywhere, touching on mainstream software companies from IBM to -- gasp! -- Microsoft. See for example a discussion of how open source influenced Windows Server 2008 by Microsoftie Sam Ramji.
Is that bad? I'd say no. If anything, it's a tribute to the quality of thinking behind open source projects. And so is the huge flow of capital from VCs to young open source companies. Those investments, of course, are predicated upon the possibility of a decent return. And that means those companies have to make money. As Asay puts it, "Open source is getting to the point where business models will vary greatly. It's not the end of the world."
Wed 18/11/2009 - 11:30
Costing the End Consumer Millions...
Look... I think you're missing the point... too many people are arguing the merits of Open Source as if Gays and Straights can't get along...
That's fairly lame argument...
Open Source is great... It's probably the best concept to ever hit the industry... the SAD SAD Truth of the matter is that the majority of the community is not responsible enough to keep it secure...
So while the "purests" go on about the merits of Open Source... they are costing the consumer millions...
I'm not talking about MircoSoft or MySQL... I'm talking about the rest of the world which compromises 99% of the software development industry...
I am of course making these statistics up... but it is the general truth...
To answer your question... Software requires people to build it... That is why you must charge for more than just support...
I think support should be free... I mean if your product isn't stable... don't give it away...
This free open source mentality needs to end. Sure you're going to get those who will argue everything in life is to be free, however for centuries humans have worked to earn their keep and been paid to do so. Open source was a cool idea perhaps 10 years ago, but in our economy today, folks need to pay for product! That's a no brainer.
Ehsan Akhgari
Unicode – Why Use It?
Posted on June 21, 2008 by ehsan
In this article, I'm going to answer a question most developers ask before they start using Unicode: why should one bother to write code that supports Unicode, especially if they are not targeting any non-English language market? Before we get into that, I'd like you to know about character sets and their three varieties. If you're looking for a how-to guide on using Unicode, this article won't help you; I suggest taking a look at my Unicode article instead.
Single-Byte Character Sets
In a single byte character set, every character occupies exactly one byte. This means that a string is simply a series of bytes coming one after another, and the end is indicated by a NULL character, which is a zero byte. This is the simplest of all character sets. Most of the well-known C runtime functions, like strlen, expect characters to be in this format. But using this kind of character set causes problems when you want to build a program in a language other than English. The problem is not only that you need a separate coding system (called a code page) for each and every language, but also that some languages (like Japanese Kanji) need so many characters that they cannot fit in a single byte; 255 characters in a single byte character set is not room enough for them. This need led to the birth of double-byte character sets.
Double-Byte Character Sets
Sometimes called the Multi-Byte Character Sets (MBCS), the double byte character sets (DBCS) start out just like the single byte character sets. The first 128 codes for each code page are the same as the ASCII character codes. The other 128 codes are there for each language to define as whatever symbols it needs for writing that language. So what's the difference from single byte character sets? The difference is that, for each code page, there are some codes that specify that the byte following them should also be interpreted together with them, making a double byte. This results in a code page in which some characters require one byte and others require two bytes. Like the single byte character sets, the end of the string is specified by a single NULL character. This implementation is kind of painful. Consider that you need to find out the size of a string. You can't pass such a string to strlen directly, because it assumes each byte is a character. You would need to write your own parsing routine for each CRT function, checking whether a byte in the string specifies that the next byte should be interpreted together with it or not. Fortunately enough, the MSVC CRT library has a set of functions, called the MBCS functions, that can handle both single byte and double byte character sets. For example, the MBCS version of strlen is called _mbslen.
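As a rough sketch of what that parsing involves (my own illustration, not code from the article): to count characters rather than bytes you have to ask, for every byte, whether it is a lead byte that pulls the following byte into the same character. On Windows the IsDBCSLeadByte() API answers that question for the active code page, whereas plain strlen() would simply count bytes.
#include <windows.h>   // IsDBCSLeadByte

// Counts characters (not bytes) in a string encoded in the active DBCS code page.
// A sketch only: it does not guard against a lead byte followed by a stray NULL.
size_t DbcsCharCount(const char* s)
{
    size_t count = 0;
    while (*s != '\0')
    {
        // A lead byte means the next byte belongs to the same two-byte character.
        s += IsDBCSLeadByte(static_cast<BYTE>(*s)) ? 2 : 1;
        ++count;
    }
    return count;
}
The CRT's _mbslen does essentially this job for you, which is why it, rather than strlen, is the right tool for MBCS strings.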
The Windows API also offers some functions that let you manage MBCS strings to some extent. These API functions are listed in the table below.
Getting into the game industry
mystic_realmz at March 17th, 2007 12:38 — #1
I'm new to the gaming industry, and would like to get involved with soundtracking for games. I have 25+ years of music experience, and 7 years of home studio experience. If anyone can offer advice on how to get into the gaming industry , or comment on the quality of my work...I would be very grateful.
So long, and thanks for all the fish!:)
Musician - Composer - Producer
thenut at March 18th, 2007 13:24 — #2
I don't see you having any problems jumping into this industry. With 25+ years of xp, this is like child's play to you by now. Best bet is to just look for a company and apply. If you want to increase your odds, release a couple demo reels (video games or clips) that use your music. It's a great way to express the mood of your music and how it influences game play. It's also a good idea to put some marketing effort into your demos too rather than just dry run through several games showing off your music. It's also a good idea to show that you have designer skills. Make a lot of effort to show that you carefully plan and design out your music. Write a little section on your site, or at least on your resume, how you work from start to finish. This will separate you from the dime a dozen musicians to one who knows how to play the game.
It's also a good idea to play a few games and know the industry. The worst thing you can do is apply to a video game company and tell the interviewer you don't play games. You may laugh at this, but I've seen it happen.
anubis at March 18th, 2007 21:44 — #3
Am I blind, or didn't you provide a link to your work?
It's in his profile :lol:
karligula at March 19th, 2007 11:47 — #5
Tricky thing about being a musician in the games industry is studios don't need many musicians. Nowhere I've worked has ever had more than one guy doing the music. We have maybe one hundred staff at my current place and one musician. And he can do a complete soundtrack for a game in just a few weeks. In fact he tends to wander around asking for something to do!
You might have more chance if you do general sound design too, but even that doesn't generally need many staff. And I don't think music or sound jobs come up very often. There just aren't that many positions available.
Having said all that, I do hope you get your foot in the door somewhere!
Thank you very much to all who responded. I realize it's a tough market...music always has been. I'm not limiting myself to video games; however, it was in fact the very reason I became interested in soundtrack work to begin with. Playing the N64 one day, I realized that many of the games had, shall we say, less than enthusiastic creations for soundtrack music. From there I was obsessed with getting involved. I absolutely love playing games, and wish I had more time to do it. I have only one game to my credit, and it's listed on my site [www.mysticrealmz.com](www.mysticrealmz.com) . So, again thank you for the kind responses. I care not how difficult the business is...I will be persistent, and I will succeed.
Mystic Realmz | 计算机 |
If you have installed at least one auxiliary fan in your PC, you must have noticed that it became a lot noisier. Extra fans are great for improving the air circulation in the case and avoiding overheating, but their greatest inconvenience is the noise level: your computer becomes noisier. Depending on the type and quantity of fans you have added to your case, working at the computer may often become an impossible task because of the noise. The solution for this problem is very simple: reduce the rotation speed of the auxiliary fans when the computer is not operating at full power. The slower a fan turns, the less noise it makes. When you need greater thermal dissipation (for instance, when you are playing your favorite 3D game and/or when you are overclocking), you can have the fans turning at full speed. But how can we do that? Some high-end fan lines have speed control, and some manufacturers have developed circuits to control fan speed, such as Cooler Master (visit http://www.coolermaster.com and see the Aerogate and the Musketeer). The problem is that those products are expensive. So, what to do? Today we will teach you how to assemble a fan speed control system. You will see that it is much simpler than it seems, and that its cost is much more affordable than the ready-made solutions on the market. You will need to have experience in soldering electronic components; if you don't, ask a friend who knows how to use a soldering iron to help you.
The only component you will need to buy is a 100-ohm linear potentiometer, which can be easily found at electronic parts stores. You will also need to buy about two feet (0.50 m) of wire to make the necessary connections, and you may want a knob to put on the central pin of the potentiometer. The connection is simple. The fan's feed wire has to be cut in half in order to fit the potentiometer. That wire is red on fans connected directly to the motherboard (three-pin plugs) or yellow on fans connected directly to the power supply (four-pin plugs). Strip the end of each half of the wire and make the connections to the potentiometer as shown in the figure: the wire numbered 1 should be connected to one end of the cut wire, and the one numbered 2 to the other end. Don't forget to insulate all connections with insulating tape.
Figure 1: How to wire the potentiometer.
To improve control, the potentiometer may be installed at the front of the case, in one of the 5 1/4" bays that is not in use. All it takes is making a hole in the plastic cover and installing the potentiometer there.
If you have more fans whose rotation speed you want to control, you will have to repeat the operation, installing one potentiometer per fan. Originally at http://www.hardwaresecrets.com/article/How-to-reduce-the-noise-produced-by-the-PC/79
There’s been quite a bit of controversy in the web standards community over the copyright licensing terms of standards specifications, and whether those terms should allow “forking”: allowing anyone to create their own specification, using the words of the original, without notice or getting explicit permission. Luis Villa, David Baron, Robin Berjon have written eloquently about this topic.
While a variety of arguments in favor of open licensing of documents have been made, what seems to be missing is a clear separation of the goals and methods of accomplishing those goals.
Developing a Policy on Forking
While some kinds of "allowing forking" are healthy, some are harmful. The "right to fork" may indeed constitute a safeguard against standards groups going awry, just as it does for open source software. The case for using the market to decide rather than arguing in committee is strong. Forking to define something new and better or different is tolerable, because the market can decide between competing standards. However, there are two primary negative consequences of forking that we need to guard against:
Unnecessary proliferation of standards (“The nice thing about standards is there are so many to choose from”). That is, when someone is designing a system, if there are several ways to implement something, the question becomes which one to use? If different component or subsystem designers choose different standards, then it’s harder to put together new systems that combine them. (For example, it is a problem that Russia’s train tracks are a different size than European train tracks.) Admittedly, it is hard to decide which forks are “necessary”.
Confusion over which fork of the standard is intended. Forking where the new specification is called the same thing and/or uses the same code extension points without differentiation is harmful, because it increases the risk of incompatibility. A “standard” provides a standard definition of a term, and when there is a fork which doesn’t rename or recode, there can be two or more competing definitions for the same term. This situation comes with more difficulties, because the designers of one subsystem might have started with one standard and the designers of another subsystem with another, and think the two subsystems will be compatible, when in fact they are just called the same thing.
The arguments in favor of forking concentrate on asserting that allowing for (1) is a necessary evil, and that the market will correct by choosing one standard over another. However, little has been done to address (2). There are two kinds of confusion:
humans: when acquiring or building a module to work with others, people use standards as the description of the interfaces that module needs. If there are two versions of the same specification, they might not know which one was meant.
automation: many interfaces use look-up tables and extension points. If an interface is forked, the same identifier can’t be used for indicating different protocols.
The property of "standard" is not inheritable; any derivative work of a standard must itself go through the standardization process to be called a Standard.
Encouraging wide citation of forking policy
The extended discussions over copyright and document licensing in W3C seem somewhat misdirected. Copyright by itself is a weak tool for preventing any unwanted behavior, especially since standards groups are rarely in a position to enforce copyright claims.
While some might consider trademark and patent rights as other means of discouraging (harmful) forking, these “rights” mechanisms were not designed to solve the forking problem for standards. More practically, “enforcement” of appropriate behavior will depend primarily on community action to accept or reject implementors who don’t play nice according to expected norms. At the same time, we need to make sure the trailblazers are not at risk.
Copyright can be used to help establish expected norms
To make this work, it is important to work toward a community consensus on what constitutes acceptable and unacceptable forking, and publish it; for example, a W3C Recommendation “Forking W3C Specifications” might include some of the points raised above. Even when standards specifications are made available with a license that allows forking (e.g. the Creative Commons CC-by license), the license statement could also be accompanied by a notice that pointed to the policy on forking.
Of course this wouldn’t legally prevent individuals and groups from making forks, but hopefully would discourage harmful misuse, while still encouraging innovation.
Dave McAllister, Director of Open Source
Larry Masinter, Principal Scientist
By Larry Masinter
General Manager, Development & Manufacturing, IBM Systems & Technology Group
Full biography
Arvind Krishna is General Manager of IBM Systems and Technology Group’s Development and Manufacturing organization, consisting of more than 23,000 engineers and programmers working in 37 labs in 17 countries, including Canada, China, Germany, India, Russia and the United States. This team is responsible for the advanced engineering and development of a full technology portfolio, ranging from advanced semiconductor materials to leading-edge servers and systems technology.
Arvind was previously General Manager of IBM's Information Management Software Division which includes database, data warehouse, information integration, master data management, integrated data management and associated software. He was responsible for the revenue, development, strategy, and ecosystem of the Information Management business, a multibillion dollar market. Prior to that, he was the Vice President of Strategy for IBM Software Group. Arvind has held many key roles in both IBM Software and IBM Research including pioneering IBM's security software business. He has an undergraduate degree from the Indian Institute of Technology, Kanpur and a Ph.D. from the University of Illinois at Urbana-Champaign. He is the recipient of a distinguished alumni award from the University of Illinois, and is the co-author of 15 patents.
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, following a set of project objectives. The goal of The Fedora Project is to work with the Linux community to build a complete, general purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases. Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for Btrfs file system, Indic typing booster, redesigned SELinux troubleshooter, better power management, LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15." manufacturer website
1 DVD for installation on an x86_64 platform
The private key that Cyberoam security appliances use to perform Deep Packet Inspection (DPI) of SSL traffic is in circulation on the internet. This allows anyone on the Cyberoam appliance's network to decrypt other users' encrypted data traffic. The company has responded by releasing an emergency patch that makes every device generate its own unique CA certificate and matching private key.
A week ago, the Tor Project had issued a warning, saying that all Cyberoam appliances appeared to be using the same private key in the SSL certificates that were used to allow the device to inspect SSL traffic. The company responded by posting the following on its blog: "Cyberoam's private keys cannot be extracted even upon dissecting the box or cloning its hardware and software. This annuls any possibility of tampering with the existing certificates on appliance". The company has since realised that this isn't true, and the statement has now been deleted from the blog posting. The amended version only states that the appliances don't offer a key import and export feature.
According to Cyberoam, the newly released OTA (Over The Air) update is installed automatically. It is designed to make appliances generate a unique private key automatically. The company says that using a single private key on all appliances is common practice in this field of industry. Research by The H's associates at heise Security has concluded that, for example, Fortinet appliances also appear to come with a "CA_SSLProxy" that is identical on all devices.
(djwm)
Jane Tarakhovsky is the daughter of two artists, and it looked like she was leaving the art world behind when she decided to become a computer scientist. But her recent research project at Lawrence Technological University has demonstrated that computers can compete with art historians in critiquing painting styles.
While completing her master’s degree in computer science earlier this year, Tarakhovsky used a computer program developed by Assistant Professor Lior Shamir to demonstrate that a computer can find similarities in the styles of artists just as art critics and historian do.
In the experiment, published in the ACM Journal on Computing and Cultural Heritage and widely reported elsewhere, Tarakhovsky and Shamir used a complex computer algorithm to analyze approximately 1,000 paintings of 34 well-known artists, and found similarities between them based solely on the visual content of the paintings. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians.
For instance, the computer placed the High Renaissance artists Raphael, Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were placed in another cluster.
The experiment was performed by extracting 4,027 numerical image context descriptors – numbers that reflect the content of the image such as texture, color, and shapes in a quantitative fashion. The analysis reflected many aspects of the visual content and used pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles. The computer then quantified these similarities.
According to Shamir, non-experts can normally make the broad differentiation between modern art and classical realism, but they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism.
“This experiment showed that machines can outperform untrained humans in the analysis of fine art,” Shamir said.
Tarakhovsky, who lives in Lake Orion, is the daughter of two Russian artists. Her father was a member of the former USSR Artists. She graduated from an art school at 15 years old and earned a bachelor’s degree in history in Russia, but has switched her career path to computer science since emigrating to the United States in 1998.
Tarakhovsky utilized her knowledge of art to demonstrate the versatility of an algorithm that Shamir originally developed for biological image analysis while working on the staff of the National Institutes of Health in 2009. She designed a new system based on the code and then designed the experiment to compare artists.
She also has used the computer program as a consultant to help a client identify bacteria in clinical samples.
“The program has other applications, but you have to know what you are looking for,” she said.
Tarakhovsky believes that there are many other applications for the program in the world of art. Her research project with Shamir covered a relatively small sampling of Western art. "This is just the tip of the iceberg," she said.
At Lawrence Tech she also worked with Professor CJ Chung on Robofest, an international competition that encourages young students to study science, technology, engineering and mathematics, the so-called STEM subjects.
“My professors at Lawrence Tech have provided me with a broad perspective and have encouraged me to go to new levels,” she said.
She said that her experience demonstrates that women can succeed in scientific fields like computer science and that people in general can make the transition from subjects like art and history to scientific disciplines that are more in demand now that the economy is increasingly driven by technology.
"Everyone has the ability to apply themselves in different areas," she said.
Entire BMP Web Site
Best Practices Surveys: Survey Reports
List of Best Practices Surveys | Table of Contents
Tri-Cities Tennessee/Virginia Region - Johnson City, TN
Original Date: 01/22/2001
Revision Date: 01/18/2007
Best Practice : Electronic Village of Abingdon
The Electronic Village of Abingdon is an electronic network that provides public access to a full range of computerized information and communications services. This affordable business tool uses electronic technologies to benefit citizens and businesses within its Abingdon and Washington County service area.
In February 1996, the Town of Abingdon, Virginia realized the need for improved communications to enrich the economy of the town and the Washington County area. A plan, entitled Electronic Village of Abingdon (EVA), was developed to provide affordable broad bandwidth Internet/networking connections via a fiber optic cable. The plan called for the cable to be installed with minimal destruction to the town’s historical features and to keep installation costs as low as possible. In late 1997, a fiber optic cable was laid through a ten-block area consisting primarily of professional and residential businesses. Any building within 150 feet of this fiber backbone had accessibility to the network’s high-speed connections.
The EVA brings high-speed networking service to the community at processing speeds of either ten or 100 megabytes per second, four times faster than conventional modem speeds, depending on the type of service elected. Using digital satellite links (DSL), both the citizens and businesses within the community can connect to any permissible computer within Abingdon. These aspects not only tie individuals to business contacts, but greatly enhance the community’s capability to access education and civic organizations. This service is linked to 11 counties in Tennessee, providing educational programming for schools and the feasible transfer of technology information to industries within the region. The most advantageous feature is tele-medicine which involves the real- time transfer of patient files and medical history to other area hospitals of the region. The EVA provides full-time e- mail and Internet capabilities, and allows for electronic file transfers, remote computer operations, and video conferencing at the fastest connections possible.
The EVA network differs from others as its technology and scope of network coverage is much larger than a single or corporate business unit would attempt. The cost is comparable to conventional ISDN web service, typically $35 per month for a single computer ten-megabyte service and a one-time installation fee of $75. Requirements for using this network include a fiber transceiver or media converter (about $175) and a Pentium-class computer with a minimum of 16 megabytes of RAM plus Windows 98, Windows 95, or Windows for Workgroups software for any IBM compatible equipment. I-MAC computers come equipped with transceiver capabilities and Internet browsers, therefore e-mail programs can be downloaded once connected.
Multiple computers require a hub interconnection (starting around $125), but only one fiber transceiver is required. The monthly service fee decreases to $22 for two to three computers, and to $16 for four to six computers. Although the monthly service fee significantly increases for connections over seven, the service fee is still reasonable for the speeds, technical support, and service provided. Since the EVA was implemented, usage compared to availability is currently at 70%.
For more information, see the Point of Contact for this survey.
Apr 20, 2012 by Community Team
in Community Showcase, Podcast
The Anvil Podcast – CMU Sphinx
Rich: A few days ago I spoke with Alex Rudnicky, Evandro Gouvea, Bhiksha Raj, and Rita Singh, who all work on the CMU Sphinx project. If the embedded audio player below doesn’t work for you, you can download the audio in mp3 and ogg formats.
You can subscribe to this, and future podcasts, in iTunes or elsewhere, at http://feeds.feedburner.com/sourceforge/podcasts, and it’s also listed in the iTunes store.
Alex, Bhiksha, and Rita are currently with the language technologies institute at the school of computer science at Carnegie Mellon University. Evandro used to be there as well, but he’s currently working as a speech consultant in Germany, and at the moment he’s working with a Brazilian company called Vocalize. I spoke with these four about the history of the CMU Sphinx project, and what its aims are. Here’s some of my interview with them.
Alex: Let me give you a quick overview of it. The original Sphinx was developed as part of a dissertation by Kai-Fu Lee in the Computer Science department here. Kai-Fu has gone on to bigger things since then, and, in fact, you might know his name. The original project demonstrated something that people didn’t think was possible, which is to simultaneously do continuous speech, have it be connected speech, and be speaker-independent. These are things we take for granted today, but it was still a bit of a head-scratcher at that time.
That was Sphinx 1.
There was a Sphinx 2, which was developed by
(Thursday, December 14, 2006 - 16:34 EST)
You need a CS2 license to run it longer than two days, but the beta shows off a new interface and some significant improvements to important tools. It's the first version of Photoshop to run natively on Intel Macs, too.
Encouraged by its experience with the Lightroom public beta, Adobe has announced it will offer a public beta of Photoshop CS3, the next version of the company's professional image editing software. The beta package, the first version of Photoshop to run natively on Intel-based Macs, also includes Adobe Bridge. With the final version expected to ship in Spring 2007, the beta for both Macintosh and Windows users will be available at Adobe Labs tomorrow as a 750-MB download in English only.While anyone can download the beta, you need a Photoshop CS2, Creative Suite, Production Studio or Bundle serial number to get a Photoshop CS3 beta serial number. The beta serial number is required to activate the Photoshop beta and use it beyond the two-day grace period.Read our preliminary peek at the new features, system requirements and important links.
Original Source Press Release:
Adobe Releases Beta Version of Photoshop CS3Company Also Previews Adobe Bridge and All-New Adobe Device CentralSAN JOSE, Calif. - Dec. 14, 2006 - Adobe Systems Incorporated (Nasdaq:ADBE) will introduce a beta version of Adobe Photoshop CS3 software, the next release of the world standard in digital imaging, on Friday, December 15th. Adobe is delivering a widely available Photoshop CS3 beta to enable customers to more easily transition to the latest hardware platforms, particularly Apple's new Intel-based systems. The beta is available as a Universal Binary for the Macintosh platform, as well as for Microsoft(r) Windows(r) XP and Windows Vista computers. The final shipping release of Adobe Photoshop CS3 is planned for Spring 2007. The software can be downloaded at: http://labs.adobe.com, in the early hours Pacific Standard Time on December 15.Packed with new features, Photoshop CS3 beta also includes a pre-release version of a major upgrade to Adobe Bridge, as well as a preview release of the all-new Adobe Device Central. Photoshop customers can use Adobe Device Central to design, preview, and test compelling mobile content, created specifically for smaller screens. This new tool, integrated in the Photoshop CS3 beta, simplifies and accelerates the creation of mobile content through a preview environment and built-in device profiles."This is an exciting time for the Mac, and Adobe wanted to ease the move to new Intel-based systems with a preview release of Photoshop CS3," said John Loiacono, senior vice president of Creative Solutions Business Unit at Adobe. "We didn't want to leave Windows customers out of the party, so the beta is available to everyone in the creative industry's most passionate user community -- no matter what their platform choice. We still have some surprises in store, but this beta gives customers an early chance to see the power of another great Photoshop release, optimized and tuned to run natively on the latest hardware and operating systems."To utilize Adobe Photoshop CS3 beta, customers require a serial number from either Adobe Photoshop CS2, Adobe Creative Suite(r) 2, Adobe Creative Suite Production Studio, Adobe Design Bundle, Adobe Web Bundle or Adobe Video Bundle. Adobe Photoshop CS3 beta is available in English only. Customers who have a valid serial number for all other language versions of qualifying Adobe products can download the software. Without a serial number, users can still download Adobe Photoshop CS3 beta, with the product expiring after two days. Customers must register online with Adobe or have an existing Adobe.com membership account to access the software.In related previews of future technologies, Adobe has also released two new web tools as beta releases. Adobe CSS Advisor is a new web-based community site to easily identify and resolve browser compatibility issues. Web designers and developers can contribute their own issues, comment on existing issues, or become an editor to participate in driving the site's future. The site is accessible at: www.adobe.com/go/cssadvisor. Also introduced is a beta release of Adobe Spry framework for Ajax, a designer-focused solution for adding the interactive power of Ajax when developing rich, dynamic web-sites. Adobe Spry framework for Ajax is available at: www.adobe.com/go/cssadvisor.System RequirementsFor Adobe Photoshop CS3 beta, recommended system requirements are as follows. For Macintosh: Mac OSX 10.4.8 or 10.5, 1 GHz PowerPC(r) G4 or G5 processor, Intel based Macintosh. 
For Windows: Intel(r) Xeon(r) , Xeon Dual, Centrino(r) or Pentium(r) 4 processor, Microsoft Windows XP with Service Pack 2 or higher, Microsoft Windows Vista. Both platforms require 512 MB RAM and a 1024x769 resolution screen. Photoshop CS3 beta will expire soon after the launch of Photoshop CS3 in Spring 2007. Details on final pricing, system requirements and availability have yet to be determined. | 计算机 |
In an effort to entice PeopleSoft Inc.'s board to rethink its rejection, Oracle said Tuesday that it will waive a condition in its offer that required PeopleSoft keep its original offer for J.D. Edwards & Co. intact.
Oracle's change of heart means it will overlook PeopleSoft's most recent bid of $1.75 billion, up from $1.7 billion, to buy J.D. Edwards. PeopleSoft changed its bid last week in an effort to accelerate the merger of the two companies.
In a statement, an Oracle spokesman said that although Oracle has provided the waiver, "we continue to view the amended merger agreement as an unlawful device to deprive PeopleSoft shareholders of their right to vote with respect to the J.D. Edwards merger," adding that the condition Oracle has waived was identified by the PeopleSoft board of directors as an important reason for declining to pursue discussions with Oracle. "We hope that with this waiver, PeopleSoft will finally agree to meet with us, as their shareholders are demanding," he said.
For a second time, PeopleSoft's board last week rejected Oracle's unsolicited all-cash offer, which Oracle had upped from $16 per share to $19.50 per share. Despite the back-and-forth battles, Oracle chairman and CEO Larry Ellison pitched his philosophy on the future of IT systems and applications at Oracle AppsWorld in London this week. "Modern systems are going to focus on information," he told conference attendees. He said the biggest problem facing application software today is data fragmentation caused by information about businesses being stored in too many separate databases.
"Our data is so chopped up into so many little pieces, all we can see are the trees and we can't see the forest," Ellison said. "There is no way that clever systems integration, using Web services or anything else, can solve data fragmentation."
The solution, Ellison said, is to consolidate information into a single database and have applications sit on top. "That's how we got to the E-business Suite," he said, referring to Oracle's suite of software. "With the next few releases of E-business Suite, you will find CEOs, department managers, and individuals inside all departments go to their computers and find out what they are supposed to do and how well they are doing it."
Oracle also revealed availability of its Oracle Advanced Product Catalog, the core of Oracle's Product Lifecycle Management applications. The software centralizes all product and component information into a single global catalog.
Christian_Nutt
The Call of Duty 4 demo we were shown, instead of just showing a snippet of the game here or another bit there, gave us an excellent overview of the game's first mission: a helicopter infiltration, which drops you and your squad off on a ship crawling with soldiers during a storm.
The key to Call of Duty 4 is its atmosphere. Sure, the gameplay is solid - but it wouldn't matter if the developers didn't completely nail cinematic presentation and hyper-realism that has become the series' trademark. From what we've seen thus far, COD4 aces it. From the first dramatic moments you're pitched onto the rocking, rainy ship to infiltrate it till the harrowing escape from its rapidly sinking decks... it's non-stop action (well, probably the part they skipped over had some "stop" in it. We'll find out when the game comes out.)
It's difficult to describe what happened with accuracy - it went by fast, as the player seamlessly infiltrated the different areas of the ship with his squad and, eventually, tracked down the mission objective: a set of documents. Soon after, a detonation sent the ship into a wild angle - it's sinking. The dramatic finish was a dash back out of its twisty corridors and onto the deck, to jump onto the helicopter at the last possible moment. We also got a sneak peak at some shooting action in the Middle East.
The final interesting detail is a scoop on the multiplayer: you'll be able to create your own class, this time around, with access to abilities and weapons. It sounds like you'll specifically be able to tweak things (such as gaining "increased sprint") and then take your tweaked class online for multiplayer battling. This could add a whole new angle to the game - but balance will be the key or it'll just screw up everything. We'll see.
a.k.a. Xinghan Chen or 陈星汉
is the visionary designer of the award-winning games Cloud, flOw, Flower, and most recently Journey. After earning a bachelor's degree for computer science in his hometown of Shanghai, Chen moved to Los Angeles, where he got a master's degree in the founding class of University of Southern California's Interactive Media and Games Division. Following school, he founded thatgamecompany with fellow graduates, where he remains its Founder and Creative Director.
thatgamecompany’s titles have garnered a multitude of honors over the years. flOw was selected as a permanent colloection of MoMA. Flower and Journey were awarded the British Academy of Film and Television Arts for Artistic Achievement, both were named the Best Independent Game at the Spike TV Video Game Awards, as well as the Best Downloadable Game and the Game of the Year at the Game Developers Choice Awards and the DICE Awards. To date Journey has the highest historical aggregate rating on Metacritic of any exclusive PlayStation Network title and had Entertainment Weekly call the game “A glorious, thoughtful, moving masterpiece”.
Personally, Jenova was named one of Variety’s 10 Innovators to Watch in 2008, was selected as one of the Most Creative Entrepreneurs in Business in both 2009 and 2010 by Fast Company and was given the prestigious honor of being named to the MIT Technology Review Magazine’s TR35 list, naming the World’s Top Innovators under the Age of 35 in 2008.
Selected Interviews New Yorker 2013
Gamasutra 2012
NPR 2009
CBC Radio Spark 6'30" 2009 Wall Street Journal 1 & 2 2009
Latest Version, Oct 2011
Jenova Chen 陈星汉
Improved Message-Passing Algorithms for General-Purpose Optimization Based on ADMM
Event Speaker: Jonathan Yedidia (Disney Research)
Event Location: 32-141
Event Date/Time: Tuesday, September 17, 2013 - 4:00pm
Reception to follow.
The alternating direction method of multipliers (ADMM) originated in the 1970s as a method for solving convex optimization problems. We begin by explaining how the ADMM can be re-interpreted as a message-passing algorithm on a bipartite graph, suitable for general optimization problems, including problems with hard constraints. Based on this interpretation, we introduce the Three Weight Algorithm (TWA), an improved version of the standard ADMM algorithm which assigns each message a weight. In the TWA, weights are restricted to three different values: infinity for absolute confidence, zero for complete lack of confidence and one as a default confidence level. This simple change has an enormous impact on the speed with which solutions are found for non-convex problems.
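For readers who have not seen ADMM before, the sketch below is a rough illustration only (it is not from the speaker and is not the Three Weight Algorithm): it applies the three classic ADMM update steps to a scalar toy problem, minimize 0.5*(x - a)^2 + lambda*|z| subject to x = z, with an assumed fixed penalty parameter rho.

```c
#include <stdio.h>

/* Soft-thresholding: the proximal operator of kappa*|.| */
static double soft_threshold(double v, double kappa)
{
    if (v >  kappa) return v - kappa;
    if (v < -kappa) return v + kappa;
    return 0.0;
}

int main(void)
{
    const double a = 3.0, lambda = 1.0, rho = 1.0;  /* toy problem data (assumed values) */
    double x = 0.0, z = 0.0, u = 0.0;               /* u is the scaled dual variable     */

    for (int k = 0; k < 100; ++k) {
        x = (a + rho * (z - u)) / (1.0 + rho);     /* x-update: quadratic minimization  */
        z = soft_threshold(x + u, lambda / rho);   /* z-update: prox of the |.| term    */
        u = u + x - z;                             /* dual update (the "message")       */
    }
    printf("x = %.4f, z = %.4f\n", x, z);          /* both approach a - lambda = 2.0    */
    return 0;
}
```

The TWA described in the abstract, by contrast, attaches a weight of zero, one, or infinity to each such message rather than treating them all alike.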
As a first example, the TWA is used to solve very large (e.g. 100 by 100 instead of 9 by 9) Sudoku puzzles. Here, the infinite weights play a major role and allow the propagation of information through hard constraints. In the second example, the TWA is used to pack a very large number of hard disks in a box. Now the zero weights play the major role and speed up convergence several orders of magnitude compared to standard ADMM. In a third example, we show that the TWA can solve difficult multi-robot trajectory planning problems.
These examples demonstrate the significant advantages that the TWA has compared to other message-passing algorithms like belief propagation, while having very similar memory requirements. Finally, we conclude by showing that the zero-weight messages in the TWA also naturally enable the integration of higher-order knowledge into a lower-level parallel optimization algorithm.
(This talk describes work done jointly with José Bento and Nate Derbinsky.)
Jonathan Yedidia is a senior research scientist at Disney Research Boston. He holds a Ph.D. in theoretical statistical physics from Princeton (1990). From 1990 to 1993, he was a junior fellow at Harvard's Society of Fellows. From 1993 to 1997 he played chess professionally (he is an international master and was New England chess champion in 1992 and co-champion in 2012). In 1997 he joined the web start-up Viaweb, where he helped develop the shopping search engine that became Yahoo! Shopping. In 1998, he joined Mitsubishi Electric Research Labs (MERL), where he worked on probabilistic inference algorithms such as the famous belief propagation algorithm, and their applications in a variety of fields, including communications, signal processing and artificial intelligence. He left his position as distinguished research scientist at MERL in 2011 to join Disney Research, where he works on algorithms for artificial intelligence, optimization, computer vision, and machine learning.
Vol.33 No.2 May 1999
ACM SIGGRAPH
Dinosaurs - Before the Beginning
Mike MilneFrameStore
May 99 Columns
A couple of years ago - well, three actually - I was called to a pre-production meeting by one of our producers. Now these meetings are always a problem in our profession, because they always take place while you are awake. And the problem is that if you are awake, you're in one of several stages of increasing panic - depending on how many days there are to run before the delivery date of whatever animation you're working on - and the idea of taking time out to attend a meeting is about as welcome as the idea of Thanksgiving dinner is to a turkey. So, when a pre-prod meeting looms, the first question that the producer tends to be asked is "Can't you find someone else to do it? I'm trying to finish these Hypo-Energising Vitamin blobs for a shampoo commercial!" (or words to that effect). This occasion was no different, and I asked the question.
"No, I think this one's right up your street" was the producer's reply, followed by those honey-coated words that guarantee you'll drop everything and appear, bright-eyed and bushy-tailed, notebook akimbo and pencil poised, in the meeting room: "It's a big project." Ah! The Big Project! This could be the one which means you'll have to do three months' research in the Bahamas, perhaps, into - well, whatever it is that they do in the Bahamas that might conceivably be of use in a Big Project - not likely, I grant you, but nonetheless possible - or maybe it's the one where we get to recreate Ancient Rome for a costumed epic - naturally, we'll need to do some research for that one too ...visions of sipping a negrone in the twilit Piazza Navona ...
Let's Get Real
....Aah, but these dreams soon fade. Precisely how many times has a project required a computer animator to travel to exotic places to do research for a big project? Precisely none, in my case. Well, I suppose that's not quite true. I had to go to the east coast of Scotland one bitter January afternoon to look at a petroleum refining station (would we like to quote on a CG animation to explain the workings of the plant? We would. Would they like to pay what it would cost? They wouldn't). Also, I once spent a sweltering day in Madrid at a meeting about a proposed TV show - the day that the temperature hit 40 degrees in the shade (that's 105 for those of us who still cling to the old ways), and the production company didn't have any air conditioning (would we like to quote on producing a CG character in real time to interact with the actors on the show, five nights a week? We would. Would they like to pay what it would cost? You guessed it). So what is so attractive about a big project, if it doesn't entail lounging on a tropical beach for a few weeks? Well, I suppose it's that faint glimmer of a hope that a large project gives you the chance to Get It Right. Most of the careers of CG professionals working outside California (and maybe quite a few inside as well) are spent working on animations whose life cycles are relatively short - a TV commercial or broadcast title is typically a few weeks from start of preproduction to air date, and many times it can be much shorter. On one occasion I walked into a meeting on a Tuesday morning to be asked if I could produce an animation for a commercial that was to be aired on Friday. It was for one of those "Best Disco Dance Collection You've Ever Ever Heard In Your Life Ever" sort of CD compilations that the British music-buying public are so fond of. Were they happy for me to decide what the animation would consist of? They were. Did I accept the job? I did. Animated on Tuesday afternoon, rendered overnight, edited on Wednesday, dubbed and ready to play out on Thursday night, transmission on Friday. Was I proud of the job? Not particularly - but with that schedule, I wasn't complaining, and neither was the client.
Obviously this sort of thing isn't typical, but nevertheless the general demeanour of work for the CG professional tends to be hurried. A venerated TV design guru once said to me, back in the long lost days of analogue video and flying logos, "In television, everything is urgent but nothing is important." And it's precisely this urgency that, ultimately, can wear down a CG animator's pride in the profession. Everything is always being rushed out, there's no time to refine, and there's certainly no time to experiment - there's no time to Get It Right.
It Could Be The Big Project
So, a CG animator going to a Big Project meeting is as hopeful as a child on a birthday morning - maybe this is going to be the chance to really get to grips with something, to really hone it, to (if you'll pardon a third split infinitive) really Get It Right. And so it was with me as, bristling with smiles, notebooks, pencils and my obligatory mug of coffee, I breezed into the meeting room to meet the Big Project - or rather, its instigator - who turned out to be a producer from the British Broadcasting Corporation's Science department. His first words to me were something like: "About the dinosaurs in Jurassic Park..." and I didn't hear the rest of the sentence. It was drowned by the audible clang as my face fell, and I could see my producer glaring at me from the other side of the table - that "for God's sake be more positive" sort of glare - but I couldn't help it. This was obviously a JPBF job.
When the movie Terminator 2 came out at the turn of the decade, we had a rash of clients coming to meetings (at the company I then worked for) and starting off their briefs with: "You know that really good bit in Terminator 2 when..." and usually finishing off with "we've only got five grand and we want it by Thursday." This sort of thing came to be known as "T2BT" - Terminator 2 By Thursday - and was our code for an unrealistic project which the client couldn't afford and for which there wasn't enough time. Later, when Jurassic Park came out, there was a similar stream of requests from hopeful clients about massively advanced CG projects for no money in double-quick time ("You know that really good bit in Jurassic Park when you first see the brachiosaur..."). I dubbed these "JPBF" because they were usually a bit larger in scope than T2BT's, and had a little more time - hence "Jurassic Park By Friday."
And now, here I was with a textbook example of a JPBF taking shape before me, and I could feel my head shaking of its own accord as I started to say (in spite of the weapons-grade glare coming from my producer) "No, I'm sorry, whatever it is we can't do it, not for that kind of budget and deadline..." but the words died on my lips as I began to fall victim to the lure of the Big Project, as I started (blind fool that I was!) to listen to the man from the BBC.
Tim Haines (for that was his name - it still is, actually) is a producer for Horizon, the major U.K. science documentary series that has been running in the U.K. ever since I can remember. In fact, it was Horizon that encouraged me in my major career shift many years ago, when I moved from traditional graphic design into CG - I had seen a Horizon special on the brand-new field of computer imagery, which featured an interview with a certain Dr. James Blinn, who was explaining the even-newer technique of bump-mapping with the aid of a CG goblet, and the programme convinced me that my "changing horses in mid-stream" was not as foolhardy as it seemed to my friends and family at the time. Tim explained that it was his intention to make six half-hour documentaries, with the working title Walking with Dinosaurs, and the series would cover the natural history of dinosaurs. "Which particular bit of the history of dinosaurs?" I asked. "All of it," said Tim. "Ah. I see. And how much of that is computer generated?" "All of it," was the reply, "well, nearly all of it, anyway. And it must look like Jurassic Park - people must believe they're looking at the real thing. Can it be done?"
I tried to say no - I really tried: "No, I don't think... well, that is.. it would be very expensive..." I lamely stuttered, but Tim charged ahead at full steam. He produced a compilation of some wildlife footage, which we examined frame-by-frame; he explained that wildlife documentaries were filmed in 16mm, which is grainier than the 35mm used for feature films like Jurassic Park, so that the texture work need not be as detailed; that the grammar of wildlife camera work was very different from Hollywood's, and we could use that to our advantage; that many scenes were shot in low light conditions, and we could use that to help us; that there were plenty of present-day animals on which to base our animation; that dinosaurs had neither fur nor feathers (which he knew would be problematic for CG), and that therefore there were no major technical breakthroughs needed; and furthermore... there were many furthermores.
I wasn't totally convinced, but at least it wasn't really a JPBF. I mean, there was plenty of time - two years, apparently. And there would be a budget of a sort - not a Jurassic budget, by any means (this was to be a documentary series, after all), but more than anyone had spent on a documentary before - and furthermore... well, furthermore, there was that tempting glitter of the Big Project hovering over it all. It's enough to say that I left the meeting a couple of hours later having promised (to my producer's horror) to do a test for Tim to look at when he returned from filming frozen mummies above the snow-line in the Peruvian Andes, in three months' time. He was going to approach some other companies, and he would review the tests in October, with a view to starting a pilot sequence at the beginning of the following year.
And later that night, all I could think of was: "What if someone else gets it? What if, in two years' time, I turn on the television and see those dinosaurs walking across the screen, and I'm still animating Hypo-Energising Vitamin blobs? How could I ever live with myself?"
Research Begins
The sun was shining (quite a rare event in London) the following Saturday as I walked down Exhibition Road to my favourite building in London, the Natural History Museum. On the weekend you have to fight your way through squadrons of schoolchildren to get to the entrance, but it's worth it. The building was designed by Alfred Waterhouse in the last quarter of last century, and would rival many cathedrals in Europe - and the inside is just as lofty, every pillar carved with animals and birds and leaves, every arch a monument to the Victorians' sense of dominion over nature, their certainty that man was at the very pinnacle of earthly life - and that the dinosaurs, literally 'terrible lizards,' were right at the other end of the scale - not far above the blobs of amoebic jelly that had so recently been revealed as everybody's ancestor. They had the idea that the great reptiles had brains that were so small, and so far from the rest of their bodies, that the poor beasts could be eaten for breakfast by a passing predator and not know about it until after lunch. Of course that view has changed in the last couple of decades, and I was determined to catch up with everything that was new and exciting in paleontology. After an intensive session of photography in the dinosaur hall I went to the museum bookshop and bought every plastic dinosaur model and every dinosaur book that they stocked. When I flashed the company plastic at the cashier, I got that knowing look from her that said "You're really buying these for the kids, aren't you?" but I didn't care, because I really was buying them for a kid - and that kid was me. As a child, I was never a dino freak - and in what I laughingly call my adulthood, I have been more preoccupied with living creatures than the extinct variety, so I had a lot of catching up to do. The rest of the weekend was spent reading and playing with plastic dinosaurs, and trying to figure out how to do about 10 Jurassic-Parks-worths of CG effects for about one-tenth of the Jurassic Park effects budget. I bought a season ticket to London Zoo and found time to drop in at odd moments, making friends with a Sri Lankan elephant called Ghita and a rhino whose name I didn't know, but I called Ronnie. He didn't seem to mind, and would come trotting over to the corner of the compound where I could just reach down and scratch his nose for him. I couldn't help noticing the similarity between Ronnie and my plastic triceratops, and I soon realised that he would make the ideal subject for my test. I also soon realised that the summer had nearly gone, and that I had to complete the test before Tim returned from Chile - it's amazing how fast a deadline approaches when you think you have plenty of time! Ronnie's walk - with that surprisingly prissy front-hoof motion common to most ungulates, caused by the automatic snapping-back of the tendon as the hoof leaves the ground - became a triceratops walk, a cheesy child's toy became a CG Triceratops body, and the whole thing was composited over footage of the Serengeti plain at dawn. Add a little savannah soundtrack, and a foghorn note as the beast lifts its head, and voila! - a test. Green Light Means Go
A couple of months later, Tim called to say that he had a green light for the pilot sequence, and did we want to do it? We did. And could we start in January? We could. It was only then, when we were going through the groundwork for the pilot sequence, that I realised that I had stumbled into the biggest JPBF I'd ever come across, and what's more, I'd actually agreed to do it. The statistics reveal the full story: the pilot sequence was six minutes long, consisting of about 60 shots. Some of these would be pure live-action background (establishing shots, insects and vegetation) and some would be animatronic (close-up heads), but at least 45 shots would be CG. The budget would cover a total CG animation team of ... well, one, actually. And even though we had three months to do it, that really isn't a JPBF, relatively speaking, that's more of a JPBT - Jurassic Park By Tonight!
Luckily for me, my colleague Andrew Daffy (who had helped on the test) came to the rescue and volunteered to share the burden by working evenings and weekends, while during the day he animated Hypo-Energetic blobs of industrial vegetable oil masquerading as butter. He took on the task of modeling, texturing and animating Rhamphorynchus (a pterosaur) and Cetiosaurus (a sauropod), while I handled Liopleurodon Ferox (a pliosaur), Cryptoclidus (a plesiosaur), and Eustreptospondylus (a theropod). Yes, we did invent shorter names, but I'm afraid they weren't very imaginative.
One of the most satisfying things about this sort of project is meeting people who have made a lifetime's study of a subject, and who are really thrilled by it, and who want to tell you about it. Let's face it, that's a rare attitude among the creators of Hypo-Energetic blobs, who are probably more thrilled by the thought of a second Porsche and a weekend retreat somewhere in the Caribbean. Come to think of it, they want to tell you all about that, too, but it's not quite the sort of enthusiasm I had in mind. While making the pilot sequence, we exchanged a stream of email with our designated paleontologist from Portsmouth University, whose particular field of expertise covered the pliosaur and plesiosaur - both of them sea-dwelling creatures that had returned (or rather, their common ancestor had) to the ocean from a land-based existence, and who had readapted to an aquatic environment. Their peculiar mode of locomotion - a sort of underwater flying - has completely disappeared from the planet, leaving only the faintest of echoes in the forearm movement of seals and turtles. After reading through some fairly technical papers, and revisiting Bernoulli's theorem, I felt fairly competent to animate it - and was quite gratified at the result - it actually looked as if I'd Got It Right at last. I wish I could say the same about the theropod walking (another paleontological hot potato - did they run, or just walk very fast?), but unfortunately I can't - it was to be another year before somebody Got that one Right.
After a lot of hard work from some very dedicated people, the pilot sequence was finished, on time. And it's amazing how sound effects, music and a well-written commentary can cover up the shortcomings in a theropod's walk cycle! We showed the pilot to our paleontological expert; after seeing the section about the plesiosaur, he said: "I've been teaching that for many years. But this is the first time I've seen it with my eyes. Thank you." I think I saw a tear in his eye - there certainly were tears in mine. It was, quite simply, the most moving moment I've experienced in all my professional life.
Did the pilot enable Tim Haines to go to Cannes and raise the coproduction funds? It did. Did the pilot get the green light for the series? Absolutely. And did we Get It Right? No, not all the time. But we weren't complaining - and the client wasn't either.
Pilot Complete; Bring on The Big Project
The pilot was completed exactly two years ago as I write, and the go-ahead for the series marked the beginning of the real project, the Big Project. And just how big is it? Well, if you measure it by number of people working on it, it's quite modest compared to the U.S. equivalent. We have about 40 people on Walking With Dinosaurs, of whom less than half are directly engaged in CG animation or CG software development. Our project will end up with around 900 CG shots, which is only half the number I've heard suggested for the CG shots in Star Wars:The Phantom Menace, and of course we're working at video resolution (albeit at 16:9 anamorphic, which will shortly be the new European standard) which is considerably less expensive, in both rendering times and disk usage, than 35mm film. If we compare budgets - always a tricky one, that, since nobody likes to admit exactly how much something is costing - I would guess that our digital effects budget is somewhere between one-twentieth and one-sixtieth of the budget of Disney's current Dinosaur project (based entirely on rumour, which is notoriously unreliable). If we have a claim to Big Project status, it must be the sheer quantity of animation that we are producing - nearly three hours, or more than a quarter of a million frames of keyframed, photo-real animation. Well, the story of the series itself is far bigger than this article. There's so many things I'd like to tell you about it, now that we've all been immersed for a year and a half; I'd like to tell you about how Sharon turned a gutted floor into a dinostore, and how we interviewed for months to find our kindred spirits; I'd like to be able to tell you about the way Carlos and Virgil cracked the theropod walk, or about the way that we all stood in the rain and measured the bounce of an elephant's foot in Woburn Wildlife Park. I'd like to be able to tell you about how David, Stuart and Sophie built 40 dinosaurs, and how Daren painted an 18,000-pixel-wide diplodocus skin, and how we sat in an underground movie theatre for two days and listened to paleontologists from around the world argue about how dinosaurs moved; I'd like to tell you about Rich's iberosornis animation, which the client swore was a live-action shot, or how Marco led the Dinostore racing team to victory and still finished all his shots, or how Richard made the plesiosaurs live on the land; how Max's chorus line was cured of synchronised blinking, and how Alec added wobble to the longest neck ever seen on Earth, and how Tim tried to hide the photo of his baby sauropod. Most of all, I'd like to tell you how much we all get off on the simple process of trying to Get It Right. But that's another article, to be written much later when the dust has settled. Right now, the animation team has reached the last episode, and the end is - all too clearly - in sight. And, even though the pace is frenetic, and the deadline rushing towards us with all the inevitability of an asteroidal extinction event, I detect a certain calmness of purpose, a certain steadiness of eye, about the people working on the project. Did we get a chance to experiment, to refine, to hone? We did. Did we get to grips with the Big Project? We did, I think. And did we Get It Right? Well, that's for you to decide later this year. I hope we did.
FrameStore
9 Noel Street
London W1V 4AL
Tel: +44-171-208-2600
Mike Milne began his career in the mid-sixties, working for the Centre for Advanced Study of Science in Art - an experimental venture funded by the philanthropist Erica Marx in 1966. When the Centre was wound up, Mike joined the ranks of the drop-out generation and went to live as an artist and beachcomber in the deserts of Southern Spain. In the following decade he found himself in many diverse occupations, including two years as the warden of a field study centre on an uninhabited island in the Firth of Clyde, and a season as a zoo inspector for the Universities Federation for Animal Welfare. It was during this time that he developed his passionate interest in wildlife and all aspects of natural history and evolutionary theory. Returning to England in the mid-seventies he joined an industrial graphic design studio at Smiths Industries plc, eventually becoming Corporate Publicity Design Manager in charge of corporate graphics for 30 companies in the group. After a chance meeting with a computer enthusiast in the late '70s, Mike decided that the future of graphics lay in computers, so he took a night school course in computer programming at Middlesex Polytechnic. This led to a job offer at Research Recordings (now Air TV) as Head of Graphics, operating one of the first commercially available computer animation packages - the now-defunct Via Video system.
While at RR, Mike devised a system for combining computer-generated animation with live-action footage that was used to great effect in music videos (such as Culture Club's Money-Go-Round) and led to a D&AD Silver Award for Outstanding Television Graphics for the first series of Spitting Image, a political satire show that ran for 10 years. In 1984, Mike was asked to join a small team at Electric Image to use the new Abel Image Research software running on one of the largest computer animation installations then operating in London. He became a shareholder and Director of the company, and his work was shown at the 13th international SIGGRAPH conference in 1986. Subsequently, as Director of Production, he was involved in the production of many innovative CG pieces, including the U.S. science documentary series After the Warming with James Burke, and the TV campaign for the privatisation of the U.K. Water Authority.
In 1990 he joined The Bureau as Head of the 3D department, which produced a string of successful CG commercials, before being asked to form the new computer animation department at FrameStore in 1992. The department was a success from the outset, notching up awards for the BBC's Morph and Griff campaign, and winning the London Effects and Animation Festival gold awards for several years in succession - most recently for the title sequence of the latest James Bond film, Tomorrow Never Dies. Mike is a regular speaker at computer animation conferences in Europe, and lectures at Bournemouth University, Mid-Glamorgan College and the London Animation Studio at Central St Martins.
Mike is currently working on a two-year project to create six wildlife films for the British Broadcasting Corporation, entitled Walking with Dinosaurs. The films will cover the natural history of the age of reptiles in documentary style, featuring photoreal CG animation set into live-action footage that is as close as possible, botanically, to the environment of the time. Mike sees Walking with Dinosaurs as both the greatest challenge of his career, and the ideal project to combine his 20 years' experience of computer-generated animation and a lifelong passion for the study of wildlife - both the living and extinct varieties.
This quarter's issue is dedicated to our columnists, who were given even more space than usual to air their views.
ASIS&T - The Information Society for the Information Age
Bulletin, December 2008/January 2009
Open Source Software in Libraries
by Eric Lease Morgan, guest editor for special section
Eric Lease Morgan is with the University of Notre Dame and can be reached via email at emorgan<at>nd.edu
It is a privilege and an honor to be the guest editor for this special issue of the
Bulletin of the American Society for Information Science and Technology on open source software. In it you will find a number of articles describing open source software and how it has been used in libraries.
Open source software or free and open source software is defined and viewed in a variety of ways, and the definition will be refined and enriched by our authors. However, very briefly, for those readers unfamiliar with it, open source software is software that is distributed under one of a number of licensing arrangements that (1) require that the software's source code be made available and accessible as part of the package and (2) permit the acquirer of the software to modify the code freely to fit their own needs provided that, (3) if they distribute the software modifications they create, they do so under an open source license. If these basic elements are met, there is no requirement that the resulting software be distributed at no cost or non-commercially, although much widely used open source software such as the web browser Firefox is also distributed without charge.
In This Issue
The articles begin with Scot Colford's "Explaining Free and Open Source Software," in which he describes how the process of using open source software is a lot like baking a cake. He goes on to outline how open source software is all around us in our daily computing lives.
Karen Schneider's "Thick of the Fray" lists some of the more popular open source software projects in libraries and describes how these sorts of projects would not have been nearly as feasible in an era without the Internet.
Marshall Breeding's "The Viability of Open Source ILS" provides a balanced comparison between open source software integrated library systems and closed source software integrated library systems. It is a survey of the current landscape.
Bob Molyneux's "Evergreen in Context" is a case study of one particular integrated library system, and it is a good example of the open source adage "scratching an itch."
In "The Development and Usage of the Greenstone Digital Library Software," Ian Witten provides an additional case study, but this time of a digital library application. It is a good example of how many different types of applications are necessary to provide library service in a networked environment.
Finally, Thomas Krichel expands the idea of open source software to include open data and open libraries. In "From Open Source to Open Libraries," you will learn that many of the principles of librarianship are embodied in the principles of open source software. In a number of ways, librarianship and open source software go hand-in-hand.
What Is Open Source Software About?
Open source software is about quite a number of things. It is about taking more complete control over one's computer infrastructure. In a profession that is a lot about information, this sort of control is increasingly necessary. Put another way, open source software is about "free." Not free as in gratis, but free as in liberty. Open source software is about community - the type of community that is only possible in a globally networked computer environment. There is no way any single vendor of software will be able to gather together and support all the programmers that a well-managed open source software project can support. Open source software is about opportunity and flexibility. In our ever-dynamic environment, these characteristics are increasingly important.
Open source software is not a panacea for libraries, and while it does not require an army of programmers to support it, it does require additional skills. Just as all libraries - to some degree or another - require collection managers, catalogers and reference librarians, future-thinking libraries require people who are knowledgeable about computers. This background includes knowledge of relational databases, indexers, data formats such as XML and scripting languages to glue them together and put them on the web. These tools are not library-specific, and all are available as open source.
Through reading the articles in this issue and discussing them with your colleagues, you should become more informed regarding the topic of open source software. Thank you for your attention and enjoy.
Second Life Youtube
LIFE ON THE WEB
The secret of Apple In 2010, earnings from Apple Inc.. have exceeded those of Microsoft. However, the firm has repeatedly been on the brink of bankruptcy and has a time had to rely on help from Microsoft to redress its finances. It changed several times CEO too, up to 1997.
In 1983, John Sculley, who had made a great success at the head of PepsiCo, is hired to run the Apple firm. He increased the incomes from 800 to 8000 million a year. He will lead the company until 1993.
In 1986, a conflict occurs between Sculley and Steve Jobs, each one trying to push the other towards the exit. The board did not trust Jobs, too unstable to run the business and he was relieved of his duties.
It departed with some engineers to create Next. Sculley developped the Newton, a personal assistant that make the firm losing more money than it will bring it. He will commit more errors, including the choice of PowerPC processors rather than Intel, and poorly designed computers. His successor will be worse, and will give up the license of the Mac operating system to third party manufacturers. The system also becomes insufficient for new hardware.
The company, unable to produce a new operating system is on the verge of bankruptcy in 1997. She takes Steve Jobs has his head and with it the operating system from Next, based on Unix.
When he arrived and that he was presented the company's products, he was bewildered by the number of different models proposed. He then take a marker and draw on a whiteboard the diagram below: now Apple will manufacture only 4 computers. Since the arrival of Jobs, the company has always been successful with all the products, including the iMac, iPod, iPhone and iPad. What Jobs has magical, which means that all products receive public support?
The answer is given by John Sculley, in an interview for the online magazine CultOfMac.com.
Incidentally, we see that many of the qualities that Sculley attributes in his quotations to Jobs, who knows how to infuse them to the products, are also qualities that belong to Google products.
And according to Sculley, things have not changed since the early days of the firm:
Having been around in the early days, I don’t see any change in Steve’s first principles — except he’s gotten better and better at it. The Jobs' methodology
His methodology is totally confused with the operation of the Apple company.
Vision of the future
He was a person of huge vision. He believed that the computer was eventually going to become a consumer product. That was an outrageous idea back in the early 1980's. He felt that the computer was going to change the world. The most recent example of this ability to know what people might need is the Tablet PC. At the end of our article, the links show the reaction of some journalist to the announcement of this new object: "People do not need a Tablet PC." Sales prove otherwise.
The secret of Apple's success according to Steve Jobs: Just a few products.
A limited product range
Steve Jobs gave this advice to Larry Page, the new Google CEO, in April 2011: a company must have only five products. This is the case at Apple; here is the list: iPod, iPad, iPhone, Mac Pro and the Mac Air laptop. It was Jobs who drew the diagram at right for the use of his engineers.
The advice was heeded, since Google has significantly reduced the number of its services and software. But is that really what makes Apple successful?
Unlike Steve Jobs's firm, Samsung offers a wide range of smartphones, each offering a particular feature: one a stylus, another an integrated pico projector, and so on.
In April 2012, Samsung became the leading smartphone manufacturer, ahead of Apple and Nokia. Maybe Apple has only a few products because it is easier for one man to supervise them, and Steve Jobs wanted to participate in the design of all of its products.
The great skill that Steve has is he's a great designer. Everything had to be beautifully designed even if it wasn't going to be seen by most people. He was not a designer but a great systems thinker. These two quotations, taken from different places in the transcript of the interview, seem contradictory. In fact, what Sculley means is that Jobs wanted to make beautiful objects and appreciated beauty in everything, but did not create it himself. What he knew how to create was a system to produce them.
User experience
Steve in particular felt that you had to begin design from the vantage point of the experience of the user.
People in product marketing in those days asking people: What did they want? How can I possibly ask somebody what a graphics-based computer ought to be when they have no idea what a graphic based computer is? The iPod is a perfect example of Steve’s methodology of starting with the user and looking at the entire end-to-end system. The usability of Apple products, and primarily, that of the Mac operating system, has always been the strong point of the firm. We understand why.
Precision and perfectionism
He was also a person that believed in the precise detail of every step.
He was a perfectionist even from the early days. And he was constantly forcing people to raise their expectations of what they could do. So people were producing work that they never thought they were capable of. This observation from Sculley is fully confirmed by all the books written about the history of Apple: from the first days, when Steve Wozniak was working on the Apple II, Jobs encouraged him to continually add new features and improve the hardware to make it the best.
What makes Steve's methodology different from everyone else's is that he always believed the most important decisions you make are not the things you do – but the things that you decide not to do. He simplifies complexity. Reducing an object to its smallest form is what we call optimization in programming: rewriting a program to gradually make it simpler and more efficient. A legacy of what Jobs learned as a programmer?
Minimalism
Here the aim is to simplify even further: make things smaller and interfaces simpler.
It also makes us think about how Facebook gained the upper hand over MySpace. While MySpace multiplied steps to show more ads and garner more revenue, Facebook made the interface simpler. The second took the members of the first.
Part 2: The secret of Apple by Steve Jobs himself. Part 3: Steve Jobs vs Bill Gates.
Reference
John Sculley On Steve Jobs, The Full Interview Transcript.
Saying no
The secret of Apple is also saying no to many products that are not perfect. The article quotes Saint-Exupéry:
"A designer knows he has achieved perfection not when there is nothing more to add, but when there is nothing left to take away." | 计算机 |
"Green screen" redirects here. For other uses, see Green screen (disambiguation).
For the electronic music project, see Chroma Key. For musical tonality depending on key, see Key coloration. For Cromer Quay, see Cromer.
Today's practicality of green-screen compositing is demonstrated by Iman Crosson in a self-produced YouTube video.
Top panel: A frame of Crosson in full-motion video as shot in his own living room.
Bottom panel: Frame in the final version, in which Crosson, impersonating Barack Obama, "appears" in the White House's East Room.[1]
Chroma key compositing, or chroma keying, is a special effects / post-production technique for compositing (layering) two images or video streams together based on color hues (chroma range). The technique has been used heavily in many fields to remove a background from the subject of a photo or video – particularly the newscasting, motion picture and videogame industries. A color range in the top layer is made transparent, revealing another image behind. The chroma keying technique is commonly used in video production and post-production. This technique is also referred to as color keying, colour-separation overlay (CSO; primarily by the BBC[2]), or by various terms for specific color-related variants such as green screen, and blue screen – chroma keying can be done with backgrounds of any color that are uniform and distinct, but green and blue backgrounds are more commonly used because they differ most distinctly in hue from most human skin colors. No part of the subject being filmed or photographed may duplicate a color used in the background.[3]
It is commonly used for weather forecast broadcasts, wherein a news presenter is usually seen standing in front of a large CGI map during live television newscasts, though in actuality it is a large blue or green background. When using a blue screen, different weather maps are added on the parts of the image where the color is blue. If the news presenter wears blue clothes, his or her clothes will also be replaced with the background video. A complementary system is used for green screens. Chroma keying is also used in the entertainment industry for special effects in movies and videogames. The advanced state of the technology and much commercially available computer software, such as Autodesk Smoke, Final Cut Pro, Pinnacle Studio, Adobe After Effects, and dozens of other computer programs, makes it possible and relatively easy for the average home computer user to create videos using the "chromakey" function with easily affordable green screen or blue screen kits.[original research?]
In filmmaking, a complex and time-consuming process known as "travelling matte" was used prior to the introduction of digital compositing. The blue screen method was developed in the 1930s at RKO Radio Pictures. At RKO, Linwood Dunn used an early version of the travelling matte to create "wipes" – where there were transitions like a windshield wiper in films such as Flying Down to Rio (1933). Credited to Larry Butler, a scene featuring a genie escaping from a bottle was the first use of a proper bluescreen process to create a traveling matte for The Thief of Bagdad (1940), which won the Academy Award for Visual Effects that year. In 1950, Warner Brothers employee and ex-Kodak researcher Arthur Widmer began working on an ultraviolet travelling matte process. He also began developing bluescreen techniques: one of the first films to use them was the 1958 adaptation of the Ernest Hemingway novella, The Old Man and the Sea, starring Spencer Tracy.[4]
One drawback to the traditional traveling matte is that the cameras shooting the images to be composited cannot be easily synchronized. For decades, such matte shots had to be done "locked-down", so that neither the matted subject nor the background could shift their camera perspective at all. Later, computer-timed, motion-control cameras alleviated this problem, as both the foreground and background could be filmed with the same camera moves. Petro Vlahos was awarded an Academy Award for his refinement of these techniques in 1964. His technique exploits the fact that most objects in real-world scenes have a color whose blue-color component is similar in intensity to their green-color component. Zbigniew Rybczyński also contributed to bluescreen technology. An optical printer with two projectors, a film camera and a 'beam splitter', was used to combine the actor in front of a blue screen together with the background footage, one frame at a time. During the 1980s, minicomputers were used to control the optical printer. For the film The Empire Strikes Back, Richard Edlund created a 'quad optical printer' that accelerated the process considerably and saved money. He received a special Academy Award for his innovation.
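A rough way to see why the blue-green relationship matters is the colour-difference style of matte: the backdrop is the only region where the blue component greatly exceeds the green (and red) components, so that difference can drive the transparency. The sketch below is a generic textbook form of such a matte, not Vlahos's patented formulation, and the gain constant is an assumption.

import numpy as np

def colour_difference_matte(img, gain=1.0):
    """Crude colour-difference matte for a blue-screen plate.

    `img` is a float array of shape (H, W, 3) in the range 0..1, channels R, G, B.
    Foreground objects tend to have a blue component close to (or below) their
    green component, while the backdrop has blue far above green, so B minus the
    larger of G and R drives the matte.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    # alpha stays near 1.0 over the subject and falls towards 0.0 over the backdrop.
    alpha = np.clip(1.0 - gain * (b - np.maximum(g, r)), 0.0, 1.0)
    return alpha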
For Star Trek: The Next Generation, an ultraviolet light matting process was proposed by Don Lee of CIS and developed by Gary Hutzel and the staff of Image G. This involved a fluorescent orange backdrop which made it easier to generate a holdout matte, thus allowing the effects team to produce effects in a quarter of the time needed for other methods.[5]
Meteorologists on television often use a field monitor, to the side of the screen, to see where they are putting their hands against the background images. A newer technique is to project a faint image onto the screen.
Some films make heavy use of chroma key to add backgrounds that are constructed entirely using computer-generated imagery (CGI). Performances from different takes can even be composited together, which allows actors to be filmed separately and then placed together in the same scene. Chroma key allows performers to appear to be in any location without even leaving the studio.
Computer development also made it easier to incorporate motion into composited shots, even when using handheld cameras. Reference-points can be placed onto the colored background (usually as a painted grid, X's marked with tape, or equally spaced tennis balls attached to the wall). In post-production, a computer can use the references to compute the camera's position and thus render an image that matches the perspective and movement of the foreground perfectly. Modern advances in software and computational power have even eliminated the need to accurately place the markers - the software figures out their position in space (a disadvantage of this is that it requires a large camera movement, possibly encouraging modern film techniques where the camera is always in motion).
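Conceptually, the painted grid or tennis balls give the solver a set of known 3D points on the backdrop and their tracked 2D positions in each frame, from which the camera's pose can be recovered. The fragment below is a minimal sketch of that step using OpenCV's solvePnP; the marker layout, pixel coordinates and camera intrinsics are all invented for illustration.

import numpy as np
import cv2

# Known 3D positions of four taped marks on the backdrop wall, in metres.
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]], dtype=np.float32)

# Where those marks were tracked in one frame, in pixels (illustrative values).
image_points = np.array([[612.0, 488.0],
                         [905.0, 470.0],
                         [918.0, 212.0],
                         [598.0, 230.0]], dtype=np.float32)

# Simple pinhole intrinsics; the focal length and principal point are assumptions.
camera_matrix = np.array([[1200.0, 0.0, 960.0],
                          [0.0, 1200.0, 540.0],
                          [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
# rvec and tvec give the camera's pose relative to the wall for this frame, which
# a renderer can use to move the CG background in step with the live camera.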
Process
Film set for The Spiderwick Chronicles, where a special effects scene using bluescreen chroma key is in preparation.
The principal subject is filmed or photographed against a background consisting of a single color or a relatively narrow range of colors, usually blue or green because these colors are considered to be the furthest away from skin tone.[3] The portions of the video which match the preselected color are replaced by the alternate background video. This process is commonly known as "keying", "keying out" or simply a "key".
Processing a green backdrop
Accessibility seminars often begin with a quote by Tim
Berners-Lee: "The power of the web is in its universality. Access
by everyone regardless of disability is an essential aspect." It's
an old quote, but the web's inventor offered fresh ideas
yesterday.24 May 2006
E-commerce and the internet
Professor Sir Tim Berners-Lee presents his vision of the web's future at the 15th International World Wide Web Conference in Edinburgh today. At a press conference yesterday, he acknowledged that accessibility is failing the "essential aspect" he described back in 1997 when announcing the launch of the W3C's Web Accessibility Initiative (or WAI, pronounced 'way'). "That is a concern," he said of today's generally poor standard of web accessibility, when OUT-LAW asked for his opinion. Berners-Lee, who has served as W3C's Director since it was founded in 1994, pointed out that his WAI team is working hard on a new set of guidelines to address accessibility. Version 2.0 of the Web Content Accessibility Guidelines, or WCAG, has been long awaited and the working draft is near completion: a 'last call' for public comment closes on 31st May. Berners-Lee is not suggesting that WCAG 2.0 will present a quick-fix for web accessibility; but it should answer some of the criticisms of the current version. One such criticism is that WCAG 1.0 is difficult to apply to technological developments on the web. Berners-Lee seemed to understand this concern. "I was having a conversation with someone the other day about video blogging," he said. "Does a video blogger need captioning? It's not easy to do." So he suggested a novel approach "What about community captioning? The video blogger posts his blog – and the web community provides the captions that help others." This solution evokes the concept of Web 2.0, a collective term for services that let people collaborate and share information online. The term Web 2.0 has also been used as a synonym for the Semantic Web – something that Berners-Lee has been writing about for many years. His enthusiasm for the Semantic Web was obvious at yesterday's press conference – and again, he sees potential in it for web accessibility. He predicted great things for the Semantic Web in his 1999 book Weaving the Web. It describes an evolution in which machines become capable of analysing all the data on the web: the content, links and transactions between people and computers. "A 'Semantic Web,' which should make this possible, has yet to emerge," he wrote, "but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machine, leaving humans to provide the inspiration and intuition." This week's four-day conference is packed with talks and debates on the Semantic Web by academics and industry experts from around the world, addressing 1,500 delegates. Berners-Lee's vision is becoming a business case. He talked yesterday of websites "marshalling the community" to improve accessibility. He continued: "The Semantic Web lets you build a browser that is optimised for a particular disability." A browser of the future would understand the raw data it is dealing with, rather than just displaying it. It would know how to make it accessible. Unfortunately, time did not allow him to elaborate. When OUT-LAW asked whether he thinks further regulation is necessary to improve accessibility, Berners-Lee declined to take sides. Diplomatically, he pointed out that regulation is not his field of expertise. "What I would say is that everyone should reference the same guidelines," he said. His point is that W3C has written the de facto standard; but governments and non-governmental organisations have seen fit to write their own versions. 
"You can't design a site and try to make it compete with 152 different sets of guidelines from 152 different states," he said. "Keeping the standards homogenous is really important." In short, everyone should follow WCAG. Event: Website Accessibility 2006 Edinburgh, 13th June OUT-LAW has teamed up with Parallel 56 and User Vision to organise a national conference on best-practice public sector website accessibility. Hear from expert speakers on PAS 78, WCAG 2.0 and more. Full details at Parallel 56's website
(the Edinburgh conference)
Article from 2001, co-authored by Berners-Lee, on the Semantic Web
Tim's quote on accessibility, in its original context from 1997
W3C's WAI
Client History: Mondo Publishing - www.MondoPub.com
Mondo came to Cuesta back in 2001 with high anxiety about selecting a new Web developer. They had history with a previous developer who simply could not keep up with their needs and eventually left Web development for another career, which left them hanging in the lurch. Their first concern was to find a developer who was respected in the publishing industry, one who could keep up with emerging trends and most importantly, one who would be around for many years to come. By then, Cuesta had been in business for nearly 10 years, and we had worked with some of the biggest names in the industry. The launch of Mondo's first Cuesta-built site was a great success and stood the test of time. When they came to us nearly 10 years later with a concept for a new design and a desire to add features and functionality for products that didn't exist back in the day, we were able to change the look, feel and functionality without having to redevelop the site.
"It was extremely important for us to seek out Cuesta Technologies, the time-honored experts in e-commerce and Web site development for our market."
Randi Machado, Former Marketing Director, MONDO Publishing
N-472, 2008-09
New York University Computer Science Professor Chris Bregler has received a $1.47 million grant from the U.S. Office of Naval Research (ONR) to enhance his laboratorys previous work on motion capture and computer vision. The goal of the project is to train a computer to recognize a person based on his or her motions and to identify, through motion capture and computer vision, an individuals emotional state, cultural background, and other attributes. Motion capture records movements of individuals, who wear suits that reflect light to enable the recording of their actions. It then translates these movements into digital models for 3D animation often used in video games and movies, such as The Polar Express and Iron Man. The more sophisticated computer-vision technology, by contrast, allows for the tracking and recording of these movements straight from video and without the use of motion capture suits. Bregler and his colleagues at NYUs Courant Institute of Mathematical Sciences have already developed a method to identify and compare the body language of different speakers-a trait they call body signatures. Titled GreenDot, the project employs motion capture, pattern recognition, and Intrinsic Biometrics techniques. This fall, their results showed that actress Tina Fey, who was widely praised for imitating Republican Vice-Presidential nominee Sarah Palins voice and appearance, also effectively channeled the Alaska governors body language. The research team-the NYU Movement Group-has also recently developed computer vision techniques that enable the researchers to capture and analyze large amounts of YouTube videos and television broadcasts. In collaboration with Peggy Hackney, a movement expert and a faculty member in the Department of Theater, Dance, and Performance Studies at the University of California, Berkeley, the research group has designed a new system that can automatically classify different motion style categories and find similarities and dissimilarities among body signatures. Under the ONR grant, Bregler and his team will seek to bolster their previous work in two areas. They will develop multi-modal sensors in order to capture subtle facial movements, full body motion, and multi-person interactions and they will create a computer infrastructure with the capacity to house a database allowing researchers to data-mine, discover, and model the complex and wide variety of different human activities and styles. For more about Breglers motion capture work, go to http://movement.nyu.edu/experiments/. This Press Release is in the following Topics:
Graduate School of Arts and Science, Research
NYU researchers in motion capture suits.
Release date: September 2006
Johnny Hughes has announced the availability of a fourth update to CentOS 4 series, a Linux distribution built from source RPM packages for Red Hat Enterprise Linux 4: "The CentOS development team is pleased to announce the release of CentOS 4.4 for i386. This release corresponds to the upstream vendor U4 release together with updates through August 26th, 2006. CentOS as a group is a community of open source contributors and users. Typical CentOS users are organisations and individuals that do not need strong commercial support in order to achieve successful operation. CentOS is 100% compatible rebuild of the Red Hat Enterprise Linux, in full compliance with Red Hat's redistribution requirements. CentOS is for people who need an enterprise class operating system stability without the cost of certification and support.
CentOS is a freely available Linux distribution which is based on Red Hat's commercial Red Hat Enterprise Linux product. This rebuild project strives to be 100% binary compatible with the upstream product, and within its mainline and updates, to not vary from that goal. Additional software archives hold later versions of such packages, along with other Free and Open Source Software RPM based packagings. CentOS stands for Community ENTerprise Operating System.
Red Hat Enterprise Linux is largely composed of free and open source software, but is made available in a usable, binary form (such as on CD-ROM or DVD-ROM) only to paid subscribers. As required, Red Hat releases all source code for the product publicly under the terms of the GNU General Public License and other licenses. CentOS developers use that source code to create a final product which is very similar to Red Hat Enterprise Linux and freely available for download and use by the public, but not maintained or supported by Red Hat. There are other distributions derived from Red Hat Enterprise Linux's source as well, but they have not attained the surrounding community which CentOS has built; CentOS is generally the one most current with Red Hat's changes.
CentOS's preferred software updating tool is based on yum, although support for use of an up2date variant exists. Each may be used to download and install both additional packages and their dependencies, and also to obtain and apply periodic and special (security) updates from repositories on the CentOS Mirror Network.
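As a concrete illustration, the routine update cycle described here can be scripted. The sketch below simply shells out to yum and assumes a stock CentOS 4.x host with the default repositories configured; the package name at the end is only an example.

import subprocess

def yum(args):
    """Run a yum command and return its exit status."""
    return subprocess.call(["yum"] + args)

# `yum check-update` exits with status 100 when updates are pending and 0 when
# the system is already current.
if yum(["check-update"]) == 100:
    yum(["-y", "update"])              # fetch and apply updates plus dependencies

yum(["-y", "install", "httpd"])        # install an additional package by name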
CentOS is perfectly usable for an X Window based desktop, but is perhaps more commonly used as a server operating system for Linux web hosting servers. Many big name hosting companies rely on CentOS working together with the cPanel Control Panel to bring the performance and stability needed for their web-based applications.
Major changes for this version are: Mozilla has been replaced by SeaMonkey
Ethereal has been replaced by Wireshark
Firefox and Thunderbird have moved to 1.5.x versions
OpenOffice.org has moved to the 1.1.5 version
1 DVD for x86 based systems
What I Learned Doing a Multimedia Project on the French Revolution Lynn Hunt, July 2002
A few years ago, colleagues from the American Social History Project at the City University of New York (http://www.ashp.cuny.edu) approached me about their idea for a CD-ROM on the French Revolution. They had completed a successful CD-ROM project on American history, Who Built America? which won the 1994 American Historical Association James Harvey Robinson Prize, and wanted to branch out into other areas of history. I was to serve as consultant to the project along with Jack Censer at George Mason University. Although I knew nothing at all about CD-ROMs, and my expertise on the Internet consisted of sending and receiving messages on some antiquated, pre-Eudora e-mail system, I jumped at the chance, convinced that I ought to know something about this new format for presenting information. What was possible, what was feasible, both in financial and research terms? Would the CD-ROM as a medium open up issues previously unconsidered? Was it true that these new ways of organizing information would challenge our usual linear, narrative forms of historical argument? I believed that some kind of direct involvement would be the best way to find the answers to these questions.
On the spectrum that runs from those who still handwrite their articles, books, and personal communications to those who spend hours surfing the net, learning new software, and putting up their own web pages, I locate myself somewhere in the middle. True, in the early 1980s I did have one of the very first "portable" personal computers, the Osborne, which looked and weighed something like a sewing machine, offered a screen the size of a 4 x 6 index card, and could store exactly 33 pages of text on a single-sided, single-density diskette (for a brief history of the Osborne, see http://www.digitalcentury.com/encyclo/update/osborne.html). And by the mid-1980s I was encouraging my graduate students to get a personal computer to write their dissertations. But I never got a note-taking or bibliography program, never read my manuals, always cared more about the look of my printout than the heft of my RAM, and generally followed rather than led the way into the new technology. This has not changed.
But from the very first minute, I enjoyed working on the CD-ROM project, in part because it afforded occasions for wonderful collaborative conversations with colleagues such as Roy Rosenzweig and Steve Brier from the ASHP, and in part because I didn't have to do the hard part. Roy, Steve, Josh Brown and Pennee Bender organized the technical side of the project, from the digitization of images in the Museum of the French Revolution in Vizille, France, to the recording of Jack Censer's and my voice-overs for the slide shows in a sound studio in lower Manhattan. Jack and I planned together all the major sections and divided the work of choosing documents, writing the narration for the slide shows, and preparing the companion narrative history of the French Revolution, but Jack organized the final collection and translation of documents, images, and songs from his office at George Mason University and supervised the research of the post-doctoral fellows who worked on the project along with us. He stayed in constant contact with Roy Rosenzweig and the Center for History and New Media at GMU (http://www.chnm.gmu.edu) that was founded in 1994 and helped us with every aspect of the project.
Although our technologically informed colleagues worried that CD-ROMs might become obsolete by the time we had finished our project, the technology did not develop quite as quickly as was anticipated. Virtually every PC can now read CD-ROMs, and even though our project is now available in large measure on the Web—at http://www.chnm.gmu.edu/revolution/—the CD-ROM still has some advantages over the web. You don't have to be online, for starters, and the information is always available in the same place with the same mode of access. The server is never down unless your own computer crashes altogether. The CD-ROM comes in the back of a more or less standard printed narrative history (Liberty, Equality, Fraternity: Exploring the French Revolution, published in 2001 by Pennsylvania State University Press), but it offers enormous advantages over the book format. You can put much more information on one CD-ROM than in any one book, and you can include types of information that cannot be printed and that are rarely accessible to students or even to researchers, such as images from distant museums or libraries: this one has 250 images, most of them in color, from a variety of libraries and museums in France and the United States; recorded revolutionary songs with lyrics available in both French and English; 300 translated documents; maps; a timeline; a glossary; narrative slide shows with voice-over narration; and essays on how to read images or listen to revolutionary music by experts in those fields.
The CD-ROM also has a variety of pedagogical aids ranging from a search function that promptly gives the enquirer a list of every reference made, for example, by or about Robespierre to various workbook possibilities such as underlining, dog-earing of pages, note-taking and even indexing of the notes taken. Although the CD-ROM has "pages," these are only notional; the "reader" need not follow any particular order, indeed various features ensure that users will skip from section to section thereby creating their own narrative thread. There is simply no comparison between what can be offered in print form and on CD-ROM.
So is there any down side to this amazing technology? I see two types of problems, one that is solvable with some time and thought and one that might well prove insurmountable. The hurdle that can be jumped with a certain amount of practice is the navigation of such an immense amount of information. Students often complain in working with our CD-ROM that it's too complex: the students have to follow instructions in order to get to the materials they need, and they have to have some clue as to what they might want in order to make productive use of the sources. Using a CD-ROM on the French Revolution is very different from searching the web for information about Marie-Antoinette, for example. The web will give a student various potted histories, which are sometimes, even much of the time, erroneous in content and generally lacking in analysis, and perhaps an occasional painting of the queen. The CD-ROM begins by essentially posing a question: once the students have a list of all the places where Marie-Antoinette comes up from the search function, they have to decide where to go first-to a document, a song, an engraving?-and then have to figure out what to do with it by reading the headnote, lyrics, or caption or by returning to the narrative introduction to the CD-ROM section or to the printed text to find out about her place in events. In other words, the CD-ROM frustrates students who want quick answers; it basically poses question after question. It demands much more sustained attention than the web.
Yet while this problem can at least be addressed by well-focused pedagogy-getting the students to a place where they can truly benefit from the richness of the sources at their disposal-the second type of problem is much more intractable: the immense labor and therefore expense involved in developing a pedagogically useful CD-ROM. The cost of developing this CD-ROM on the French Revolution was well in excess of $500,000, and even with that budget we couldn't afford to pay the permissions for even very short clips from famous movies about the French Revolution. The National Endowment for the Humanities and the Florence Gould Foundation provided funding for the vital first years of collecting and translating documents, locating and digitizing images, drawing maps, recording narrations, and in particular, programming the digitized materials so that they would be structured in a clear manner, easily linked together, and searchable. The publisher, Pennsylvania State University Press, made a vital financial contribution to last minute adjustments, but no publisher, commercial or academic, would have been willing to support all these costs from the beginning. Those closer to the technological end of the project could tell fascinating-and exasperating-stories about the constantly disappointed expectation that a new, more standardized, platform for CD-ROMS would be developed in conjunction with our project and then made available to teachers to prepare their own materials for classroom use. Instead, our team had to work through almost every aspect of the project on its own. At least 40 different people contributed, including work-study students, graduate research assistants, translators, postdoctoral fellows, scanners, programmers, translators, sound technicians, piano players, map drawers, and project coordinators, not to mention the unpaid or very much underpaid labors of numerous faculty and curators in New York, Washington, Philadelphia, Los Angeles, Boston, Athens, Georgia, Vizille, and Paris. The great cost of the project measures the distance between a CD-ROM that simply gathers together digitized information in catalogue form (I have seen one CD-ROM of images of the French Revolution that numbers the images in order with no further information) and one that integrates a variety of different kinds of documents into an integrated package usable in thousands of different ways. It is one thing to store your family photos on a CD-ROM and quite another to develop a useful and interesting pedagogical tool.
The bad news is that the cost of producing multimedia packages for teaching is not going to go down anytime soon. Such projects require serious funding, take a long time to complete, and need high levels of organization. It seems very unlikely that an individual faculty member could undertake such an effort; putting a document reader on a CD-ROM or online is not the same thing as coordinating images, music, maps, narrated slide shows and lengthy narrative introductions with text documents. But there is some good news too. Universities have begun to establish centers for the new media to coordinate these endeavors and to keep abreast of the ever-expanding horizon of possibilities. And the horizon does keep expanding as computer memory grows and the capacity to receive and play audiovisual materials leaps forward in astonishing fashion.
Perhaps most encouraging for teachers and researchers is the decentralization that is inherent in the Internet as a medium. New sources of information keep popping up, sometimes from the most unexpected of places. An individual does a web page to provide links to everything interesting about Aphra Behn, a foundation makes a CD-ROM of the collected works of Voltaire, a university library puts its rare maps on the Web, a national library begins to digitize whole collections. A thousand flowers are blooming in the world of digitized information. Sitting in front of my tiny Osborne screen 20 years ago, I would never have even dreamed of the prospect, and even if I had, the flowers would have been gray and white!
—Lynn Hunt is Eugen Weber Professor of European History at UCLA and is the president of the AHA. She can be reached by e-mail addressed to lhunt@ucla.edu.
The podcast that helps developers be awesome at launching software products.
Episode 136 | AuditShark, Drip, and HitTail Updates
Startups for the Rest of Us Episode 136 [ 39:44 ] Play Now | Play in Popup | Download
Tom Fakes - http://blog.craz8.com
[00:00] Rob: In this episode of Startups for the Rest of Us, Mike and I are going to be talking about AuditShark, Drip and HitTail. This is Startups for the Rest of Us episode 136.
[00:09] Music
[00:17]Welcome to Startups for the Rest of Us, the podcast that helps developers, designers and entrepreneurs be awesome at launching software products, whether you’ve built your first product or you’re just thinking about it. I’m Rob.
[00:25] Mike: And I’m Mike.
[00:26] Rob: And we’re here to share our experiences to help people avoid the same mistakes we’ve made. What’s the good word this week, Mike?
[00:31] Mike: I’m sweating in Texas.
[00:33] Rob: And it’s swampy.
[00:35] Mike: Swampy is a good way to put it.
[00:37] Rob: Well we’re going to be talking today on the show. It’s going to be an update show where we talk a lot about projects we’ve been working on recently, progress we made or lack thereof in some cases. We got a comment on last week’s episode, episode 135 where we answered several listeners’ questions. And Ray piped in. He put a comment on the blog.
[00:55] And he said the question about the 19-year-old who doesn’t have any programming experience. I think I’d be tempted to suggest that he try and complete some of the free online courses that are all the rage at the moment. They are available I think for HTML, CSS, Ruby on Rails for example. It might be a quick way to build some basic skills quickly.
[01:12] It would be difficult to pick a technology stack if you have no experience. I would do that first then look at some freelancing job. With no skills I think even finding the right freelance jobs and getting them would be almost impossible. I thought we said that. Did we skip that step? Because that’s the first thing we should have said is first learn something and then go to oDesk and try to get contracts.
[01:31] Mike: Part of the comment also kind of includes that idea that he doesn’t know the technology stack, so he doesn’t necessarily know where to start. And we kind of give some ideas about where to start, which was go to the Ruby on Rails route or PHP or something along those lines just to get your feet wet in a way that’s not going to overwhelm you.
[0 1:51] Rob: Thanks for the clarification there, Ray. The other thing we got was an email from Matt Vanderpool and he attended MicroConf last month. The subject line of his email is heartfelt thank you for putting on MicroConf. And he says, hey Rob and Mike. I wanted to extend a heartfelt thank you to both of you for organizing MicroConf.
[02:10] I also wanted to tell you a little about what MicroConf has done for QAtab. Matt has an app called QAtab that’s at QAtab.com and it helps company improve their QA process. He says in the four months before MicroConf I did nothing with QAtab. I was too busy and didn’t have the time to work on it. In one month since MicroConf, I have reduced my consulting time so that I could work on QAtab.
[02:32] Added support for integrating with external ticketing systems based on information from Tom Fakes who I met at MicroConf. Set up a weekly meeting for product accountability with Najaf Ali who I met at MicroConf. Identified and reviewed competitors to help me determine what makes QAtab different. And then he list five or six other things all dealing with either dealing with information he got from MicroConf or people he met at MicroConf.
[02:54] He says I don’t know if I would ever make this progress without it. The talks were great and the networking was even better. Almost everyone who I explained the QAtab concept to immediately got it and that was very helpful. I think it was this kind of feedback that I was missing and that I really needed to keep going. Again thank you.
[03:11] So, we want to extend obviously a thank you to Matt for writing such a detailed email. Yeah, really just for letting us know the difference it makes and that’s frankly why we’re starting to do two of them. Certainly if it works out and it continues to make this kind of difference for people it’s kind of a no brainer.
[03:27] Mike: Yeah. We’ll have to take into account those people who are trying to get us to do three or four a year but we’ll see what happens.
[03:34] Rob: I saw an Asia request and an Australia, New Zealand request come through. We’re going to need some staff to help with those.
[03:43] Rob: So Mike, we haven’t heard much about details of AuditShark. What’s going on with that probably for a couple of months?
[03:51] Mike: So for people who haven’t heard much of what I’ve been working on AuditShark is a product designed for auditing web servers. Basically your infrastructure servers to make sure that you’re following best practices for security. Cause there are literally thousands and thousands of ways to configure a server. But not all of them are necessarily the right way or good way that isn’t going to end up in your server getting hacked.
[04:13] So, AuditShark is designed to check all this different pieces of information on your servers to make sure that the server is configured correctly out of the box, you know when you first start configuring it. And then to continually make sure that it’s still configured the right way in case those baseline recommendations change or if the server changes itself.
[04:30] For example, somebody actually does get into your server and hacks into it, chances are good that they’re going to start making some changes to it. And those are the type of things that you want to be notified about. These are not things that you’re going to go back into the server and check on a daily. You’d rather have software that will do that.
[04:47] So that’s what AuditShark is designed to do. From a problem standpoint obviously it’s giving you some at least piece of mind that at somebody is going in and taking a look at these servers, making sure that it is set up correctly. Those are the type of things that people I’ve spoken to find fairly valuable about it.
[05:03] So, over the past few weeks, I’ve been making a very focus effort to kind of get the product more or less the feature complete where it is enough to put in front of somebody where they would actually get value out of it. So to help with that I hire a developer to start building some of these control points. I spent some time doing some training for them.
[05:20] Over the past couple of weeks, he’s put together roughly between 100 and 150 different security checks that are built in the product now and they’re all targeted at Windows 2008. That’s just more or less the starting point. Next up on the docket is going after sequel server 2008. And then after that, I’m going to start taking a look to figure out whether we want to do more Windows checks and go down the route of Windows 2012 and the sequel 2012.
[05:46] Or, if we want to start going more towards the Linux side of things with Linux, Linux-based apps such as Apache or MySQL. Or, if we go after more of a platform specific approach where we’re going to maybe say Ruby on Rails, Gems or things like that. But essentially in order to make those decisions, I have to do a little bit more customer development.
[06:06] Rob: And does that mean are you are able to ask them in advance and figure it out or do you have to go through the process of actually installing it on their server, getting them to use it and then making a decision.
[06:16] Mike: No, I can just talk to them. I can ask them the questions. I can look at my spreadsheet to figure out what the numbers are of the people who are interested in different things and just go after the largest pieces. Figure out who is, one, willing to pay the most for it, and then two, what the largest number of people who are willing to pay that amount.
[06:35] You know just do a little bit of multiplication, you can figure out what’s going to be the most profitable area to go after next; at least for my launch list and the people that I talked to so far. Obviously, if I start progressing and marketing a lot more then that could change dramatically very very quickly. But it’s at least a starting point to give me an idea.
[06:53] But the fact that he was able to put together roughly 50 control points a week indicates to me that over the course of the next month or two months, it wouldn’t be a stretch to have like another 250, 300, 400 control points build where these are all each individuals checks and security checks that are being done on somebody’s machine that would be reported on a daily basis.
[07:14] And I think that regardless of the system that you’re running, there’s going to be value in that. And that’s really what I was – I spent a lot of time and effort getting to the point where I know that the system actually provides that value because that is the value proposition. If it doesn’t deliver, it’s very difficult to justify charging for it.
[07:31] Rob: Right, that makes sense. And have you limited this to, or do you know yet, who's going to get the most value out of it? Like is it SaaS operators, is it downloadable software companies? Is there any group or demographic that it's particularly resonated with?
[07:46] Mike: Not that I have differentiated it. I mean I can certainly make guesses about it. My suspicion is that any company that bases 95% to 99% of its revenue off of its web servers would be a good candidate for it. Because they're going to want to make sure that those servers don't go down, whether it's because of bad configurations or hackers getting in.
[08:09] We talked a little bit about Rudy a couple of weeks ago where his backups were failing. And he didn’t know that that was going on. I could very well build a control point that checks and validates your backup for example. And if your backups are failing, let’s say I check the log files for backups or there are very specific things that I can look at, I can tell you whether your backups are failing.
[08:29] And then it's not six months, nine months down the road when your entire server crashes and then suddenly you find that your backups were failing seven months ago and you're completely toast. Then something like that would provide a lot of value. But it is for those types of people who have the vast majority of their revenue come in from those servers.
[08:48] Because they basically are mission critical. If you lose those servers, you lose your business. Now again, those are just hypothetical theories. I can stay here and talk till I'm blue in the face and say, "Yeah, that sounds reasonable." But until I go talk to customers and ask, is this why you're buying this product, it's hard to make those judgment calls.
[09:04] Rob: Right. Cause the answer you gave me was the problem that you solved. You talked about a problem someone has and how AuditShark can solve it. And that totally makes sense. Problem/solution fit, I think you're going to find that. What I'm asking about is the next step, product/market fit. It's what market are you going to focus on first with your limited resources, being a single founder. You can't go after everyone who has 90% of the revenue from their web server cause that's a huge, huge market, and you just can't possibly market to them. So what I was probing after is: who is that demographic?
[09:37] Mike: It's probably going to be SaaS operators initially, just because I'm kind of plugged into that network to some degree, so I understand kind of the pulse of the market a little bit. And I understand a lot about how web servers work. What the problems around running them are. What the remediation issues are. I can certainly offer value there in terms of not just my knowledge but the product's applicability to solving those situations. So, it seems to me like that's a reasonable place to start. Whether I stay there or not depends a lot on how the market responds to what I'm offering.
[10:09] Rob: I like it. That's a good answer. At least you have one that's not broad, cause that is definitely a market that you have reached into.
[10:18] Mike: You’d mentioned a while ago that you were working on a couple of information products and I know that one of the first things was the video course on hiring a VA. I think you’d done a bunch of stuff back in January where you had somebody come in and do a lot of videos of you. How is that coming along?
[10:35] Rob: It was ready to launch. I mean I had all the videos by January and I was hoping to launch it in January. And then HitTail had a big spike and a bunch of press. You know it was great stuff. HitTail grew very very fast early in the year, and it took my attention off of the video course basically. I have kind of a goal to launch three courses this year, just smaller things that really focus on a particular pain point.
[11:00] And so I put the video, the VA video course on hold. But I’d been getting emails about it probably one or two a month from you know whoever, listeners, people who hear about it and asking if I’ve launched it yet or where I’m going to launch it. Cause I don’t have a landing page for this one. This is probably the first time I’ve ever done this. I just plan to send it to basically my email newsletter list.
[11:23] It's softwarebyrob.com. I have a newsletter for that. I'm going to test that out. I'm thinking that in the next – it's probably going to be 7 to 14 days that I'm going to get it out. It's all uploaded. I have transcripts. I have audio versions. I have course notes. There're really just one or two final pieces. I have a job description that I use to hire VAs, already uploaded.
[11:44] And then there's just a couple of things, like I want to get a sample screencast and a sample Google doc that I share, to show how simple it actually is to train someone to do something. So I'm excited. It's fun. I like sharing this kind of information and being able to justify going into such depth. You can't do it on a podcast. You can't really do it in a talk at a conference. It's going deeper than I've gone probably since my book. You know that's the last time I really dove into something with this much detail and being able to hand over so many accompanying resources.
[12:15] Mike: That was really cool. I’d be interested in seeing that. I know there are a lot of people who I reply to and there were several people I talked to at MicroConf who were asking me. They’re like you’re involved in a lot. You do a lot of different things. How do you get all this stuff done? And I’m like the magic of it is I don’t actually do most of it. So as you said it’s through VA’s and you get other people to basically work through your processes.
[12:38] Rob: Yeah. That's right. With HitTail, it's kind of been a trip. One of my main marketing channels just stopped working very abruptly. It's an algorithmic thing. It's paid acquisition that I'd been doing for about eight or nine months. And I stopped it for MicroConf because I was too busy. And when I came back and re-upped it, it's just like the ads aren't appearing even with the same bids.
[13:00] It's been kind of a shocker. It's not the majority of growth but it has been the longest-term and most stable, and it is a big chunk of the monthly growth that's been happening over the past probably eight to nine months. And so the nice part is that it's a SaaS app. And SaaS is beautiful because the money keeps coming in. It may not grow substantially this month.
[13:25] It's been growing between 10% and 30% a month for 15 months. And the growth this month will be single digit at best. But it's SaaS. Like the revenue still comes in. It's not like with the single-download software products that I've had, where when growth stops it really drops down substantially and you lose 50% or 80% of your revenue in a month.
[13:45] Mike: That’s nice to hear. As long as you’ve got that growth coming in, I think the thing that would concern me and obviously I’m not, you know, seeing all your numbers and everything. But if I were in that position what would concern me is the fact that if you lose like a major growth engine, does that mean that in another month or two because that growth engine is lost, now as you start to shed people who came in through that engine, are you going to start going in the other direction. Are you going to start to shrink because that growth is no longer being powered by that particular channel?
[14:17] Rob: Absolutely. If your churn is too high and you don’t have that large funnel of trials coming in, you will start to shrink. I’ve plotted it all out cause it’s been going on for about six weeks now. And it’s funny. I’m like working with an ad provider and all this stuff and they don’t really know how their system works. It’s like this black box algorithm.
[14:35] And there’s no flags on my account. There’s nothing. They’re like we don’t know why it’s not running. I mean it’s bizarre. So I’m doing all types of crazy testing like running the same ad but pointing it to just a completely different website like pointing it to my blog to see if the ad shows up. You’re right. If you lose a major growth engine and your churn is too high then of course a SAS app will shrink over time.
[14:57] This is obviously a concern. It's not something – I don't want to play it off and say since it's SaaS I'm immune to X, Y and Z, cause it's certainly not the case at all. And this is a major, major issue on my plate right now, which is a bit of a bummer because I had all these processes in place that essentially are ensuring the growth, the continued growth of it daily, even though I'm not focusing on it.
[15:17] And when something like this happens, this is where I've had to pull away from what I was working on, which was Drip. And I had to pull away from it and come back, and so now I'm getting less done on the new project. That's just a balancing act. When processes fail, you have to step back. I think the part that I don't enjoy about having a lot of spinning plates is when one that I started spinning a long time ago, one that's been going well for a long time, suddenly starts to wobble and I have to take my attention off of the new ones.
[15:45] Mike: Yeah. That's always hard. That's also one of the reasons you don't automate everything upfront, because it's going to take a long time to begin with, but then if that automation fails for whatever reason, it's going to take you exponentially more time to figure out how to go back and fix everything, especially depending on what it is, if it's complicated or somebody changes your API or changes their webpage and you're parsing it. It could just be really difficult. But, yeah, when those processes break down sometimes there's just nothing you can do. You have to go back and take a look at it.
[16:15] Rob: Right. So let’s bounce back to some of the things on your list. What else is new?
[16:20] Mike: Last week I was trying to get a customer installed. And I basically ran into a couple of things, because I wanted to get some of the control points that I talked about, the ones I was having the new developer work on, out into the system and test them. So we were making a lot of changes all at once. And I don't know whether it was directly related to something that we did or whether it's just the sheer number of control points that were added to the system, but basically results were not coming back anymore. So I just basically found this out over the past day or two because the policies that I put in place to go out and audit my test machines, they're just not sending the results back anymore. They used to. They were doing it fine every single day and now it's a black hole.
[17:01] Rob: That’s not good.
[17:02] Mike: I think that I've got enough logging code in place to figure it out. It's just that I haven't had the time to sit down on the servers and figure out exactly what's going on. I mean it may very well be something as simple as a configuration change in a config file someplace on the server. But it could be something that's a lot more complicated than that.
[17:21] And I know that this whole mechanism for passing that data back has got to change at some point. I think it was Gabriel Weinberg who had written an article talking about scalability and how you can usually see the next order of magnitude of growth, but at two orders of magnitude things are going to be so completely different that it's very difficult to figure out what's going on or what it's going to look like.
[17:42] And I think that it's kind of at that stage right now where, you know, before I was always passing back a handful of control points, where it was less than ten. And at this point, I'm sending back pretty close to 150. So, it really is a full order of magnitude higher than it was before. I don't know what the problem is. It could be something simple.
[18:02] It may require some radical changes. I'm hoping it's not radical changes. But again, it's a problem where I've got to deal with it and I've got to deal with it now. Because obviously I don't want to start putting a lot more customers into this and then having them log in and have them not be able to do anything at all because the system just doesn't work anymore.
[18:20] Rob: Right. That's a pretty major one to find early on. This is exactly why we roll out to one customer at a time and test things. I mean it is a beta test in this case. But it's also, I guess, one of the other big benefits that we talk about a lot: you're learning what features your customers need. And they push the limits of these things, right. Cause you just wouldn't likely test with that many control points.
[18:41] Mike: Oh no, I would. I could easily foresee somebody running 500 or 1,000 or even 1,500 or 2,000 control points against a single machine, because you're not going to want to do those manually, even if it took you like 5 seconds to check each one of them. I could easily see somebody doing that in a production environment. And this is only 100 to 150. So, it's got to get fixed. It absolutely has to get fixed.
[19:05] Rob: Very good. So basically the month since MicroConf has not treated me well at all. I'm just now, after a month, just about out from under the email load that had piled up. Then the HitTail growth has stalled and now Drip…
[19:18] Mike: Is this tales from the dark side?
[19:20] Rob: It is, man. It's just one of those bad months. I was down like last week and the week before, I was really bummed about these things and they were impacting me. I was really shocked by it and surprised and everything. And I'm just kind of like dealing with it now. I'm realizing I'll figure out a solution eventually. And the same thing with Drip.
[19:37] So with Drip, probably three and a half to four weeks ago, we started with our customer no. 1. Cause HitTail has been using Drip for a while and it works and Drip is sending email and everything works great. We tried to get customer no. 1 on the system and right away it was like, you don't have this feature that I absolutely need. So we implemented it.
[19:56] Then there were a couple more that he needed, then we implemented them. As we went down the line, it just turned out his use case was way too complicated. He had so many landing pages and lists and the way they interact, and he had custom segments and he had API calls. Not that that stuff is that bad, but at this point in Drip's lifecycle, it's just not there. We can't build all that.
[20:16] We could spend another month building it for him. And maybe it's 1 out of 50 or 1 out of 100 potential customers that has all the problems he did. So the cool part is that, since he's a founder as well, he was able to switch pretty quickly from customer to kind of adviser. And we had a long hangout with myself and Derek, who's coding it, and customer no. 1.
[20:39] And he's basically like, you know, you really got to think about it, I don't know if I would build all these features yet. And so that's what we decided to do. We basically pulled the plug just on getting customer no. 1 on the app. And so we've had to switch up the plan a little bit and take evasive action, and I have 16 other early access customers.
[20:57] And so now I'm choosing one who has a pretty simple use case and we started talking. And I would love to get him on board like today. But as this is going on, Derek and I started talking and he realized, just during the conversation, that we have to do some refactoring of the data model, that it was… Yup, you know how painful that's going to be.
[21:17] Mike: Yup.
[21:18] Rob: So it felt like a one-two punch, man. Basically Derek, he's like, do you have time to chat. And I'm like, this is not good. We never chat unless something is wrong, you know. We started talking and it was basically a 90-minute Skype chat of me trying to figure out how we could avoid doing this. It's about eight working days. Once we get data in the system and we have users on it, it would be one of those changes where it would literally take three to six man-months to fix down the line if we have to migrate everybody.
[21:47] So it was that kind of thing where it's like, this gets us 80% of the way there, but it's going to be catastrophic if we ever had to add one additional variation of this or one additional piece of flexibility. We just saw quickly how that architecture had maxed itself out. So we're several days into that. I think probably 3-4 days from now that would be done.
[22:08] And then we'll be circling back and getting our next customer no. 1 on. But the good news, I realized, is that the feature we built for our first customer no. 1 we are going to need. We're going to need it for ourselves or for this next customer no. 1. So we didn't lose time per se. We didn't waste time.
[22:29] Mike: Yeah. I think the one thing that you've mentioned that's really important for some people to think about is that you may have your heart set on signing on this customer no. 1. But understand and realize that it may not work out with that customer. They may have use cases or needs that are going to be so far above and beyond what the rest of your customers are going to want or need that it's actually not worth building it for them.
[22:52] Because you're going to end up doing all these customizations that are never going to be used by anybody else, or are going to be very difficult, or going to be so time consuming that it's going to set you so far back from launching that at the end of the day it's not going to be profitable. Or, you're going to lose motivation, or you're just not going to have the money to be able to make ends meet to roll it out to everybody.
[23:09] So being able to walk away from that customer no. 1 and say, "Look, we just can't help you right now. We may be able to do it at some point in the future. But now is just not the right time." In some cases, that's the right decision to make. It's not an easy decision to make. But sometimes it's the right one.
[23:24] Rob: That's what I was going to say. It was not an easy decision at all. It was not clear cut. At a certain point, it just starts feeling intuitively wrong, like we were doing so much work. And I kept asking myself the question, how many other users of my first hundred are going to need what we're building here? And the answer was like none or one.
[23:45] It was just such a small percentage that I know we can get quite a few people using and paying for the app without these features. And again it's a judgment call. Someone else may have made a different one. But the thing that struck me was no matter how many emails and how many times I discussed it with customer no. 1 and tried to thoroughly understand his use case, there were still these little things that crept up once he tried to implement it.
[24:07] It just goes back to that recommendation of getting people actually using your app, cause you should expect things to come up that you haven't accounted for. I don't know if I honestly believe you can account for things until they really start touching the code.
[24:21] Mike: There's always that difference: like, you know, you can explain an idea to somebody else, but until you've explained it to them three or four times and actually showed them what you're talking about, it is so hard to convey some of that stuff. I mean that's part of why, when we're talking about outsourcing code to developers, screen mockups are so incredibly important.
[24:41] Because it allows you to show them quickly what you're talking about. Versus having them read a spec that they're only going to skim anyway, and they're not going to really understand it even if you're talking it right into their ear. So hey, I've got some of my own tales from the dark side. I'm letting go one of my developers. I had to get on to him a number of times over the past several months.
[24:59] And it's really just not working out. I mean the relationship just kind of limped along for about six months. And I feel like I've tolerated it because I felt like I didn't necessarily have a choice and I didn't want to put the time and effort into finding somebody else. But I've decided to just kill the relationship and focus more on people who are actually working out.
[25:16] Rob: That’s a bummer. I’m sorry to hear that. You have a couple of domestic developers, is that right?
[25:21] Mike: Yeah.
[25:22] Rob: Here in the US.
[25:23] Mike: He’s not one of them.
[25:24] Rob: Oh, he’s not. Okay.
[25:26] Mike: No. No.
[25:27] Rob: Well, that's a bummer. I mean you have other help. Are they able to step up and make up for him or are you going to have to go on a search for someone?
[25:33] Mike: The lead developer who I have working on the policy builder, and then the other developer I have who is using the policy builder, that's really where a lot of the problems are. And we've had these discussions internally where we're looking at all of the problems with the policy builder, with the synchronization, and none of it is really major but collectively it tends to be a pain.
[25:55] So we're actually looking at that stuff and trying to figure out what the future of this is going to look like. What are we going to do in the future? How is it going to work? We've talked about some different architecture considerations. It almost seems like the original thought that I had a long time ago is to take the policy builder and put it on the web, so you didn't have to have this additional downloadable component.
[26:12] And it looks like that's probably the direction that we're going to go. For now, we're going to leave things the way that they are, just because it would be an incredible waste of time and effort to back off and put things on hold and go back and rebuild it on the web. So, we're just going to leave things the way they are. We're going to basically work through the issues as best as we can.
[26:32] The developer who's building a lot of the control points had a lot of good feedback for me in terms of what the policy builder needs. But a lot of those suggestions are directly applicable to taking it and making it web-based as well, like in jQuery or something along those lines.
[26:47] I think that that's ultimately the direction we'll go. And if that's the case, then this developer was working on that policy builder. So, if I basically halt the development there and put it in maintenance mode, it doesn't necessarily matter that I've kind of lost him.
[27:01] Rob: That's funny. I hadn't realized that the policy builder wasn't web-based yet. I can see the headaches of trying to build it on the web instead of as a desktop app. But I agree with you. I think it's a no-brainer long term, cause then you don't have to support people downloading it and installing it and running into all those issues that are hard to troubleshoot, as opposed to you having kind of a SaaS thing that you can maintain and fix right on the fly.
[27:24] Mike: Yeah. I mean the other thing with the policy builder is that there's going to be people out there who have Macs. I mean there's going to be a lot of people who have Macs and it's Windows-only right now. So moving it onto the web solves a lot of those problems. The other thing that it does is it allows us to avoid any database synchronization problems.
[27:42] You don't have to have a SQL Server on the client to run it. There're some things along licensing lines too; I can probably get away without a couple of components that I won't need anymore. The major issue that it kind of brings in is how to test things. So once you've built a policy or control point, how do you test it?
[27:59] And for something like that, I think that we're going to move more towards an always-connected model where, if you push a task down to a machine, because it's going to maintain a constant connection out to AuditShark in the cloud, then it will always be able to contact it. You'll be able to do it through the UI, which will be pretty neat if we can get it working.
[28:17] And I don't see any reason why we can't. It's just a matter of wiring all that stuff up and that's a nontrivial amount of work. So that's kind of the reason for putting things kind of in a maintenance mode for that, while in parallel we walk down the road of building this other stuff.
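For readers curious what an "always connected" agent along these lines might look like, here is a minimal long-polling sketch in Python. It is purely illustrative and is not AuditShark's actual implementation; the endpoint URL, JSON shapes, and field names are invented for the example.

# Minimal sketch: an agent that holds a connection open to a cloud service,
# receives audit tasks, runs them, and posts the results back.
import json
import time
import urllib.request

SERVER = "https://example.com/agent/tasks"   # hypothetical endpoint

def run_check(task):
    # A real agent would dispatch to a library of control points here.
    print("running control point:", task.get("name"))
    return {"name": task.get("name"), "status": "ok"}

def main():
    while True:
        try:
            # Hold the request open until the server has work (long polling),
            # which approximates a constant connection to the cloud service.
            with urllib.request.urlopen(SERVER, timeout=60) as resp:
                tasks = json.loads(resp.read().decode("utf-8"))
            results = [run_check(t) for t in tasks]
            body = json.dumps(results).encode("utf-8")
            req = urllib.request.Request(SERVER, data=body,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=60)
        except Exception:
            time.sleep(30)   # back off and retry if the connection drops

if __name__ == "__main__":
    main()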
[28:31] Rob: Absolutely. At this point, man, I mean we're both kind of in the same boat, right. We're approaching launch on something. And I find myself every day saying, that's a great feature. I'm entering it in FogBugz and putting it in at priority 6 or 7. Like, great feature, we're going to build it in six months. That's typically the timeframe. You know, I'll say we're building it for 2.0.
[28:49] There're some awesome features. I can think of a hundred features I would want in an email marketing tool. But we can put them in a bug tracker and implement them later. Cause right now we just have to implement the shortest path to providing actual monetizable value to people, and that's the same boat you're in: writing less code and doing more things to get you closer to launch, frankly.
[29:13] Mike: Yeah. And the nice thing about that is if you can get to launch and it starts to gain traction then obviously you’re going to be gaining revenue from it. And you can bring more people on to build that version 2.0 kind of in parallel while you got version 1.0 in maintenance mode.
[29:28] Rob: So my last update for the day is also dealing with Drip. I had this really interesting experience last week. About two weeks ago we bailed on trying to get customer no. 1 on. And then we decided to refactor. So we were in kind of a standstill mode, and I was looking at the feature set and really asking myself what exactly are we building, and is this still in line with my original vision of Drip that I had way back.
[29:52] Is that still something people are willing to pay for? Does it provide value to a market? The answer was, I think so, but I don't really know. And so I was trying to think, how do I answer this better? How do I find out what people really want? Because I've had so many one-off conversations, whether it's at MicroConf or whether it's over Skype or whether it's one on one via email.
[30:12] And when people say, "I'm really looking forward to Drip," I ask them, "What exactly do you think it does?" Because I haven't spelled out what the features are and everybody has a different answer. Some people want behavioral email. Some people want workflow-based email. You know, there're all these things that people want and I was trying to figure out how many of them want each of these things.
[30:29] So I realized I have the launch list, an email list of about 1,400 people who've expressed interest in Drip. And they've come from all different sources, from probably listeners of this podcast to people who heard about it at MicroConf to ads that I've run sending them through a landing page. And so I created a Google Form. I love Google Forms. It's like Wufoo but it's super simple.
[30:53] You create this form. They have quite a few options. Nothing like Wufoo, cause it's free, and everything goes into a Google Docs spreadsheet when the info comes in. So, I created one of those. I sent out a link. I tried to keep it as short as possible and I got a lot of responses. I got a 22% response rate and answers to some really key questions.
[31:13] And some of them were directly in line with my hypothesis and others sent me in a different direction, frankly. But the most important thing is it really improved my mental state of, like, are we wandering here? Are we really building something people want? And it totally confirmed that. Because an overwhelming number of the people who responded, and the percentage of people who responded, want exactly what we've already built, to be honest.
[31:38] If we were refactoring right now based on what Drip is today and based on how I described it in this survey, we have a kind of critical mass in that group. But the key thing that we asked, which I almost didn't ask and added in last, was question no. 1, which basically said, are you a, and then there were choices: are you a startup founder/software entrepreneur, or are you an email marketer or a general marketer, or other.
[32:04] And then there was a text box. So other wound up being like consultants and just kind of some random people. That was key because then all of the other responses I could segment in that Google spreadsheet and I could just order by one thing. And then I could look at how many startup founders said that particular thing and then how many email marketers said that particular thing.
[32:26] And that was a key insight that I'm glad I did. So folks out there, if you are going to send out a survey, try to get – you don't want to ask five questions about who they are. You want to boil it down to one so that it doesn't really clog up the survey.
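A minimal sketch in Python of the kind of cross-tab Rob describes, assuming the form responses were exported to a CSV file; the file name and column names ("role", "top_feature") are made up for the example and are not his actual spreadsheet.

# Count which feature each respondent segment asked for most.
import csv
from collections import Counter, defaultdict

counts = defaultdict(Counter)
with open("drip_survey.csv", newline="") as f:          # hypothetical export
    for row in csv.DictReader(f):
        counts[row["role"]][row["top_feature"]] += 1

for role, features in counts.items():
    print(role)
    for feature, n in features.most_common():
        print("   ", feature, n)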
[32:39] Mike: Yeah. That's a good thing to ask. One of the things I've been doing is taking a look at the email addresses of the people who've been signing up for the AuditShark launch list. And using Rapportive you can basically backtrack their email address to see who they are, where they work and what their title is at that company.
[32:55] And that's extremely helpful for figuring out whether it's more of a qualified lead than somebody who is just kind of idly interested. I mean if they sign up for AuditShark and they've got like a Gmail account, chances are pretty good, I've found so far, that they're not necessarily interested. It's more because it's a podcast listener or they've heard about it from a blog or something like that.
[33:14] They're kind of idly curious. Whereas I've gotten a lot of other emails that have come in and I've been able to backtrack them to companies from legitimate corporate email addresses, and some of the titles are like operations engineers or sysadmins at such-and-such company. And it's clear that they're looking to solve a very specific problem. And you can reply to those people.
[33:36] You can go back to them either one on one with a survey like that, with some very detailed and specific questions. By cross-segmenting those types of people, you can send them different questions, so that you can essentially say, okay, sysadmins are interested in these types of things.
[33:50] Let me drill in: for the next sysadmin who comes along, let me send them a survey that kind of drills in to try and get more information about what a sysadmin would look for, as opposed to somebody who is just kind of curious or a kind of drive-by person.
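Here is one crude way to do the kind of first-pass triage Mike describes, splitting a launch list by free versus corporate email domains. It is only a sketch in Python; the domain list, file name, and column name are assumptions, not his actual process.

# Flag signups whose email domain is not a common free provider.
import csv

FREE_PROVIDERS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def looks_corporate(email):
    domain = email.rsplit("@", 1)[-1].lower()
    return domain not in FREE_PROVIDERS

with open("auditshark_launch_list.csv", newline="") as f:   # hypothetical export
    rows = list(csv.DictReader(f))

qualified = [r for r in rows if looks_corporate(r["email"])]
print(len(qualified), "of", len(rows), "signups use a non-free email domain")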
[34:05] Rob: Yeah. That's a really good point. I'm glad you mentioned that, cause I'd forgotten to mention one of the choices. After identifying who they were, I asked them what's the no. 1 thing you think Drip is going to do. Like, what are you most excited about? I gave them a few choices and one of those choices was, I'm just curious about Drip.
[34:20] And having that alone, I think like 20% of the respondents said that, and that was great. Because I was able to basically not listen to their opinion as much, cause it's not as important as the opinion of a founder who can actually use this today and really has a desire to use it.
[34:35] Mike: Yeah. And that's definitely a differentiating factor between them. If you know what they do or what they are interested in, it helps you determine whether or not to ignore the things that they have to say. And it sounds a little callous, but at the same time, as a startup founder you have to choose what advice you listen to. And you have to have some way of doing that. And that's a good way of doing it.
[34:55] Rob: Right. I wouldn't say ignore. I'd say prioritize. It just has to be prioritized lower than some of the other groups that I look at. There were a couple of other things I learned from the survey. I won't go into too much detail, but the one I already said is that all the features that we have built so far, including the ones for customer no. 1, are important to people. And that was a nice confirmation.
[35:16] Another one is that there were a lot of startup and software founders, which I would have expected. You know, it confirmed it basically with data. And then another one was kind of the opposite side of the coin. There was an advanced feature, which is like split testing, right. Split testing autoresponder sequences against each other as well as subject lines. I mean really going in depth.
[35:38] It's a very hard thing to build, but it was on our docket and we pulled it off the docket because our intuition was that people didn't want anything that complicated. But it was the exact opposite. People are like, this is a major feature that I want now, like it was a big deal. So we're turning around on that.
[35:54] And then another one was email workflow, trying to do almost Infusionsoft-like stuff. Not quite behavioral, but where you have really complex rules engines. I've heard from so many people that they wanted that. And so, I was entertaining the idea, wondering if we have to go down that path, cause that's not a fun thing to build. Frankly, we found that a smaller percentage wanted that than almost anything else.
[36:15] So, although it's a vocal group, or it's a group of people that I've talked to a lot, cause I've heard that suggestion a lot, it's not something that we're going to build tomorrow. And so again, we're not going to ignore that feature, but it's prioritized lower than some of these other things that we're talking about.
[36:30] Mike: The feature for A/B testing or split testing any of the email campaigns, not just the content of them but like the headlines and stuff to drive people in, yeah, that's definitely a feature that you should be working on for a second version. I totally agree with the people who responded in that survey.
[36:47] Rob: Yeah. It was a long thought process. I won't go into it. That was originally one of the value props I had mentioned when I was pitching Drip to anybody, that it could do that. Because I don't know of any other system that allows you to split test sequences, as in autoresponder sequences.
[37:01] It's always that you can split test individual campaigns, individual blast emails, but not sequences. But I have thought it through; I had a series of discussions and such. We've moved it off cause it's a lot of work to build. But as you said, a lot of folks are interested in it, so we will be building it.
[37:16] Mike: Cool.
[37:17] Rob: Yeah. This really reminds me why having a launch list is so crucial. I've been doing one-on-one interviews with people, like I said, over email and Skype. But it's really hard to get like a higher-level idea from that. You get specific requests and it's hard to know how many others have that same problem. What the survey data did, combined with those interviews, is it gives me a much higher confidence in the direction we're taking.
[37:37] It makes me feel better about what we’re building and it makes me, I don’t know. Just have a better outlook on it. It makes me want to move fast and get things done. So it’s not like we don’t already tell people to have a landing page and a launch list but one more reason to have something like that in place.
[37:52] Mike: Maybe you should have one for the VA videos.
[37:54] Rob: Oh my god. We’ll see how that goes. It’s going to be such an I told you so. Everyone is going to point.
[38:01] Mike: I can see it as the twitter hash tag right now.
[38:04] Rob: Hash tag I told you so.
[38:06] Mike: No we’ll see how that goes.
[38:10] Mike: I've been reworking some of the marketing stuff on the website right now as well. I reworked the sign-up process a little bit to emphasize some of the things that I took away from MicroConf, like the 60-day money-back guarantee versus a free trial, giving people a little bit more information on the sign-up, doing annual plans versus just a monthly plan.
[38:30] Offering that to them upfront as opposed to basically offering it after they’ve already signed up. So I’m looking at doing those types of things. And the only other thing I have going on is about an hour after this podcast ends I’ll be doing our first mastermind group call this evening.
[38:45] Rob: Very nice. That’s a good move. Are you excited about it?
[38:49] Mike: Yeah. I am. I mean it's just kind of exploratory at the moment. I mean we'll do a couple and see how things go. Maybe do one every other week for the next six or eight weeks or something like that. And then I think everybody will kind of evaluate where things are at, whether it's meeting everyone's needs or not, or if we need to change anything. But I'm kind of excited and interested in seeing how things go. We arranged it at MicroConf, so another notch in the plus column for MicroConf.
[39:16] Rob: Yeah. Very cool. I’m looking forward to hearing how that goes and I’m sure listeners will be interested in updates in the future.
[39:22] Mike: Well if you have a question for us, you can call us into our voicemail number at 1-888-801-9690 or you can email it to us at questions@startupsfortherestofus.com. Our theme music is an excerpt from “We’re Outta Control” by MoOt used under Creative Commons. You can subscribe to us on iTunes by searching for startups or via RSS at startupsfortherestofus.com where you’ll also find a full transcript of each episode. Thanks for listening. We’ll see you next time.
8 Responses to “Episode 136 | AuditShark, Drip, and HitTail Updates”
re: Audit Shark
You shouldn’t touch a Linux port until you have proven the product is working and working well for Windows.
Concentrate on Windows, SQL Server and IIS.
I’d even have a base product that tests for standard holes of all sub products – then have more advanced versions for each major component.
So the standard package does 100 (for example) tests for the major holes for a reasonable price – the full tests would then do hundreds of tests for each major software component.
I’m personally not a Windows fanboy – but Windows developers are used to paying for software so it would be the best place to start.
Also I believe it may be easier to target Windows developers with a number of major dev websites/forums/publications.
I've been a long-time listener and recent commenter and like the idea of Audit Shark – I think as Rob has mentioned and as you have hinted at – I think you are searching for the niche that could see its usage skyrocket. It may be for some not so common 3rd party applications that devs have difficulty configuring – either way I think you should concentrate on the big 3 to start with, Windows Server/SQL Server/IIS.
Mike Taber
I think you misunderstand. Linux is already working fine, as is Windows. It’s the policies that need to be built and they simply use the scripting engine that I wrote to perform their checks. It’s sort of like I wrote an OS, but there’s no applications to run, so it’s not terribly useful. The policies are where it’s at.
I have a contractor working on these so the major effort on my part is talking to customers, not building out the policies.
Windows 2008 is done. Windows 2012 is in the works. SQL will be next, unless it becomes too time consuming to implement some of these things due to missing functionality. In that case, it’s either IIS, or some Linux stuff.
I agree that there are likely opportunities for limited numbers of audit checks. $X/month for 100 checks/day for a specific OS/platform. I’m not sure if there’s additional need or desire for advanced customizations. Maybe there is, maybe not. It’s quite possible that the SMB space wouldn’t care, while the enterprise space would require it.
I also agree that the core three on Windows (OS, SQL, IIS) are going to be key moving forward. But that’s something I’m still trying to nail down for sure.
You should have links to the startups you mention in these notes.
@Sam – We do our best. If you hear any links we don’t have here feel free to post them into the comments (we very well may copy them into the main show notes as well).
@Mike, Sorry, yes I did misunderstand. It does make sense to test the Linux market if there is minimal work.
Another option may be to try and partner with other vendors of large software applications. They might even be able to help with the policies for their own apps – you would probably need to get some little wins before you could convince other partners though.
I agree. It has to have some credibility first.
At 32:39, do you mean Rapportive rather than Reportive?
Yes, Rapportive. That was an error in the transcription.
Ok, the video of Microsoft CEO Steve Ballmer in the advanced stages of ecstatic frenzy chanting the "Developers" mantra was funny, but his company took it seriously, and Microsoft really does a better job than any other platform vendor encouraging small companies to write software that runs on the Windows platform. If you're a software company willing to commit to developing software for any variant of Windows, you can join the Empower Program for ISVs, which entitles you a huge pile of software at the ridiculously low price of $750. You get 5 copies of MSDN Universal (normally $2600 each) ... this is the package that includes top-of-the-line versions of every single Microsoft development tool and compiler, and Office, and Visio, and developer copies of every server product, and the MSDN library, and copies of every operating system ever shipped (Greek Windows 98SE? You got it!). Empower also includes 5 copies each of Windows XP, Office XP, and a bunch of servers with 5 client licenses... basically everything you need to develop software for Windows with a team of five programmers for $750.
There was one catch, which is why I refrained from signing up for Empower in the past: you had to go through a fairly annoying sign up process which included lots of non-optional questions about things like your annual revenues and how many employees you have... information points that I didn't really feel like Microsoft needed to have in their big fat Potential Competitors database, for when Bill Gates woke up one morning and decided to do a SQL query to find all the software companies that were ripe for a little friendly competition from Redmond.
One day Paul Gomes, a developer evangelist working out of Microsoft's New York office, called me up, as he does quite frequently, to complain about the fact that we were recommending our customers use Windows Server 2000 instead of 2003 for hosting FogBUGZ due to some incompatibilities in the threading model of IIS 6 (which we have since resolved, by the way). "Why didn't you sign up for Empower?" he asked.
I told him how I thought it was offensive that Microsoft wanted data on my sales and number of employees. "You're a platform vendor, but also a potential competitor, so I'm sensitive about that stuff," I said.
"I hear you," he said, and proceeded to call up the ISV relations group back at Redmond. They called me back and walked through the signup procedure, and I told them which questions I thought were inappropriate. Then they did something which surprised me: they made every one of those questions optional. Not just for me, for everyone.
So I signed up, and got a great big box in the mail with piles and piles of DVDs.
(Now if I could just figure out how to convince them to include Flight Simulator in MSDN Universal...)
Exceptions in the Rainforest
Ned: “The debate over exceptions and status returns is not about whether error handling is hard to do well. We all agree on that. It's not about whether exceptions make it magically better. They don't, and if someone says they do, they haven't written large systems in the real world. The debate is about how errors should be communicated through the code.”
Did you see the mention of the new Fog Creek Office in the Wall Street Journal?
AutomatedQA's TestComplete is such a slick product and seems to be just as capable as the market leader, Mercury Interactive WinRunner, at less than one tenth the price. Why does anybody pay $6000 a seat for WinRunner? | 计算机 |
Bush emails not a priority: Congress budgets $650,000 to declassify Nazi war docs
John Byrne Published: Wednesday March 4, 2009 Print This Email This
The National Archives and Records Administration (NARA) -- which has yet to complete a program designed to properly store electronic records even in the shadow of millions of missing White House emails sent during the presidency of George W. Bush -- apparently feels a World War II probe is more important.
So does Congress, if their most recent budget is any guide. Congress has just budgeted $650,000 for the declassification of any documents relating to US intelligence agencies and their relationships with Nazi or Japanese war criminals.
Thirty million dollars has already been spent on the Archives' program aiming to declassify records that explore the web of connections between intelligence agencies and Nazi and Japanese war criminals, according to Congressional Quarterly's Jeff Stein, who also reported the budget request. The Archives has already spent ten years attempting to sort through their files.
But Congress appears to think $30 million and a decade of time spent isn't enough.
"There's a million pages of Army and CIA documents left" to read and catalog, Miriam Kleiman, a spokeswoman for the National Archives and Records Administration, told Stein. She blamed other agencies for failing to produce documents to declassify in a timely manner, and says the Archives plans to hire additional staff to continue the project.
While the Archives focuses on Nazi-era documents, watchdogs have questioned the agency's ability to keep up with computer-era files -- namely, records of the Bush Administration. A comprehensive program to store electronic data is not slated to be completed until 2011, and has been hampered by delays and cost overruns.
Last September, the Government Accountability Office urged the Archives to create a backup plan should their systems be unable to process incoming records turned over by former President Bush in January of this year.
"If it cannot ingest the electronic records from the Bush administration in a way that supports the search, processing and retrieval of records immediately after the presidential transition, it will not be able to meet the requirements of the Congress, the former and incumbent presidents, and the courts for information in these records in a timely fashion," the GAO wrote. In December, a report indicated that the records would not be transferred on schedule for various reasons, among them Bush officials' court battles and technical problems the Archives has encountered.
Stories on millions of missing Bush White House emails, which the Bush White House said were not backed up by accident, have brought a spotlight to the Archives' capacity and technical resources relating to electronic data. The missing emails caused a stir in Congress because of the dates in which they were deleted; they included key dates in the timeline of the outing of CIA officer Valerie Plame Wilson, and were sought in connection with the alleged political firing of US Attorneys.
In a December interview, the Archives' general counsel Gary M. Stern told The Washington Post, "We hope and expect they all will exist on the system or be recoverable," even in coming weeks. "We can't say for sure." US government agencies were required by a 1999 law to release "all records" relating to their relationships with German scientists and intelligence agents. An agency set up for the task published books on their findings in 2004 and 2006, Stein notes.
"I thought this was concluded years ago," secrecy expert Steven Aftergood told Stein. "$650k is a good chunk of money in the declassification world. At roughly $1 per page declassified -- it could be $0.50 or as much as $3.00 per page declassified, depending on the need for multiple reviews or for detailed redaction of individual pages -- that would pay for more than half a million pages that could be declassified. At this point, the Nazi and Japanese Imperial (Government) records should be completed, and other initiatives funded." Get Raw exclusives as they break -- Email & mobile
In a post on his blog, Puppy Linux founder Barry Kauler has announced the release of version 1.0 of Quirky. Kauler says that, while the Quirky Linux distribution is in the same family as Puppy Linux, it's a "distinct distro in its own right."
Quirky 1.0 was built using the same Woof build system as Puppy Linux 4.3.x and later and is aimed at developers as a platform for trying out "some quirky ideas". Compared to Puppy, for example, which is intended for most users, the Quirky distribution includes fewer kernel and video drivers, but also includes a number of programs from a normal Puppy release.
Kauler notes that, "I'm not really aiming to please anyone except myself – or rather, it will be nice if many people like Quirky, but given that it is the plaything for me to try ideas, it cannot be expected to cater for everyone". The developer says that he hopes that the distro will keep getting smaller with each release and that, while it "may not have any practical usefulness", he hopes to embed the distro inside the Linux kernel.
More details about the release can be found in the release announcement. Quirky 1.0 is available to download from the Puppy Linux project site as an ISO image file.
New version of Puppy Linux mini system, a report from The H.
11 Glossary
ActiveX control: A program packaged in a format designed by Microsoft that is downloaded from a web server to a client browser and run within the browser, all as a mere side effect of visiting a web page.
Applet: A program in Sun Microsystems� Java programming language that is downloaded from a web server to a browser and run in the browser as a side effect of visiting a web page.
Atomic: A multi-step operation is atomic if, whenever it is attempted, it either fails completely, accomplishing nothing at all, or succeeds completely, accomplishing all of the steps, but never stops in an intermediate, partially-completed state.
Authentication: Verification of the true source of a message. In the case of i-voting, this refers to verification that an electronic ballot really is from the person it claims to come from, and not just from someone trying to electronically impersonate that person.
Biometric: A digitizable characteristic of a person�s physiology or behavior that uniquely identifies him or her. Examples include thumb print, DNA sample, voice print, hand-writing analysis, etc. Browser: An application program such as Microsoft Internet Explorer or Netscape Navigator that allows the user to navigate the World Wide Web, and interact with pages from it.
Certification: The process the state uses to determine that a voting system meets the requirements of the California Election Code and can be used by any county that decides to select it.
Client: In a common two-computer interaction pattern, one of them, the client, initiates a request, and the other, the server, acts on that request and replies back to the client. In the case of i-voting, "client" refers to the voter�s computer that initiates the process of voting, and the server is the computer that accepts the ballot and replies to the client that it accepted it.
Cryptography: The mathematical theory of secret codes and related security issues.
Decryption: Decoding an encrypted message (usually using a secret key).
Digital signature: Cryptographically-generated data block appended to a document to prove the document was processed by the person whose secret key was used to generate the data block.
Encryption: Encoding (i.e. scrambling) a message using a secret key so that anyone intercepting the message but not in possession of the key cannot understand it..
Failure tolerance: The ability of a system to continue to function in spite of the failure of some of its parts.
eCommerce: Electronic commerce, i.e. financial transactions conducted over a computer network or the Internet.
Email: Electronic mail, i.e. messages and documents sent from one party to other specific, named parties.
Firewall: One or more computers standing between a network ("inside") and the rest of the Internet (outside). It intercepts all traffic in both directions, forwarding only the benign part (where "benignness" may be defined by a complex policy), thereby protecting the inside from attacks from the outside. HTML: Hypertext Markup Language, the notation used for formatting text and multimedia content on web pages.
HTTP: Hypertext Transfer Protocol, the communication protocol used between web browsers and web servers for transporting web pages through the Internet.
i-voting: Internet voting
Integrity: Protecting data from undetected modification by unauthorized persons, usually through use of a cryptographic hash or digital signature.
Internet: The worldwide system of separately-owned and administered networks that cooperate to allow digital communication among the world�s computers.
IP: Internet Protocol, the basic packet-exchange protocol of the Internet. All other Internet protocols, including HTTP (the Web) and SMTP (email) use it.
IP Address: A unique number (address) assigned to every computer on the Internet, including home computers temporarily connected to the Internet.
ISP: Internet Service Provider; a company whose business is to sell access to the Internet, usually through phone lines or CATV cable, to homes, businesses, and institutions.
Key: A typically (but not always) secret number that is long enough and random-looking enough to be unguessable; used for encrypting or decrypting messages.
Key pair: A pair of keys, one used for encrypting messages and the other for decrypting them. Used in public key cryptographic protocols for authentication, digital signatures, and other security purposes.
Kiosk: A booth- or lectern-like system with a screen, keyboard, and mouse mounted so they are available to users, but with a tamper-proof computer inside and a secure Internet connection to the server.
Mirroring: Keeping two or more memory systems or computers identical at all times, so that if one fails the other can continue without any disruption of service.
LAN: Local Area Network; a short-range (building-size) network with a common administration and with a only small number of hosts (computers) attached. The hosts are considered to be sufficiently cooperative that only light security precautions are required.
Malicious code: A program with undesirable behavior that operates secretly or invisibly, or is disguised as part of a larger useful program; in this document, the same as "Trojan horse".
NC: network computer; a widely-discussed hypothetical product that does not store software or files locally, but works only through a network.
Online: Generally, a synonym for "on the Internet", or sometimes, more specifically, "on the web".
Out-of-band communication: Communication through some means other than the primary channel under discussion. If the primary communication channel is the Internet, then out-of-band channel might be via U.S. mail, or a voice telephone connection, or any other channel that does not involve the Internet.
Packet: The smallest unit of data (along with overhead bytes) transmitted over the Internet in the IP protocol. PC: Personal computer; any commercial computers marketed to consumers for home or business use by one person at a time. In 1999, this includes Intel-based computers (and clones) running a Microsoft operating system or a competitor (e.g. Linux, BeOS, etc.), and it also includes Macintoshes.
Plug-in: A software module that permanently extends the capability of a web browser.
Privacy: Protecting data from being read by unauthorized persons, generally by encrypting it using a secret key.
Private key: A key, or one member of a key pair, that must be kept secret by one or all members of a group of communicating parties.
Protocol: An algorithm or program involving two or more communicating computers.
Public key: One member of a key pair that is made public.
Public key cryptosystem: A cryptographic protocol involving a pair of keys, one of which is made public and the other held secret.
Redundancy: Excess storage, communication capacity, computational capacity, or data, that allows a task to be accomplished even in the event of some failures or data loss.
Replication: A simple form of redundancy; duplication, triplication, etc. of resources or data to permit detection of failures or to allow successful completion of a task in spite of failures.
Script: In the context of this document this term refers to a program written in the JavaScript language, embedded in a web page, and executed in the browser of the web client machine when it visits the web page.
Security: General term covering issues such as privacy, integrity, authentication, etc.
Server: In a two-computer interaction pattern, one of them, called the client, initiates a request, and the other, the server, acts on that request and replies to the client. In the case of i-voting, the computer that receives and stores the ballots from voters is the server.
Spoof: To pretend, usually through a network, to be someone or somewhere other than who or where you really are.
Trojan horse: A program with undesirable behavior that operates secretly or invisibly, or is disguised as part of a larger useful program; in this document, the same as "malicious code".
Tunnel: A cryptographic technique in which a computer is in effect attached to a remote LAN via the Internet, even if there is an intervening firewall.
URL: Uniform Resource Locator, i.e. a name for a web page, such as http://vote2000.ss.ca.gov .
USB port: Universal Serial Bus port; a port (connector) on newer computers used for high speed serial communication with attached devices.
Virus: A Trojan Horse program that actively makes, and covertly distributes, copies of itself.
Vote client: The computer that voters use to cast their ballots, which are then sent to the vote server.
Vote server: The computer(s) under control of the county that receives and stores votes transmitted by Internet from vote clients.
Web: The world-wide web, or WWW; the worldwide multimedia and hypertext system that, along with email, is the most familiar service on the Internet.
Web site: A collection of related web pages, generally all located on the same computer and reachable from a single top-level "home page".
Web page: A single "page" of material from a web site.
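The following short sketch is an editorial illustration of the Integrity and Public key cryptosystem entries above; it is not part of the original glossary, and the ballot string is a made-up example.

```python
# Minimal illustration of "Integrity": a cryptographic hash detects any
# modification of the data it covers. The ballot text is hypothetical.
import hashlib

ballot = b"candidate=Jane Doe; precinct=42"
published_digest = hashlib.sha256(ballot).hexdigest()  # published alongside the data

received = b"candidate=John Doe; precinct=42"          # tampered in transit
print(hashlib.sha256(received).hexdigest() == published_digest)  # False: change detected
```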
ATAPI
ATAPI, also known as Advanced Technology Attachment with Packet Interface, is the standard type of connection that keeps everything in your computer connected to each other. This article will go over all of the different functions of ATA and ATAPI, how it works, devices that use ATAPI, upgraded versions of ATAPI, and competing technologies.
What Is ATAPI
ATAPI is a newer version of the old ATA (Advanced Technology Attachment) connection. Where as ATA was exclusively designed for connecting hard drives to the motherboard, ATAPI is made to connect all portable devices to your motherboard including RAM, hard drives, CD-ROMs, DVD drives, and other devices. ATA and ATAPI were invented by Western Digital and the standard is maintained by X3/INCITS Committee. Western Digital was also the first to create Integrated Drive Electronics (IDE), which was the forerunner of ATA. In 2003, ATA was further upgraded to Serial Advanced Technology Attachment (SATA) and has since been dubbed Parallel ATA. IDE is the main connection in your computer where as ATAPI provides the commands that allow IDE to work with newer devices. ATAPI is included in the wide array of connections and technology known as EIDE (Enhanced Integrated Drive Electronics).
How ATAPI Works
ATAPI cables are made up of 40 separate wires that end in 40 slots on either in that are used to plug into the device on one end and the computer on the other end. Electrical output is transmitted from the device through these wires and into the CPU (central processing unit) where it is converted to information. Naturally, information can also be sent from the CPU to the device. It is a simple idea but time has shown a great struggle in perfecting these wires and improving them over the years to be more accessible and reliable and to make them transfer information faster than ever before. We’ll go over the various upgrades of ATAPI later on.
Devices That Use ATAPI
If you have ever opened up your desktop computer to add an extra hard drive, you already know what ATAPI is. ATAPI is the cable that you plug into your hard drive(s). It usually has at least two connections on it: Drive 0 for the master drive and Drive 1 for the slave drive. Your ATAPI cable may be slightly different and include a Drive 2 or even Drive 3 but the basic structure is the same. The ATAPI cable is also used for your floppy drive and CD-ROM drive. ATAPI cables can be used for a wide variety of devices but most other devices have switched to more modern cable systems such as Firewire or USB to be used externally.
To understand how ATAPI has been upgraded over the years, you must first get a grasp of how the technology started out. First, we had IDE which was then broadened to EIDE. IDE started off using ATA but was then upgraded to ATAPI. ATAPI itself has been through five main versions but the whole system went through IDE, ATA 1, ATA 2, ATA 3, ATAPI 4, ATAPI 5, ATAPI 6, ATAPI 7, and ATAPI 8. ATAPI was then upgraded further to SATA and eSATA, which we’ll go over later. IDE and ATA 1 were considered breakthrough technology at the time but looking back, it still had a lot of improvements to look forward to.
Competing Software & Technologies
Like everything else, many technologies have come out to compete with ATAPI over the years and it becomes up to the manufacturer of what type of connection they are going to include in their computers. Below are a few examples of competing software and hardware that have been introduced to the market.
SATA/eSATA
SATA (Serial Advanced Technology Attachment) is similar to ATA but it has a few major differences. For one, ATA required long cables with many pins to transfer information. SATA, on the other hand, brings this requirement down to seven pins and uses less cable. SATA also brought about the use of “hot swapping” and “hot plugging” which basically just means that you can unplug the device while the computer is still in operation. The “serial” part of SATA means that information is sent over the wires one bit at a time where as the previous PATA version (Parallel Advanced Technology Attachment) transferred information two bits at a time. While PATA may seem better, it requires more cables and equipment. Eventually, other technologies made SATA a better choice than PATA. ESATA (External Serial Advanced Technology Attachment) is the same thing as SATA but made for external use.
USB is probably a more familiar name to you than IDE, ATA, PATA, SATA, or eSATA. While USB has been used for less time than any of these older technologies, it has proved to be a very popular connection method. Many input devices for your computer such as mouses and keyboards now come accessible with USB and many other popular devices such as external hard drives and cameras use USB as well. Some devices such as electronic cigarettes and cell phones can even be charged by plugging a specially adapted charger into your computer’s USB ports.
Firewire is basically the same thing as USB but is made exclusively by Apple. Firewire has been included in all of the latest Mac PC models and the Ipod. Firewire strives to be one of the fastest connection methods in the computer industry and is incredibly efficient at pushing large amounts of information through cables in a very short time span. 2 Comments » | 计算机 |
2014-23/2620/en_head.json.gz/6351 | 10 Questions from the Academy: Masaya Matsuura
Masaya Matsuura is a member of the Academy of Interactive Arts & Sciences. He spoke at the D.I.C.E. Summit® in 2008 and the D.I.C.E. Summit® Asia in 2009. He works for NanaOn-Sha.Q: How do you measure success?A: By the number of people who felt happiness through my creations.Q: What's your favorite part of game development?A: Making totally new experiences into something tangible. Naturally a great effort from the development team is essential. In addition, there is a moment in development (usually later in the cycle) where the game finally becomes fun. A lot of developers say that this is the moment when a new experience is born, although I'm unable to properly put the sensation into words.Q: What game are you most jealous of?A: I never feel that way with either games or music.Q: What’s the one problem of game development you wish you could instantly solve?A: Reducing the volume of games that involve hurting people with knives and guns. The recent success of music games and also family games for the Wii springs to mind. Undoubtedly there are other possibilities out there that are as yet undiscovered.Q: On a practical basis, what’s the one thing you’re going to tackle next?A: We are working on a project providing a music game which has high playability despite being for cell phones. I'm really enjoying this interesting project and have high expectations for it.Q: Tell us one of your recent professional insights.A: I’ve been thinking about peripheral-based games these days. It is great that they’ve gotten really popular but I'm concerned about people throwing them out after the boom subsides. On the other hand it is true that games need to have more tangible experiences.I have a toy piano which is not special at all but it’s precious to me. There is no doubt that it’s just a toy and no matter how hard I practice it’s impossible for me to deeply express myself musically with it but despite that I probably wouldn't let it go. I want all games and peripherals to be cherished this way.Q: Are games important?A: Games are very important in order for human beings to expand the possibilities of expressions.Q: What's the biggest challenge you see facing the industry?A: Globalizing the industry without slowing down. Game creators need to wrestle with the necessity to market their games globally. Thanks to the internet we are living in an age of borderless information exchange. Music and film are adapting to this, trying to find new forms of expression. Should games continue to focus their content regionally then naturally we won't compare favorably to these other forms of entertainment. Of course, international events such as DICE Asia help in this respect.Q: Do you think it’s important for developers to continue playing games?A: I have no idea about that but without doubt the amount of time I have to play games is decreasing.Q: Finally, when you look at the future is there one great big trend that affects everyone?A: Games will one day cease to be contained inside physical screens. I believe that someday a new form of media will emerge and take the world by storm. It may not to be called “games” any more despite sharing some similar characteristics. It could possibly be something which is merged with other forms of existing media. | 计算机 |
2014-23/2620/en_head.json.gz/16087 | As this Privacy Policy changes in significant ways, we will take steps to inform you of the changes. Minor changes to this Privacy Policy may occur that will not significantly affect the ways in which we each use your personally identifiable information. In these instances, we may not inform you of such minor changes. When this Privacy Policy changes in a way that significantly affects the way we handle personal information, we will not apply the new Policy to information we have previously collected from you without giving you a | 计算机 |
2014-23/2620/en_head.json.gz/17634 | News, Tools and Techniques for Macintosh Creative Professionals home
Mac Alert The Blog Zone
search forum view posts forum list News Reviews
Audio Video Interactive Graphics Print Features Downloads Most Viewed
Mac Pro Sites Mac Animation Pro
Mac Audio Pro
Mac Design Pro
Mac DVD Pro
Mac Video Pro
Shopper Media Kit Contact Webmaster
Technology: Page (1) of 1 - 10/08/12
Do Androids Dream of Handwriting Recognition?
By Katia Shabanova
Engineers and programmers have been trying for decades to teach computers and other electronics to recognize handwritten text. Only in the last few years have the world's largest software companies made significant progress teaching smartphones and tablets to adequately recognize handwriting and translate it into typed text on the screen. This August, Samsung launched its Galaxy Note 10.1 tablet. As one of the key competitive advantages of the device, the manufacturer cited the S Pen recognition system, created in partnership with Wacom. A similar feature exists in the Galaxy Note II device, which Samsung recently presented at the Berlin Consumer Electronics Show IFA. The handwriting recognition function included in the device makes it just as comfortable to work on with your fingers as with the electronic S Pen. In the latest devices, the S Pen allows the transmission of up to 1,024 degrees of clicking on the screen -- achieving essentially the same precise screen recognition as if writing with an actual pen and paper. The Galaxy Note also supports the Shape Match and Formula Match features, allowing graphs, figures, charts and mathematical formulas to be drawn or recorded using either the S Pen's recognition and subsequent conversion to text, or the graphical format. But why, in 20 years of mobile, is this groundbreaking handwriting recognition feature just now being introduced? The answer is, it isn't. The first attempts to teach computers to understand handwritten text began in the Soviet Union in the 1960s, when the emergence of personal computers and smartphones was, it seemed, in an uncertain distant future. At the time, the space industry was driving the development of handwriting recognition, explains the head of Paragon Software's mobile development division, Alexander Zudin, as a way to avoid sending pencils and paper into space to cut down on costs. Each extra gram sent into space amounted to huge additional costs. Commercial use of handwriting recognition had not been identified at the time, but by the mid-1990s, with the appearance of the first handheld computers known as PDAs, keen interest was shown by various device manufacturers. Some form of handwriting recognition was installed on all the early devices from Palm, Apple (which entered the market with one of the first prototypes of the modern smartphone, Newton) and PDA manufacturers working under the first version of Windows Mobile. Interestingly, handwriting applications did not become the "killer apps" programmers thought they would be, simply because their development didn't meet consumer expectations. They just didn't work well. "'Teaching' PDAs every letter and number written by hand was too complicated," says Zudin, "and the recognition accuracy was very low." The qualitative leap in the handwriting recognition market didn't happen until about 20 years later, due to the massive proliferation of tablets and smartphones. Entering information by hand was a natural development in the new mobile world, with increased opportunities for handwriting recognition on the sensitive touch screens that could be found in nearly every purse, pocket or briefcase. Paragon Software began development of its own PenReader handwriting recognition software in 1997, says Zudin, for early PDA devices. 
Over the next few years, specialized operations, such as orthographical correction of recognition results and support for more than 30 world languages, were added to the PenReader engine, increasing the complexity, exposure and reliability of its handwriting identification. A full consumer version of PenReader with cursive text recognition was released in 2007. Unlike other input methods, PenReader adjusts to the user's natural handwriting style rather than requiring any handwriting modifications or use of complex symbols like those used in early handwriting recognition software. With increased precision in the technology, handwriting recognition has gained traction. The most popular devices and platforms all have handwriting recognition services available, especially as mobile applications. In the App Store, a number of programs can be found that allow graphical information to be drawn on the screen and saved as pictures -- and some programs even have the ability to recognize handwriting, including the popular note-taking app Evernote, which not only supports handwritten text notes, but also audio, speech-to-text and the ability to scan documents with cloud archiving capabilities. The iPad tablet, because of its size and shape, naturally lends itself to being scribbled on by hand, but a special stylus for this task costs about $80. Handwriting apps aren't just for iOS. In Google Play there are a number of applications based on handwriting recognition technology. Handwriting Dato and Handwrite Note Free, for example, are designed to store and catalog handwritten notes. More "advanced" applications, such as MyScript Calculator, are designed for writing complex mathematical calculations by hand. A new handwriting recognition feature is also expected to be supported in the latest Microsoft Windows 8. The handwriting recognition function was already available to Windows 7 users, but only enabled by expensive electronic pens and limited to just a few Windows 7 tablets. Microsoft has made a significant step forward with Windows 8, according to one of the beta testers of the system, making writing on the screen almost as easy as writing with a pen on paper. It is not only gadget manufacturers and their software developers that have shown interest in handwriting recognition, but also other industry players. The German automotive manufacturer Audi equipped their 2011 A8 and A6 models with on-board computers that support handwriting functions capable of entering information into the Multi Media Interface, which includes the car's media player and navigation system. Audi's rationale was that some users find it easier to manually enter information rather than pressing buttons. Paragon Software has seen increased awareness from manufacturers looking to pre-integrate handwriting recognition into personal computing devices and electronics of all types. PenReader, utilized in place of the default keyboard in any application, can be easily configured to integrate completely with any operating system, which is why the company recently released the PenReader API as part of a software development kit aimed specifically at device manufacturers. Zudin says that success of its consumer version has drawn interest from a spectrum of electronics makers for a range of end uses that the company never envisioned when it began tinkering with handwriting recognition 15 years ago. 
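To make the underlying idea concrete, here is a minimal sketch of the classic train-and-classify pattern using scikit-learn's bundled digits dataset. It is an editorial illustration only; it is not PenReader's or any vendor's actual engine, and real cursive recognition is far more involved.

```python
# Toy handwriting recognition: classify 8x8 images of handwritten digits.
# Illustrative only; unrelated to PenReader, S Pen or Google Handwrite.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(gamma=0.001).fit(X_train, y_train)   # train a simple classifier
print("accuracy:", clf.score(X_test, y_test))  # typically well above 0.95
```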
Google, also recognizing the attraction of alternate input methods, launched its Google Handwrite feature in July to facilitate handwritten Google web searches on touchscreen smartphones and tablets. Available in 27 languages for iOS5 and higher, and Android smartphones running 2.3 and higher (4.0 and higher on tablets), Google Handwrite looks to be a natural enemy of anti-keyboard pecking input apps like Swype, Swiftkey or PenReader, and even the company's own Google Voice Search speech recognition input method. Not so fast. Literally. Writing English characters by hand is still viewed as more cumbersome than typing, many believe. The path of adaptation seems to be to satisfy the ability to input information quickly -- whether by typing with your fingers or through voice recognition. There has been significant progress in these areas, with pre-integrated alternate input methods in high demand. But, for whatever reason, handwriting recognition still remains a niche. Paragon Software Group's lineup of effective, quality reference applications enrich mobile learning for users of 30 global languages with more than 350 electronic dictionaries, encyclopedias and phrase books developed in conjunction with the world's leading publishing houses, such as Duden, Berlitz, Langenscheidt, Merriam-Webster, Oxford, PONS, Le Robert, VOX, and others. For up-to-the-minute information on Paragon Software Group happenings, including news and information on new products, please connect with the mobile division on Facebook or Twitter @Paragon_Mobile.
Katia Shabanova is director of public relations at Paragon Software Group where she handles editorial exposure for the company's mobility and disk utilities divisions. Ms. Shabanova studied linguistics at Moscow State Linguistic University and University of Texas at Austin; English and German philology at Santa Clara University, California; and earned Master of Arts degrees in English and German Philology at Georg-August University of Göttingen, Germany. You may contact her at kshabanova@paragon-software.com or connect with her on LinkedIn http://tinyurl.com/8nxzeou
Related Keywords: handwriting recognition, Galaxy Note 10.1, electronic S Pen
Wavee Weekend
Wavee Weekend font family
Anuthin Wongsunkakon is a Bangkok-based type designer and one of the partners of Cadson Demak, the first Thai communication design firm with typographic solution. He began studying graphic design at Rangsit University. After completing his bachelor degree he continued his studies in New York at Pratt Institute where he was trained by Antonio Dispigna.
Much of his work deals with print, logotype, and lettering, but he is best known for his contribution to Thai typography and for reintroducing custom font design service to the local business industry. Anuthin had provided fonts for many leading companies in Thailand including Advance Info Service; The largest mobile phone operator, Creative Technology of Singapore. dtac - Telenor (Thailand), Ikea (Thailand), Nokia (Thailand), CAT Communication Authority of Thailand, and Tesco Lotus. In publication area including Thai Edition of Men’s Health, Arena, Wallpaper* and L’Officiel. Aside from his custom-made ones, his Thai and Latin typefaces has been used by various enterprises, most notably in local and international magazines.
Besides being a designer, Anuthin is known as a writer and design educator. He frequently writes for several influential Thai magazines including Wallpaper* (Thai Edition) and art4d. In education, he first started his teaching career at Rangsit University, and later Bangkok University and Chulalongkorn University. Most of the articles from his controversy book on graphic design education in Thailand, Note The Norm, along with his various essays can be viewed at anuthin.org, a graphic design and typographic design archive weblog he found in 1999.
http://www.anuthin.org
http://www.anuthin.com
http://www.cadsondemak.com
http://www.facebook.com/anuthin.org
On the following page, you will find an interview with Anuthin Wongsunkakon, in which the designer of Helvetica Thai talks about various aspects of typography and typeface design. Continue reading this article: Interview with Anuthin Wongsunkakon
Test-drive Wavee Weekend in Typecast.
Wavee Weekend Volume (4 Typefaces) Details
From LXF Wiki
Revision as of 17:02, 10 Feb 2009; view current revision←Older revision | Newer revision→
Flocking together
Linux Format was rather surprised to meet Louis Suárez-Potts in Montreal. What's an office app developer doing attending the Libre Graphics Meeting?
Earlier this year, Linux Format attended The Libre Graphics Meeting at École Polytechnique de Montréal in Canada, which touted itself as being, "all about participation; artists and developers, bring your laptops and show us what you can (and can't yet) do. Organise a BOF about your favourite project or feature." Unlike a lot of other conferences, which tend to be dominated by marketing and sales people, lead organizer Louis Desjardins arranged The Libre Graphics Meeting with the intention of it being for developers in particular: the air was abuzz with the heady idea of cross-pollination between open source projects and not just those concerned primarily with graphics.
We caught up with Louis Suárez-Potts, Community Development Manager and Community Lead for the world's leading open source desktop application, OpenOffice.org (http://www.openoffice.org), to seek his opinion on the challenges facing many open source projects, particularly the implementation of the Open Document Format (ODF), and the advantages and disadvantages of corporate contributions to open source development.
Linux Format: What else fills your working day, as well as your broad and exhausting-sounding remit at OpenOffice.org?
Louis Suárez-Potts As well as being the primary liaison with the OpenOffice.org (OOo) primary contributor and sponsoring company Sun Microsystems, CollabNet (www.collab.net), the software company hosting OpenOffice.org, employed me as the Senior Community Development Manager and open source strategy consultant from 2000 2007. Recently I've been examining the problematics of "infonomics," or the political economy of information. Some of my publications and papers can be found at http://homepage.mac.com/luispo/. One of my blogs touches on OpenOffice.org and open source; http://ooo-speak.blogspot.com, another centers on cultural criticism.
LXF: This second question may seem impertinent, but why did you want to come to the Libre Graphics Meeting? It seems like a strange choice of conference for someone so involved with the more er, Officey side of productivity applications...
LSP: Two main reasons. Firstly, this is a developer-focused conference. Most of the people here are involved with Open Source Graphics in some way. And of course there's the open document format. The charter of the ODF focuses mostly on documents, but there's also scope for graphics to be used in presentations and so forth. There has been a lot of interest and activity in how OpenOffice can transcend the boundaries of the traditional office suite, to start including things that are far more interesting from a creative point of view, and to solve the problem of how to deploy the ODF in other programs that aren't a part of the office suite. For example Scribus; if you can import your text in OOo format, why not also be able to save and share documents as an open source document, like ODF? The advantage is clear if for example you need to update a page regularly with data from a spreadsheet.
LXF: And the second reason?
LSP: It's not about competition we're not all competing with one `bad guy'; in open source generally, we need to pull together. Whatever open source project you're involved in, you have your own mission, but the aim of all our activities is primarily with offering usability and choice to the end user. The very notion of collaboration should extend beyond the boundaries of any open source project. A common format is important, but so are things like similar technology, architecture, code etc they're all important. No-one's going to actually buy applications like KOffice or OpenOffice.org, but they're going to choose one of these open source alternatives to the ubiquitous Microsoft Office on the strength of not only its own abilities, but also the way in which it interacts with the other programs on their desktop.
LXF: That is almost the entire purpose of open source, isn't it? It's one of the key differentiators between open source and proprietary software.
LSP: Supposedly! The Hobbesian ideal (that unrestrained competition is good for business) can be incompatible with the open source ethos; I'm not saying that developers should all collaborate on each other's projects that would be unworkable but what's needed is collaboration with other projects to capitalise on our strengths to address common issues. One good reason for OOo collaborating with Scribus: the latter application is very discriminating regarding fonts; they have to meet specifications that go far beyond what most other applications demand. The two teams have discussed the possibility of establishing a library of fonts ("library" in the old sense: a repository) that have been vetted according to Scribus' standards but can also work with other compliant applications, such as OOo. This would not only help Scribus and OOo, but the user too.
And, from my perspective, having ODF implemented by Scribus and other relevant applications seems desirable, though not as compelling as perfecting import functionality. One of the coolest things about The Libre Graphics Meeting is the superb broadsheet published by the conference team. It shows how good Scribus is (Louis Desjardins is a project member) and showcased the seriousness of the effort put into the event.
LXF: By extending an open source program's functionality, it will attract a broader base of potential users to migrate from the proprietary options? LSP: I'm not for one moment suggesting that OOo ought to be the Borg of products, assimilating everything! Something as simple as a pop-up box in OOo prompting a user about the option of using Scribus if the way in which they were formatting their file suggested to the program that they would be better off using a DTP application would be a great example of the strides in strategy that open source could be making in interoperability and user-friendliness. Likewise with graphics the same construct could point OOo users at Gimp if they are attempting to do anything more graphically complex than drawing a box in Ooo.
It's all about enabling users. Many if not most computer users never really go through the process of actually figuring out what the best program would be for completing the task in hand they just end up using whichever product is already there, the one that came with their computer (or support package, in the case of larger business users) rather than evaluating the different options that are available. In the open source world, you lose nothing by advertising somebody else's product anything that enlarges the sphere of open source users is a good thing. LXF: What sort of response do you get from other projects when you talk about this kind of interoperability?
LSP: Most of them are perfectly happy with this. Working together on compatibility is important. Yesterday me and some of the OOo team were talking with the KOffice guys and sharing tips about refining compatibility in general, and what the demands are surrounding ODF. And how to license the technology an important consideration with the release of GPL 3. LXF: Do you think that sponsor involvement introduces an overtone of obligation for attendees at events such as this?
LSP: It depends on the organisation. It can be difficult ethically it's one thing at a conference to be treated to a sumptuous dinner by a commercial entity showing its good faith with the aims of the open source, but higher levels of involvement can introduce a level of commercial pressure that doesn't always sit well with the academic nature of discussion. Rather than a junket, perhaps the money could be better spent by sponsor companies on, for example, training project managers how to work in a more non- hierarchical fashion...
It's clear that a lot of companies feel that there is a conflict between their business model and the academic aims of open source, but the lines between sponsor and contributor are becoming more blurred than they used to be Novell, it could be argued, falls into both categories: providing conference funding on one hand, and giving code to the community at the same time. There should be clear ethical boundaries, but who is going to take the decision at exactly where the line is drawn? Most corporations would typically fund some sort of event, but organisers need to be careful, otherwise someone's going to ask the awkward question, "How much does it cost to buy the interest of the developer?"
In my opinion, some companies don't really act as a sponsor in a true sense they're prepared to contribute resources by supplying people to work on areas of programming that lie within their own development framework, to the detriment of unassociated areas, and don't really work with the non-developer community that is, other businesses that are interested in the exact same open source areas on which the sponsor company is contributing to. These other parties will eventually get access to the code, because it all goes back into the source, but what they won't have an insight into the reasons why the code was put together in that way.
LXF: Is there an unassailable division between business and the community?
LSP: Not at all. In Ooo's case, back in 2002-3. the compatibility for the upcoming Mac OS X was handled by a widely distributed group of international contributors working towards a common goal what better example of community could there be? It would be great to have companies like Novell working more fully with the community, and not just in this market-heavy fashion. Novell sponsors OOo Con, which is great, but rather than a barge cruise, using that money to help with QA, localisation and training etc would see a better return for both Novell and the community as a whole.
People think divisions exist because of the nature of different participants' contribution or sponsorship. For instance, IBM can't be more contributive to OpenOffice. org because Ooo competes with some of its existing products; or why should Adobe contribute to an open source project when it competes with a product that's an important part of its revenue stream?
LXF: Is it difficult to co-ordinate a project that has contributions from both volunteers and paid staff alike?
LSP: In the classic corporate structure, you have marketing, who say "This is what users want" then they synthesise that or clarify it for the engineers; then the engineers say "You've gotta be kidding!" Well, in open source, "You've gotta be kidding!" is phrased a little more harshly, as volunteer developers can just walk away because he or she has no interest in doing that they want to do something that's more technologically interesting for them. This is the classic liberal approach (in the sense of liberty, rather than politics) and much of open source is these days conducted by people used to working in this fashion.
When a company is allocated a certain number of developers to work on a particular area, there can be tension as the developers aren't necessarily that keen to let the company allocate resources or dictate which areas of the project on which work has to be done. These areas aren't always that attractive to developers, because there is less opportunity to innovate. Many developers don't want to just produce open source equivalents of functionality already provided by programs from companies like Microsoft and Adobe; they are motivated by the opportunity to implement something that's cool. If marketing doesn't consider this to be worthwhile, or consumer group can't initially see the appeal for end-users, then commercial concerns can end up limiting the resources allocated to evolution of new features, and the developers end up having to work on these areas on their own rather than with the support of the sponsors. A lot of open source developers both paid and volunteers might be frustrated at this situation, but realise that this is the way things work at present, and just get on with it anyway.
LXF: Do you think that the PR aspect of sharing code with the open source community is an important motivator to do so?
LSP: I would be sceptical about how this would apply to the market as a whole, but it certainly has its advantages. Microsoft spends more money on recruiting developers than a company like Sun can. But you don't want Microserfs though: as a recruiter, you want developers that appreciate the technological and social freedom that open source provides the PR aspect is important way of attracting innovative and interesting people; Sun's accomplishment is proven in that it's getting a lot of good developers now.. In contrast to what I said earlier about enterprise having too much influence in aspects of a project relating to their own profit model, it would be true to say that involvement of profit-driven companies can provide important direction for open source projects. Look at Debian...
LXF: Debian? Ha! Ha!
LSP: You may laugh! Debian is an experiment in democracy run amok. It's very proud of that: it's radical, but such a rampant democratic approach but can dissuade businesses' opinion of open source as it can impede progress. Development doesn't take place in a vacuum; in business, to actually launch products you need someone who's responsibility it is to say "this is where we're going," and "this is what we do next!" that's why Mark Shuttleworth (supremo of Ubuntu, based on Debian) has been so important for Linux. LXF: It's interesting to note that though ODF is not supported directly by Windows Office 2007, Microsoft finances the ODF add-in for Word project on SourceForge to create a plugin for Microsoft Office that will be freely available under a BSD license. How many companies understand that?
LSP: As open source becomes more widespread, companies are getting a feel for what it's about, and are headed in the right direction. When they recognise that they don't have to do all their research in-house, everyone wins if they fund an open source project, it's like starting a subsidiary business, but with a lot less risk and is in many cases more cost-effective. LXF
Retrieved from "http://www.linuxformat.co.uk/wiki/index.php/Louis_Su%C3%A1rez-Potts_-_interview"
Categories: LXF Interviews | Graphics Views
Create an account or log in Navigation
Main Page Community portal Current events Recent changes Random page Help Search
Special pages About LXF Wiki Disclaimers | 计算机 |
By Atanu Datta on January 1, 2009 in For You & Me, Interviews · No Comments
Paul W. Frields has been a Linux user and enthusiast since 1997, and joined the Fedora Documentation Project in 2003, shortly after the launch of Fedora. As contributing writer, editor, and a founding member of the Documentation Project steering committee, Paul has worked on guides and tutorials, website publishing and toolchain development. He also maintains a number of packages in the Fedora repository. In February 2008, Paul joined Red Hat as the Fedora Project Leader. Naturally, on the occasion of the 10th release of Fedora, we decided we needed the Project Leader’s insight into what goes inside the project. So, here’s Paul for you…
How and when did you realise that FOSS was something you were really interested in?
I had been using FOSS professionally for a few years as a forensic examiner. Then I started teaching it to others as a means of both saving taxpayer money, and producing results that could be independently verified, since the code was open and available to anyone. Also, if there were problems, frequently we could discover the reasons behind them and resolve them ourselves. I was really fascinated with the idea that all this great software was produced by people who, in many cases, hadn’t ever met in real life, collaborating over networks simply in pursuit of better code.
So what is your opinion on proprietary software?
Proprietary software is probably only useful in very niche cases—cases where the overall body of knowledge about whatever the software does is very rare or hard to obtain. For general-purpose uses, proprietary software has really had its day and is on the way out. The idea of paying for personal information management software, database services or word processing is pretty antiquated.
You’ve been in systems administration in previous work places, are an active documentation writer/contributor, and also maintain a few packages yourself. Can you explain a bit about each of these roles, and whether you’d rather call yourself a writer, a developer or a systems administrator?
Actually, I wasn’t ever really a true systems administrator. I did some work that bordered on sys admin work, but really, I was more of a dabbler. That having been said, I did get to touch upon a lot of different technology areas, from scripting and clustering, to building customised kernels and distros. I also spent a lot of time documenting what I did for other people, and teaching them hands-on. So I think the term ‘dabbler’ is probably best—Jack of all trades and master of none!
How did you get associated with the Fedora Project?
I started with Slackware but used Red Hat Linux starting with 4.1, all the way up until Red Hat Linux 9 was released. I didn’t watch project schedules that much or subscribe to a lot of lists, so I didn’t know about the Fedora Project until the fall of 2003. I was delighted to have a chance to contribute back to a community that had helped me build my skills, knowledge and career. I decided to get involved with documentation because I wasn’t really a software developer, but I was a decent writer.
What’s your role as the Fedora Project Leader?
The Fedora Project Leader is much like the Fedora Project’s CEO. Ultimately, I’m accountable for everything that happens in Fedora. Red Hat pays me to make sure that the Project continues to fulfil its mission as a research and development lab for both the company and the community, and that we are consistently moving forward in our mission to advance free and open source software worldwide.
Among Fedora Project contributors are those who are RH employees, and there are those who work as volunteers. How do you ensure there’s minimal clash of interests? For example, what if there’s a significant difference of opinion on the direction Fedora should be headed towards?
To minimise the chances of this happening, we have a well-defined governance structure. For example, we have a Fedora Engineering Steering Committee (FESCo) that is entirely community-elected and makes technical decisions on features, schedules, and is in charge of special-interest technical groups such as our packaging group. Having a central place for technical decision-making means that there’s a regular venue where arguments can be heard from both sides whenever someone is suggesting a change, whether that person is a volunteer or a Red Hat employee. We’re all members of the same community.
Just about every engineer in Red Hat works for some time in Fedora, since Fedora is the upstream for the Red Hat Enterprise Linux product. Part of my job, and the job of the Fedora Engineering Manager, Tom ‘Spot’ Callaway, is to make sure that work is coordinated between the internal Red Hat groups that work in Fedora and the external community. When decisions are made by the community, we make sure that the internal groups participate and are informed, and when there is something that the internal groups want to pursue, we make sure they are discussing those needs with the community. In general, it ends up being a pretty smooth interaction.
What is Fedora’s vision? By whom or how is it set or defined?
The Project Leader sets the vision for Fedora, which goes hand-in-hand with our mission to advance free software worldwide. The FPL’s vision is often about finding the next big challenge for our project to overcome, as a community. In the past, those challenges have included establishing governance, unifying the way we manage our software, and creating a leadership team for Fedora inside Red Hat. I’ve spent the last ten months turning my sights outward on two problems—making it easier for people to join our community, and enlarging that community to include all those who remix or reuse Fedora in various ways. Really, the vision can only be a reality if the community agrees it’s worthwhile, and it’s my job not just to identify that vision, but to enlist the community’s help to realise it.
There are some distributions that don’t really care about upstream, where the distro developers and maintainers want to push their patches first into the distros to make it stand out from the rest of the crowd. Fedora, I believe, has a strict policy of working with the upstream, and whatever feature sets are available in the final distro are completely in sync with the upstream. What’s your take on this and why do you think upstream contribution is more important?
Upstream contribution benefits the entire FOSS community. It’s how the open source development model works—it’s about collaboration and constant code review and refinement. When a distribution changes behaviour in a way that goes against the upstream model, three things happen.
First, there’s immediately an uneven user experience. Users who try out newer versions of the same software from the upstream find that the behaviour changes suddenly and without warning. They think this is either a regression or a mistake on their part, when in either case the difference has been caused by the distribution vendor.
Second, the workload on the maintainers of that distribution begins to multiply and, in some cases, increase exponentially. The further out of step with the upstream the distribution becomes, the more difficult it becomes to integrate the distribution-specific changes with new upstream releases as time goes on.
Third, because the upstream is testing the interaction of their software with other pure upstream releases, arbitrary changes downstream create problems in those interactions with other packages used in the downstream distribution. Every change begets more changes, and the result is a rapidly accelerating cycle of bugs and resulting patches, none of which are likely to be accepted upstream. So once you get on that treadmill, it’s very difficult to get off without harming the users and the community.
What’s the role of the Fedora Project Board, and you, as its chairman?
We have a Board with five community-elected members and four members appointed by Red Hat, who make policy decisions for the project as a whole. We document our mission and meetings through our wiki page.
One of my jobs is to chair the Fedora Board, and ensure that the governance of Fedora is working as smoothly as possible. I also am the person responsible for choosing the people who will fill the seats reserved for appointment by Red Hat. We turn over roughly half the seats on the Board after each Fedora release, so that there is always a chance for the community to make informed decisions about the leadership of the Project.
I always try to respect the need for balancing different constituencies on the Board, so these appointments are not limited to Red Hat employees. Last election cycle, for instance, I appointed Chris Tyler, a professor at Seneca College in Toronto and a long time Fedora community member, to the Board, which has proved to be an excellent choice. This flexibility goes hand in hand with the community’s ability to elect who they wish to the Board, so we tend to always have a mix of Red Hat employees and volunteers, which changes in a fairly smooth and continuous way every six months.
Tell us something about the Fedora Docs Project. What are the objectives and the to-dos?
The Docs Project is responsible for creating user-oriented guides and tutorials for Fedora, and also to keep our wiki-based information fresh and well-groomed. Although I don’t get to participate in the Docs Project as often
or as deeply as I used to, I still spend significant time keeping up with its tasks.
Right now, the most important task on which the Docs Project is engaged is fixing up our own process documentation, so we can enable all the new contributors to participate fully in writing, editing and publishing. We have an enormous virtual hack-fest happening over the Christmas and New Year holidays, where we will be training new volunteers on how to use tools, edit the wiki, and publish to the Web. In addition, we hope to also make some choices for an upcoming content management system that will make all these tasks easier in the future.
What do you think are the best features of Fedora 10? And your personal favourites?
I’m very excited about two capabilities in particular. One is PackageKit, and the other is our enhancements to virtualisation. Richard Hughes has built up some incredible capabilities for desktop interaction that, for Fedora 10, allow automatic search and installation of media codecs, which is very helpful for desktop users. In the future, though, these capabilities will be extended to add on-the-fly installation for user applications, fonts, hardware enablers, and a lot of other features. That’s something that proprietary desktops can’t provide because their model revolves around selling software to users, whereas we’re in the business of giving it away.
On the virtualisation front, we’ve made a lot of advances in areas like remote installation and storage provisioning. We’re showcasing a lot more flexibility for administrators who want to go from bare metal to a complete virtualisation platform without having to spend time in a noisy closet with their equipment. I’d like to see a lot more people looking at the power and capability of KVM, which is the Linux kernel’s built-in hypervisor. With 64-bit hardware becoming the norm, everyone’s system is potentially a virtualisation powerhouse and we’re going to be in a great position to tap that.
Of course, the advances we’ve made in PackageKit and in the virtualisation system pieces like libvirt and virt-manager are all run as independent upstream projects, so all distributions and users can benefit. I imagine that you’ll be seeing these advances in other distributions soon enough, but the Fedora platform tends to make that possible through our commitment to leading-edge development through the upstream.
What’s your opinion about Linux on the enterprise desktop? In which sectors do you find people are most likely to resist switching over from Windows and Macs? And what do you think the reasons are, behind their resistance?
As I mentioned before, I think the general-purpose case for proprietary operating systems on the desktop is becoming harder and harder to win. Information interchange becomes trickier, and vendor lock-in is too expensive a proposition for businesses that have to find a steady profit margin in a highly competitive, globalised market. Ultimately, I think there’s a broad range of businesses that are served by integrating open source technologies at every level from the edge to the desktop, and one of our purposes in Fedora is to provide a wide proving ground for those technologies, whether they’re targeted at the desktop user or the systems architect/administrator, and make them available to as large an audience as possible, for contribution.
There are some essential professional-quality software that are still missing from the ‘FOSS desktop’, viz. layout software like QuarkXpress and Adobe InDesign, that media houses like ours depend on; or there are professional sound and video editing tools that studios depend on. How can projects like Fedora, or FOSS heavyweights like Red Hat, encourage and facilitate developers of FOSS alternatives to develop something as good as the Linux (kernel), on which professionals can bet on?
By providing a robust platform for development, integration, and deployment that includes the latest advances in tools and toolkits, and making it flexible enough for ISVs and appliance builders to develop cost-effective and innovative solutions for their customers. That’s something at which the free software stack excels, and which we in Fedora and at Red Hat are constantly advancing through our upstream development model.
We can also advance by illustrating the open source development model as the best way to provide features faster to users and customers. Many software vendors that ‘get it’ are already moving to the way of doing business that Red Hat has been proving for years. They have a stream of constantly developing technology on the one hand, which feeds a stable, supportable branch on the other, backed by services, support, and training with extremely high value, for which users and customers are willing to pay—and that puts them in charge of their own technology roadmap.
What’s the road map of Fedora, and what can we expect in Fedora 11?
As we set the schedule for Fedora 11, we acknowledged that we were getting towards the time when Red Hat will be looking to branch our feature set for use in its next edition of the enterprise product, Red Hat Enterprise Linux 6. So there’re quite a few features we want to get entered into our Fedora 11 release, and we track those openly and transparently, like everything in our project. Have a look at feature list for Fedora 11.
That list changes over time as the FESCo evaluates developers’ proposals, and makes decisions on how best to include that work in the Fedora platform. In addition, there are always quite a few features and initiatives that come out of our engineering-focused North American FUDCon conference, which is happening January 9-11, 2009, in Boston. I would expect in the weeks following that conference, that the list will be expanding quite a bit, but some interesting additions are the Windows cross-compilation toolset and the introduction of DeviceKit.
Thanks for your time, Paul. Is there anything else you’d like to share with our readers?
I have never been so excited to be part of free and open source software. I would encourage readers to not only use the software we develop, but to consider how they can get involved in Fedora to advance the FOSS ecosystem as a whole. Even doing simple things like filing bugs, fixing text on a wiki, or writing small tutorials, can be useful to hundreds or thousands of people. Getting involved in free software was one of the best and most fulfilling decisions I’ve ever made, and I hope you’ll consider making the jump from consumer to contributor as I did.
Thanks for the chance to talk to your readers!
Tags: documentation, Fedora, FPL, Free Software, Interview, open source, Paul Frields, proprietary, Red Hat Article written by:
Atanu Datta
Grounded for Life. Pink Floyd. Amarok. Android. That 70's Show. Robert De Niro. Bass guitar. WordPress. Jana Novotná. Rock & Roll. Ah... well, the madcap laughed at the man on the border. Hey, ho, huff the Talbot... Lunacy FTW!
> Brian Keller
> January, 2006
Will XNA tools be able to help reduce game sizes?
In this article, Aaron Stanton speculates that XNA technologies might be capable of helping game developers reduce the size of their games in order to fit more game content onto DVD's. I'm pleased to confirm that based on our tests with early versions of XNA technology on actual game sources we have indeed been able to identify some fairly substantial size reductions on several games. I should point out that reducing the size of games is certainly not the only benefit we will be enabling with XNA, but it does happen to be a nice side effect of using some of the new technologies we're building.
For the rest of this entry I'll provide a bit more context into how this game size reduction is achieved. I won't attempt to describe in full detail everything that our technology will enable since we are going to be presenting a lot more information at the upcoming Game Developer's Conference (don't worry if you're not attending GDC - the same content will also be available on our Web site after GDC).
The particular XNA technology which enables game size reduction (along with a slew of other benefits) is one we are currently calling XNA Build. XNA Build is a tool for helping game developers manage their game asset pipeline(s). If you're not familiar with a game asset pipeline, this is basically the way in which content finds its way from source assets (files created by tools such as 3D Studio Max and Maya) into the final format which is expected by the game engine. Along the way there are typically several tasks which are performed as files progress along the asset pipeline, resulting in a format that is consumable by the game engine.
To describe what exactly happens to a file as it moves through the asset pipeline I'll use as an example a canonical task known as a triangle stripper. A triangle stripper can efficiently reduce the extremely high-fidelity source assets from millions of polygons into a more digestible polygon count (thousands or tens of thousands of polygons), usually with the goal of yielding a higher frame rate in the game by reducing the number of polygons which must be processed. Now you might wonder why the content creation tool doesn't just save the file using fewer polygons and in an engine-consumable format to begin with, thus eliminating the need for a pipeline (at least in our simplified example). There are actually several good reasons why a game asset pipeline exists. A very basic reason is that you don't want to lose the high-fidelity version of the artwork that your artists spent hours or days creating because you might need it later. You could later decide to target a more capable engine or platform where you can afford to process more polygons while maintaining a high frame rate. In this case you would want to run the high-fidelity source art through the triangle stripper again, but on a more conservative setting. Another reason is that when you preview your game you might discover that you need to brighten a set of textures so they are easier to see. As anybody who works with audio or video editing can attest to, it's much better to start with the highest possible quality of an asset when performing such operations because there might be quality degradation suffered during the various edit steps. Yet another reason to have a game asset pipeline is that game development is a very iterative process - you may decide that you're willing to have a slightly slower frame rate in exchange for better detail, and hence you want to instruct your polygon stripper task not to strip out as much detail and to generate a new build to preview. Rinse, wash, repeat until you find the right balance - it's much easier to do this with a build pipeline than by having to go into the content creation tool and re-exporting everything each time. There are several other reasons for having a game asset pipeline which I won't go into here, but if you're interested in this topic I would recommend Ben Carter's book called The Game Asset Pipeline.
Now that we have the lengthy description of a game asset pipeline out of the way, let's get back to XNA Build. XNA Build is designed to help studios with their game asset pipeline by providing a tool for defining, maintaining, debugging, and optimizing their pipelines. XNA Build can also provide a great deal of additional information a studio can benefit from by capturing build pipeline dependencies. XNA Build can gather a lot of dependency information simply by "watching" files move through the pipeline. XNA Build also provides API access to its built-in dependency store so that tools such as Max and Maya can provide additional dependency data. Once you start collecting this dependency information, XNA Build can begin to give a studio a clearer view of the files going into their pipeline, the relationship of those files to each other, and the files coming out of the pipeline. This dependency information can unlock several benefits for the studio, such as enabling incremental builds, or giving an artist insight into where a given texture is used in the game. Now this might sound like studios should already have this information, but very few studios we talked to actually did. Most studios are stuck using primitive batch files or Perl scripts to model their build processes, and they rarely have the time or resources to invest in building out the infrastructure of their build pipeline. There are also other reasons studios don't have centralized dependency information, but I won’t go into all of them here. XNA Build, in addition to providing a productive way of creating and maintaining the build pipeline, is designed to start collecting and surfacing all of that dependency information to enable a studio to be more productive.
So how do we enable game size reductions? Well it turns out that once you have good dependency information (again, something that most studios don't benefit from today) you can begin to generate a list of files which are being deployed as part of your game but never actually used. This orphaned content can result from obsolete files, misspelled copies of files, or any number of factors. Without a reliable dependency tree, it's usually not worth risking removing files even if they appear like they might not be in use. Hence, the common practice is just to leave them in the pipeline, and ultimately these unused files could get deployed along with the game which leads to a larger game size. With XNA Build, since we can efficiently collect and report on this dependency information, it allows a studio to finally clean up their pipeline and game layouts by looking for files which are in the pipeline but never used by the game.
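To illustrate just the idea (a hypothetical sketch in Python, not XNA Build's actual dependency store or APIs), once you have dependency edges you can walk the graph from the assets the game actually loads and flag anything deployed that the walk never reaches:

```python
# All file names here are made up for illustration.
deployed = {"hangar.lvl", "mech_a.mdl", "mech_b.mdl",
            "steel.tex", "steel_copy.tex", "old_logo.tex"}

depends_on = {                          # edges observed as files move through the pipeline
    "hangar.lvl": {"mech_a.mdl", "steel.tex"},
    "mech_a.mdl": {"steel.tex"},
    "mech_b.mdl": {"steel_copy.tex"},   # mech_b is never referenced by any level
}

def reachable(roots, edges):
    """Walk the dependency graph from the game's entry assets."""
    seen, stack = set(), list(roots)
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(edges.get(node, ()))
    return seen

used = reachable({"hangar.lvl"}, depends_on)   # roots = what the game actually loads
orphans = deployed - used
print(sorted(orphans))   # ['mech_b.mdl', 'old_logo.tex', 'steel_copy.tex']
```

With real dependency data behind it, that reachability question is what finally makes it safe to clean up a game's layout instead of shipping the extra files just in case.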
So does it work? The theory sounds right, but can we really reduce the size of games? Our results from looking at several titles with early versions of XNA Build indicate that we certainly can. Results will vary from game to game, of course, but one of the games I can share information for is a PC title called MechCommander 2. It turns out that 40% of the textures which shipped on the final MechCommander 2 game disk were never actually used in the game! And MechCommander 2 is an older title with fewer assets and less pipeline complexity than most modern titles, so for newer titles and next-gen titles we expect to see even better results. There were also some additional files besides textures that went unused, such as a handful of mech models. I should point out that a 40% reduction in textures does not necessarily equate to a 40% reduction in overall game size, but it is a good proof point that we're headed in the right direction.
But should we care? As Aaron's article concludes, the games he sampled aren't yet coming close to filling up a DVD9 disk, which is the format used by the Xbox 360 and the original Xbox. But as tomorrow's games become bigger, with more and more of that HD content we've come to love and expect, we want developers to concentrate on building a great game and worry less about being constrained by game size. And for PC games, where CD media is still the norm to appeal to the broadest customer base possible, this can help a studio reduce the overall number of CDs it has to manufacture while delivering as much game content as possible. That's why we believe that technologies such as XNA Build will play a very useful role in this and other game development scenarios.
There's also another way we benefit from reducing game sizes which is worth mentioning. The size of downloadable games and demos, such as those which have been available for the PC for years and are now on console with the introduction of the Xbox 360 Marketplace, can be reduced as well. Smaller download sizes obviously mean that gamers can spend less time waiting for a download and more time actually playing the game. And after all, enabling great game-playing experiences is the ultimate goal of XNA.
XNA: Enabling studios and publishers to develop better games
The past few months have been keeping me pretty busy, as you might discern from the lack of blog entries. That's because after we shipped Visual Studio 2005 on November 7th I transferred into a new role at Microsoft. My new job is on the XNA team, which has me working in the Xbox group. It should go without saying that the Xbox group is an amazing place to work, especially for a self-professed game addict like myself. But I don't get paid to play games all day. Actually, I don't get paid to play games at all really. In fact, if it's any consolation to the folks who continue to stalk their Best Buys, I also had to hunt down and buy my Xbox 360 at retail, which was without a doubt the best purchase I've ever made. I'm hooked on the 360, and the system has absolutely blown me away. But I digress... so what do we do on the XNA team? Well, the easiest way to describe it is that it's our job to build tools and technologies which will help game developers produce better games. And that, to me, is a very exciting challenge, with an even better payoff: better games.
The XNA mission statement is:
XNA enables studios and publishers to develop better games, more effectively, on all platforms.
Let's break that down a bit:
"XNA enables studios and publishers" - in case you're not familiar with the way the video game industry works, studios are the organizations that make the games (design, programming, art, testing, etc.) and publishers are the organizations that usually fund advances to studios and ultimately take those games to market (similar to the way a book publisher operates). There are some exceptions to this rule, of course - like EA and Microsoft, who act as publishers and developers simultaneously - but even within those organizations there are usually separate teams focused on the studio and publisher functions. With XNA, we're aiming to help both the studios and the publishers.
"to develop better games, more effectively" - Today, developers and publishers are forced to spend too much time and energy solving technology challenges - whether it's eeking out extra performance from their game engines, meeting the escalating demands of content creation, or maintaining complex tools and scripts to manage their game asset pipelines. If we can reduce the amount of effort required to solve even some of those technology challenges, then it will free up studios to focus on cramming more fun into their gameplay.
"on all platforms." - That's right. All platforms. The state of the games industry is such that many games are developed for multiple platforms to strive for maximal profitability. But building games for multiple platforms also introduces a lot of additional complexity. Hence, many of the tools we're building are actually targeted at the complexities of developing for multiple platforms. Of course, we expect we'll have a better out-of-box experience for targeting Microsoft platforms, but studios will still have that flexibility when they need to target other platforms as well.
If you want to read a bit more about XNA you can visit www.microsoft.com/xna. There's not a lot of information on that site today, but it will be updated closer to the GDC timeframe, when we'll be disclosing a lot more and aiming to release a technology preview of one of our XNA tools. I'll also try to blog more, so I can talk more about what's coming in XNA...
The Fedora Project is an openly-developed project designed by Red Hat, open for general participation, led by a meritocracy, and following a set of project objectives. The goal of the Fedora Project is to work with the Linux community to build a complete, general-purpose operating system exclusively from open source software. Development will be done in a public forum. The project will produce time-based releases of Fedora about 2-3 times a year, with a public release schedule. The Red Hat engineering team will continue to participate in building Fedora and will invite and encourage more outside participation than in past releases.

Fedora 15, a new version of one of the leading and most widely used Linux distributions on the market, has been released. Some of the many new features include support for the Btrfs file system, the Indic typing booster, a redesigned SELinux troubleshooter, better power management, the LibreOffice productivity suite, and, of course, the brand-new GNOME 3 desktop: "GNOME 3 is the next generation of GNOME with a brand new user interface. It provides a completely new and modern desktop that has been designed for today's users and technologies. Fedora 15 is the first major distribution to include GNOME 3 by default. GNOME 3 is being developed with extensive upstream participation from Red Hat developers and Fedora volunteers, and GNOME 3 is tightly integrated in Fedora 15."

1 DVD for installation on an x86_64 platform.
Paul Tyma
Think bigger.. a lot bigger...
Why you should join a start-up - and maybe why you shouldn't
I've recently been interviewing engineers for my new start-up (fyi, this is wholly separate from Mailinator). We're well-funded, have a world-changing idea, and as you can imagine, I plan to build an awesome engineering team. (Regardless of where you are, if you're a passionate developer, I'd love to hear from you. Check out the Job Description here and email me your resume at paul@refresh.io ).
I've been talking to engineers from all over, hearing their stories. There's really amazing talent everywhere and, honestly, a non-trivial amount of it seems to be idling or even decaying in environments that aren't using its full potential. A bunch of moons ago I used to work for Dow Chemical in the dreaded "IT department". It was pretty clear to me then that I was not growing technically in that job. I left to start my Ph.D., but I always vowed from then on that if I was going to be a software guy, I was going to work for companies whose business was creating software. In other words, at Dow I was an expense; I'd much rather be an asset.
Eventually, and with that goal in mind, I ended up at Google. Without reservation I can say it was a fantastic experience. I have said before, "if you're the smartest person where you work - quit". And trust me, working at a place like Google makes you realize just how smart people can actually be. (To avoid any implications: I did eventually quit Google, but rest heavily assured, it was not because I was anywhere even close to being the smartest person - read on!)
I did over 200 interviews while at Google and it was actually a bit fun to interview someone who was coming from someplace where they were the smartest person (at least about tech). I could always tell. It's no surprise that if you're the smartest person somewhere for a long time, you get used to it. You get used to waiting for people to catch up to where you are. Through no fault of their own, they walked into the interview with some attitude. An attitude of impatience if nothing else. After someone like that started at Google, however, it didn't take long for them to realize the situation they were now in. It was humbling in many respects, and I don't mean that negatively; simply that they'd not recently (or ever) experienced a place where many of the people they met were at their level or better. Obviously, there are smart people everywhere, but almost universally smart people enjoy the company of others like them; the synergy makes them all better. This is why Silicon Valley is a magnet for them. As I said, Google (and similarly Facebook, etc.) are great places to work. At some point after working there, however, I thought to myself what a wonderfully steady and safe place to work it was. My responsibilities, expectations, and compensation package were well outlined. I was working with awesome people and learned a ton, but I still felt it was far too big for me to have any real impact. For a time, I worked on the Google Web Server, which I could best describe to non-techies as "well, sort of the thing you interact with when you do a search" (this is a bad definition at best). A woman I was dating thought about that answer a moment and condescendingly replied - "what do you mean you work on that - isn't that done?"
In one sense she was right, I worked on that darn thing every day but to her it all worked the same. To her, I was having no impact.
It occurred to me that Google would be a fantastic place to work if what I wanted was a meaningful 9-5 job from which, at the end of each day, I could drive my minivan back to my home in the suburbs. But I didn't have a minivan. And I didn't own a home. And I didn't live anywhere near the suburbs. What the heck was I doing there? The smart-person environment was at start-ups too - I could get that there and even have some ownership of what I was building.
It's a relatively normal course of life in our sea of first-world problems that you'll have many chances to take risks early in life and those chances diminish as time goes on. Simply put, Google will always be there. And if Google isn't - the next big, awesome company will be. Every decade or so has a "company" (or two) where the greatest things and the greatest people are happening. At times it was Microsoft, Cisco, Apple, Google, Facebook, etc. I left Google not because Google was in any way bad, but because I wasn't done swinging for the fence. And I still had the luxury of trying. If I ever got to the point where I wanted to realign my life's risk profile, Google (or Google-next) would be there. And this is a pretty common theme - places like Google and Facebook incubate some set of people into entrepreneurs who then go start their own start-ups. But with big ideas, agility, and impact. And they don't tend to fall far from the tree. You might think Google doesn't like this - but I doubt that's true. This is a constant stream of risk-takers that go try stuff for them that they can buy back if needed.
What gets me today is how vibrant Silicon Valley is right now. And even for Silicon Valley, this place is on fire. It seems cities around the world try to copy it, but that's really hard to do. The start-ups are here because the investors are here, and the investors are here because the start-ups are here. Guy Kawasaki wrote a great article several years ago partially about why Silicon Valley is Silicon Valley.
I am fully aware that Silicon Valley has a nasty habit of simply not being able to darn well shut up about Silicon Valley. Other cities are hotbeds for tech too (Austin, NYC, etc.), and truth be told, you could find a cadre of smart engineers doing a great start-up in a Des Moines, but it's not easy. There's LOTS of great companies in Silicon Valley that can take you to the next level. We're in the midst of a huge wave. Depending on your risk profile, joining a start-up or joining "a Google" is the best way to put your chips in the game. Regardless of where you are - if you're a crack-shot engineer looking to change the world, you could do worse than coming here. Again, it's all about your risk profile and what's keeping you where you are (which may be for great reasons). Start-ups will not only pay to relocate you, we'll put you up for a few months (in the corporate crash-pad) while you find your own place. Joining a start-up now will get you experience, both technically and start-up-wise, that you can't get anywhere else. I'm not thinking the start-up life is for everyone. I can definitely see a point in my life, or where I have life-constraints, where I'll want my job to be a less important part of my life (probably because my life will be more about, well, you know - just "life"). But for me right now, and maybe for you - I'm swinging for the fence. And love or hate IT departments, I couldn't do that there. Again, if you're a software engineer who loves what you do and lives within commuting distance of Palo Alto, CA, or is willing to relocate, I'd really love to hear from you. We're well-funded and I'm literally building the first engineering team right now. It's a fantastic opportunity to get in on the ground floor of a great start-up. Refresh.io jobs
Research shows that computers can match humans in art analysis
Jane Tarakhovsky is the daughter of two artists, and it looked like she was leaving the art world behind when she decided to become a computer scientist. But her recent research project at Lawrence Technological University has demonstrated that computers can compete with art historians in critiquing painting styles.
While completing her master’s degree in computer science earlier this year, Tarakhovsky used a computer program developed by Assistant Professor Lior Shamir to demonstrate that a computer can find similarities in the styles of artists just as art critics and historians do.
In the experiment, published in the ACM Journal on Computing and Cultural Heritage and widely reported elsewhere, Tarakhovsky and Shamir used a complex computer algorithm to analyze approximately 1,000 paintings by 34 well-known artists, and found similarities between them based solely on the visual content of the paintings. Surprisingly, the computer provided a network of similarities between painters that is largely in agreement with the perception of art historians.
For instance, the computer placed the High Renaissance artists Raphael, Da Vinci, and Michelangelo very close to each other. The Baroque painters Vermeer, Rubens and Rembrandt were placed in another cluster.
The experiment was performed by extracting 4,027 numerical image content descriptors – numbers that reflect the content of the image, such as texture, color, and shapes, in a quantitative fashion. The analysis reflected many aspects of the visual content and used pattern recognition and statistical methods to detect complex patterns of similarities and dissimilarities between the artistic styles. The computer then quantified these similarities.
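In outline, the approach amounts to turning each painter's work into a long numerical feature vector and then measuring and grouping the distances between those vectors. The sketch below is illustrative only: synthetic numbers stand in for the real descriptors, and SciPy's off-the-shelf hierarchical clustering stands in for the statistical methods the researchers actually used.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
painters = ["Raphael", "Da Vinci", "Michelangelo", "Vermeer", "Rubens", "Rembrandt"]

# One synthetic feature vector per painter; in the study, each painting yielded
# 4,027 descriptors (texture, color, shape) that were then compared statistically.
features = rng.normal(size=(len(painters), 4027))

# Pairwise stylistic "distance" between painters
distance_matrix = squareform(pdist(features, metric="euclidean"))
print(distance_matrix.shape)  # (6, 6)

# Group painters into clusters based on those distances (hierarchical clustering)
clusters = fcluster(linkage(pdist(features), method="ward"), t=2, criterion="maxclust")
for painter, label in zip(painters, clusters):
    print(painter, "-> cluster", label)
```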
According to Shamir, non-experts can normally make the broad differentiation between modern art and classical realism, but they have difficulty telling the difference between closely related schools of art such as Early and High Renaissance or Mannerism and Romanticism.
“This experiment showed that machines can outperform untrained humans in the analysis of fine art,” Shamir said.
Tarakhovsky, who lives in Lake Orion, is the daughter of two Russian artists. Her father was a member of the Artists’ Union of the former USSR. She graduated from an art school at 15 years old and earned a bachelor’s degree in history in Russia, but switched her career path to computer science after emigrating to the United States in 1998.
Tarakhovsky utilized her knowledge of art to demonstrate the versatility of an algorithm that Shamir originally developed for biological image analysis while working on the staff of the National Institutes of Health in 2009. She designed a new system based on the code and then designed the experiment to compare artists.
She also has used the computer program as a consultant to help a client identify bacteria in clinical samples.
“The program has other applications, but you have to know what you are looking for,” she said.
Tarakhovsky believes that there are many other applications for the program in the world of art. Her research project with Shamir covered a relatively small sampling of Western art. “This is just the tip of the iceberg,” she said.
At Lawrence Tech she also worked with Professor CJ Chung on Robofest, an international competition that encourages young students to study science, technology, engineering and mathematics, the so-called STEM subjects.
“My professors at Lawrence Tech have provided me with a broad perspective and have encouraged me to go to new levels,” she said.
She said that her experience demonstrates that women can succeed in scientific fields like computer science and that people in general can make the transition from subjects like art and history to scientific disciplines that are more in demand now that the economy is increasingly driven by technology.
“Everyone has the ability to apply themselves in different areas,” she said. | 计算机 |