diff --git "a/docs_oracle_com/docs_oracle_com_samples.json" "b/docs_oracle_com/docs_oracle_com_samples.json" new file mode 100644--- /dev/null +++ "b/docs_oracle_com/docs_oracle_com_samples.json" @@ -0,0 +1,602 @@ +[ + { + "url": "https://docs.oracle.com/en/cloud/paas/analytics-cloud/essug/work-attributes.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n \n \n \n \n Work with Attributes\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n 
\n
\n
\n
    \n
\n \n \n

Work with Attributes

\n
\n
\n
\n

Attributes describe characteristics of data, such as the size and color of products. You can use attributes to group and analyze members of dimensions based on their characteristics. For example, you can analyze product profitability based on size or packaging, and you can make more effective conclusions by incorporating market attributes, such as the population size of each market region, into your analysis.

\n

When manually working with attributes, use the outline editor and the Attributes tab in the outline inspector.\n

\n

Workflow for manually building attribute dimensions:

\n
    \n
  1. \n

    Create attribute dimensions.

    \n
  2. \n
  3. \n

    Tag the dimensions as attribute dimensions and set the attribute dimension type (text, numeric, Boolean, or date).

    \n

    Use the General tab in the outline inspector to set the dimension as an attribute dimension and to set the attribute dimension type.

    \n
  4. \n
  5. \n

    Add members to attribute dimensions.

    \n
  6. \n
  7. \n

    Associate a standard dimension with an attribute dimension, thereby defining the base dimension of the attribute dimension. Use the Attributes tab in the outline inspector to associate an attribute dimension with a base dimension.\n

    \n
  8. \n
\n

When you create an attribute dimension, a base dimension is associated with it by default. The associated base dimension is either the most recently created sparse dimension or, if none was created recently, the last existing sparse dimension.

\n

For example, if you create two sparse dimensions, dim1 and dim2, and then create an attribute dimension attr1, attr1 is associated with dim2 (the last sparse dimension that was created). If no sparse dimension was created recently, attr1 is associated with the last existing sparse dimension.

\n

See Working with Attributes in Designing and Maintaining Essbase\n Cubes.\n

\n
\n
\n
\n \n" + }, + { + "url": "https://docs.oracle.com/cd/E19575-01/820-3740/gamte/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00352-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Response Providers (Sun OpenSSO Enterprise 8.0 Technical Overview)

Sun OpenSSO Enterprise 8.0 Technical Overview

Response Providers

\n

\nResponse providers are plug-ins that provide\npolicy response attributes. Policy response attributes typically provide\nvalues for attributes in the user profile. The attributes are sent\nwith policy decisions to the PEP which, in turn, passes them in headers\nto an application. The application typically uses these attributes\nfor customizing pages such as a portal page. OpenSSO Enterprise includes one implementation\nof the com.sun.identity.policy.interfaces.ResponseProvider class,\nthe IDResponseProvider. See Chapter 2, Using the Policy Service API, in Sun OpenSSO Enterprise 8.0 Developer’s Guide for more\ninformation.

\n
\n\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E88353_01/html/E37839/rcp-1.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n \nrcp - \nman pages section 1: User Commands\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\tGo to main content\n\t
Updated: Wednesday, July 27, 2022
\n\t

rcp(1)\n\n\n\n\n\n\n

\n

Name

rcp - remote file copy

\n

Synopsis

\n\n
rcp [-p] filename1 filename2
\n
rcp [-pr] filename... directory
\n\n

Description

\n\n

\"Caution\"

Caution  - \nAll data exchanges over this protocol are performed without encryption,\nand have no protection against spoofing or snooping of traffic. The\nin.rshd server is disabled by default on Oracle Solaris\nand most other modern operating systems, and both the in.rshd\nserver and the rcp command may be removed in future versions\nof Oracle Solaris. Use of the\nscp(1)\nor\nsftp(1)\nutilities is strongly recommended instead.\n


\n

The rcp command copies files between machines. Each\nfilename or directory\nargument is either a remote file name of the form:

\n
hostname:path
\n

or a local file name (containing no : (colon)\ncharacters, or / (slash) before any :\n(colon) characters).

\n

The hostname can be an IPv4 or IPv6 address string. See\ninet(4P)\nand\ninet6(4P).\nSince IPv6 addresses already contain colons, the\nhostname should be enclosed in a pair of square brackets\nwhen an IPv6 address is used. Otherwise, the first occurrence of a colon can be\ninterpreted as the separator between hostname and\npath. For example,

\n
[1080::8:800:200C:417A]:tmp/file
\n

If a filename is not a full path name, it is interpreted\nrelative to your home directory on hostname. A\npath on a remote host can be quoted using\n\\ , \" , or\n' , so that the metacharacters are interpreted remotely.

\n

rcp does not prompt for passwords. It uses your current\nlocal user name, which must be accepted on hostname\nto allow remote command execution by\nrsh(1).

\n

rcp handles third-party copies, where neither the source\nnor the target file is on the current machine. Hostnames can also take the form

\n
username@hostname:filename
\n

to use username rather than your current local\nuser name as the user name on the remote host.\nFile names that are not full path names are interpreted relative to the home\ndirectory of the user named username, on the remote\nhost.

\n\n

Options

\n\n

The following options are supported:

\n
\n\n
–p
\n
\n

Attempts to give each copy the same modification times, access times,\nmodes, and ACLs (if applicable) as the original\nfile.

\n
\n\n\n
–r
\n
\n

Copies each subtree rooted at filename; in this\ncase the destination must be a directory.

\n
\n\n
\n\n

Exit Status

\n\n

The following exit values are returned:

\n
\n\n
0
\n
\n

All files were copied successfully.

\n
\n\n\n
>0
\n
\n

An error occurred.

\n
\n\n
\n

See the NOTES section for caveats on the exit code.

\n\n

Attributes

\n\n

See attributes(7) for descriptions of the following attributes:

\n\n\n\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ATTRIBUTE TYPE
ATTRIBUTE VALUE
Availability
network/legacy-remote-utilities
CSI
Enabled
Interface Stability
Obsolete
\n\n

See Also

\n\n

cpio(1),\nftp(1),\nrlogin(1),\nrsh(1),\nrsync(1),\nscp(1),\nsftp(1),\ntar(1),\ninet(4P),\ninet6(4P),\nip6(4P),\nhosts.equiv(5),\nattributes(7),\nin.rshd(8)

\n\n

Notes

\n\n

rcp is meant to copy between different hosts.\nAttempting to rcp a file onto itself, as with:

\n
example% rcp tmp/file myhost:/tmp/file
\n

results in a severely corrupted file.

\n

rcp might not correctly fail when the target of a copy\nis a file instead of a directory.

\n

rcp can become confused by output generated by commands\nin a $HOME/.profile or other shell startup file on the\nremote host.

\n

rcp requires that the source host have permission to\nexecute commands on the remote host when doing third-party copies.

\n

rcp does not properly handle symbolic links. Use\nrsync(1),\nor\ntar(1)\nor\ncpio(1)\npiped to ssh to obtain remote copies of directories containing\nsymbolic links or named pipes.

\n

If you forget to quote metacharacters intended for the remote host,\nyou get an incomprehensible error message.

\n

rcp fails if you copy ACLs to a file\nsystem that does not support ACLs.

\n

rcp is CSI-enabled except for the\nhandling of username, hostname, and domain.

\n

When rcp is used to perform third-party copies where\neither of the remote machines is not running Solaris, the exit code cannot be\nrelied upon. That is, errors could occur when success is reflected in the exit\ncode, or the copy could be\ncompletely successful even though an error is reflected in the exit code.

\n

This technology may be removed in a future release of Oracle Solaris.

\n\n

History

\n

Support for Kerberos, including the options\n–a, –k, –K,\n–PN, –PO, and –x,\nwas added in Solaris 10 3/05 and removed in Oracle Solaris 11.4.0.\nPrior to Solaris 10, a kerberized version was available in the\nSun Enterprise Authentication Mechanism (SEAM) add-on package for Solaris.

\n

Support for IPv6 was added in Solaris 8.

\n

The rcp command, including support for the options\n–p and –r,\nhas been present since the initial release of Solaris.

\n\n\n\n\t\t\t\t\n\t\t\t
\n\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t
\t\n\t\t
\n\t
\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19455-01/806-1360/imageviewer-22/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00352-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "To Open an Image File Using Drag and Drop (Solaris Common Desktop Environment: User's Guide)

Solaris Common Desktop Environment: User's Guide

To Open an Image File Using Drag and Drop

\n
    \n

    If Image Viewer is already running, drop an image file icon on an open Image Viewer window.

    \n\n
\n

If Image Viewer is not running:

\n
    \n\n
  1. \n

    Drag an image file icon from File Manager and drop it on the workspace.

    \n\n
  2. \n\n
  3. \n

    Click mouse button 3 over the image file icon on the workspace and choose Open from the Workspace Object menu.

    \n\n

    Image Viewer starts and the image file is displayed in Image Viewer's View window.

    \n\n
  4. \n\n
  5. \n

    To remove the icon from the Workspace when you are finished with it, click mouse button 3 on the icon and select Remove From Workspace from the pop-up menu.

    \n\n
  6. \n
\n
\n\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E19225-01/820-5821/bvaed/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00656-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Approval Form (Sun Identity Manager Deployment Reference)

Sun Identity Manager Deployment Reference

Approval Form

\nApproval Form\ncontrols the information that is presented to a resource, role, or organization\nowner when he is designated an approver of user requests. By default, this\npage displays a set of read-only fields that contain the name of the administrator\nthat started the process. It also displays information about the user, including\nthe account ID, role, organization, and email address.

\n

This form ensures that the resource owner gets a last chance to change\na user value before the user is created. By default, approving a user displays\nall the user attributes in read-only fields.

\n

You can customize Approval Form to:

\n\n\n
\n\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E17904_01/webcenter.1111/e10149/img_text/wcsug_lay_selectpage.htm", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nDescription of the illustration wcsug_lay_selectpage.gif\n\n\n

This image shows the Select a Page dialog. The header contains an icon with the label \"Select a page\" and a Close icon. Below the header, to the right, is a Search field and icon. Within the body of the dialog is a list, headed \"Home Space Pages,\" with entries for all available pages in the Home Space. At the bottom of the dialog are Select and Cancel buttons. The Select button is grayed out.

\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19253-01/816-4556/nis2ldap-3/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "

System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP)

When Not to Use the NIS-to-LDAP Service

\n

\nDo not use the N2L service in these situations:

\n\n
\n\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E19620-01/805-4693/instructionset-76/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "

x86 Assembly Language Reference Manual

Store Local Descriptor Table Register (sldt)

\n
sldt\tr/m16\n
\n
Operation

LDTR -> r/m[16]

\n
Description

The Local Descriptor Table Register (LDTR) is stored by sldt as indicated by the effective address operand. LDTR is stored into the two-byte register or the memory location.

\n

\nsldt is not used in application programs. It is used only in operating systems.

\n
Example

Store the LDTR in the effective address (addressed by the EBX register plus an offset of 5):

\n
sldt 5(%ebx)
\n
\n\n
\n
" + }, + { + "url": "https://docs.oracle.com/en/database/oracle/oracle-database/12.2/adobj/materialized-view-support-for-objects.html", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n \n \n \n \n Materialized View Support for Objects\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n 
\n
\n
\n
    \n
\n \n \n

9.9 Materialized View Support for Objects\n

\n
\n
\n
\n

Materialized view support is available for relational tables that contain columns of an object, collection, or REF type. Such materialized views are called object-relational materialized views.\n

\n

All user-defined types required by an object-relational materialized view must exist at the materialized view site as well as at the master site. They must have the same object type IDs and versions at both sites.

\n

Topics:

\n \n

9.9.1 Object, Collection, or REF Type Columns\n

\n
\n

To be updatable, a materialized view based on a table that contains an object column must select the column as an object in the query that defines the view: if the query selects only certain attributes of the column's object type, then the materialized view is read-only.\n

\n

The view-definition query can also select columns of collection or REF type. REFs can be either primary-key based or have a system-generated key, and they can be either scoped or unscoped. Scoped REF columns can be rescoped to a different table at the site of the materialized view—for example, to a local materialized view of the master table instead of the original, remote table.\n

\n
\n

9.9.2 Object Tables\n

\n
\n
\n

A materialized view based on an object table is called an object materialized view. Such a materialized view is itself an object table. An object materialized view is created by adding the OF type keyword to the CREATE MATERIALIZED VIEW statement. For example:\n

CREATE MATERIALIZED VIEW customer OF cust_objtyp AS\n SELECT * FROM HR.Customer_objtab@dbs1;

As with an ordinary object table, each row of an object materialized view is an object instance, so the view-definition query that creates the materialized view must select entire objects from the master table: the query cannot select only a subset of the object type's attributes. For example, the following materialized view is not allowed:

CREATE MATERIALIZED VIEW customer OF cust_objtyp AS\n SELECT CustNo FROM HR.Customer_objtab@dbs1;

You can create an object-relational materialized view from an object table by omitting the OF type keyword, but such a view is read-only: you cannot create an updatable object-relational materialized view from an object table. \n

\n

For example, the following CREATE MATERIALIZED VIEW statement creates a read-only object-relational materialized view of an object table. Even though the view-definition query selects all columns and attributes of the object type, it does not select them as attributes of an object, so the view created is object-relational and read-only:\n

CREATE MATERIALIZED VIEW customer AS\n SELECT * FROM HR.Customer_objtab@dbs1;

For both object-relational and object materialized views that are based on an object table, if the type of the master object table is not FINAL, the FROM clause in the materialized view definition query must include the ONLY keyword. For example: \n

CREATE MATERIALIZED VIEW customer OF cust_objtyp AS\n SELECT CustNo FROM ONLY HR.Customer_objtab@dbs1;

Otherwise, the FROM clause must omit the ONLY keyword. \n

\n
\n \n
\n
\n
\n
\n \n" + }, + { + "url": "https://docs.oracle.com/cloud/latest/related-docs/OSNUG/GUID-BDDF3FB1-763D-4898-BBC2-B09A9145C7D7.htm", + "domain": "docs.oracle.com", + "file_source": "part-00352-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\n\nHow do I mute or unmute a Conversation?\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGo to main content
\n
\n
\n

How do I mute or unmute a Conversation?

\n
\n
\n

To mute a Conversation:

\n
\n\n
\n
    \n
  1. \n

    Tap Slide-out in the action bar to open the slide-out navigation, then tap Conversations.

    \n
  2. \n
  3. \n

    Long press the Conversation you want to mute, then tap Mute.

    \n
  4. \n
\n

Alternatively, from within the Conversation, tap Actions in the action bar to display the actions menu, then tap Mute.

\n

The muted Conversation now appears under the Muted filter.

\n
\n\n
\n

To unmute a Conversation:

\n
\n\n
    \n
  1. Tap Slide-out in the action bar to open the slide-out navigation, then tap Conversations.
  2. \n
  3. Tap View in the action bar to display the view menu, then tap Muted.
  4. \n
  5. Long press the Conversation you want to unmute, then tap Unmute.
  6. \n
\n
\n
\n\n
\n\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19681-01/820-3740/ggnqv/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Identity Web Services (Sun OpenSSO Enterprise 8.0 Technical Overview)

Sun OpenSSO Enterprise 8.0 Technical Overview

Identity Web Services

\n

For some time, OpenSSO Enterprise has provided client interfaces for access\nto core features and functionality. These interfaces are used by policy\nagents and custom applications developed by customers. With this release, OpenSSO Enterprise now\nexposes certain functions as simple identity web services allowing\ndevelopers to easily invoke them when developing their applications\nusing one of the supported integrated development environment (IDE)\nproducts. (The IDE generates the stub code that wraps a call to the\nweb service.) Identity Web Services are available using:

\n\n

They do not require the deployment of an agent or a proxy and\ninclude the following capabilities:

\n\n

For more information on identity services, see Identity Web Services and Part IV, The Web Services Stack, Identity Services, and Web Services Security.

\n
\n\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E23824_01/html/821-1458/gljtt.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nTroubleshooting Interface Configuration - Oracle Solaris Administration: Network Interfaces and Network Virtualization\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\n\n\t\t\n\t\t\t\n\t\t\n\n\t\t\n\t\t\n\t\t\t\n\n\t\t\t\n\t\t\n\t
\n
\n\n\n
\n\t
\n\t\n\n\n\n

Troubleshooting Interface Configuration

\n\n\n

This section discusses common problems that you might encounter while using the ipadm\ncommand to configure IP interfaces.

\n\n\n\n

The ipadm command does not work.

\n\n\n

Manual IP interface configuration with the dladm and ipadm commands works only\non fixed-type network configuration profiles (NCPs), such as DefaultFixed. If the active NCP\nin the system is an automatic-type profile, switch to a fixed-type profile before\nusing the dladm and ipadm commands.

# netadm list\nTYPE   PROFILE        STATE\nncp    DefaultFixed   disabled\nncp    Automatic      online\nloc    Automatic      offline\nloc    NoNet          offline\n...\n\n# netadm enable -p ncp defaultfixed
\n\n\n\n

IP address cannot be assigned with the ipadm create-addr command.

\n\n\n

With the traditional ifconfig command, you can plumb and assign an IP address\nwith a single command syntax. When using the ipadm create-addr command to configure\nan IP address, you must first create the IP interface with a separate\ncommand.

# ipadm create-ip interface\n# ipadm create-addr -T addr-type -a address addrobj
\n\n\n\n

The message cannot create address object: Invalid argument provided is displayed during IP address configuration.

\n\n\n

The address object identifies a specific IP address bound to an IP\ninterface. The address object is a unique identifier for each IP address on\nthe IP interface. You must specify a different address object to identify a\nsecond IP address that you want to assign to the same IP interface.\nIf you want to use the same address object name, then you must\ndelete the first instance of the address object before assigning it to identify\na different IP address.

# ipadm show-addr\nADDROBJ   TYPE     STATE   ADDR\nlo0       static   ok      127.0.0.1/8\nnet0/v4   static   ok      192.168.10.1\n\n# ipadm create-addr -T static -a 192.168.10.5 net0/v4b
\n\n

or

# ipadm show-addr\nADDROBJ   TYPE     STATE   ADDR\nlo0       static   ok      127.0.0.1/8\nnet0/v4   static   ok      192.168.10.1\n\n# ipadm delete-addr net0/v4\n# ipadm create-addr -T static -a 192.168.10.5 net0/v4
\n\n\n\n

The message cannot create address: Persistent operation on temporary object during IP interface configuration

\n\n\n

The ipadm command creates persistent configuration. If the IP interface that you are\nconfiguring was created as a temporary interface, then you cannot use the ipadm\ncommand to configure persistent settings on the interface. After you verify that an\ninterface that you are configuring is temporary, delete that interface, re-create it as\na persistent object, then resume configuring.

# ipadm show-if -o all\nIFNAME   CLASS      STATE   ACTIVE   CURRENT        PERSISTENT   OVER\nlo0      loopback   ok      yes      -m46-v------   46--         --\nnet0     ip         ok      yes      bm4---------   ----         --
\n\n

The absence of the 4 flag for IPv4 configuration or 6 flag for\nIPv6 configuration on the PERSISTENT field indicates that net0 was created as a\ntemporary interface.

# ipadm delete-ip net0\n# ipadm create-ip net0\n# ipadm create-addr -T static -a 192.168.1.10 net0/v4
\n
\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t
\n\t\t\t\t
\n\t\t\t\t\tCopyright © 2011, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t
\n\t\t\t\t\"Previous\"\n\t\t\t\t\"Next\"\n\t\t\t
\n\t\n
\n\n\n\n\n\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19225-01/820-5822/byabv/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Logging in to the Identity Manager End-User Interface (Sun Identity Manager 8.1 Business Administrator's Guide)

Sun Identity Manager 8.1 Business Administrator's Guide

Logging in to the Identity Manager End-User Interface

\n

Use the following instructions to log into the Identity Manager End-User\nInterface.

\n

\nTo Open the End-User Interface

\n
    \n\n
  1. \n

    Open a Web browser and type the following URL into the address\nbar:

    \n\n\n\n\n\n
    \n
    \n
    http://<AppServerHost>:<Port>/idm/user/login.jsp
    \n
    \n
  2. \n\n
  3. \n

    Enter a user ID and password and click Log In.

    \n\n

    The\nend-user interface opens.

    \n\n
  4. \n
\n

Retrieving Forgotten User IDs

\n

Identity Manager allows end-users to retrieve their forgotten user IDs.\nFor more information, see Forgotten User ID in\nthe Logging in to the Identity Manager Administrator Interface section.

\n
\n\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E19253-01/816-4558/abtrbl-26395/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "

System Administration Guide: Naming and Directory Services (NIS+)

Removal or Dissociation of NIS+ Directory\nFrom Replica Fails

\n

When removing or disassociating a directory from a replica server, you\nmust first remove the directory's org_dir and groups_dir subdirectories before removing the directory itself. After each\nsubdirectory is removed, you must run nisping on the parent\ndirectory of the directory you intend to remove. (See Removing an NIS+ Directory.)

\n

If you fail to perform the nisping operation, the\ndirectory will not be completely removed or disassociated.

\n
    \n

    If this occurs, you need to perform the following steps to correct the\nproblem:

    \n\n
  1. \n

    \nRemove /var/nis/rep/org_dir on\nthe replica.

    \n\n\n
  2. \n
  3. \n

    \nMake sure that org_dir.domain does not appear in /var/nis/rep/serving_list on\nthe replica.

    \n\n\n
  4. \n
  5. \n

    Perform a nisping on domain.

    \n\n\n
  6. \n
  7. \n

    From the master server, run nisrmdir -f replica_directory.

    \n\n\n
  8. \n
\n

\nIf the replica server you are trying to dissociate is down or\nout of communication, the nisrmdir -s command\nwill return a Cannot remove replica name: attempt to remove a non-empty table error message.

\n

In such cases, you can run nisrmdir -f -s replicaname on the master to force the\ndissociation. Note, however, that if you use nisrmdir -f -s to dissociate an out-of-communication replica,\nyou must run nisrmdir -f -s again as soon as the replica is back\non line in order to clean up the replica's /var/nis file\nsystem. If you fail to rerun nisrmdir -f -s replicaname when the replica is back\nin service, the old out-of-date information left on the replica could cause\nproblems.

\n
\n\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E23823_01/html/817-5093/fscachefs-70682.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nUsing The CacheFS File System (Tasks) - System Administration Guide: Devices and File Systems\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\n\n\t\t\n\t\t\t\n\t\t\n\n\t\t\n\t\t\n\t\t\t\n\n\t\t\t\n\t\t\n\t
\n
\n\n\n
\n\tChapter 18

Using The CacheFS File System (Tasks)

\n\n

This chapter describes how to set up and maintain CacheFS file systems.

\n\n

This is a list of task maps in this chapter.

\n\n\n\n

For information on troubleshooting CacheFS errors, see Troubleshooting cachefspack Errors.

\n

Note - For important information about NFS version 4 and the CacheFS software, see NFS Version 4 and CacheFS Compatibility Issues.

\n
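As a quick sketch of the tasks this chapter covers, creating a cache and mounting an NFS file system through it looks roughly like this (the server, export, mount point, and cache directory names are placeholders):

```shell
# Create the cache directory
cfsadmin -c /local/mycache

# Mount an NFS file system through the cache
mount -F cachefs -o backfstype=nfs,cachedir=/local/mycache \
    server1:/export/docs /docs

# Display information about the cached file system
cachefsstat /docs
```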
\n\n
\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t
\n\t\t\t\t
\n\t\t\t\t\tCopyright © 2004, 2012, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t
\n\t\t\t\t\"Previous\"\n\t\t\t\t\"Next\"\n\t\t\t
\n\t\n
\n\n\n\n\n\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E56676_01/html/E56681/preface.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n \nUsing This Documentation - \nOracle® Solaris Cluster Data Services Developer's Guide\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\tGo to main content\n\t
\nOracle® Solaris Cluster Data Services Developer's Guide

Updated: September 2015
Using This Documentation

  •  Overview – Contains information about the Resource Management API to develop Oracle Solaris Cluster data services on both SPARC and x86 based systems.
  •  Audience – Experienced system administrators with extensive knowledge of Oracle software and hardware.
  •  Required knowledge – Knowledge of the Oracle Solaris operating system, of Oracle Solaris Cluster software, and expertise with the volume manager software that is used with Oracle Solaris Cluster software.
\n \n \n

Product Documentation Library

\n \n

Documentation and resources for this product and related products are available at http://www.oracle.com/pls/topic/lookup?ctx=E56676-01.

\n \n \n

Feedback

\n \n

Provide feedback about this documentation at http://www.oracle.com/goto/docfeedback.

\n \n\n\n\t\t\t\t\n\t\t\t
\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19575-01/821-0185/gfyxq/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Preventing the Service From Shutting Down When a User Logs Out (Sun GlassFish Enterprise Server v2.1.1 Administration Guide)

Sun GlassFish Enterprise Server v2.1.1 Administration Guide

Preventing the Service From Shutting Down When a User Logs Out

\n

By default, the Java VM catches signals from Windows that indicate that the operating system is shutting down, or that a user is logging out, and shuts itself down cleanly. This behavior causes the Enterprise Server service to shut down when a user logs out of Windows. To prevent the service\nfrom shutting down when a user logs out, set the -Xrs Java VM option.

\n

To set the -Xrs Java VM option, add the following line to the section of the as-install\\domains\\domain-name\\config\\domain.xml file that defines Java VM options:

\n
<jvm-options>-Xrs</jvm-options>
\n

If the Enterprise Server service is running, stop and restart the service for your changes to become effective.

\n
Note –

In some Windows 2003 Server installations, adding the -Xrs option to the domain.xml file fails to prevent the service from shutting down. In this situation, add the option to the as-install\\lib\\processLauncher.xml file as\nfollows:

\n
<process name=\"as-service-name\">\n   ...\n   <sysproperty key=\"-Xrs\"/>\n   ...
\n
\n
\n\n
\n
" + }, + { + "url": "http://docs.oracle.com/cd/B19306_01/appdev.102/b14260/img_text/adobj031.htm", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\n\nDescription of the illustration adobj031.eps\n\n\n\n\n

The figure shows the horizontal storage model, in which each view or subview is based on a different table.

\n

The tables are only_person, only_students, and only_employees. The Person_v view is based on the only_person table, the Student_v view is based on the only_students table, and the Employee_v view is based on the only_employees table.

\n

The tables all contain person attributes. In addition, only_students contains student attributes and only_employees contains employee attributes. The views contain the same attributes as the underlying tables.
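The horizontal storage model described above can be sketched in plain SQL; here is a minimal SQLite version (the column names are assumptions for illustration, not taken from the manual):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table per view: each table holds only the rows for its own kind.
cur.execute("CREATE TABLE only_person (name TEXT, ssn TEXT)")
cur.execute("CREATE TABLE only_students (name TEXT, ssn TEXT, gpa REAL)")
cur.execute("CREATE TABLE only_employees (name TEXT, ssn TEXT, salary REAL)")

# Each view is based on a different table (the horizontal storage model).
cur.execute("CREATE VIEW person_v AS SELECT name, ssn FROM only_person")
cur.execute("CREATE VIEW student_v AS SELECT name, ssn, gpa FROM only_students")
cur.execute("CREATE VIEW employee_v AS SELECT name, ssn, salary FROM only_employees")

cur.execute("INSERT INTO only_students VALUES ('Ann', '123', 3.9)")
rows = cur.execute("SELECT name, gpa FROM student_v").fetchall()
print(rows)  # [('Ann', 3.9)]
```

Each view exposes the same attributes as its underlying table, so a query against student_v sees person attributes plus student attributes, exactly as the figure describes.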

\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E37838_01/html/E67470/scard-use.html", + "domain": "docs.oracle.com", + "file_source": "part-00662-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n \nUsing a Smart Card in Oracle Solaris - \nManaging Authentication in Oracle® Solaris 11.4\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\tGo to main content\n\t
\nManaging Authentication in Oracle® Solaris 11.4

Updated: November 2020

Using a Smart Card in Oracle Solaris

\n\n\n\n\n\n\n

Smart cards use personal identification numbers (PINs) rather than passwords. The smart card is protected from misuse by the PIN, which is known only to the smart card's owner. To use the smart card, you insert the card in a smart card reader that is attached to a computer and, when prompted, type the PIN. The smart card can be used only by someone who possesses the smart card and knows the PIN.

\n

For computer use, a CAC, PIV or X.509 certificate-based smart card should remain in the reader for the duration of the session. When the smart card is removed from the reader, the credentials are unavailable in the existing login session to any applications that require re-authentication.

\n

Caution  - Log out during periods of inactivity. An authenticated smart card is a secure trusted link into the server. To prevent a possible attack from your local system, you must log out or remove your smart card or CAC when not actively working.


\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E05553_01/books/SysDiag/SysDiagEvntLogAdmin3.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\t\r\n\r\n\tBookshelf v7.7: About Events and Event Logging\r\n\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n

About Events and Event Logging

\r\n
\r\n

The elements of the event logging system are defined in the following bullets:

    \r\n
  • Event. An event is created each time you execute program code (such as running a task).\r\n
  • Event Type. Event types are categories of events.\r\n
  • Event Subtype. Event subtypes are code references that define the event.\r\n
  • Log Level. The log level determines the amount of information that is written to the log file. Log levels are set for event types. Table 7 lists the log levels of event types.\r\n
  • Severity. A severity level is associated with each event subtype. The severity level and log level share the same scale and are compared when writing events to the log file. Table 7 lists the severity of event subtypes.\r\n
\r\n
Table 7.  Severity and Log Levels

Log and Severity Level    Description
0                         Fatal
1                         Errors
2                         Warnings
3                         Informational
4                         Details
5                         Diagnostic

\r\n
\r\n
\r\n\r\n

When an event occurs, the severity level of the event (as defined by the event subtype) is compared with the log level of the event type. If the numerical value of the event severity level is equal to or lower than the numerical value of the event type log level, then the event is written to the log file. If the numerical value of the event severity level is higher than the numerical value of the event type log level, then the event is ignored.

NOTE:  Event subtypes with a lower numeric value have a higher severity. For example, a value of 0 indicates that the event subtype is more severe than one with a value of 5. Setting the event log level to a low number such as 1 logs only the most severe events; setting it to a higher number such as 5 captures more information, including less severe event subtypes.

\r\n

For example, the Siebel Server components in the Enterprise Application Integration component group (alias EAI) have an event type called EAI Siebel Wizard. Several event subtypes belong to the EAI Siebel Wizard event type, including:

    \r\n
  • EAI Siebel Wizard Invalid Business Component with a severity level of 2\r\n
  • EAI Siebel Wizard Invalid MVG with a severity level of 2\r\n
  • EAI Siebel Wizard MVG with a severity level of 3\r\n
\r\n

Suppose that, while the EAI component group is running, the process encounters a multi-value group (MVG). This encounter creates an event of the EAI Siebel Wizard MVG subtype. If the MVG is invalid, a second event of the EAI Siebel Wizard Invalid MVG subtype is created. If the log level of the EAI Siebel Wizard event type is set to 1, both events are ignored. If the log level is set to 3, both events are written to the log file.
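The comparison rule above can be sketched in a few lines of Python (the function name is mine, not Siebel's):

```python
def should_log(event_severity: int, log_level: int) -> bool:
    """An event is written to the log when its severity is numerically
    equal to or lower than the log level of its event type."""
    return event_severity <= log_level

# EAI Siebel Wizard example from the text:
# the MVG event has severity 3, the Invalid MVG event has severity 2.
for log_level, expected in [(1, False), (3, True)]:
    assert should_log(3, log_level) is expected  # EAI Siebel Wizard MVG
    assert should_log(2, log_level) is expected  # EAI Siebel Wizard Invalid MVG
```

With the log level at 1, both events are ignored; with the log level at 3, both are written, matching the example in the text.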

Events are logged at the Siebel Server level and the component level. See Configuring Siebel Server Logging for details on Siebel Server events; see Configuring Siebel Server Component Logging for information on component events.

\r\n
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
System Monitoring and Diagnostics Guide for Siebel eBusiness Applications
 
\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/en/database/oracle/oracle-database/19/dbseg/img_text/dbseg_pb_001b.html", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n \n \n \n \n Description of the illustration dbseg_pb_001b.png\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n

Description of the illustration dbseg_pb_001b.png

\n
\n

Image dbseg_pb_001b.png illustrates the Oracle Database connection to Microsoft Active Directory. From top to bottom, left\n to right, are the following components:\n

\n

The application and administrative users, who connect to the Oracle database using either password, Kerberos, or PKI authentication.

\n

From the Oracle Database, a connection to the Microsoft Active Directory. The Active Directory has been enabled with password\n filters using Oracle Filter.\n

\n

Shared database users and database roles are mapped to Active Directory groups.

\n

The Active Directory now can be used for authentication, user management, account policy management, and group assignments\n of the shared users, roles, and groups.\n

\n
\n
\"Oracle
Copyright © 1996, 2024, Oracle and/or its affiliates. 
\n \n" + }, + { + "url": "http://docs.oracle.com/cd/E28280_01/doc.1111/e26692/img_text/insert_flowchart2.htm", + "domain": "docs.oracle.com", + "file_source": "part-00352-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\nDescription of the illustration insert_flowchart2.gif\n\n\n

This flowchart illustrates the action sequence for checking a new file in to the Content Server repository.

\n
    \n
  1. \n

    Start the insert action.

    \n
  2. \n
  3. \n

    Check if content exists in the database.

    \n\n
  4. \n
\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E18727_01/doc.121/e13415/T456136T456141.htm", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\n\n\n\nOracle Payments User's Guide\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGo to main content\n
\n
\n

Funds Disbursement Process Home Pages

\n

Payment Process Overview

\n

To present a simplified overview of the Oracle Payments payment process, the process begins when a source product, such as Oracle Payables, needs to pay documents payable, such as invoices. Oracle Payables groups the documents payable into a payment process request and submits it to Oracle Payments. Within Oracle Payments, the Build Payments program groups documents payable into groups known as payments, which represent individual checks or electronic deposits. The Create Payment Instructions program groups payments into payment instructions. Payment instructions are then printed or submitted electronically to payment systems or banks.

\n
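The two-stage grouping described above (documents payable into payments, payments into payment instructions) can be sketched as follows; the grouping keys and field names are simplified assumptions for illustration, since the real Build Payments and Create Payment Instructions programs use many more attributes:

```python
from itertools import groupby

# Hypothetical documents payable from one payment process request.
documents = [
    {"invoice": "INV-1", "payee": "Acme",   "profile": "CHECK", "amount": 100},
    {"invoice": "INV-2", "payee": "Acme",   "profile": "CHECK", "amount": 50},
    {"invoice": "INV-3", "payee": "Globex", "profile": "CHECK", "amount": 75},
]

def by_payee(doc):
    return doc["payee"]

def by_profile(payment):
    return payment["profile"]

# Build Payments: group documents payable into payments (one check or
# electronic deposit per payee in this simplified model).
payments = []
for payee, group in groupby(sorted(documents, key=by_payee), key=by_payee):
    docs = list(group)
    payments.append({
        "payee": payee,
        "profile": docs[0]["profile"],
        "amount": sum(d["amount"] for d in docs),
    })

# Create Payment Instructions: group payments into payment instructions
# by payment process profile; instructions are then printed or transmitted.
instructions = {
    profile: list(group)
    for profile, group in groupby(sorted(payments, key=by_profile), key=by_profile)
}

print(len(payments), len(instructions["CHECK"]))  # prints: 2 2
```

Here the three invoices collapse into two payments (one per payee), and both payments land in a single CHECK payment instruction.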

Managing the Payment Process

\n

You can manage the payment process from the Oracle Payments Funds Disbursement Process Home Page or from the Payment Manager Dashboard in Oracle Payables. Your choice depends on:

\n\n

Oracle Payments Funds Disbursement Process Home Page

\n

The Funds Disbursement Process Home Page should be used when the deploying company uses a centralized payment processing system. Centralized payment processing is where one person or team is dedicated to payment processing, but not to Oracle Payables functions such as invoice selection and payment process request submission. It is that person's or team's responsibility to ensure that the payments are made. In this scenario, a user who is responsible only for the payment process, and not for the Oracle Payables functions of entering invoices and submitting payment process requests, will typically choose the Funds Disbursement Process Home Page, although either page can be used.

\n

If the deploying company sets up Oracle Payments so that the Create Payment Instructions program is run separately from the Build Payments program, then payments from different payment process requests are mixed together when creating payment instructions, and it is more difficult to follow the payment process from the perspective of the payment process requests. In this case, users are advised to use the Funds Disbursement Process Home Page.

\n

Oracle Payables Payments Dashboard

\n

The Oracle Payables Payments Dashboard should be used when the deploying company uses a decentralized payment processing system. Decentralized payment processing is where more than one person or team performs multiple payment functions, such as selecting the invoices to be paid, making the payments, and remedying any problems that occur with the payment process. In this scenario, a user who is responsible for invoice selection and the submission of payment process requests, as well as for following the payment process, may decide to use the Oracle Payables Payments Dashboard.

\n

Overview of the Funds Disbursement Process Home Pages

\n

The Funds Disbursement Process Home page is the first page the Payment Administrator sees after logging in. This page and its subsidiary pages enable the Payment Administrator to monitor and manage the payment process described in the previous section.

\n

The Funds Disbursement Process Home page is a read-only page that includes a Pending Actions region that displays the current status of payment process requests and payment instructions that are in-process. The Pending Actions region contains links to detailed views of these entities, as well as Take Action links to pages that enable the Payment Administrator to take the next step in the payment process. The Funds Disbursement Process Home page also contains links to other pages that are relevant to, but not necessarily part of, the payment process.

\n

During the payment process, the Payment Administrator uses the Funds Disbursement Process Home page to:

\n\n

Submitting and Monitoring Funds Disbursement Concurrent Requests

\n

The Concurrent Requests subregion of the side navigation bar enables the Payment Administrator to quickly submit and monitor any concurrent requests. Like the links in the Shortcuts subregion, these links enables the Payment Administrator to specify applicable parameters, schedule the concurrent request, and then monitor the submission.

\n

The table below describes the funds disbursement concurrent programs provided by Oracle Payments.

Funds Disbursement Concurrent Programs

Create Electronic Payment Instructions – Selects electronic payments and groups them into payment instructions.

Create Printed Payment Instructions – Selects printed payments and groups them into payment instructions.

Create Regulatory Reporting of Payments – Creates regulatory reports or central bank reports.

Format Payment Instructions – Uses XML Publisher templates to format payment instructions into payment files.

Payment File Accompanying Letter – Creates an accompanying letter for a payment instruction file.

Payment Instruction Register – Creates a report, displaying the contents of a payment instruction.

Payment Process Request Status Report – Displays the contents and status of a payment process request.

Positive Pay File – Creates a positive pay file for each payment instruction the concurrent program is run against. Note: If you run this program using the SRS functionality, the list of values for the Payment Process Profile field in the Schedule Request: Parameters page displays only payment process profiles that are limited to one or more internal bank accounts. Payment process profiles that can be used with all bank accounts do not appear in this list of values.

Reset Periodic Sequence Value – Resets periodic sequences to a specified number or letter. Periodic sequences can be specified in the payment process profile setup entity.

Send Separate Remittance Advices – Creates remittance advices that notify payees that payments have been made to them.
\n

Actions the Payment Administrator Performs from the Funds Disbursement Process Home Page, Pending Actions Region

\n

The table below describes actions the Payment Administrator can perform from the Funds Disbursement Process Home page, Pending Actions region. Note that these actions do not always need to be invoked; they are invoked only when necessary or when required by Oracle Payments settings.

Actions the Payment Administrator Performs from the Funds Disbursement Process Home Page

Payment Process Request, status Information Required - Pending Action. The Take Action icon opens the Complete Document Assignments page, which enables the Payment Administrator to assign internal bank accounts and payment process profiles to documents payable. Only used if the source product administrator does not provide this information.

Payment Process Request, status Document Validation Errors - Pending Action. The Take Action icon opens the Resolve Document Validation Errors page, which enables the Payment Administrator to review and resolve document-level validation errors by dismissing individual documents payable from the payment process and/or by modifying Oracle Payments setup. Only used if Oracle Payments settings require document-level validation errors to be reviewed by a user.

Payment Process Request, status Payment Validation Errors - Pending Action. The Take Action icon opens the Resolve Payment Validation Errors page, which enables the Payment Administrator to review and resolve payment-level validation errors by dismissing individual payments or documents payable from the payment process and/or by modifying Oracle Payments setup. Only used if Oracle Payments settings require payment-level validation errors to be reviewed by a user.

Payment Process Request, status Pending Proposed Payments Review. The Take Action icon opens the Review Proposed Payments page, which enables the Payment Administrator to review and approve or remove proposed payments after the Build Payments program has created them and before the payments are grouped into payment instructions. Only used if Oracle Payments settings require the review of proposed payments.

Payment Instruction, status Failed Validation Errors - Pending Action. The Take Action icon opens the Resolve Payment Instruction Validation Errors page, which enables the Payment Administrator to review and resolve payment instruction-level validation errors by dismissing individual payments from the payment process, by modifying Oracle Payments setup, or by choosing to override certain errors. Always used for payment instruction-level validation errors.

Payment Instruction, status Formatted - Ready for Transmission. The Take Action icon opens the Transmit Payment Instruction page, which enables the Payment Administrator to initiate payment instruction transmission. Only used for those payment instructions that are not transmitted to the payment system automatically, based on Oracle Payments settings.

Payment Instruction, status Transmission Failed. The Take Action icon opens the Resolve Payment Instruction Transmission Failure page, which enables the Payment Administrator to respond to a transmission failure by retransmitting the file or by ignoring the failure and recording that the transmission was successful. Always used for failed payment instruction transmission.

Payment Instruction, status Formatted - Ready for Printing. The Take Action icon opens the Print Payment Documents page, which enables the Payment Administrator to initiate payment document printing within Oracle Payments. Only used for those payment instructions that are not submitted for printing automatically, based on Oracle Payments settings.

Payment Instruction, status Created - Ready for Printing. The Take Action icon opens the Print Payment Documents page, which enables the Payment Administrator to initiate payment document printing within Oracle Payments. Always used for payment instructions for which formatting and printing are deferred due to another payment instruction locking the payment document needed to print the payments.

Payment Instruction, status Created - Ready for Formatting. The Take Action icon opens the Print Payment Documents page, which enables the Payment Administrator to initiate payment document printing outside Oracle Payments, that is, to initiate printing to file. Always used for payment instructions for which formatting and printing are deferred due to another payment instruction locking the payment document needed to print the payments.

Payment Instruction, status Submitted for Printing. The Take Action icon opens the Payment Instruction page with a choice of navigating to the Record Print Status page or the Reprint Payment Documents page. The Reprint Payment Documents page enables the Payment Administrator to reprint spoiled payment documents and is always available for payment instructions printed within Oracle Payments. The Record Print Status page enables the Payment Administrator to record the status of printed payment documents, including spoiled payment documents that should not be reprinted and skipped documents; this action also marks payments complete. Recording print status is a required action for printed payment instructions, and this method of recording is always available for payment instructions printed within Oracle Payments.

Payment Instruction, status Formatted - Ready for Recording. The Take Action icon opens the Record Print Status page, which enables the Payment Administrator to record the status of printed payment documents. This action also marks payments complete. Recording print status is a required action for printed payment instructions. This method of recording is always available for payment instructions printed outside Oracle Payments, that is, printed to file.

Stop Payment Request, status Printed. The Take Action icon opens the Resolve Stop Payment Request page, which enables the Payment Administrator to confirm or release a stop. Always used for payments that have a previous stop request placed on them.
\n

Note: The following sections describe actions that the Payment Administrator performs from the Funds Disbursement Process Home page. These actions are not presented in any particular order.

\n

Completing Assignments

\n

Source products are allowed to submit documents payable to Oracle Payments without assigning an internal bank account or a payment process profile to them. The Complete Assignments page and its subsidiary pages are used to assign required entities to documents payable, so that Oracle Payments has the necessary information to continue with the payment process. This page enables Payment Administrators to perform the first action that they can take during the payment process, which is to assign internal bank accounts and payment process profiles to documents payable. While providing this information, Payment Administrators can also change the assignments of documents that already have this information.

\n

Important: Once internal bank account and/or payment process profile information has been supplied for all documents and the payment process restarted, no assignments can be changed.

\n

Resolving Document Validation Errors

\n

Once all documents payable have been assigned all required attributes, the Build Payments program validates them, based on applicable validations assigned in Oracle Payments setup. When submitting the payment process request, the source product specifies whether documents that fail this document validation are rejected or whether the Build Payments program simply stops the payment process for review by the Payment Administrator.

\n

If review is required for failed documents, the Payment Administrator navigates to the Resolve Document Validation Errors page to review the validation errors, dismiss individual documents payable from the payment process, if necessary, and restart the Build Payments process when the errors have been resolved.

\n

The Payment Administrator can also leave the Funds Disbursement Process pages altogether in order to change setup or third party payee data that may have caused the error, and then return to the Resolve Document Validation Errors page to restart the Build Payments process.

\n

Resolving Payment Validation Errors

\n

The Resolve Payment Validation Errors page enables the Payment Administrator to resolve validation errors at the payment level. This page displays the proposed payments and validation errors, as well as the documents payable that comprise each proposed payment.

\n

Once payments have been built from documents payable, the Build Payments program validates them, based on applicable validations assigned in Oracle Payments setup. When submitting the payment process request, the source product specifies whether payments that fail this validation are rejected or whether the Build Payments program simply stops the payment process for review by the Payment Administrator.

\n

If review is required for payments that fail validation, the Payment Administrator navigates to the Resolve Payment Validation Errors page to review the validation errors, remove documents payable or entire payments, if necessary, and restart the Build Payments process when the errors have been resolved. The Payment Administrator can also leave the Funds Disbursement Process pages altogether to change setup or third party payee data that may have caused the error, and then return to the Resolve Payment Validation Errors page to restart the Build Payments process.

\n

Reviewing Proposed Payments

\n

The Review Proposed Payments page enables the Payment Administrator to review and approve proposed payments after the Build Payments program has created them. This page displays all proposed payments after they have passed validation, as well as the documents that comprise each proposed payment.

\n

When submitting the payment process request, the source product specifies whether the Build Payments process is stopped for payment review once the proposed payments are built. If review is required for proposed payments, the Payment Administrator navigates to the Review Proposed Payments page to review payments, remove payments or individual documents, if necessary, and then restarts the Build Payments process.

\n

Resolving Payment Instruction Validation Errors

\n

Once payment instructions have been built from payments, the Create Payment Instructions program validates them, based on applicable validations assigned in Oracle Payments setup. If a payment instruction fails validation, it is always stopped for review. The Resolve Payment Instruction Validation Errors page enables the Payment Administrator to review, resolve, or override validation errors found by the Create Payment Instructions program. This page displays the following:

\n\n

The Payment Administrator can remove payments, if necessary, leave the Funds Disbursement Process pages altogether to change setup or third party payee data, or, in the case of some errors, override the validation errors. If the validation errors are overridden or resolved, the payment process proceeds to formatting and then printing or transmitting the payment instruction.

\n

Note: At this action step, the Payment Administrator does not have the option of removing individual documents payable.

\n

Payment Instruction Status of Created

\n

If you notice that the Create Payment Instructions program has stopped, leaving a payment instruction with a status of Created, as seen in the Status column under the Pending Actions region of the Funds Disbursement Process Home page, or on the Payment Instruction Search page, you can move the Created status to a formatting phase by running the Format Payment Instruction program.

\n

To run the Format Payment Instruction program, perform either of the following steps:

\n\n

Note: The preceding actions are error-recovery procedures only. Normally, when a payment instruction successfully finishes validation, the Format Payment Instruction program is run automatically, moving the payment instruction beyond the Created status.

\n

Transmitting Payment Instructions

\n

Payment instructions that are electronic, as opposed to printed checks, must be transmitted to a payment system. This transmission occurs automatically or is deferred, based on Payments setup. If Oracle Payments is set up to defer payment instruction transmission, the Payment Administrator navigates to this page to manually initiate the transmission.

\n

The Transmit Payment Instruction page enables the Payment Administrator to initiate payment instruction transmission for those payment instructions that are not transmitted to the payment system automatically. This page enables the Payment Administrator to review payment instruction and transmission details before transmitting the instruction.

\n

Note: This action step is the Payment Administrator’s final opportunity to terminate a payment instruction.

\n

Resolving Payment Instruction Transmission Failure

\n

Occasionally, a transmission fails. The Resolve Payment Instruction Transmission Failure page enables the Payment Administrator to respond to a transmission failure by taking one of the following actions:

\n\n

The Resolve Payment Instruction Transmission Failure page displays an overview of the payment instruction, along with transmission details.

\n

Printing Payment Documents

\n

Oracle Payments setup enables the Payment Administrator to choose whether payment instructions are printed immediately after a payment instruction is formatted. For those payment instructions that are not printed immediately, the Payment Administrator must manually submit them for printing.

\n

Manual submission occurs in the Print Payment Documents page. This page enables the Payment Administrator to initiate payment document printing for those payment instructions that are submitted manually for printing, rather than automatically. The Print Payment Documents page enables the Administrator to review basic details of a payment document and to override the default printer before submitting the payment instruction for printing.

\n

In the case where a payment instruction is not formatted and printed because another payment instruction has locked the payment document (see below), this page is used to initiate both formatting and printing. In the case where a payment instruction is supposed to be printed outside Oracle Payments, that is, printed to file, this page is used to initiate formatting.

\n

The Print Payment Documents page prints both prenumbered and non-prenumbered payment documents. Behind the scenes, the print program invokes Oracle XML Publisher to print the payment instruction onto checks or into a payment file that is transmitted to a payment system for further processing and disbursement.

\n

Note: This action step is the Payment Administrator’s final opportunity to terminate the payment instruction.

\n

Locking and Numbering Payment Documents

\n

This section discusses printing in general, not just printing that is manually initiated by the Payment Administrator.

\n

Note: The term payment instruction refers to a collection of payments, as built by the Create Payment Instructions program. The term payment document can refer to the stock of paper that is used to print payments onto, such as check stock or a check book. Alternately, payment document can also refer to a physical payment, such as a printed check, which is printed onto a single piece of check stock.

\n

Payment document printing can occur immediately after payment instruction creation or later at the Payment Administrator’s request. Both the Print Payment Documents page and the Create Payment Instructions Program use the Format Payment Instructions program to perform the necessary print tasks. This program initiates payment document printing by internally tracking the numbering of the payment documents and locking the payment documents. For each payment instruction, Oracle Payments performs the following steps:

\n
  1. The system checks whether the required payment document is available for printing. If the payment document is unavailable, printing cannot continue:

     • If the Format Payment Instructions program was invoked by the Create Payment Instructions program, printing is deferred.

     • If the Format Payment Instructions program was invoked by the Print Payment Documents page, an error message indicates that the Payment Administrator cannot print the payment instruction until he completes recording the print status of the prior payment instruction.

     In either case, the Payment Administrator must complete the previous payment instruction, that is, the one that is locking the payment document, by recording its print status. For information on recording the print status of the payment instruction, see Recording the Print Status of Prenumbered Payment Documents and Recording the Print Status of Non-Prenumbered Documents.

     If the payment document is available, Oracle Payments locks it for this payment instruction. If the Format Payment Instructions program is invoked by the Create Payment Instructions program and that program created more than one payment instruction that requires the same payment document, then the payment instruction that was created first locks the payment document.

     Note: A payment document is unlocked if either of the following occurs:

     • the payment instruction is terminated

     • the payment instruction prints correctly and its status is recorded by the Payment Administrator on the Record Print Status page

  2. Oracle Payments internally tracks the numbering of all the payments contained in a payment instruction, including the setup and overflow documents.

     Setup documents are occasionally required by older printing systems. These prenumbered setup checks are discarded after the print run, but the system tracks their numbers.

     Overflow documents are checks that are voided due to a continuation of descriptive text on more than one check stub. This occurs when the number of lines of descriptive text printed per check stub exceeds the maximum allowed.

  3. The Format Payment Instructions program sends the payment instruction to Oracle XML Publisher for printing.
\n
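The locking and numbering steps above can be sketched as a small state model. This is an illustrative sketch only; the class and method names are invented and do not correspond to Oracle Payments internals.

```python
class PaymentDocument:
    """Illustrative model of a payment document (e.g., check stock)
    that can be locked by at most one payment instruction at a time."""

    def __init__(self, name):
        self.name = name
        self.locked_by = None   # payment instruction currently holding the lock
        self.next_number = 1    # internal tracking of document numbers

    def try_lock(self, instruction_id):
        """Lock the document for an instruction; fail if another holds it."""
        if self.locked_by is not None and self.locked_by != instruction_id:
            return False        # caller must defer printing or raise an error
        self.locked_by = instruction_id
        return True

    def assign_numbers(self, payment_count, setup_docs=0, overflow_docs=0):
        """Track numbering for payments plus setup and overflow documents."""
        total = payment_count + setup_docs + overflow_docs
        first = self.next_number
        self.next_number += total
        return list(range(first, self.next_number))

    def unlock(self):
        """Unlock when the instruction is terminated or its status recorded."""
        self.locked_by = None


doc = PaymentDocument("check-stock-A")
assert doc.try_lock("instr-1")       # first instruction locks the stock
assert not doc.try_lock("instr-2")   # second instruction must wait
numbers = doc.assign_numbers(3, setup_docs=1)
doc.unlock()                         # print status recorded, lock released
assert doc.try_lock("instr-2")
```

The key point the sketch captures is that the second payment instruction cannot print until the first one either records its print status or is terminated.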

Reprinting Payment Documents

\n

The Reprint Payment Documents page is optional. If a Payment Administrator finds no problems with the initial print run or does not need to reprint, he can navigate directly to the Record Print Status page. The Reprint Payment Documents page can be used if, after printing has been submitted, the Payment Administrator discovers printing problems and wishes to reprint particular payment documents or the complete payment instruction. This page enables the Payment Administrator to reprint individual payment documents, ranges of payment documents, or the complete payment instruction, and then submit those payment documents for printing.

\n

Note: The Payment Administrator must visually inspect whether the checks printed. He can reprint the complete payment instruction only if the initial print run has not started.

\n

The Reprint Payment Documents page enables the Payment Administrator to:

\n\n

Note: The Reprint button is available only after Oracle Payments attempts to print a payment document. In the reprint scenario, the payment document is still locked from the initial printing attempt.

\n

Warning: Do not reprint the complete payment instruction if the initial printing attempt resulted in one or more checks printing successfully. If you reprint the entire payment instruction after successfully printing one or more payment documents, the numbering on the prenumbered payment documents may be incorrect. If printing did commence, but you need to reprint every payment document in the payment instruction, choose to reprint a range of payment documents and enter the first and last documents in the payment instruction as the range.

\n

Those payment documents that are selected for reprint are automatically marked as Spoiled. Because payment document numbers cannot be reused for prenumbered payment documents, reprinting on prenumbered payment documents requires the Payment Administrator to provide the first document number for the reprints. Oracle Payments can then correctly renumber the payments.

\n

Recording the Print Status of Prenumbered Payment Documents

\n

Since the actual printing occurs outside of Oracle Applications and has many potential failure points, Oracle Payments does not know the outcome of printing or reprinting payment documents. Consequently, the Payment Administrator needs to provide that information to Oracle Payments through the Record Print Status page. This page enables the Payment Administrator to:

\n\n

By default, payment documents that have been marked spoiled during the reprinting process are displayed as Spoiled in the Record Print Status page, but all other payment documents are initially displayed as Printed. For those payment documents that actually are spoiled or skipped, the Payment Administrator must enter applicable documents in the Record Spoiled Payment Documents region or the Enter Skipped Payment Documents region of the Record Print Status page.

\n

The Record Print Status page also enables the Payment Administrator to choose whether to submit the Positive Pay program immediately after he finishes recording the print status, if the applicable setup enables the choice. The program creates a positive pay file, formats it, and transmits it electronically to your bank. This prevents check fraud by informing the bank which payment documents are issued and for what amount.

\n

Important: Do not commit all print statuses unless you are sure that all documents with the status of Printed were successfully printed. If you click the Apply button, the payments are marked as complete and the payment documents are recorded as Printed. If you complete this action and discover printing errors, you will need to void the payment and select the documents to be paid in a new payment process.

\n

Recording the Print Status of Non-Prenumbered Documents

\n

When a payment document is not prenumbered, no renumbering needs to be done when payment documents are skipped. This is because the skipped document is simply a blank piece of paper that can be used in a new print run.

\n

Spoiled payment documents that have not been reprinted and that you wish to remove, rather than reprint, do need to be marked, so that Oracle Payments can consider those payments failed and notify source products appropriately.

\n

As with prenumbering, the Record Print Status page displays the Printed status by default. Since reprinting non-prenumbered documents involves using the same document number, rather than renumbering, all payment documents are shown with the status of Printed when the Payment Administrator first enters the Record Print Status page.

\n

Marking payment documents as Spoiled in the Record Spoiled Payment Documents Region, and later committing them through the Review Record Print Status page, results in the removal of the associated payments from the payment instruction and informs the relevant source products that the documents payable have not been paid.

\n

Marking Payments Complete

\n

Oracle Payments must notify source products when payments are complete, so that the source products can perform any necessary actions, such as accounting. Printed payments are considered complete when the payment documents are recorded as Printed. For electronic payments, however, determining the point at which a payment is considered complete is more complex and depends on the first party payer’s business practices, as well as on what notification (acknowledgement and clearing) the payer’s payment system supports. An electronic payment can be considered complete any time after formatting.

\n

In general, electronic payments in a payment instruction are automatically marked complete at some point chosen during the setup of payment process profiles. However, Oracle Payments also enables Payment Administrators to manually mark payments complete, before they are marked automatically. For information on setting up completion behavior, see Setting Up Payment Process Profiles, Oracle Payments Implementation Guide.

\n

To mark payments complete manually, the Payment Administrator navigates to the Mark Payments Complete page from the Funds Disbursement Process Home page by first selecting the Electronic Payment Instructions Not Marked Complete View under the Payment Processes region, clicking the Go button, and then clicking the Mark Payments Complete icon for an electronic payment instruction with a status of Formatted, Formatted - Ready for Transmission, Transmitted, or Transmission Failed. This view only shows payment instructions whose payment process profiles have the Allow Manual Setting of Payment Completion check box selected.

\n

Once the payments in a payment instruction have been marked complete by clicking the Apply button in the Mark Payments Complete page, the source product is notified that the payments are complete. Simultaneously, the payment instruction can no longer be terminated. Instead, if there are any problems with the payments, they must be voided. The Terminate Payment Process action, therefore, does not appear on any page that displays in the context of a payment instruction whose payments have been marked complete.

\n

Note: The Payment Administrator must mark all the payments in a payment instruction as complete. Oracle Payments does not support partial marking of payments as complete in a payment instruction.

\n

Recording Stop Payments Requests

\n

When the Payment Administrator determines that a payment needs to be stopped, he contacts the payer bank and requests a stop payment.

\n

Note: Payments does not support communication with payment systems regarding stop payments.

\n

The Payment Administrator then records the stop payment request in the Record Stop Payment Request page. To navigate to the Record Stop Payment Request page, he uses one of the following:

\n\n

If the Payment Administrator navigates to the Record Stop Payment Request page from the Funds Disbursement Process Home page, he enters a paper document number in the Paper Document Number field or a payment reference number in the Payment Reference field for the payment for which he wishes to record a stop payment request and then presses the Tab key on the keyboard. Information populates the Payee, Payment Date, and Amount fields. He then enters a stop request date, reason, and reference.

\n

To navigate to the Record Stop Payment Request page from the Payments Search page, the Payment Administrator performs a simple search. When the results display, he clicks the Stop Actions icon for the applicable payee, and information displays in the payee, payment date, and amount fields. He then enters a stop request date, reason, and reference. The reference is provided by the bank.

\n

Resolving Stop Payments Requests

\n

After a stop payment request has been made, the bank checks that the payment has not already been made and then confirms or denies the stop payment request. The Payment Administrator then uses the Resolve Stop Payment Request page to enter the confirmation or release of the stop payment request. This includes recording the confirmation or release date, reason, and reference. The reference is provided by the bank.

\n

Confirming a stop payment request causes Oracle Payments to automatically perform one of the following steps:

\n\n

Releasing a stop payment request causes Oracle Payments to record the release, but no other change occurs. The system continues to treat the payment normally.

\n

The Payment Administrator navigates to the Resolve Stop Payment Request page by using the Payments Search page to search for a payment with a stop request placed on it and then clicks the Stop Actions icon. Alternatively, he can click the Views button in the Payments Simple Search page, select the Resolve Stop Payment Requests View, and click the Stop Actions icon for the applicable payee.

\n

Voiding Payments

\n

Voiding a payment by specifying a void date and reason causes both Oracle Payments and the payment’s source product to reverse the payment.

\n

To void a payment, the Payment Administrator navigates to the Void Payment page in one of the following two ways:

\n\n

Note: A check should only be voided if it is in your physical possession or has been successfully stopped by your bank. A Payment Administrator cannot void a payment that has an unconfirmed stop payment request placed on it.

\n

Source products may restrict whether and when a payment can be voided. Consequently, when the Payment Administrator attempts to void a payment on the Void Payment page, Oracle Payments contacts the source product to check whether the payment can be voided. If the payment cannot be voided, the system displays an error message. When the Payment Administrator uses the Payment Search page, Oracle Payments automatically checks whether the payments in the results region can be voided and displays the Void icon only for those payments that can be voided.

\n

Voids are allowed on payments that have been transmitted to the payment system. However, if the payment system has already made the payment, this can cause a discrepancy between Oracle Payments and the real world. It is therefore best to check whether your payment system has actually made a transmitted payment before attempting to void it.

\n

Voiding all Payments

\n

Once the payments in a payment instruction are marked complete, a Payment Administrator cannot terminate the payment instruction. The only way to recover from an error or problem with the payment instruction as a whole is to void all the payments in the instruction. Because voiding all payments in a payment instruction indicates a serious payment-related problem, this action is intended only as an extreme error recovery procedure and should be invoked only when absolutely necessary. In addition, this page is disabled by default and can only be enabled through function security. Oracle Payments enables the Payment Administrator to void each payment individually, if necessary.

\n

Voiding payments causes both Oracle Payments and the payment’s source product to reverse those payments. Source products may restrict whether and when payments can be voided. Therefore, when the Payment Administrator attempts to void all payments, Oracle Payments checks whether each payment can be voided. If all payments cannot be voided, Oracle Payments displays an error message.

\n

To void all payments in a payment instruction, the Payment Administrator must use the Payment Instructions Search page to navigate to the Void All Payments page. From the Payment Instructions Search page, the Payment Administrator searches on one or more variables, views the results, and then clicks the Void All Payments icon for the applicable payment instruction. In the Void All Payments page, the Payment Administrator enters a void date and reason that is applied to all payments in the payment instruction.

\n

Supporting SEPA Credit Transfer

\n

The European Payments Council (EPC) is the governing and coordination body of the European banking industry in relation to payments. It was established in 2002 to support and promote the creation of the Single Euro Payments Area (SEPA). The SEPA initiative creates a zone of European countries (in 2008 the SEPA zone included 31 countries) in which all payments in Euro are considered domestic, even payments that cross borders. SEPA aims to improve the efficiency of cross-border payments by developing common standards, procedures, and infrastructure to achieve economies of scale. The introduction of SEPA increases the intensity of competition among banks and corporations for customers across borders within Europe. SEPA provides cheaper, more efficient, and faster payments within the SEPA zone to consumers, merchants, corporates, and public administrations (Customers).

\n

SEPA introduces a new Pan-European payment scheme for payments, both credit transfers and direct debits. The SEPA implementation guidelines for credit transfers are based on the adoption of the ISO20022 (a UNIFI standard). The implementation guidelines issued by the governing body, the European Payments Council, prescribe specific ISO20022 messages to be used to initiate SEPA payments.

\n

SEPA implementation is important for the following reasons:

\n\n

A SEPA Credit Transfer (SCT) is a payment instrument for the execution of credit transfers in Euro between Customers located in the SEPA zone. The SEPA Credit Transfer is executed on behalf of an Originator. The payment is transferred from the Originator’s Bank account to the Beneficiary’s bank account.

\n

The following parties are involved in the process:

\n\n


\n\n

Credit Transfer Initiation Messaging

\n

The SEPA credit transfer initiation messaging includes the various components associated with the payment process. In this process, functional mapping of the attributes in Oracle Application to the SCT messaging elements in accordance to the SEPA implementation guidelines is provided. The SEPA messaging format is XML. The concept of Grouping modes and Batch Booking for SEPA payment formatting is introduced. The grouping modes describe the way in which the payments are grouped in the file, which is sent to the bank. Batch booking defines how the entry appears in the bank statement. If enabled, the Bank sends a single line for all the transactions under one group.

\n

To support SEPA implementation, Oracle Payments includes the following components:

\n\n

Seeding XML Template and Mapping the Attributes for SEPA Messaging

\n

Oracle Payments now includes an XML template for specifying the SEPA payment format. All the attributes of Oracle Payments disbursement are mapped to the messaging elements as per the SEPA format.

\n

This template is seeded and is used for creating the SEPA payment file.

\n

Payment Process Profiles

\n

The Payment Process Profiles for the SEPA Credit Transfer Initiation are seeded in Oracle Payments. The payment instruction grouping depends on the grouping mode selected in the Payment Process Profile.

\n

Three Payment Process Profiles are seeded, one for each grouping mode: Single, Grouped, and Mixed.

\n

Capturing Batch Booking flag and Grouping mode in the Payment Process Profile Setup

\n

Grouping and Batch Booking are supported as per the ISO20022 guidelines. The Payment Process Profile includes the Grouping Mode and Batch Booking indicators.

\n

A Grouping Mode field with a dropdown list is added in the Payment Process Profile setup window. The list has the following values: Single, Grouped, and Mixed. The format of the payment initiation message differs based on the grouping mode selected. The default value is Grouped.

\n

The payment instruction grouping parameters for Grouped are:

\n\n

In the Payment Process Profiles window these fields are displayed as checked. You can change the default selection.

\n

A checkbox, Batch booking, is added in the Payment Process Profile setup window. It is unchecked by default. It indicates whether the bank should book transactions individually or per payment group. When payments are batch booked, they will generate a single bank statement line entry per group. This is optional.

\n

The company can have agreements with the bank to batch book certain transactions (usually ACH transactions) of similar characteristics. This is done outside the system.

\n

The batch booking checkbox is added to the Create Payment Process Profile window.

\n

Payment Method

\n

For SEPA transactions the payment method must be Electronic.

\n

Message Structure

\n

The SCT message structure consists of the following blocks:

\n\n

Group Header

\n

This is the first block in the SCT message. The Group Header consists of the following elements: SCT Message Identification, CreationDateTime, BatchBooking, NumberOfTransactions, Grouping, Initiating Party etc.

\n

Payment Information Block

\n

This block consists of a set of parameters, which apply to the debit side of the payment transaction. These include information like: Payment Information Identification, PaymentMethod, Payment Type Information, Requested Execution Date, Debtor, Debtor Account, Debtor Agent, Bank Charges Bearer etc.

\n

Credit Transfer Transaction Information Block

\n

This block consists of a set of elements providing information specific to the individual payments included in the SCT message. This consists of the following elements: Payment Identification, Amount, Instructed Amount, Creditor Agent, Creditor Agent Account, Creditor, Creditor Account, Payment Purpose and Remittance Information etc.

\n
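For illustration, the three blocks above appear in an ISO 20022 pain.001 (customer credit transfer initiation) file roughly as in the sketch below, which builds a minimal skeleton with Python's standard library. The element names follow the ISO 20022 message structure described above, but all values are invented, and a real SEPA file requires many additional elements plus the proper XML namespace.

```python
import xml.etree.ElementTree as ET

def el(parent, tag, text=None):
    """Helper: append a child element, optionally with text."""
    child = ET.SubElement(parent, tag)
    if text is not None:
        child.text = text
    return child

root = ET.Element("Document")
msg = el(root, "CstmrCdtTrfInitn")

# Group Header: message-level identification and totals
grp = el(msg, "GrpHdr")
el(grp, "MsgId", "SCT-2024-0001")          # invented message id
el(grp, "CreDtTm", "2024-01-15T10:00:00")
el(grp, "NbOfTxs", "1")
el(el(grp, "InitgPty"), "Nm", "Example Originator Ltd")

# Payment Information block: debit-side parameters
pmt = el(msg, "PmtInf")
el(pmt, "PmtInfId", "BATCH-0001")
el(pmt, "PmtMtd", "TRF")                   # credit transfer
el(pmt, "BtchBookg", "false")              # batch booking indicator
el(pmt, "ReqdExctnDt", "2024-01-20")
el(el(pmt, "Dbtr"), "Nm", "Example Originator Ltd")
el(el(el(pmt, "DbtrAcct"), "Id"), "IBAN", "DE89370400440532013000")
el(pmt, "ChrgBr", "SLEV")                  # charges borne per SEPA service level

# Credit Transfer Transaction Information: one entry per payment
tx = el(pmt, "CdtTrfTxInf")
el(el(tx, "PmtId"), "EndToEndId", "INV-1001")
amt = el(el(tx, "Amt"), "InstdAmt", "150.00")
amt.set("Ccy", "EUR")                      # SEPA payments are in Euro
el(el(tx, "Cdtr"), "Nm", "Example Beneficiary GmbH")
el(el(el(tx, "CdtrAcct"), "Id"), "IBAN", "FR1420041010050500013M02606")

print(ET.tostring(root, encoding="unicode"))
```

Note how the batch booking indicator and the debit-side parameters sit in the payment information block, while per-payment details sit in the transaction information block, exactly as described above.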

Validation Sets for Checking the IBAN, BIC, Bank Charge Bearer and Legal entity Address

\n

For a payment to be sent to a bank in SEPA format there are some basic conditions and validations that must be met. The validation architecture in Oracle Payments is used to ensure a payment included in the SEPA format satisfies the conditions. This in turn ensures Straight Through Processing (STP).

\n

The following conditions must be met:

\n\n

These details are validated at the format level and at the invoice validation level. This section explains the format-level validations; the invoice-level validations are discussed in a separate section. You have to set up the invoice-related validations.

\n
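One such format-level condition is that account numbers are valid IBANs. As an illustration of this kind of check, the generic ISO 13616 mod-97 IBAN integrity check can be sketched as follows; this is the standard public algorithm, not the validation set shipped with Oracle Payments.

```python
def iban_checksum_ok(iban: str) -> bool:
    """Return True if the IBAN passes the ISO 13616 mod-97 integrity check."""
    s = iban.replace(" ", "").upper()
    if not s.isalnum() or len(s) < 5:
        return False
    # Move the country code and check digits to the end, then map
    # letters to numbers (A=10 ... Z=35) and take the value mod 97.
    rearranged = s[4:] + s[:4]
    digits = "".join(str(int(ch, 36)) for ch in rearranged)
    return int(digits) % 97 == 1

assert iban_checksum_ok("DE89 3704 0044 0532 0130 00")   # well-known valid example
assert not iban_checksum_ok("DE89 3704 0044 0532 0130 01")
```

Running this kind of check at format time, rather than letting the bank reject the file, is what preserves Straight Through Processing.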

When the bank charge bearer setting is changed at the site level, invoices created before the change continue to hold the old value. The change applies only to invoices created subsequently for that site and supplier.

\n

Supporting SEPA Credit Transfer Enhancement in Oracle Payments

\n

The SEPA Credit Transfer feature introduced in Release 12 is based on the SEPA Rulebook and SEPA implementation guidelines V2.3. The European Payments Council (EPC) recently published version V3.3 of these documents, which prescribes changes to the usage rules of various message elements. The existing messaging structure must incorporate these changes to ensure that the message sent to banks is in accordance with the prescribed format.

\n

This topic describes the changes to the existing solution to comply with the latest SEPA guidelines including the mapping changes of various messaging elements of SEPA core payments.

\n

Mapping the Attributes for SEPA Messaging

\n

All the attributes of Payments disbursement are mapped to the messaging elements as per SEPA format. The existing functionality provides the mapping of the SEPA core elements. However, there are changes in the usage rules of various messaging elements in the latest SEPA guidelines and the mapping for those messaging elements are provided.

\n

In the latest version of SEPA guidelines, the mapping for the following new elements is supported:

\n\n

The mapping for the following elements is changed in accordance with the change in usage rules described in SEPA implementation guidelines V3.3:

\n\n

The mapping of these elements is supported as part of SEPA core payments. The mapping remains unchanged for the remaining elements.

\n

The SEPA guidelines are revised to support only the Mixed grouping mode for the SEPA payment format. The grouping mode list of values now includes only the values Mixed and None. The seeded Payment Process Profile (PPP) values of Single and Grouped are removed.

\n

Predefined Validations attached to SEPA Credit Transfer Payment Format

\n

The SEPA credit transfer initiation message initiates credit transfers. Messages sent to the bank must satisfy all the validations. The validation architecture in Oracle Payments ensures that a payment included in the SEPA format satisfies the conditions. This ensures Straight Through Processing (STP) and makes sure that the file has no errors when it is processed by the bank. For the SEPA payment format, validations are added to check that the required information for the various messaging elements is provided. Validation sets are provided to optimize Straight Through Processing (STP).

\n

New validation sets are introduced to validate the initiating party name and the identification of the initiating party, debtor, and ultimate debtor.

\n

New Attributes in Logical Grouping of Payments

\n

The payment information block in the SEPA message structure contains the attributes of logical grouping. The SEPA payments are grouped using the following attributes of the payment information block:

\n\n

With the changes in the usage rules, new elements, Instruction Priority and Category Purpose, are supported in the payment information block. These elements are included as attributes for logical grouping of payments. The invoice legal entity is added in the payment extract.

\n
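The grouping can be pictured as keying each payment by its payment-information attributes and collecting payments per key, so that each distinct key yields one payment information block. The attribute set and field names below are illustrative, not the product's full grouping key.

```python
from collections import defaultdict

def group_payments(payments):
    """Group payments by payment-information attributes; each group
    would become one payment information block in the SEPA message."""
    groups = defaultdict(list)
    for p in payments:
        key = (
            p["debtor_account"],            # debit-side account
            p["requested_execution_date"],
            p["instruction_priority"],      # new grouping attribute (V3.3)
            p["category_purpose"],          # new grouping attribute (V3.3)
        )
        groups[key].append(p)
    return groups

payments = [
    {"debtor_account": "DE89370400440532013000",
     "requested_execution_date": "2024-01-20",
     "instruction_priority": "NORM", "category_purpose": "SUPP", "amount": 100},
    {"debtor_account": "DE89370400440532013000",
     "requested_execution_date": "2024-01-20",
     "instruction_priority": "NORM", "category_purpose": "SUPP", "amount": 250},
    {"debtor_account": "DE89370400440532013000",
     "requested_execution_date": "2024-01-20",
     "instruction_priority": "HIGH", "category_purpose": "SUPP", "amount": 75},
]

groups = group_payments(payments)
assert len(groups) == 2   # the HIGH-priority payment lands in its own block
```

Because Instruction Priority is part of the key, payments that differ only in priority are split into separate payment information blocks.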

Impact on Existing Users

\n\n
\n
\n \n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E26505_01/html/816-5180/kstat-delete-9f.html", + "domain": "docs.oracle.com", + "file_source": "part-00352-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nSynopsis - man pages section 9: DDI and DKI Kernel Functions\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
man pages section 9: DDI and DKI Kernel Functions     Oracle Solaris 10 1/13 Information Library

kstat_delete

\n

- remove a kstat from the system

Synopsis

\n
#include <sys/types.h>\n#include <sys/kstat.h>\n\n\n\nvoid kstat_delete(kstat_t *ksp);

Interface Level

\n

Solaris DDI specific (Solaris DDI)

Parameters

\n
ksp
\n

Pointer to a currently installed kstat(9S) structure.

\n
\n

Description

\n

kstat_delete() removes ksp from the kstat chain and frees all associated system\nresources.

Return Values

\n

None.

Context

\n

kstat_delete() can be called from any context.

See Also

\n

kstat_create(9F), kstat_install(9F), kstat_named_init(9F), kstat(9S)

Writing Device Drivers

Notes

\n

When calling kstat_delete(), the driver must not be holding that kstat's ks_lock.\nOtherwise, it may deadlock with a kstat reader.
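Not part of the original page: a hedged sketch of the usual lifecycle around kstat_delete() in a driver's attach/detach path, using the related routines this page cross-references (kstat_create(9F), kstat_named_init(9F), kstat_install(9F)). The driver name "xx" and its statistics are hypothetical, and the fragment only builds inside a Solaris driver environment.

```c
#include <sys/types.h>
#include <sys/kstat.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

/* Hypothetical named statistics for an illustrative driver "xx". */
typedef struct xx_stats {
    kstat_named_t reads;
    kstat_named_t writes;
} xx_stats_t;

static kstat_t *xx_ksp;

/* Called from attach(9E): create, initialize, and install the kstat. */
static int
xx_stats_init(dev_info_t *dip)
{
    xx_ksp = kstat_create("xx", ddi_get_instance(dip), "statistics",
        "misc", KSTAT_TYPE_NAMED,
        sizeof (xx_stats_t) / sizeof (kstat_named_t), 0);
    if (xx_ksp == NULL)
        return (DDI_FAILURE);

    xx_stats_t *sp = xx_ksp->ks_data;
    kstat_named_init(&sp->reads,  "reads",  KSTAT_DATA_UINT64);
    kstat_named_init(&sp->writes, "writes", KSTAT_DATA_UINT64);
    kstat_install(xx_ksp);
    return (DDI_SUCCESS);
}

/* Called from detach(9E): remove the kstat from the system. Per the
 * Notes above, the kstat's ks_lock must not be held across this call,
 * or the driver may deadlock with a kstat reader. */
static void
xx_stats_fini(void)
{
    kstat_delete(xx_ksp);
    xx_ksp = NULL;
}
```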

\n
\n\n
Copyright © 1993, 2013, Oracle and/or its affiliates. All rights reserved. Legal Notices
\n\n\n\n\n\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E05553_01/books/OrdMgtSIA/OrdMgtSIANetworkOrder29.html", + "domain": "docs.oracle.com", + "file_source": "part-00089-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\tBookshelf v7.7: Converting a Quote to an Order to Disconnect a Network Site\r\n\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n

Siebel Order Management Guide Addendum for Industry Applications > Using Network Ordering > Process of Disconnecting a Site from a Network >

\r\n

Converting a Quote to an Order to Disconnect a Network Site

\r\n
\r\n

After creating a quote to disconnect a network site, the salesperson summarizes the changes that will be made and captures the due date. When the customer confirms this order, the salesperson converts the quote to an order.

This task is a step in Process of Disconnecting a Site from a Network.

To convert a quote to an order to disconnect a network site

\r\n
    \r\n
  1. Navigate to the Quotes screen.\r\n
  2. In the Quotes list, click the Quote Name for the quote for the disconnect.\r\n
  3. In the Quote Header form, click Auto Order.\r\n

    The quote is converted into an order.

    \r\n
  4. To submit the order, click Submit.\r\n
\r\n
Siebel Order Management Guide Addendum for Industry Applications. Copyright © 2007, Oracle. All rights reserved.
\r\n\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/B31104_02/books/EAI5/EAI5_UnderstandXMLIntDTD2.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\tBookshelf v7.8: Creating XML Integration Objects with the DTD Wizard\r\n\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\r\n\t\t\t\t\r\n\t\r\n
\r\n\r\n\r\n\r\n
\r\n

XML Reference: Siebel eBusiness Application Integration Volume V > XML Integration Objects and the DTD Wizard >

\r\n

Creating XML Integration Objects with the DTD Wizard

\r\n
\r\n

Siebel eAI provides two different wizards to create XML integration objects. An XML integration object is essentially an integration object with a base object type of XML. The DTD wizard parses an external Document Type Definition (DTD) file to create an XML integration object.

To create an integration object

\r\n
    \r\n
  1. Select File > New Object.\r\n
  2. Select the EAI tab.\r\n
  3. Double-click the Integration Object icon.\r\n
  4. Complete the Integration Object Builder initial page:\r\n
      \r\n\r\n
    1. Select the project from the first drop-down list.\r\n
    2. Select EAI DTD Wizard as the Business Service.\r\n
    3. Navigate to the path to the location of the DTD or XML file that you want to use as the basis of the DTD and click Next.\r\n

      NOTE:  The Simplify Integration Object Hierarchy option creates a simpler and flatter internal representation of the XML integration object. Please note that this does not change the external representation. Having a simpler internal representation makes declarative data mapping easier.

      \r\n
    \r\n
  5. Select the source object and give it a unique name, and then click Next.\r\n
  6. Click on the plus sign to expand the list and select or clear the fields based on your business requirements.\r\n
  7. Click Next to go to the final page to review messages generated during this process and take necessary action.\r\n
  8. Click Finish to complete the process.\r\n

    You will see the integration object you created in the Integration Objects list, as shown in the following figure.

    \r\n
    \r\n\r\n\r\n\r\n \r\n\r\n\r\n
    \"Click\r\n
    \r\n
    \r\n\r\n\r\n
    \r\n
\r\n

NOTE:  You must review the integration objects and the integration components created by the Wizard and complete their definitions based on your requirements.

\r\n
\r\n\r\n\r\n\t\r\n\r\n\t\t\r\n\t\t\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
XML Reference: Siebel eBusiness Application Integration Volume V
 
\r\n\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E19879-01/820-7192/gifbj/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Management Rules Logger (Sun GlassFish Enterprise Manager Performance Advisor 1.0 Release Notes)

Sun GlassFish Enterprise Manager Performance Advisor 1.0 Release Notes

Management Rules Logger

\n

Description

\n

The management rules send notifications to the server log, but\nthe logger for messages from the management rules subsystem itself\nis com.sun.enterprise.management.runtimetuner.\nBy default, the log level for this logger is WARNING.\nTo get more information from this logger, you must add a property\nto the configuration. The property name is com.sun.enterprise.management.runtimetuner and the value is FINE.

\n

Workaround

\n

You can add this property on the Log Levels page in the Admin Console.\nFor details, see the Admin Console online help.

\n

You can also use the following asadmin set command\nto add this property. Substitute the name of the server instance or\ncluster for server. For example:

\n\n\n\n\n
\n
\n
asadmin set \nserver.log-service.module-log-levels.property.com\\\\.sun\\\\.enterprise\\\\.management\\\\.runtimetuner=FINE
\n

For more information about the asadmin set command,\nsee the Sun GlassFish Enterprise Server 2.1 Reference Manual.

\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E36784_01/html/E36845/fahpm.html", + "domain": "docs.oracle.com", + "file_source": "part-00662-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n \n_rst_sent_rate_enabled - \nOracle® Solaris 11.2 Tunable Parameters\n Reference Manual\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t
\n\t\t
\n\t\t
\nOracle® Solaris 11.2 Tunable Parameters\n Reference Manual
\n\n\t\t
\n\t\t\t


\n\t\t
\n\n\t\t\n\t\t
\n\t\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t
 
\n\t\t\t\t\n\t\t\t\t\n\t\t\t
\n\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t
\t\t\t\t\n\t\t\t\t
\n\t\t\t
\n\t\t\t
\n\t\t
\n\t
\n\n\t
\n\t\t\t\n\t\t\n\n\t\t\t\n\t\t
Updated: December 2014
\n\t
\n\t\n\t
 
\n\t
 
\n\n\t
\n\t\t\n\t
\n\t\n\t
\n\n\t\t
\n\t\t\t
\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t
\n\n\t\t\t\n\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t
\n\t\t\t
\n\t\t\t \n\t\n\t\t\t
\n\t\t\t\t\n

_rst_sent_rate_enabled

\n
\n \n
Description
\n
\n

If this parameter is set to 1, the maximum rate of sending a RST\n segment is controlled by the ipadm parameter,\n _rst_sent_rate. If this parameter is set to 0, no\n rate control is applied when sending a RST segment.\n \n

\n
\n \n \n
Default
\n
\n

1 (enabled)

\n
\n \n \n
Range
\n
\n

0 (disabled) or 1 (enabled)

\n
\n \n \n
Dynamic?
\n
\n

Yes

\n
\n \n \n
When to Change
\n
\n

This tunable helps defend against denial of service attacks on TCP by\n limiting the rate by which a RST segment is sent out. The only time this\n rate control should be disabled is when strict conformance to RFC 793 is\n required.

\n
\n \n \n
Commitment Level
\n
\n

Unstable

\n
\n \n
\n \n \n\t\t\t\t\n\t\t\t
\n\n\t\t\t
\n\t\t\t\t
Copyright © 2000, 2014, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t
\t\n\t\t
\n\t
\n\n" + }, + { + "url": "http://docs.oracle.com/cd/E12102_01/books/AnyImp/AnyImp_ConfigFinance37.html", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\tBookshelf v7.8.3: Importing PeopleSoft Trees Into the PowerCenter Repository\r\n\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
\r\n\r\n\r\n\r\n
\r\n

Siebel Customer-Centric Enterprise Warehouse Installation and Configuration Guide > Configuring Siebel Financial Analytics > Process of Configuring Siebel Financial Analytics for PeopleSoft 8.4 >

\r\n

Importing PeopleSoft Trees Into the PowerCenter Repository

\r\n
\r\n

This task is a step in the Process of Configuring Siebel Financial Analytics for PeopleSoft 8.4.

The Designer displays the following tree information in the Import From PeopleSoft dialog box, to identify the tree you need to import:

  • SetID. If a tree is a SetID, it appears before the tree name.\r\n
  • Tree Name. The name of the tree.\r\n
  • Effective Date. The tree effective date appears after the tree name.\r\n
\r\n

PeopleSoft uses the SetID and the Effective Date to identify trees. When importing a tree from PeopleSoft, you can use the SetID and the Effective Date to select the tree. The SetID and the Effective Date are displayed in the source definition in the Source Analyzer.

You can import strict-level trees from the Trees tab in the Import From PeopleSoft dialog box. Detail and Summary trees appear in the Trees folder, and Winter trees appear in the Winter Trees folder.

To import PeopleSoft Tree into the PowerCenter repository

\r\n
    \r\n
  1. Open Designer, and connect to your repository.\r\n
  2. Open the Configuration for PeopleSoft 8.4 folder.\r\n
  3. On the Sources menu, click Import from PeopleSoft.\r\n
  4. Connect to the PeopleSoft OLTP.\r\n

    The Import From PeopleSoft dialog box appears.

    \r\n
  5. Click the Trees tab.\r\n
  6. Click the Tree you want to import.\r\n
  7. Click OK.\r\n
\r\n
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Customer-Centric Enterprise Warehouse Installation and Configuration Guide
 
\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/B31104_02/books/eMail/eMail_GlobalDeploy2.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\tBookshelf v7.8: About Global Deployment Setup and Configuration\r\n\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\r\n\t\t\t\t\r\n\t\r\n
\r\n\r\n\r\n\r\n
\r\n

Siebel Email Response Administration Guide > Global Deployment Setup and Configuration >

\r\n

About Global Deployment Setup and Configuration

\r\n
\r\n

If you wish to use Siebel Email Response as a global response system, you must configure Email Response accordingly. This chapter details how to configure and set up Email Response for global deployment. This involves planning your deployment, using the smart response character set, and setting up and deploying the Email Response client.

When you have completed these tasks, you will be able to use Email Response to answer communications from across the globe.

\r\n
\r\n\r\n\r\n\t\r\n\r\n\t\t\r\n\t\t\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Email Response Administration Guide
 
\r\n\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E19226-01/820-7692/6niudjaug/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Rotating the Server Log (Sun GlassFish Enterprise Server v3 Administration Guide)

Sun GlassFish Enterprise Server v3 Administration Guide

Rotating the Server Log

\n

Logs are rotated automatically based on settings in the logging.properties file. You can change these settings by using the Administration Console.

\n

\n\"Procedure\"To Rotate a Log File Manually

\n

\nYou\ncan rotate the server log file manually by using the rotate-log subcommand\nin remote mode. The server log in the default location is immediately moved\nto a time-stamped file and a new server log is created.

\n

Because log rotation is a dynamic operation, you do not need to restart Enterprise Server for\nchanges to take effect.

\n\n
    \n\n
  1. \n

    Ensure that the server is running.

    \n\n

    Remote subcommands\nrequire a running server.

    \n\n
  2. \n\n
  3. \n

    Rotate a log by using the rotate-log(1) subcommand.

    \n\n
  4. \n
\n
Example 7–5 Rotating a Log File Manually

This example moves the server.log file to yyyy-mm-dd_server.log and creates a new server.log file in the\ndefault location.

\n\n\n\n\n
\n
\n
asadmin> rotate-log\nCommand rotate-log executed successfully.
\n

\n
See Also

You can also view the full syntax and options of the subcommand by typing asadmin help rotate-log at the command line.

\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/B40099_02/books/UPG/UPG_HowToDo3.html", + "domain": "docs.oracle.com", + "file_source": "part-00310-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\t\r\n\r\n\tBookshelf v8.0: Process of Planning a Siebel Database Upgrade\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
\r\n\r\n\r\n\r\n
\r\n

Siebel Database Upgrade Guide > How to Perform a Siebel Database Upgrade >

\r\n

Process of Planning a Siebel Database Upgrade

\r\n
\r\n

Upgrades from: All Supported Siebel releases.

Environments: Development, production test, production.

Platforms: Windows and UNIX only. This topic does not apply to IBM z/OS.

This process is part of a roadmap. See Roadmap for Performing a Siebel Database Upgrade.

To plan the upgrade, read the following:

    \r\n
  1. How the Siebel Database Upgrade Works.\r\n
  2. Siebel Database and UI Upgrade Planning.\r\n
  3. If you are planning to migrate to Unicode, see Upgrade Planning for Siebel Unicode Support.\r\n
  4. Application Planning for a Siebel Upgrade.\r\n
  5. Siebel Marketing Upgrade Reference.\r\n
  6. Tables Modified or Seeded During a Siebel Upgrade.\r\n
\r\n

Remote Upgrade Planning

\r\n

See the following resources to deploy the upgrade to remote users on Siebel Mobile Web Clients:

    \r\n
  • Use Siebel Packager to create language packs for remote installation. See Going Live with Siebel Business Applications.\r\n
  • Use Siebel Anywhere to create installation kits for deployment. See Siebel Anywhere Administration Guide.\r\n
  • Generate a Siebel Remote database template. See Siebel Remote and Replication Manager Administration Guide.\r\n
  • Set up database extraction for Mobile Web Clients. See Siebel Remote and Replication Manager Administration Guide. \r\n
\r\n
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Database Upgrade GuideCopyright © 2008, Oracle. All rights reserved.
 
\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E28280_01/doc.1111/e14149/appendix_b.htm", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\n\nWorking Around the Java Modal Window Problem\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
\n
38/39 \n

B Working Around the Java Modal Window Problem

\n

This appendix describes a problem caused by the way Java handles modal windows when trying to display a context-sensitive help topic for a modal window. A modal window is one that does not allow focus to be shifted away from it. A nonmodal window is one that allows focus to be switched to another window.

\n

This appendix includes the following sections:

\n\n\n
\n

B.1 About the Java Modal Window Problem

\n

If a user requests help from a nonmodal window, it is possible to switch back and forth between the help window and the window requesting help. However, this is not possible when requesting help from a modal window. In Java, a modal window blocks access to all other windows created by the Java Virtual Machine, except yet another modal window. Thus, if help is requested from a modal window, OHJ must display help in a modal help window. Then, because OHJ is itself shown in a modal window, the user must close the help window to return to the application.

\n

When help is requested, OHJ determines whether the active window is modal. If it is, then it re-parents the normal OHJ topic windows and the OHJ navigator window into a new modal window. That new window appears in the foreground of the user's display, and the user can interact with it; in fact, they must interact with it if only to close the modal help window. Given the coarse implementation of modality in Java, this is the only solution that will work for all of the Java Virtual Machines currently supported by OHJ.

\n
\n\n\n
\n

B.2 Registering a Window

\n

In order for the OHJ workaround to work, OHJ must be able to track the currently active window. Use the registerClientWindow() method to register each window (Frame or Dialog) you create with the Help object.

\n
\n

Table B-1 registerClientWindow() Method

\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ConstructorDescription
\n

registerClientWindow(Window aWindow)

\n
\n

Window instances registered with the Help object are tracked. If the active window is a modal dialog and help is requested, the Help object will take special action so that the help windows are not blocked by the active modal dialog.

\n

Parameters:

\n
    \n
  • \n

    aWindow - The Window instance to register.

    \n
  • \n
\n
\n
\n
\n\n\n
\n

B.3 Unregistering a Window

\n

If you registered your Window objects using Help.registerClientWindow(), you must also unregister them. When you know that a Window will no longer be active, you should unregister the window with the Help object using the unregisterClientWindow() method. It is important to note that failure to unregister Window instances may result in the window not being garbage collected.
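The register/unregister discipline can be sketched as follows. HelpRegistry is a hypothetical stand-in for the OHJ Help object, whose real methods take java.awt.Window instances; the try/finally shape is the important part:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for the OHJ Help object's window tracking; not the
// real oracle.help API, just the register/unregister contract it describes.
class HelpRegistry {
    private final Set<Object> tracked = new HashSet<>();

    void registerClientWindow(Object window)   { tracked.add(window); }
    void unregisterClientWindow(Object window) { tracked.remove(window); }
    int trackedCount()                         { return tracked.size(); }
}

class WindowLifecycleDemo {
    static int openAndClose(HelpRegistry help) {
        Object dialog = new Object(); // stands in for a Frame or Dialog
        help.registerClientWindow(dialog);
        try {
            // ... show the dialog; the Help object can now detect whether
            // the active window is modal when help is requested ...
        } finally {
            // always unregister, or the window may never be garbage collected
            help.unregisterClientWindow(dialog);
        }
        return help.trackedCount();
    }
}
```

Pairing every register with an unregister in a finally block keeps the registry from pinning closed windows in memory.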

\n
\n

Table B-2 unregisterClientWindow() Method

\n\n\n\n\n\n\n\n\n\n\n\n\n\n
MethodDescription
\n

unregisterClientWindow(Window aWindow)

\n
\n

Clients should unregister each Window instance they registered with the registerClientWindow() method once the window will no longer be active. Failure to unregister Window instances may result in the window not being garbage collected.

\n

Parameters:

\n
    \n
  • \n

    aWindow - The Window instance to unregister.

    \n
  • \n
\n
\n
\n
\n
\n\n\n
\n\n
\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19900-01/819-4736/6n6s9pj7k/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "restore-domain(1) (Sun Java System Application Server Enterprise Edition 8.2 Reference Manual)

Sun Java System Application Server Enterprise Edition 8.2 Reference Manual

restore-domain(1)

\n

\nNAME\n | SYNOPSIS\n | DESCRIPTION\n | OPTIONS\n | OPERANDS\n | EXAMPLES\n | EXIT STATUS\n | SEE ALSO\n

\n

NAME

\n
    restore-domain– restores files from backup
\n

SYNOPSIS

\n
    \nrestore-domain [––domaindir domain_directory] [––filename backup_filename] [––description description] [––terse=false] [––verbose=false] [domain_name]
    \n

DESCRIPTION

\n
    \n

    This command restores files under the domain from a backup directory.\nThe restore-domain command is supported in local mode only.

    \n\n
\n

OPTIONS

\n
    \n\n
    \n
    \n––domaindir\n
    \n
    \n

    This option specifies the parent directory of the domain upon\nwhich the command will operate. The default is install_dir/domains.

    \n\n
    \n
    \n––filename\n
    \n
    \n

    The restore is performed using the specified zip file as the\nsource.

    \n\n
    \n
    \n––description\n
    \n
    \n

    A description can contain any string to help identify the\nparticular backup. The description is displayed as part of the information\nfor any backup.

    \n\n
    \n
    \n–t ––terse\n
    \n
    \n

    Indicates that any output data must be very\nconcise, typically avoiding human-friendly sentences and favoring well-formatted\ndata for consumption by a script. Default is false.

    \n\n
    \n
    \n–v ––verbose\n
    \n
    \n

    Indicates that output data is displayed with\ndetailed information. Default is false.

    \n\n
    \n
    \n\n
\n

OPERANDS

\n
    \n\n
    \n
    \ndomain_name\n
    \n
    \n

    This is the name of the domain to restore. If the domain is not specified\nand only one domain exists, it will be used automatically.

    \n\n
    \n
    \n\n
\n

EXAMPLES

\n
    \n\n
    \n
    Example 1 Using restore-domain
    \n
    \n\n\n\n\n
    \n
    \n
    asadmin>restore-domain --domaindir /opt/SUNWappserver/nondefaultdomaindir/domain1 --filename sjsas_backup_v00001.zip domain1\nSuccessfully restored the domain (domain1), from /opt/SUNWappserver/nondefaultdomaindir/domain1/backups/sjsas_backup_v00001.zip\n\nDescription: 1137030607263\nBackup Filename: /opt/SUNWappserver/nondefaultdomaindir/domain1/backups/sjsas_backup_v00001.zip\nDate and time backup was performed: Wed Jan 11 17:50:07 PST 2006\nDomains Directory: /opt/SUNWappserver/nondefaultdomaindir\nDomain Directory: /opt/SUNWappserver/nondefaultdomaindir/domain1\nDomain Name: domain1\nName of the user that performed the backup: jondoe
    \n
    \n
    \n\n
\n

EXIT STATUS

\n
    \n\n
    \n
    0
    \n
    \n

    command executed successfully

    \n\n
    \n
    1
    \n
    \n

    error in executing the command

    \n\n
    \n
    \n\n
\n

SEE ALSO

\n\nJava EE 5  Last Revised 22 Dec 2004

\nNAME\n | SYNOPSIS\n | DESCRIPTION\n | OPTIONS\n | OPERANDS\n | EXAMPLES\n | EXIT STATUS\n | SEE ALSO\n

\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E19455-01/806-1075/msgs-1738/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00352-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "NFS server string ok (Solaris Common Messages and Troubleshooting Guide)

Solaris Common Messages and Troubleshooting Guide

NFS server string ok

\n

Cause

\n

This message is the follow-up to the NFS server not responding error. It indicates that the NFS server is again operating.

\n

Action

\n

When an NFS server first starts, it is busy fulfilling client requests for a while. Be patient and wait for your client system to respond. Making many extraneous requests only further slows the NFS server response time.

\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E13224_01/wlw/docs100/guide/webapplications/ui/uiCreateFormWizard.html", + "domain": "docs.oracle.com", + "file_source": "part-00662-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\r\n\r\nCreate Form Wizard\r\n \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
\r\n\r\n>\r\n\r\n\r\n
\r\n\r\n \r\n \r\n \r\n \r\n \r\n \r\n
\"Previous\"\"Next\"\"vertical
\r\n\r\n\r\n \r\n
\r\n

Create Form Wizard

\r\n
\r\n
\r\n

Use this wizard to create new Beehive HTML forms and select/create an associated \r\n action.

\r\n

How To Open This Wizard

\r\n

To open the Create Form wizard:

\r\n
\r\n
    \r\n
  1. \r\n
    View a JSP or JSF page in the Page Flow perspective: Window > \r\n Open Perspective > Page Flow.
    \r\n
  2. \r\n
  3. \r\n
    Open the Create Form wizard: From the JSP \r\n Design Palette, drag and drop the Create Form \r\n icon onto the JSP page.
    \r\n
  4. \r\n
\r\n
\r\n

How to Use this Wizard

\r\n

Select Action Page

\r\n

The Select Action page allows you to select an existing action or create \r\n a new action to handle submission of the Beehive HTML input form. (The submitted \r\n data will be used to construct a form bean instance, and this form bean instance \r\n will be passed to the action you select/create here.)

\r\n

\r\n

Select Properties Page

\r\n

Select the input form fields that you want to expose on the JSP page. The \r\n list of available fields is taken from the form bean fields.

\r\n

\r\n

Arrange Fields Page

\r\n

Select the order in which the input fields will appear on the JSP page.

\r\n

\r\n

Related Topics

\r\n

JSP Design Palette

\r\n

Creating Forms for Collecting User Data

\r\n
\r\n\r\n\r\n

 

\"Skip\r\n \r\n\"Back

\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E19830-01/819-4724/6n6rubcci/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "list-system-properties(1) (Sun Java System Application Server Platform Edition 8.2 Reference Manual)

Sun Java System Application Server Platform Edition 8.2 Reference Manual

list-system-properties(1)

\n

\nNAME\n | SYNOPSIS\n | DESCRIPTION\n | OPTIONS\n | OPERANDS\n | EXAMPLES\n | EXIT STATUS\n | SEE ALSO\n

\n

NAME

\n
    list-system-properties– lists the system properties of the domain, configuration, cluster, or server\ninstance
\n

SYNOPSIS

\n
    \nlist-system-properties ––user admin_user [––passwordfile filename] [––host host_name] [––port port_number] [––secure|–s] [––terse=false] [––echo=false] [––interactive=true] [––help] [target target_name]
    \n

DESCRIPTION

\n
    \n

    Shared or clustered server instances will often need to override attributes\ndefined in their referenced configuration. Any configuration attribute in\na server instance can be overridden through a system property of the corresponding\nname. This command lists the system properties of a domain, configuration,\ncluster, or server instance.
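The override order can be sketched as a lookup through a chain of scope maps, most specific first. This is an illustration of the rule, not the server's actual implementation:

```java
import java.util.List;
import java.util.Map;

// Illustrative resolver: an instance's system property overrides its
// cluster's, which overrides the configuration's, which overrides the
// domain's. Pass the scopes ordered most specific first.
class SystemPropertyResolver {
    static String resolve(String name, List<Map<String, String>> scopes) {
        for (Map<String, String> scope : scopes)
            if (scope.containsKey(name))
                return scope.get(name);
        return null; // not set in any scope
    }
}
```

For example, if the domain defines http-listener-port=4848 but the instance defines http-listener-port=1088, resolving against the chain [instance, cluster, configuration, domain] yields 1088.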

    \n\n
\n

OPTIONS

\n
    \n\n
    \n
    \n–u ––user\n
    \n
    \n

    The authorized domain administration server administrative username.

    \n\n
    \n
    \n–w ––password\n
    \n
    \n

    The ––password option is deprecated. Use ––passwordfile instead.

    \n\n
    \n
    \n––passwordfile\n
    \n
    \n

    This option replaces the ––password option. Using the ––password option on the command line or through the environment is\ndeprecated. The ––passwordfile option specifies the name of a file containing the password entries in a specified format. The entry for the password must have the AS_ADMIN_ prefix followed by the password name in capital letters. For example, to specify the domain\nadministration server password, use an entry with the following format: AS_ADMIN_PASSWORD=password, where password is the actual administrator password. Other passwords that can be specified include AS_ADMIN_MAPPEDPASSWORD, AS_ADMIN_USERPASSWORD,\nAS_ADMIN_SAVEDMASTERPASSWORD, AS_ADMIN_MQPASSWORD, AS_ADMIN_ALIASPASSWORD, and so on.

    \n\n
    \n
    \n–H ––host\n
    \n
    \n

    The machine name where the domain administration server is running. The default value is localhost.

    \n\n
    \n
    \n–p ––port\n
    \n
    \n

    The port number of the domain administration server listening for administration requests. The default port number for Platform Edition is 4848. The default port\nnumber for Enterprise Edition is 4849.

    \n\n
    \n
    \n–s ––secure\n
    \n
    \n

    If set to true, uses SSL/TLS to communicate with the domain administration server.

    \n\n
    \n
    \n–t ––terse\n
    \n
    \n

    Indicates that any output data must be very concise, typically avoiding human-friendly sentences and favoring well-formatted data for consumption by a script. Default is false.

    \n\n
    \n
    \n–e ––echo\n
    \n
    \n

    Setting to true will echo the command line statement on the standard output. Default is false.

    \n\n
    \n
    \n–I ––interactive\n
    \n
    \n

    If set to true (default), only the required password options are prompted.

    \n\n
    \n
    \n–h ––help\n
    \n
    \n

    Displays the help text for the command.

    \n\n
    \n
    \n\n
\n

OPERANDS

\n
    \n\n
    \n
    \ntarget\n
    \n
    \n

    In Enterprise Edition, specifies the target on which you are listing the system\nproperties. Valid values are\n

      \n
    • \n

      \ndomain, which lists the system\nproperties defined for the domain

      \n\n
    • \n
    • \n

      \nconfiguration_name, which lists the system\nproperties defined for the named configuration as well as those it inherits\nfrom the domain.

      \n\n
    • \n
    • \n

      \ncluster_name, which lists the system\nproperties defined for the named cluster as well as those the cluster inherits\nfrom its configuration and the domain.

      \n\n
    • \n
    • \n

      \ninstance_name, which lists the\nsystem properties defined for the named server instance as well as those\nthe server inherits from its cluster (if the instance is clustered), its configuration,\nand the domain.

      \n\n
    • \n
    \n\n

    \n\n
    \n
    \n\n
\n

EXAMPLES

\n
    \n\n
    \n
    Example 1 Using list-system-properties
    \n
    \n\n\n\n\n
    \n
    \n
    asadmin> list-system-properties --user admin --passwordfile password.txt \n--host localhost --port 4849 http-listener-port=1088 mycluster\nhttp-listener-port=1088\nCommand list-system-properties executed successfully.
    \n
    \n
    \n\n
\n

EXIT STATUS

\n
    \n\n
    \n
    0
    \n
    \n

    command executed successfully

    \n\n
    \n
    1
    \n
    \n

    error in executing the command

    \n\n
    \n
    \n\n
\n

SEE ALSO

\n\nJ2EE SDK 1.4  Last Revised 16 Nov 2005

\nNAME\n | SYNOPSIS\n | DESCRIPTION\n | OPTIONS\n | OPERANDS\n | EXAMPLES\n | EXIT STATUS\n | SEE ALSO\n

\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E29542_01/admin.1111/e15867/testing.htm", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\nUsing the Test Console\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
\n
49/73 \n \n

40 Using the Test Console

\n

This chapter provides guidelines and information on testing services using the Oracle Service Bus Test Console, including a section that describes how to undeploy the Test Console in production environments.

\n

The Oracle Service Bus Test Console is a browser-based test environment you use to validate and test the design of your system. It is an extension of the Oracle Service Bus Administration Console. (The Test Console is also available in Eclipse.) You configure the object of your test (proxy service, business service, XQuery, XSLT, or MFL resource), execute the test, and view the results in the Test Console. In some cases, you can trace through the code and examine the state of the message at specific trace points. Design-time testing helps isolate design problems before you deploy a configuration to a production environment.

\n

The Test Console can test specific parts of your system in isolation and it can test your system as a unit. You can use the Test Console in clustered environments. However, Oracle does not recommend deploying the Test Console in production environments.

\n

You can access the Test Console from:

\n
    \n
  • \n

    The Project Explorer

    \n
  • \n
  • \n

    The Resource Browser

    \n
  • \n
  • \n

    The XQuery Editor

    \n
  • \n
  • \n

    Eclipse, using the Run As and Debug As options

    \n
  • \n
\n

For detailed procedural information, see Chapter 33, \"Test Console\" and \"Using the Oracle Service Bus Debugger\" in the Oracle Fusion Middleware Developer's Guide for Oracle Service Bus.

\n

This chapter contains the following topics:

\n\n\n
\n

40.1 Prerequisites

To use the Test Console:

• You must have Oracle Service Bus running, and you must have activated the session that contains the resource you want to test.

• You must disable the pop-up blockers in your browser for the XQuery testing to work. If you have toolbars in the Internet Explorer browser, this may mean disabling pop-up blockers from under the Options menu as well as for all toolbars that are configured to block them. XQuery testing is done only in the design time environment (in an active session).

• If you receive an error saying the Test Console service is not running, try setting the Admin server listen address to a specific valid address, such as localhost. In the Oracle WebLogic Server Console, go to Environment > Servers > admin_server_name > Configuration > General to set the Listen Address. Also, in a cluster, make sure all managed nodes are running.

• If you want the Test Console to generate and send SAML tokens to a proxy service, you must configure the proxy service to require SAML tokens and to be a relying party. For more information on creating a SAML relying party, see "Create a SAML 1.1 Relying Party" in the Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help.

  Note: When creating a SAML relying party:

  - Only WSS/Sender-Vouches and WSS/Holder-of-Key SAML profiles are applicable to a proxy service.

  - When you are configuring the relying party, for the target URL value provide the URI of the proxy service. You can view the URI of the proxy service by clicking the proxy service name in the Oracle Service Bus Administration Console Project Explorer module. The URI displays in the Endpoint URI row of the Transport Configuration table.

40.2 Testing Proxy Services

\n

You can test the following types of proxy services:

• WSDL Web service

• Messaging Service

• Any SOAP Service

• Any XML Service

40.2.1 Direct Calls

\n
\n

Caution:

\n

Testing proxy services with the direct call option enabled bypasses some important security steps, including access control. Oracle recommends that you not use the test framework in production systems. For information on untargeting the Test Console, see Section 40.11, \"Undeploying the Test Console.\"

\n
\n

A direct call is used to test a proxy service that is collocated in the Oracle Service Bus domain. Using the direct call option, messages are sent directly to the proxy service, bypassing the transport layer. When you employ the direct call option, tracing is turned on by default, allowing you to diagnose and troubleshoot a message flow in the Test Console. By default, testing of proxy services is done using the direct call option.

\n

When you use the direct call option to test a proxy service, the configuration data you input to the Test Console must be that which is expected by the proxy service from the client that invokes it. In other words, the Test Console plays the role of the client invoking the proxy service. Also, when you do direct call testing, you bypass the monitoring framework for the message.

\n

Figure 40-1 illustrates a direct call. The message bypasses the transport layer; it is delivered directly to the proxy service (P1).

\n
\n

Figure 40-1 Direct Call to Test a Proxy Service


A direct call strategy is best suited for testing the internal message flow logic of proxy services. Your test data should simulate the expected message state at the time it is dispatched. Use this test approach in conjunction with setting custom (inbound) transport headers in the Test Console Transport panel to accurately simulate the service call.

\n
\n\n\n
\n

40.2.2 Indirect Calls

\n

When you test a proxy service with an indirect call (that is, when the direct call option is not selected), the message is sent to the proxy service through the transport layer. The transport layer manipulates message headers and metadata as part of the test. The effect is to exercise the proxy service to proxy service runtime path.

\n

Figure 40-2 illustrates an indirect call. The message is first processed through the transport layer and is subsequently delivered to the proxy service (P1).

\n
\n

Figure 40-2 Indirect Call to Test a Proxy Service


Oracle recommends this testing strategy when testing a proxy service to proxy service interface when both services run in the same JVM. Use this test approach in conjunction with setting custom (outbound) transport headers in the Test Console Transport panel to accurately simulate the service call. For more information on transport settings, see Section 40.9, \"Test Console Transport Settings.\"

\n

Using an indirect call, the configuration data you input to the test is the data being sent from a proxy service, for example, from a route node or a service callout action of another proxy service. In the indirect call scenario, the Test Console plays the role of the proxy service that routes to, or makes a callout to, another service.

\n
\n

Note:

\n

Using an indirect call to a request/response MQ proxy service will not work.

\n

In addition, the Test Console does not display the response from an indirect call to an MQ or JMS request/response proxy service using a correlation based on a messageID. When you test an MQ or JMS request/response proxy service with an indirect call, the response is retained in the response queue, and not displayed in the Test Console.

\n

For more information, see \"MQ Transport\" in the Oracle Fusion Middleware Developer's Guide for Oracle Service Bus.

\n
\n
\n\n\n
\n

40.2.3 HTTP Requests

\n

When you test proxy services, the Test Console never sends an HTTP request over the network, so transport-level access control is not applied. Transport-level access control is enforced in the Web application layer; therefore, even when an indirect call is made through the Oracle Service Bus Administration Console transport layer, no HTTP request is sent over the network, and transport-level access control is not applied. For information about message processing in the transport layer, see "Architecture Overview" in the Oracle Fusion Middleware Concepts and Architecture for Oracle Service Bus.

\n

For information about transport settings, see Section 33.4, "Understanding How the Runtime Uses the Transport Settings in the Test Console."

\n
\n
\n\n\n
\n

40.3 Testing Business Services

\n

You can test the following types of business services:

• WSDL Web service

• Transport Typed Service

• Messaging Service

• Any SOAP Service

• Any XML Service

When testing business services, the messages are always routed through the transport layer. The direct call option is not available. The configuration data that you provide to the Test Console to test the service is that which represents the state of the message that is expected to be sent to that business service—for example, from a route node or a service callout action of a proxy service. The Test Console functions in the role of the caller proxy service when you use it to test a business service.

\n
\n

Tip:

\n

Ensure that the user name and password that you specify in the Test Console exists in the local Oracle Service Bus domain even if the business service being tested is in a remote domain. The test service performs a local authentication before invoking any proxy or business service.

\n
\n
\n\n\n
\n

40.4 Recommended Approaches to Testing Proxy and Business Services

\n

In the scenario depicted in Figure 40-3, a client invokes the proxy service (P1). The message flow invokes business service B1, then proxy service P2, then proxy service P3 before returning a message to the client. Interfaces are identified by number.

\n
\n

Figure 40-3 Test Scenario Example


There are many valid test strategies for this scenario. Oracle recommends the following:

• Complete the testing of interfaces other than the client interface to a given proxy service before you test the client call. In the sample scenario illustrated in Figure 40-3, this means that you complete the testing of interfaces 1 through 4 first, then test interface 5. In this way, the message flow logic for the proxy service (P1) can be iteratively changed and tested (through interface 5) knowing that the other interfaces to the proxy service function correctly.

• Validate and test all the XQuery expressions in a message flow prior to a system test. In Figure 40-3, interface 1 refers to XQuery expression tests.

• Test proxy service to business service (interface 2 in Figure 40-3) using an indirect call. In other words, route the messages through the transport layer.

• Test proxy service to proxy service (interfaces 3 and 4 in Figure 40-3) using an indirect call. In other words, disable the direct call option, which means that during testing, the messages are routed through the transport layer.

• Make your final system test simulate the client invoking the proxy service P1. This test is represented by interface 5 in Figure 40-3. Test interface 5 with a direct call. In this way, during the testing, the messages bypass the transport layer. By default, tracing is enabled with a direct call.

• Save the message state after executing successful interface tests to facilitate future troubleshooting efforts on the system. Testing interface 5 is in fact a test of the complete system. Knowing that all other interfaces in the system work correctly helps narrow the troubleshooting effort when system errors arise.
\n\n\n
\n

40.5 Tracing Proxy Services Using the Test Console

\n

Tracing the message through a proxy service involves examining the message context and outbound communications at various points in the message flow. The points at which the messages are examined are predefined by Oracle Service Bus. Oracle Service Bus defines tracing for stages, error handlers, and route nodes.

\n

For each stage, the trace includes the changes that occur to the message context and all the services invoked during the stage execution. The following information is provided by the trace:

• (New variables)—The names of all new variables and their values. To view values, click the + sign.

• (Deleted variables)—The names of all deleted variables.

• (Changed variables)—The names of all variables for which the value changed. To view the new value, click the + sign.

• Publish—Every publish call is listed. For each publish call, the trace includes the name of the service invoked, and the value of the outbound, header, body, and attachment variables.

• Service callout—Every service callout is listed. For each service callout, the trace includes the name of the service that is invoked, the value of the outbound variable, and the value of the header, body, and attachment variables for both the request and response messages.

The trace contains similar information for route nodes as for stages. In the case of route nodes, the trace contains the following categories of information:

• The trace for service invocations on the request path

• The trace for the routed service

• The trace for the service invocations on the response path

• Changes made to the message context between the entry point of the route node (on the request path) and the exit point (on the response path)
\n

40.5.1 Example: Testing and Tracing a Proxy Service

\n

The following example scenario uses one of the proxy services in the Oracle Service Bus Examples domain, the loanGateway3 proxy service associated with the Validating a Loan Application example, as a basis of instruction.

\n

To test this proxy service in the Oracle Service Bus Examples domain using the Test Console, complete the following procedure:

1. Start the Oracle Service Bus Examples domain and load the samples data.

2. Log in to the Oracle Service Bus Administration Console, then select Resource Browser and locate the LoanGateway3 proxy service.

3. Click the Launch Test Console icon for the LoanGateway3 proxy service. The Proxy Service Testing - LoanGateway3 page appears. The Direct Call and the Include Tracing options are selected.

4. Edit the test XML provided to send the following test message, illustrated in Example 40-1.

   Example 40-1 Test Message for LoanGateway3

   <loanRequest xmlns:java="java:normal.client">
       <java:Name>Name_4</java:Name>
       <java:SSN>SSN_11</java:SSN>
       <java:Rate>4.9</java:Rate>
       <java:Amount>2500</java:Amount>
       <java:NumOfYear>20.5</java:NumOfYear>
       <java:Notes>Name_4</java:Notes>
   </loanRequest>

5. Click Execute.

   Scroll to the bottom of the results page to view the tracing results in the Invocation Trace panel, shown in Figure 40-4.
\n

Figure 40-4 Invocation Trace for LoanGateway3 Proxy Service


The trace indicates the following:

• Initial Message Context—Shows the variables initialized by the proxy service when it is invoked. To see the value of any variable, click the + sign associated with the variable name.

• Changed Variables—$header, $body, and $inbound changed as a result of the processing of the message through the validate loan application stage. These changes are seen at the end of the message flow.

• The contents of the fault context variable ($fault) are shown as a result of the stage error handler handling the validation error. The non-integer value (20.5) you entered for the <java:NumOfYear> element in Example 40-1 caused the validation error in this case.

You can test the proxy service using different input parameters or by changing the message flow in the Oracle Service Bus Administration Console. Then run the test again and view the results.

\n
\n
\n\n\n
\n

40.6 Testing Resources

\n

You can test the following resources:

\n\n\n
\n

40.6.1 MFL

\n

A Message Format Language (MFL) document is a specialized XML document used to describe the layout of binary data. MFL resources support the following transformations:

• XML to binary—There is one required input (XML) and one output (binary).

• Binary to XML—There is one required input (binary) and one output (XML).

Each transformation accepts only one input and provides a single output.

\n\n
\n

40.6.1.1 Example

\n

The following example illustrates testing an MFL transformation. The Test Console generates a sample XML document from the MFL file. Execute the test using the XML input. A successful test results in the transformation of the message content of the input XML document to binary format.

\n

Example 40-2 shows an example MFL file.

\n
\n

Example 40-2 Contents of an MFL File

\n
<?xml version='1.0' encoding='windows-1252'?>
<!DOCTYPE MessageFormat SYSTEM 'mfl.dtd'>
<MessageFormat name='StockPrices' version='2.01'>
  <StructFormat name='PriceQuote' repeat='*'>
    <FieldFormat name='StockSymbol' type='String' delim=':' codepage='windows-1252'/>
    <FieldFormat name='StockPrice' type='String' delim='|' codepage='windows-1252'/>
  </StructFormat>
</MessageFormat>
\n

The XML document generated by the Test Console to test the MFL file in Example 40-2 is shown in Example 40-3.

\n
\n\n
\n

Example 40-3 Test Console XML Input

\n
<StockPrices>
    <PriceQuote>
        <StockSymbol>StockSymbol_31</StockSymbol>
        <StockPrice>StockPrice_17</StockPrice>
    </PriceQuote>
</StockPrices>
\n

In the Test Console, click Execute to run the test. Example 40-4 shows the resulting test data, the stock symbol and stock price in binary format.

\n
\n\n
\n

Example 40-4 MFL Test Console Results

\n
00000000: 53 74 6F 63 6B 53 79 6D 62 6F 6C 5F 33 31 3A 53   StockSymbol_31:S
00000010: 74 6F 63 6B 50 72 69 63 65 5F 31 37 7C .. .. ..   StockPrice_17|...
\n
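The byte layout above is simple enough to reproduce by hand. The following Python sketch (an illustration only, not the Oracle MFL runtime) serializes each String field in the windows-1252 code page followed by its delimiter, matching the hex dump in Example 40-4:

```python
# Hand-rolled sketch of the XML-to-binary transformation described by the
# StockPrices MFL file above. Illustration only; not the Oracle MFL engine.

def price_quotes_to_binary(quotes):
    """Serialize (symbol, price) pairs as delimited windows-1252 bytes."""
    out = bytearray()
    for symbol, price in quotes:
        out += symbol.encode("windows-1252") + b":"   # delim=':' after StockSymbol
        out += price.encode("windows-1252") + b"|"    # delim='|' after StockPrice
    return bytes(out)

def binary_to_price_quotes(data):
    """Parse the delimited binary back into (symbol, price) pairs."""
    quotes = []
    for record in data.decode("windows-1252").split("|"):
        if record:  # skip the empty trailing record after the final '|'
            symbol, price = record.split(":")
            quotes.append((symbol, price))
    return quotes

binary = price_quotes_to_binary([("StockSymbol_31", "StockPrice_17")])
print(binary)  # b'StockSymbol_31:StockPrice_17|' -- matches the hex dump above
```

Round-tripping the bytes through `binary_to_price_quotes` recovers the original field values, mirroring the binary-to-XML direction of the MFL resource.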
\n
\n\n\n
\n

40.6.2 XSLT

\n

Extensible Stylesheet Language Transformation (XSLT) describes XML-to-XML mappings in Oracle Service Bus. You can use XSL transformations when you edit XQuery expressions in the message flow of proxy services.

\n

To test an XSLT resource, you must supply an input XML document. The Test Console returns the output XML document. You can create parameters in your document to assist with a transformation. XSLT parameters accept either primitive values or XML document values. You cannot identify the types of parameters from the XSL transformation. In the Input and Parameters panel of the XSLT Resource Testing page in the Test Console, you must provide the values to bind to the XSLT parameters defined in your document.
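For illustration, a stylesheet of the kind described might declare one primitive-valued parameter and one XML-valued parameter (the parameter names here are hypothetical); the Input and Parameters panel then shows one entry for each, and you supply the value to bind:

```
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Hypothetical parameters; the Test Console prompts for a value to bind to each. -->
  <xsl:param name="discountRate"/>   <!-- primitive value -->
  <xsl:param name="customer"/>       <!-- XML document value -->

  <xsl:template match="/order">
    <quote rate="{$discountRate}">
      <xsl:copy-of select="$customer"/>
    </quote>
  </xsl:template>

</xsl:stylesheet>
```

Because the parameter types cannot be determined from the stylesheet itself, it is up to you to supply a primitive value or an XML document value appropriate to each parameter.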

\n

For more information, see Section 33.2.2, \"Testing XSLT Transformations.\"

\n
\n\n\n
\n

40.6.3 XQuery

\n

XQuery uses the structure of XML to express queries across different kinds of data, whether physically stored in XML or viewed as XML.

\n

An XQuery transformation can take multiple inputs but returns a single output. The inputs expected by an XQuery transformation are the values to bind to each of the XQuery external variables defined. The value of an input variable, like the output value, can be a primitive value (String, integer, date), an XML document, or a sequence of these types.

\n

XQuery is a typed language—every external variable is given a type. The types can be categorized into the following groups:

• Simple/primitive type—String, int, float, and so on

• XML nodes

• Untyped

In the Test Console, a single-line edit box is displayed if the expected type is a simple type. A multiple-line edit box is displayed if the expected data is XML. A combination input is used when the variable is not typed. The Test Console provides the following field in which you can declare the variable type: [] as XML. Input in the Test Console is rendered based on the type to make it easier to understand the type of data you must enter.
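As a sketch (the variable names here are hypothetical), an XQuery declaring the three kinds of external variables described above might begin:

```
declare variable $quantity as xs:int external;   (: simple type: single-line edit box :)
declare variable $order as element() external;   (: XML node: multiple-line edit box :)
declare variable $note external;                 (: untyped: combination input :)

<summary qty="{ $quantity }">{ $order, $note }</summary>
```

When this query is opened in the Test Console, $quantity gets a single-line field, $order gets a multiple-line field, and $note gets the combination input with XML selected by default.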

\n

Figure 40-5 shows an XQuery with three variables: int, XML, and undefined type.

\n
\n

Figure 40-5 Input to the XQuery Test


In the Test Console, all three variables are listed in the Variables panel. By default, XML is selected for the untyped variable as it is the most typical case. You must configure these variables in the Variables panel. See Section 33.2.3, \"Testing XQuery Transformations.\"

\n

You can also test an XQuery expression from the XQuery Editor.

\n
\n
\n\n\n
\n

40.7 Performing XQuery Testing

\n

You must disable the pop-up blockers in your browser for the XQuery testing to work. If you have toolbars in the Internet Explorer browser, you may need to disable pop-up blockers from under the browser Options menu as well as for all toolbars that are configured to block them.

\n

When performing XQuery testing in the Test Console, use the Back button to execute a new test. However, if you want to execute a new test after making changes to the XQuery, you must close and re-open the Test Console for the changes to take effect. For detailed information, see Section 33.2.3, \"Testing XQuery Transformations.\"

\n
\n\n\n
\n

40.8 Testing Services With Web Service Security

\n

The Test Console supports testing proxy services and business services protected with Web Service Security (WSS). A SOAP service is protected with WSS if it has WS-Policies with WS-Security assertions assigned to it. Specifically, a service operation is protected with WS-Security if its effective request or response WS-Policy includes WS-Security assertions. WS-Policies are assigned to a service by WS-PolicyAttachment. See \"Attaching WS-Policy Statements to WSDL Documents\" in the Oracle Fusion Middleware Developer's Guide for Oracle Service Bus. An operation might have both a request policy and a response policy.

\n

When an operation has a request or response WS-Policy, the message exchange between the Test Console and the service is protected by the mechanisms of WS-Security. According to the operation's policy, the test service digitally signs or encrypts the message (more precisely, parts of the message) and includes any applicable security tokens. The input to the digital signature and encryption operations is the clear-text SOAP envelope that you specify, as described in Section 33.1.2, "Configuring Proxy Services Test Data" and Section 33.1.6, "Configuring Business Services Test Data."

\n

Similarly, the service processes the response according to the operation's response policy. The response may be encrypted or digitally signed. The test service then processes this response and decrypts the message or verifies the digital signature.

\n

The Test Console (Security panel) displays fields used for testing services with WS-Security: Service Provider, Username, and Password.

\n

If you specify a service key provider in the Test Console, all client-side PKI key-pair credentials required by WS-Security are retrieved from the service key provider. You use the user name and password fields when an operation's request policy specifies an Identity assertion and user name Token is one of the supported token types.

\n

The Service Provider, Username, and Password fields are displayed whenever the operation has a request or response policy. Whether the values are required depends on the actual request and response policies.

\n

Table 40-1 and Table 40-2 describe security scenarios.

\n
\n

Table 40-1 Digital Signature and Encryption Scenarios

Scenario: The request policy has a Confidentiality assertion.

Is a service key provider required? No. The test service encrypts the request with the service's public key. When testing a proxy service, the test service automatically retrieves the public key from the encryption certificate assigned to the service key provider of the proxy service.

When testing a business service, the encryption certificate is embedded in the WSDL of the business service. The test service automatically retrieves this WSDL from the WSDL repository and extracts the encryption certificate from the WSDL.

Scenario: The response policy has a Confidentiality assertion.

Is a service key provider required? Yes. In this scenario, the operation policy requires the client to send its certificate to the service. The service will use the public key from this certificate to encrypt the response to the client. A service key provider must be specified and must have an associated encryption credential.

If both request and response encryption are supported, different credentials must be used.

Scenario: The request policy has an Integrity assertion.

Is a service key provider required? Yes. The client must sign the request. A service key provider must be specified and must have an associated digital signature credential.

Furthermore, if this is a SAML holder-of-key integrity assertion, a user name and password are needed in addition to the service key provider.

Scenario: The response policy has an Integrity assertion.

Is a service key provider required? No. In this case, the policy specifies that the service must sign the response. The service signs the response with its private key. The Test Console simply verifies this signature.

When testing a proxy service, this is the private key associated with the service key provider's digital signature credential for the proxy service.

When testing a business service, the service signing key-pair is configured in a product-specific way on the system hosting the service.

If the current security realm is configured to do a Certificate Lookup and Validation, then the certificate that maps to the service key provider must be registered and valid in the certificate lookup and validation framework.

For more information on Certificate Lookup and Validation, see "Configuring the Certificate Lookup and Validation Framework" in Oracle Fusion Middleware Securing Oracle WebLogic Server.
\n\n
\n

Table 40-2 Identity Policy Scenarios (Assuming that the Policy has an Identity Assertion)

Supported Token Types (Footnote 1) | Description | Comments

• UNT | The service only accepts WSS user name tokens. | You must specify a user name and password in the Security panel.

• X.509 | The service only accepts WSS X.509 tokens. | You must specify a service key provider in the Security panel, and the service key provider must have an associated WSS X.509 credential.

• SAML | The service only accepts WSS SAML tokens. | You must specify a user name and password in the Security panel or a user name and password in the Transport panel. If both are specified, the one from the Security panel is used as the identity in the SAML token.

• UNT, X.509 | The service accepts UNT or X.509 tokens. | You must specify a user name and password in the Security panel or a service key provider in the Security panel with an associated WSS X.509 credential. If both are specified, only a UNT token is generated.

• UNT, SAML | The service accepts UNT or SAML tokens. | You must specify a user name and password in the Security panel or a user name and password in the Transport panel. If both are specified, only a UNT token is sent.

• X.509, SAML | The service accepts X.509 or SAML tokens. | You must specify one of the following: a user name and password in the Security panel, a user name and password in the Transport panel, or a service key provider with an associated WSS X.509 credential.

• UNT, X.509, SAML | The service accepts UNT, X.509, or SAML tokens. | You must specify one of the following: a user name and password in the Security panel, a user name and password in the Transport panel, or a service key provider with an associated WSS X.509 credential.
\n\n

Footnote 1  From the Identity Assertion inside the request policy.

\n\n
\n

40.8.1 Limitations for Services and Policies

\n

The following limitations exist for testing proxy services with SAML policies and business services with SAML holder-of-key policies:

• Testing proxy services with inbound SAML policies is not supported.

• Testing business services with a SAML holder-of-key policy is a special case. The SAML holder-of-key scenario can be configured in two ways:

  - as an integrity policy (this is the recommended approach)
  - as an identity policy

  In both cases, you must specify a user name and password—the SAML assertion will be on behalf of this user. If SAML holder-of-key is configured as an integrity policy, you must also specify a service key provider. The service key provider must have a digital signature credential assigned to it. This case is special because this is the only case where a user name and password must be specified even if there is not an identity policy.

  Note: After executing a test in the Test Console, the envelope generated with WSS is not always a valid envelope—the results page in the Test Console includes white spaces for improved readability. That is, the secured SOAP message is displayed with extra white spaces. Because white spaces can affect the semantics of the document, this SOAP message cannot always be used as the literal data. For example, digital signatures are white-space sensitive and can become invalid.
\n
\n\n\n
\n

40.9 Test Console Transport Settings

\n

You use the Transport panel in the Test Console to specify the metadata and transport headers for messages in your test system.

\n

Figure 40-6 shows the Transport panel for a WSDL-based proxy service.

\n
\n

Figure 40-6 Transport Panel in the Test Console


By setting the metadata and the transport headers in the message flow of a proxy service, you influence the actions of the outbound transport. You can test the metadata, the message, and the headers so that you can view the pipeline output. The fields displayed in the Transport panel when testing a proxy service represent the headers and metadata that are available in the pipeline. The Test Console cannot filter the fields it displays for a particular proxy service; the same set of transport parameters is displayed for every HTTP-based request.

\n

The Username and Password fields are used to implement basic authentication for the user that is running the proxy service; they are not specifically transport related.

\n

Metadata fields are located below the Username and Password fields and above the transport header fields. The fields displayed are based on the transport type of the service. Certain fields are pre-populated depending on the operation selection algorithm you selected for the service when you defined it.

\n

For example, in the Transport panel displayed in Figure 40-6, the SOAPAction header field is populated with "http://example.org/processLoanApp". This value comes from the service definition (the selection algorithm selected for this proxy service was SOAPAction Header). For more information about the selection algorithms, see Chapter 37, "Modeling Message Flow in Oracle Service Bus."

\n

Specify the values in the Transport panel fields according to whether the message will be processed through the transport layer (an indirect call) or not (a direct call).

\n

When testing a proxy service with a direct call, the test data must represent the message as if it had been processed through the transport layer. That is, the test data should represent the message in the state expected at the point it leaves the transport layer and enters the service. When testing a proxy or business service using an indirect call, the test data represents the data that is sent from a route node or a service callout. The test message is processed through the transport layer.

\n

When testing services that use the MQ transport, message text might appear garbled depending on the character set being used. The Test Console writes messages in the UTF-8 character format. If the MQ connection's coded character set identifier (CCSID) is configured to a value other than 1208 (UTF-8), the text appears garbled in the console. To avoid this, override the CCSID by defining a User Header named characterSet on the Transport panel of the Test Console, and set the value of the header to 1208.
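The garbling is an ordinary code-page mismatch, which a few lines of Python can reproduce (this illustrates the encoding problem only, not the MQ transport itself):

```python
# UTF-8 bytes (CCSID 1208) read back under a different code page come out garbled.
message = "prix: 42€"                   # text the Test Console sends as UTF-8
utf8_bytes = message.encode("utf-8")    # the bytes that actually land on the queue

garbled = utf8_bytes.decode("cp1252")   # connection configured with CCSID 1252
correct = utf8_bytes.decode("utf-8")    # connection configured with CCSID 1208

print(garbled)  # prix: 42â‚¬  <- mojibake: the 3-byte UTF-8 euro sign read as cp1252
print(correct)  # prix: 42€
```

Forcing the characterSet header to 1208 makes the reader interpret the bytes as UTF-8, which is what the second decode shows.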

\n

For information about specific headers and metadata and how they are handled by the test framework, see Section 33.4, "Understanding How the Runtime Uses the Transport Settings in the Test Console."

\n
\n\n\n
\n

40.10 About Security and Transports

\n

When using the Test Console to test HTTP business services with BASIC authentication, the Test Console authenticates the user name and password from the service account of the business service. Similarly, when testing JMS, email, or FTP business services that require authentication, the Test Console authenticates the service account associated with the business service.

\n

When you test proxy services, the Test Console never sends an HTTP request over the network. Therefore, transport-level access control is not applied.

\n
\n\n\n
\n

40.11 Undeploying the Test Console

\n

Oracle recommends that you not use the test framework in production systems. For example, testing proxy services with the direct call option bypasses some important security steps, including access control.

\n

When you create an Oracle Service Bus domain, the Configuration Wizard, by default, includes the "ALSB Test Framework" (Test Console) as a target on the Admin Server and any Managed Servers. The following sections describe different options for undeploying the Test Console:

\n\n\n
\n

40.11.1 Untargeting the Test Console Before Domain Creation

\n

To untarget the Test Console in the Oracle Fusion Middleware Configuration Wizard before a domain is created:

1. When creating an Oracle Service Bus domain with the Configuration Wizard, select optional configuration for Deployments and Services.

2. In the related wizard pages that follow, for each server, deselect the ALSB Test Framework application.

When the wizard creates the domain, the Test Console (OSB_ORACLE_HOME\\lib\\sbTestFwk.ear) is not deployed.

\n
\n\n\n
\n

40.11.2 Untargeting the Test Console when the Server is Running

\n

To undeploy the Test Console when an Oracle Service Bus domain is running, do the following:

\n
    \n
  1. \n

    Start the Oracle WebLogic Server Administration Console and log in.

    \n
  2. \n
  3. \n

    In the left navigation area, under Domain Structure, click Deployments. The Summary of Deployments page is displayed.

    \n
  4. \n
  5. \n

    In the Deployments table, click the \"ALSB Test Framework.\" The Settings page for the ALSB Test Framework is displayed.

    \n
  6. \n
  7. \n

    Click the Targets tab.

    \n
  8. \n
  9. \n

    Select the Component check box to select all the test framework resources, and click Change Targets.

    \n
  10. \n
  11. \n

    On the Target Deployments page, deselect the AdminServer and all Managed Servers.

    \n
  12. \n
  13. \n

    Click Yes. A message is displayed indicating that the settings have been successfully updated.

    \n
  14. \n
\n
\n\n\n
\n

40.11.3 Untargeting the Test Console when the Server is Not Running

\n

If an Oracle Service Bus domain is not running, you can use the WebLogic Scripting Tool (WLST) to untarget the Test Console from the Oracle Service Bus domain. For more information about WLST, see Oracle Fusion Middleware Oracle WebLogic Scripting Tool.

\n

To untarget the Test Console (\"ALSB Test Framework\"), complete the following steps:

\n
    \n
  1. \n

    If you have not already set up your environment to use WLST, see \"Main Steps for Using WLST\" in \"Using the WebLogic Scripting Tool\" in Oracle Fusion Middleware Oracle WebLogic Scripting Tool.

    \n
  2. \n
  3. \n

    Invoke WLST Offline.

    \n
    \nC:>java com.bea.plateng.domain.script.jython.WLST_offline\n
  4. \n
  5. \n

    To read the domain that was created using the Configuration Wizard, execute:

    \n
    \nwls:/offline>readDomain(\"C:/oracle/user_projects/domains/base_domain\")\n
  6. \n
  7. \n

    To untarget the ALSB Test Framework application, execute:

    \n
    \nwls:/offline/base_domain>unassign(\"AppDeployment\", \"ALSB Test Framework\", \"Target\", \"AdminServer\", \"ManagedServer_1\", \"ManagedServer_2\")\n
    \n

    Include the names of all managed servers in the command.

    \n
  8. \n
  9. \n

    To update the domain, execute:

    \n
    \nwls:/offline/base_domain>updateDomain()\n
  10. \n
  11. \n

    To close the domain, execute:

    \n
    \nwls:/offline/base_domain>closeDomain()\n
  12. \n
  13. \n

    To exit from the WLST command prompt, execute:

    \n
    \nwls:/offline>exit()\n
  14. \n
\n
\n
\n
\n\n\n
\n\n
\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/B40099_02/books/SystAdm/SystAdm_MgtFmwkAPI17.html", + "domain": "docs.oracle.com", + "file_source": "part-00089-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\t\r\n\r\n\tBookshelf v8.0: startServer\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
  Bookshelf Home | Contents | Index | PDF   
\r\n\r\n\r\n\r\n
\r\n

Siebel System Administration Guide > Siebel Management Framework APIs > Methods Used with the Server-Level Service >

\r\n

startServer

\r\n
\r\n

Starts the application server in the particular enterprise. The Boolean return value is True if the application server process is started successfully. Note that this is different from checking whether the application server actually came up. The siebctl command is executed on the agent; if that command returns successfully, a Boolean True is returned; if not, a Boolean False is returned.

Syntax
\r\n

See the following for syntax of this method:

Boolean startServer(String entName, String srvrName) throws Exception;

\r\n
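A minimal sketch of how a caller might wrap this method follows. Only the startServer(String, String) signature comes from this page; the interface name, wrapper class, and argument values below are hypothetical:

```java
// Hypothetical stand-in for the server-level service; only the
// startServer(String, String) signature is documented here.
interface ServerService {
    boolean startServer(String entName, String srvrName) throws Exception;
}

class StartServerCaller {
    // Returns true only if siebctl was launched successfully on the agent.
    // As noted above, this does not confirm that the application server
    // actually came up.
    static boolean tryStart(ServerService svc, String ent, String srvr) {
        try {
            return svc.startServer(ent, srvr);
        } catch (Exception e) {
            // The method is declared to throw Exception, so callers
            // should decide how to surface agent failures.
            return false;
        }
    }
}
```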
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel System Administration GuideCopyright © 2010, Oracle and/or its affiliates. All rights reserved. Legal Notices.
 
\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/javaee/5/tutorial/doc/bncdr.html", + "domain": "docs.oracle.com", + "file_source": "part-00656-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nOverview of the JMS API - The Java EE 5 Tutorial\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \n \n \n\n

Document Information

\n

Preface

\n

Part I Introduction

\n

1.  Overview

\n

2.  Using the Tutorial Examples

\n

Part II The Web Tier

\n

3.  Getting Started with Web Applications

\n

4.  Java Servlet Technology

\n

5.  JavaServer Pages Technology

\n

6.  JavaServer Pages Documents

\n

7.  JavaServer Pages Standard Tag Library

\n

8.  Custom Tags in JSP Pages

\n

9.  Scripting in JSP Pages

\n

10.  JavaServer Faces Technology

\n

11.  Using JavaServer Faces Technology in JSP Pages

\n

12.  Developing with JavaServer Faces Technology

\n

13.  Creating Custom UI Components

\n

14.  Configuring JavaServer Faces Applications

\n

15.  Internationalizing and Localizing Web Applications

\n

Part III Web Services

\n

16.  Building Web Services with JAX-WS

\n

17.  Binding between XML Schema and Java Classes

\n

18.  Streaming API for XML

\n

19.  SOAP with Attachments API for Java

\n

Part IV Enterprise Beans

\n

20.  Enterprise Beans

\n

21.  Getting Started with Enterprise Beans

\n

22.  Session Bean Examples

\n

23.  A Message-Driven Bean Example

\n

Part V Persistence

\n

24.  Introduction to the Java Persistence API

\n

25.  Persistence in the Web Tier

\n

26.  Persistence in the EJB Tier

\n

27.  The Java Persistence Query Language

\n

Part VI Services

\n

28.  Introduction to Security in the Java EE Platform

\n

29.  Securing Java EE Applications

\n

30.  Securing Web Applications

\n

31.  The Java Message Service API

\n\n

Basic JMS API Concepts

\n

JMS API Architecture

\n

Messaging Domains

\n

Point-to-Point Messaging Domain

\n

Publish/Subscribe Messaging Domain

\n

Programming with the Common Interfaces

\n

Message Consumption

\n

The JMS API Programming Model

\n

JMS Administered Objects

\n

JMS Connection Factories

\n

JMS Destinations

\n

JMS Connections

\n

JMS Sessions

\n

JMS Message Producers

\n

JMS Message Consumers

\n

JMS Message Listeners

\n

JMS Message Selectors

\n

JMS Messages

\n

Message Headers

\n

Message Properties

\n

Message Bodies

\n

JMS Queue Browsers

\n

JMS Exception Handling

\n

Writing Simple JMS Client Applications

\n

A Simple Example of Synchronous Message Receives

\n

Writing the Client Programs for the Synchronous Receive Example

\n

Starting the JMS Provider

\n

Creating JMS Administered Objects for the Synchronous Receive Example

\n

Compiling and Packaging the Clients for the Synchronous Receive Example

\n

Running the Clients for the Synchronous Receive Example

\n

A Simple Example of Asynchronous Message Consumption

\n

Writing the Client Programs for the Asynchronous Receive Example

\n

Compiling and Packaging the AsynchConsumer Client

\n

Running the Clients for the Asynchronous Receive Example

\n

A Simple Example of Browsing Messages in a Queue

\n

Writing the Client Program for the Queue Browser Example

\n

Compiling and Packaging the MessageBrowser Client

\n

Running the Clients for the Queue Browser Example

\n

Running JMS Client Programs on Multiple Systems

\n

Creating Administered Objects for Multiple Systems

\n

Editing, Recompiling, Repackaging, and Running the Programs

\n

Deleting the Connection Factory and Stopping the Server

\n

Creating Robust JMS Applications

\n

Using Basic Reliability Mechanisms

\n

Controlling Message Acknowledgment

\n

Specifying Message Persistence

\n

Setting Message Priority Levels

\n

Allowing Messages to Expire

\n

Creating Temporary Destinations

\n

Using Advanced Reliability Mechanisms

\n

Creating Durable Subscriptions

\n

Using JMS API Local Transactions

\n

Using the JMS API in a Java EE Application

\n

Using @Resource Annotations in Java EE Components

\n

Using Session Beans to Produce and to Synchronously Receive Messages

\n

Resource Management

\n

Transactions

\n

Using Message-Driven Beans to Receive Messages Asynchronously

\n

Managing Distributed Transactions

\n

Using the JMS API with Application Clients and Web Components

\n

Further Information about JMS

\n

32.  Java EE Examples Using the JMS API

\n

33.  Transactions

\n

34.  Resource Connections

\n

35.  Connector Architecture

\n

Part VII Case Studies

\n

36.  The Coffee Break Application

\n

37.  The Duke's Bank Application

\n

Part VIII Appendixes

\n

A.  Java Encoding Schemes

\n

B.  About the Authors

\n

Index

\n
 \n
\n
\n \n \n \n \n \n \n \n

The Java EE 5 Tutorial\n

\"Java
\n
\n\n
\n\t Home | \nDownload | \nPDF | \nFAQ | \nFeedback\n\n
\n
\n \"Previous\"\n \"Contents\"\n \"Next\"\n
\n
\n\n\t
\t \n \n\n

Overview of the JMS API

\n

This overview of the JMS API answers the following questions.

\n\n\n\n\n\n

What Is Messaging?

\n

Messaging is a method of communication between software components or applications. A messaging\nsystem is a peer-to-peer facility: A messaging client can send messages to, and\nreceive messages from, any other client. Each client connects to a messaging agent\nthat provides facilities for creating, sending, receiving, and reading messages.

\n\n

Messaging enables distributed communication that is loosely coupled. A component sends a message\nto a destination, and the recipient can retrieve the message from the destination.\nHowever, the sender and the receiver do not have to be available at\nthe same time in order to communicate. In fact, the sender does not\nneed to know anything about the receiver; nor does the receiver need to\nknow anything about the sender. The sender and the receiver need to know\nonly which message format and which destination to use. In this respect, messaging\ndiffers from tightly coupled technologies, such as Remote Method Invocation (RMI), which require\nan application to know a remote application’s methods.

\n\n

Messaging also differs from electronic mail (email), which is a method of communication\nbetween people or between software applications and people. Messaging is used for communication\nbetween software applications or software components.

\n\n\n\n

What Is the JMS API?

\n

The Java Message Service is a Java API that allows applications to\ncreate, send, receive, and read messages. Designed by Sun and several partner companies, the\nJMS API defines a common set of interfaces and associated semantics that allow\nprograms written in the Java programming language to communicate with other messaging implementations.

\n\n

The JMS API minimizes the set of concepts a programmer must learn\nin order to use messaging products but provides enough features to support sophisticated messaging\napplications. It also strives to maximize the portability of JMS applications across JMS\nproviders in the same messaging domain.

\n\n

The JMS API enables communication that is not only loosely coupled but also

\n\n\n
  • Asynchronous: A JMS provider can deliver messages to a client as they arrive; a client does not have to request messages in order to receive them.

    \n\n
  • \n
  • Reliable: The JMS API can ensure that a message is delivered once and only once. Lower levels of reliability are available for applications that can afford to miss messages or to receive duplicate messages.

    \n\n
\n
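The loose coupling and asynchrony described here can be sketched with a toy in-memory destination. This is plain java.util.concurrent, not the JMS API (a real JMS client would obtain a connection factory and destination via JNDI and work through a session); the queue simply models the idea that sender and receiver share only a destination and a message format:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy point-to-point "destination": the producer and consumer share only
// the destination and the message format, never references to each other.
public class TinyDestinationDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> destination = new LinkedBlockingQueue<>();

        // Producer sends and moves on; it does not wait for a receiver.
        Thread producer = new Thread(() ->
                destination.add("inventory low: order more parts"));
        producer.start();
        producer.join();

        // Consumer retrieves the message whenever it is ready to.
        String message = destination.take();
        System.out.println(message);  // prints "inventory low: order more parts"
    }
}
```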

The JMS specification was first published in August 1998. The latest version is\nVersion 1.1, which was released in April 2002. You can download a copy\nof the specification from the JMS web site: http://www.oracle.com/technetwork/java/index-jsp-142945.html.

\n\n\n\n

When Can You Use the JMS API?

\n

An enterprise application provider is likely to choose a messaging API over a\ntightly coupled API, such as remote procedure call (RPC), under the following circumstances.

\n\n\n
  • The provider wants the components not to depend on information about other components’ interfaces, so that components can be easily replaced.

    \n\n
  • \n
  • The provider wants the application to run whether or not all components are up and running simultaneously.

    \n\n
  • \n
  • The application business model allows a component to send information to another and to continue to operate without receiving an immediate response.

    \n\n
\n

For example, components of an enterprise application for an automobile manufacturer can use\nthe JMS API in situations like these:

\n\n\n
  • The inventory component can send a message to the factory component when the inventory level for a product goes below a certain level so that the factory can make more cars.

    \n\n
  • \n
  • The factory component can send a message to the parts components so that the factory can assemble the parts it needs.

    \n\n
  • \n
  • The parts components in turn can send messages to their own inventory and order components to update their inventories and to order new parts from suppliers.

    \n\n
  • \n
  • Both the factory and the parts components can send messages to the accounting component to update their budget numbers.

    \n\n
  • \n
  • The business can publish updated catalog items to its sales force.

    \n\n
\n

Using messaging for these tasks allows the various components to interact with one\nanother efficiently, without tying up network or other resources. Figure 31-1 illustrates how\nthis simple example might work.

\n\n

Figure 31-1 Messaging in an Enterprise Application

\"Diagram

Manufacturing is only one example of how an enterprise can use the\nJMS API. Retail applications, financial services applications, health services applications, and many others can\nmake use of messaging.

\n\n\n\n

How Does the JMS API Work with the Java EE Platform?

\n

When the JMS API was introduced in 1998, its most important purpose was\nto allow Java applications to access existing messaging-oriented middleware (MOM) systems, such as\nMQSeries from IBM. Since that time, many vendors have adopted and implemented the\nJMS API, so a JMS product can now provide a complete messaging capability\nfor an enterprise.

\n\n

Beginning with the 1.3 release of the Java EE platform, the JMS\nAPI has been an integral part of the platform, and application developers can use\nmessaging with Java EE components.

\n\n

The JMS API in the Java EE platform has the following features.

\n\n\n
  • Application clients, Enterprise JavaBeans (EJB) components, and web components can send or synchronously receive a JMS message. Application clients can in addition receive JMS messages asynchronously. (Applets, however, are not required to support the JMS API.)

    \n\n
  • \n
  • Message-driven beans, which are a kind of enterprise bean, enable the asynchronous consumption of messages. A JMS provider can optionally implement concurrent processing of messages by message-driven beans.

    \n\n
  • \n
  • Message send and receive operations can participate in distributed transactions, which allow JMS operations and database accesses to take place within a single transaction.

    \n\n
\n

The JMS API enhances the Java EE platform by simplifying enterprise development, allowing\nloosely coupled, reliable, asynchronous interactions among Java EE components and legacy systems capable\nof messaging. A developer can easily add new behavior to a Java EE\napplication that has existing business events by adding a new message-driven bean to\noperate on specific business events. The Java EE platform, moreover, enhances the JMS\nAPI by providing support for distributed transactions and allowing for the concurrent consumption of\nmessages. For more information, see the Enterprise JavaBeans specification, v3.0.

\n\n

The JMS provider can be integrated with the application server using the Java\nEE Connector architecture. You access the JMS provider through a resource adapter. This\ncapability allows vendors to create JMS providers that can be plugged in to\nmultiple application servers, and it allows application servers to support multiple JMS providers.\nFor more information, see the Java EE Connector architecture specification, v1.5.

\n\n\n
\n
\n \"Previous\"\n \"Contents\"\n \"Next\"\n
\n\n
\n \t

Copyright © 2010, Oracle and/or its affiliates. All rights reserved. Legal Notices

\n \t
\n\n
\n\n\n\n \n\n\n\n \n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/en/database/oracle/oracle-database/18/multi/creating-pdbs.html", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n Creating and Removing PDBs and Application Containers\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n Previous\n Next\n JavaScript must be enabled to correctly display this content\n \n
\n
\n
\n
    \n
  1. Administrator’s Guide
  2. \n
  3. Creating and Removing PDBs and Application Containers
  4. \n
\n \n \n

Part III Creating and Removing PDBs and Application Containers\n

\n
\n
\n
\n

You can create PDBs, application containers, and application seeds using a variety of techniques.

\n

For example, you can create a PDB from scratch, clone an existing PDB or non-CDB, or plug in an unplugged PDB. You can also remove PDBs from a CDB.

\n
\n

Note:

You can complete the tasks in this part using SQL*Plus or Oracle SQL Developer.\n
\n

This part contains the following chapters:

\n
\n
\n \n
\n

Related Topics

\n \n
\n
\n \n
\n
\n \n" + }, + { + "url": "https://docs.oracle.com/cd/E19563-01/819-4429/acmir/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "imsimta start (Sun Java System Messaging Server 6.3 Administration Reference)

Sun Java System Messaging Server 6.3 Administration Reference

imsimta start

\n

The imsimta start command starts up detached MTA\nprocesses. If no component parameter is specified, then the MTA Job Controller\nand MTA Service Dispatcher are started. Starting the Service Dispatcher starts\nall services the Service Dispatcher is configured to handle, which usually\nincludes the SMTP server.

\n

The services handled by the MTA Service Dispatcher must be started by\nstarting the MTA Service Dispatcher. Only services not being handled by the\nMTA Service Dispatcher can be individually started via the imsimta\nstart command. The Service Dispatcher may be configured to handle\nvarious services, for example, the multithreaded SMTP server.

\n
Note –

You must be logged in as root to use this utility.

\n
\n

Syntax

\n\n\n\n\n
\n
\n
imsimta start [component]
\n

If a component parameter is specified, then only detached processes\nassociated with that component are started. The standard component names are:

\n
    \n
  • \n

    \ndispatcher—Multithreaded Service\nDispatcher.

    \n\n\n
  • \n
  • \n

    \njob_controller—Schedules deliveries\n(dequeues messages).

    \n\n\n
  • \n
\n

Example

\n

Use the following command to start the MTA Job Controller and MTA Service\nDispatcher:

\n\n\n\n\n
\n
\n
imsimta start
\n
\n
\n
    \n \n
\n
\n
" + }, + { + "url": "http://docs.oracle.com/cd/E19504-01/802-5817/gettinghelp-66/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "

Solaris Common Desktop Environment: User's Guide

To Display a Man Page

\n
    \n\n
  1. \n

    Click the Application Manager control in the Front Panel.

    \n\n\"Graphic\"
  2. \n\n
  3. \n

    Double-click the Desktop_Apps icon.

    \n\n
  4. \n\n
  5. \n

    Double-click the Man Page Viewer icon.

    \n\n

    A dialog box is displayed, prompting you to enter a man page name.

    \n\n\"Graphic\"
  6. \n\n
  7. \n

    Type the name of the man page you want to see and press Return.

    \n\n

    The man page is displayed in a quick help window.

    \n\n
  8. \n\n
  9. \n

    Click Close to dismiss the man page.

    \n\n
  10. \n
\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/A87860_01/doc/appdev.817/a86030/adx04x18.htm", + "domain": "docs.oracle.com", + "file_source": "part-00656-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\n\n\n\n Using XML-SQL Utility (XSU), 18 of 26\n\t \n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n \n\n\n\n
Oracle8i Application Developer's Guide - XML
Release 3 (8.1.7)

Part Number A86030-01
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\"\"
\nLibrary
\"\"
\nProduct
\"\"
\nContents
\"\"
\nIndex
\n
\n
\n

\n\"Go\n\"Go\n\"Go\n \n\n\n

\n Using XML-SQL Utility (XSU), 18 of 26\n


\n\n
\n\n\n\n\n\n\n\n\n\n\n

Using the XML-SQL Utility for PL/SQL

\n\n\n\n\n\n

\nThe XML-SQL Utility PL/SQL API reflects the Java API in both generation and storage. DBMS_XMLQuery and DBMS_XMLSave are the two packages that reflect the functions in the Java classes OracleXMLQuery and OracleXMLSave.\n

\n\n\n

\nBoth these packages have a context handle associated with them. Create a context by calling one of the constructor-like functions to get the handle and then use the handle in all subsequent calls.\n

\n\n\n\n

Generating XML with DBMS_XMLQuery

\n\n\n\n

\nGenerating XML results in a CLOB that contains the XML document. The steps involved in using the generation engine follow:\n

\n\n
    \n\n
  1. Create a context handle by calling the DBMS_XMLQuery.getCtx function and supplying it the query (either as a CLOB or a VARCHAR2)\n

    \n\n\n

  2. Bind possible values to the query using the DBMS_XMLQuery.bind function. The binds work by binding a name to the position. For example, the query can be something like, select * from emp where empno = :EMPNO_VAR. Here the user binds the value for the EMPNO_VAR using the setBindValue function. \n

    \n\n

  3. Set optional arguments such as the ROW tag name, the ROWSET tag name, or the number of rows to fetch.\n

    \n\n

  4. Fetch the XML as a CLOB using the getXML() functions. The getXML can be called to generate the XML with or without a DTD.\n

    \n\n

  5. Close the context.\n

    \n \n

\n\n\n

\nHere are some examples that use this PL/SQL package.\n

\n\n\n\n

XSU Example 13: Generating XML From Simple Queries (PL/SQL)

\n\n\n\n

\nIn this example, we select rows from the emp table and get an XML document as a CLOB. We first get the context handle by passing in a query and then call the getXML function to get the CLOB value. The document is in the same encoding as the database character set.\n

\n \n
\ndeclare\n  queryCtx DBMS_XMLquery.ctxType;\n  result CLOB;\nbegin\n\n  -- set up the query context...!\n  queryCtx := DBMS_XMLQuery.newContext('select * from emp');\n \n  -- get the result..!\n  result := DBMS_XMLQuery.getXML(queryCtx);\n  -- Now you can use the result to put it in tables/send as messages..\n  printClobOut(result);\n  DBMS_XMLQuery.closeContext(queryCtx);  -- you must close the query handle..\nend;\n/\n
\n\n\n

XSU Example 13a: Printing CLOB to Output Buffer

\n\n\n\n

\nThe printClobOut() is a simple procedure that prints the CLOB to the output buffer. If you run this PL/SQL code in SQL*Plus, you see the contents of the CLOB printed to the screen. Set serveroutput to on in order to see the results. \n

\n\n\n

\nThe printClobOut procedure is shown below:\n

\n \n
\nCREATE OR REPLACE PROCEDURE printClobOut(result IN OUT NOCOPY CLOB) is\nxmlstr varchar2(32767);\nline varchar2(2000);\nbegin\n  xmlstr := dbms_lob.SUBSTR(result,32767);\n  loop\n    exit when xmlstr is null;\n    line := substr(xmlstr,1,instr(xmlstr,chr(10))-1);\n    dbms_output.put_line('| '||line);\n    xmlstr := substr(xmlstr,instr(xmlstr,chr(10))+1);\n  end loop;\nend;\n/\n
\n\n\n

XSU Example 14: Changing ROW and ROWSET Tag Names (PL/SQL)

\n\n\n\n

\nThe PL/SQL APIs also provide the ability to change the ROW and the ROWSET tag names. These are the default names that are put around each row of the result and around the whole document, respectively. The setRowTag and setRowSetTag procedures accomplish this, as shown below:\n

\n \n
\n--Setting the ROW tag names\n\ndeclare\n   queryCtx DBMS_XMLQuery.ctxType;\n   result CLOB;\nbegin\n   -- set the query context.\n   queryCtx := DBMS_XMLQuery.newContext('select * from emp');\n \n   DBMS_XMLQuery.setRowTag(queryCtx,'EMP'); -- sets the row tag name\n   DBMS_XMLQuery.setRowSetTag(queryCtx,'EMPSET'); -- sets rowset tag name\n\n   result := DBMS_XMLQuery.getXML(queryCtx); -- get the result\n\n   printClobOut(result);  -- print the result..!\n   DBMS_XMLQuery.closeContext(queryCtx);  -- close the query handle;\nend;\n/\n\n
\n\n

\nThe resulting XML document has an EMPSET document element and each row separated using the EMP tag.\n

\n\n\n\n

XSU Example 15: Paginating Results Using setMaxRows() and setSkipRows()

\n\n\n\n

\nThe results from the query generation can be paginated by using the setMaxRows and setSkipRows functions. The former sets the maximum number of rows to be converted to XML. This is relative to the current row position from which the last result was generated. The skipRows parameter specifies the number of rows to skip before converting the row values to XML. For example, to skip the first 3 rows of the emp table and then print out the rest of the rows 10 at a time, we can set the skipRows to 3 for the first batch of 10 rows and then set skipRows to 0 for the rest of the batches.\n

\n\n\n

\nAs in the case of the XML-SQL Utility Java API, call the keepObjectOpen() function to make sure that the state is maintained between fetches. The default behavior is to close the state after a fetch is done. In the case of multiple fetches, you need to determine when there are no more rows to fetch. This can be done by calling setRaiseNoRowsException(), which causes an exception to be raised if no rows are written to the CLOB. The exception can be caught and used as the termination condition.\n

\n \n
\n-- Pagination of results\n\ndeclare\n  queryCtx DBMS_XMLquery.ctxType;\n  result CLOB;\nbegin\n\n  -- set up the query context...!\n  queryCtx := DBMS_XMLQuery.newContext('select * from emp');\n \n  DBMS_XMLQuery.setSkipRows(queryCtx,3); -- set the number of rows to skip\n  DBMS_XMLQuery.setMaxRows(queryCtx,10); -- set the max number of rows per fetch\n\n  result := DBMS_XMLQuery.getXML(queryCtx); -- get the first result..!\n\n  printClobOut(result); -- print the result out.. This is you own routine..!\n  DBMS_XMLQuery.setSkipRows(queryCtx,0); -- from now don't skip any more rows..!\n\n  DBMS_XMLQuery.setRaiseNoRowsException(queryCtx,true);\n                                         -- raise no rows exception..!\n  begin\n    loop  -- loop forever..!\n      result := DBMS_XMLQuery.getXML(queryCtx); -- get the next batch \n      printClobOut(result);             -- print the next batch of 10 rows..!\n    end loop;\n  exception\n    when others then\n    -- dbms_output.put_line(sqlerrm);\n       null; -- termination condition, nothing to do;\n  end;\n  DBMS_XMLQuery.closeContext(queryCtx);  -- close the handle..!\nend;\n/\n
\n\n\n\n\n\n\n
\n\n
\n
\n\n\n\n\n\n\n
\n\n\n\n\n\n \n\n
\"Go\"Go\"Go
\n
\n\"Oracle\"
\nCopyright © 1996-2000, Oracle Corporation.
\n
\nAll Rights Reserved.\n
\n\n\n\n\n\n\n\n\n\n\n\n\n
\"\"
\nLibrary
\"\"
\nProduct
\"\"
\nContents
\"\"
\nIndex
\n
\n \n
\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E22985_01/html/E22986/z400000a1390217.html", + "domain": "docs.oracle.com", + "file_source": "part-00656-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nIncrease System Memory With Additional DIMMs - SPARC T4-1 Server HTML Document Collection\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\n\n\t\t\n\t\t\t\n\t\t\n\n\t\t\n\t\t\n\t\t\t\n\n\t\t\t\n\t\t\n\t
\n\t\t\t\tSkip Navigation Links\n\t\t\t
\n\t\t\t\tExit Print View\n\t\t\t
\"Link\n\t\t\tSPARC T4-1 Server HTML Document Collection \n\t\t\t
\n
\n\n\n
\n
\n
\n \n \"Oracle\n \n
\n\n
\n \n \"Library\"\n \n
\n
\n \n
\n\n
\n\t\t \n \n \"PDF\"\n \n\t\t \n
\n \n
\n \n \"Print\n \n
\n \n \n
\n \n \"Feedback\"\n \n
\n\n
\n\n\n
\n
\n
\n \n\n
\n
\n \"search\n
\n\n
\n \n \n
\n\n
\n \"search\n
\n
\n\n
\n
\n
\n
\n
 Search Scope:\n
\n
    \n
  • \n
  • \n
\n
\n
\n
\n
\n
\n\t\t\t \n
\n
\n
\n \n\n
\n\n\n
\n\t

Document Information

\n

Using This Documentation

\n

Related Documentation

\n

Feedback

\n

Support and Accessibility

\n

Installation

\n

Confirming Server and Site Specifications

\n

Server Overview

\n

Front Panel Components

\n

Front Panel System LEDs and Buttons

\n

Rear Panel Components

\n

Rear Panel System LED and Button

\n

Confirming Specifications

\n

Preparing for Installation

\n

Shipping Kit Inventory List

\n

Handling Precautions

\n

ESD Precautions

\n

Tools Needed for Installation

\n

Installing the Server

\n

Rack Compatibility

\n

Stabilize the Rack

\n

Installing Slide Rails

\n

Install the Server

\n

(Optional) Installing the CMA

\n

Verify Correct Operation of the Slide Rails and the CMA

\n

Connecting Cables

\n

Cabling Requirements

\n

Identifying Ports

\n

Connecting Data and Management Ports

\n

Secure Cables in the CMA

\n

Powering On the Server for the First Time

\n

Oracle ILOM System Console

\n

Connect a Terminal or Emulator to the SER MGT Port

\n


Increase System Memory With Additional DIMMs

\n\n

This is a cold service procedure that can be performed by customers.

\n\n

You should be familiar with the memory configuration guidelines described in DIMM Population Rules before\nadding new DIMMs to increase a server's memory capacity.

\n
\n

Caution - You must disconnect the power cables from the system before performing this procedure.

\n
\n
\n
    \n
  1. Consider your first steps:\n\n
  2. \n
  3. Confirm that the DIMMs in the upgrade kit are compatible with the DIMMs\nalready installed in the server.\n

    Note - DIMM architectures must be identical (same capacity, same rank classification label). See DIMM Rank Classification Labels.

    \n
    \n
  4. \n
  5. Unpack the new DIMMs and place them on an antistatic mat.
  6. \n
  7. Swing the air duct up and forward to the fully open position.
  8. \n
  9. At a DIMM slot that is to be upgraded, open the ejector tabs\nand remove the filler panel.

    Do not dispose of the filler panel. You may want to reuse it\nif any DIMMs are removed at another time.

  10. \n
  11. Align the notch on the bottom edge of the DIMM with the key\nin the connector.
    \n
    \n

    Caution - Ensure that the orientation is correct. The DIMM might be damaged if the\norientation is reversed.

    \n
    \n
    \n
  12. \n
  13. Press the DIMM into the connector until the ejector tabs lock the DIMM\nin place.

    Figure 45 Installing DIMMs


    Note - If the DIMM does not easily seat into the connector, do not try\nto force it into position. Instead, check its orientation. If the orientation is\nnot correct, forcing the DIMM into the connector is likely to damage the\nDIMM, or the connector, or both.

    \n
    \n
  14. \n
  15. Repeat Step 5 through Step 7 until all DIMMs are installed.
  16. \n
  17. Finish the installation procedure:\n\n
Related Information
\n\n\n
\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t
\n\t\t\t\t
\n\t\t\t\t\tCopyright © 2011, 2013, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t
\n\t\n
\n\n\n\n\n\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19253-01/817-2521/6mi67tj40/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "

International Language Environments Guide

Chinese Text

\n

\nChinese usually consists\nentirely of characters from the ideographic script called Hanzi.

\n
    \n
  • \n

    \nIn the People's Republic of China (PRC) there are\nabout 7000 commonly used Hanzi characters in the GB2312 (zh locale),\nmore than 20,000 characters in the GBK charset (zh.GBK locale),\nand about 30,000 characters in the GB18030-2000 charset (zh_CN.GB18030 locale),\nincluding all CJK extension A characters defined in Unicode 3.0.

    \n\n\n
  • \n
  • \n

    \nIn Taiwan, the most frequently used charsets are the CNS11643-1992\n(zh_TW locale) and the Big5 (zh_TW.BIG5 locale).\nThey share about 13,000 Hanzi characters.

    \n\n\n
  • \n
  • \n

    \nIn Hong Kong, 4702 characters have been added into the Big5 charset\nto become the Big5-HKSCS charset (zh_HK.BIG5HK).

    \n\n\n
  • \n
\n

If a character is not a root character, it usually consists of two or\nmore parts, two being most common. In two-part characters, one part generally\nrepresents meaning, and the other represents pronunciation. Occasionally both\nparts represent meaning. The radical is the most important element, and characters\nare traditionally arranged by radical, of which there are several hundred.\nA single sound can be represented by many different characters, which are\nnot interchangeable in usage. A single character can have different sounds.

\n

Some characters are more appropriate than others in a given context.\nThe appropriate character is distinguished phonetically by the use of tones.\nBy contrast, spoken Japanese and Korean lack tones.

\n

\nSeveral phonetic systems represent\nChinese. In the People's Republic of China the most common is pinyin,\nwhich uses Roman characters and is widely employed in the West for place names\nsuch as Beijing. The Wade-Giles system is an older phonetic system, formerly\nused for place names such as Peking. In Taiwan zhuyin (or bopomofo), a phonetic alphabet with unique letter forms, is often\nused instead.

\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/B31104_02/books/LoyaltyAdmin/LoyaltyAdmin_Promotions19.html", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\tBookshelf v7.8: Associating Loyalty Members with Marketing Campaigns\r\n\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\r\n\t\t\t\t\r\n\t\r\n
\r\n\r\n\r\n\r\n
\r\n

Siebel Loyalty Administration Guide > Setting Up Siebel Loyalty Promotions > About Adding Marketing Information to Loyalty Promotions >

\r\n

Associating Loyalty Members with Marketing Campaigns

\r\n
\r\n

To target loyalty members with an outbound communication, you can use marketing campaigns.

Marketing campaigns are based on contacts, not on members. Therefore, you must associate a Member with a Campaign record to include the member in a marketing campaign.

To associate a member with a marketing campaign

\r\n
    \r\n
  1. Navigate to the Loyalty Members screen.\r\n
  2. In the Loyalty Members list, click the Member # field of the member you want to associate with the campaign.\r\n
  3. In the Campaigns list, add a new record, and select the marketing campaign in the Campaign field.\r\n
\r\n
\r\n\r\n\r\n\t\r\n\r\n\t\t\r\n\t\t\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Loyalty Administration Guide
 
\r\n\r\n\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E29587_01/PlatformServices.60x/presentationAPINet/html/26f8297d-f861-ac01-a12f-b0228549b216.htm", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Refinements Property
Endeca Presentation API
Refinements Property
Namespaces > Endeca.Navigation > Dimension > Refinements
\r\n Gets the refinements available and retrieved in this dimension. \r\n
Declaration Syntax
C# | Visual Basic | Visual C++
public virtual DimValList Refinements{ get;}
Public Overridable ReadOnly Property Refinements As DimValList
public:\r\nvirtual property DimValList^ Refinements {\r\n\tDimValList^ get ();\r\n}
Value
\r\n Refinements available in this dimension. Returns an empty \r\n DimValList if no refinements are available.\r\n
Remarks
\r\n Returns an empty DimValList when there are no refinements\r\n available. It is possible that while the refinements may return an empty \r\n set, refinements are still possible if not all the refinements were \r\n retrieved from the MDEX Engine in the original query. Refinements\r\n are ordered from higher ranking to lower ranking, where the rankings \r\n are determined by the settings in the MDEX Engine and the data.\r\n

\r\n Each member of this refinement list can be added to the Descriptors\r\n and will yield at least one result.\r\n

© 2008 Endeca Technologies, Inc., Endeca Confidential
Assembly: Endeca.Navigation (Module: Endeca.Navigation) Version: 0.0.0.0

" + }, + { + "url": "https://docs.oracle.com/cd/E23823_01/html/817-1985/clientconcept-33212.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nWhat Are Servers, Clients, and Appliances? - System Administration Guide: Basic Administration\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n

What Are Servers, Clients, and Appliances?

\n\n\n

Systems on the network can usually be described as one of the\nsystem types in this table.

\n\n\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
System Type
Description
Server
A system that provides services to other systems\nin its network. There are file servers, boot servers, web servers, database servers,\nlicense servers, print servers, installation servers, appliance servers, and even servers for particular applications.\nThis chapter uses the term server to mean a system that provides boot\nservices and file systems for other systems on the network.
Client
A system that uses\nremote services from a server. Some clients have limited disk storage capacity, or\nperhaps none at all. Such clients must rely on remote file systems from\na server to function. Diskless systems and appliance systems are examples of this\ntype of client.

Other clients might use remote services (such as installation software) from\na server. However, they don't rely on a server to function. A\nstand-alone system is a good example of this type of client. A stand-alone\nsystem has its own hard disk that contains the root (/), /usr, and\n/export/home file systems and swap space.

Appliance
A network appliance such as the Sun\nRay appliance provides access to applications and the Solaris OS. An appliance gives\nyou centralized server administration, and no client administration or upgrades. Sun Ray appliances also\nprovide hot desking. Hot desking enables you to instantly access your computing session from\nany appliance in the server group, exactly where you left off. For more\ninformation, see http://www.sun.com/software/index.jsp?cat=Desktop&.
\n
\n\n
\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t
\n\t\t\t\t
\n\t\t\t\t\tCopyright © 1998, 2011, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t
\n\t\n
\n\n\n\n\n\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/B31104_02/books/ConSecSIA/ConSecSIATradeFunds7.html", + "domain": "docs.oracle.com", + "file_source": "part-00662-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\tBookshelf v7.8: Adjusting a Fund\r\n\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\r\n\t\t\t\t\r\n\t\r\n
\r\n\r\n\r\n\r\n
\r\n

Siebel Consumer Goods Guide > Trade Funds >

\r\n

Adjusting a Fund

\r\n
\r\n

You can adjust the amount of a fund (either positively or negatively) after the fund has been created. Typically, this is done to reflect changes in market conditions.

This task is a step in Process of Managing Trade Funds.

To create an adjustment for a fund

\r\n
    \r\n
  1. Navigate to the Administration - Data screen > Funds view.\r\n
  2. In the Funds list, select a fund record.\r\n
  3. Click the Adjustments view tab.\r\n
  4. In the Adjustments list, create a new record.\r\n
  5. In the Amount field, enter the negative or positive amount by which the Fund Amount is to be adjusted, for example, 5000 or -5000.\r\n
  6. Enter a description of the adjustment, for example, the description may be the reason for the adjustment.\r\n

    The new amount of money available in the fund is now reflected in the Sub-Total field on the More Info tab for the fund.

    \r\n
\r\n
\r\n\r\n\r\n\t\r\n\r\n\t\t\r\n\t\t\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Consumer Goods Guide
 
\r\n\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E14004_01/books/LSSIA/LSSIAPlanCalls9.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\t\r\n\r\n\tBookshelf v8.1/8.2: About Defining a Route Plan\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
\r\n\r\n\r\n\r\n
\r\n

Siebel Life Sciences Guide > Planning Calls >

\r\n

About Defining a Route Plan

\r\n
\r\n

End users can create routes to schedule regular calls to groups of accounts. By defining a route, a user can plan the order in which to visit accounts and determine a starting time for each account call. For more information, see Routes and the scenario Scenario 2: Users Define Route Plans.

When creating a route, users must complete the following tasks in the order listed:

    \r\n
  • Create a route. Users must specify the person for whom they are creating the route. Although each user can be assigned multiple routes, each route can only be assigned to one user. For more information, see To create a route.\r\n
  • Add accounts to a route. Once users have established a route, they can add accounts to it.\r\n
  • (Optional) Assign a target list to a route. If users are using the Account Targeting features to create target lists, they can assign them to a route using the Target Accounts view.\r\n
  • Schedule calls. When a route has been created, users can use it to schedule calls to every account in the route. For more information, see To schedule calls from the My Routes view.\r\n
\r\n
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Life Sciences Guide. Copyright © 2013, Oracle and/or its affiliates. All rights reserved. Legal Notices.
 
\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/javaee/7/api/javax/validation/ReportAsSingleViolation.html", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\nReportAsSingleViolation (Java(TM) EE 7 Specification APIs)\n\n\n\n\n\n\n\n\n\n
\n\n\n\n
\n\n
\n
\n
    \n
  • Summary: 
  • \n
  • Field | 
  • \n
  • Required | 
  • \n
  • Optional
  • \n
\n
    \n
  • Detail: 
  • \n
  • Field | 
  • \n
  • Element
  • \n
\n
\n\n\n
\n\n\n
\n
javax.validation
\n

Annotation Type ReportAsSingleViolation

\n
\n
\n
\n
    \n
  • \n
    \n
    \n
    @Target(value=ANNOTATION_TYPE)\n @Retention(value=RUNTIME)\npublic @interface ReportAsSingleViolation
    \n
    A constraint annotation hosting this annotation will return the\n composed annotation error report if any of the composing annotations fail.\n The error reports of each individual composing constraint are ignored.\n

    \n Note: Evaluation of composed constraints stops on the first validation\n error in case the composing constraint is annotated with\n @ReportAsSingleViolation.

    \n
    \n
    Author:
    \n
    Emmanuel Bernard
    \n
    \n
  • \n
\n
\n
\n\n\n\n
\n\n\n\n
\n\n
\n
\n
\n\n\n
\n\n

Copyright © 1996-2015, Oracle and/or its affiliates. All Rights Reserved. Use is subject to license terms.

\n\n\n" + }, + { + "url": "https://docs.oracle.com/middleware/1212/core/WSCPT/jrf-ws-policy.htm", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n2 Attaching Policies to Oracle Infrastructure Web Services\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
\n
\n\n6/16\n\n

2 Attaching Policies to Oracle Infrastructure Web Services

\n

This chapter describes how to attach policies to Oracle Infrastructure Web services.

\n

This chapter includes the following topics:

\n\n\n
\n

What Are Policies and Policy Sets?

\n

Policies describe the capabilities and requirements of a Web service, such as whether and how a message must be secured, whether and how a message must be delivered reliably, and so on. For more information, see \"Understanding Policies\" in Understanding Oracle Web Services Manager.

\n

A policy set, which can contain multiple policy references, is an abstract representation that provides a means to attach policies globally to a range of subjects of the same type. Attaching policies globally using policy sets provides a mechanism for the administrator to ensure that all subjects are secured in situations where the developer, assembler, or deployer did not explicitly specify the policies to be attached. Policies that are attached using a policy set are considered externally attached.

\n

Policy sets provide the ability to specify a runtime constraint that determines the context in which the policy set is relevant. For example, you can specify that a service use message protection when communicating with external clients only since the message may be transmitted over insecure public networks. However, when communicating with internal clients on a trusted network, message protection may not be required. For more information about policy sets, see \"Global Policy Attachments Using Policy Sets\" in Understanding Oracle Web Services Manager.

\n
\n\n\n
\n

OWSM Predefined Policies and Assertion Templates

\n

Oracle Web Services Manager (OWSM) provides a policy framework to manage and secure Web services consistently across your organization. OWSM can be used by both developers, at design time, and system administrators in production environments. For more information about the OWSM policy framework, see \"Understanding the OWSM Policy Framework\" in Understanding Oracle Web Services Manager.

\n

There is a set of predefined OWSM policies and assertion templates that are automatically available when you install Oracle Fusion Middleware. The predefined policies are based on common best practice policy patterns used in customer deployments.

\n

You can immediately begin attaching these predefined policies to your Web services or clients. You can configure the predefined policies or create a new policy by making a copy of one of the predefined policies.

\n

Predefined policies are constructed using assertions based on predefined assertion templates. You can create new assertion templates, as required.

\n

For more information about the predefined OWSM policies and assertion templates, see the following sections in Securing Web Services and Managing Policies with Oracle Web Services Manager:

\n\n
\n\n\n
\n

Attaching Policies to Web Services

\n

Security policies provide a framework to manage and secure Web services consistently across your organization. They can be attached directly to Web services endpoints:

\n
    \n
  • \n

    Programmatically, at design time, using annotations. When developing an application using JDeveloper, you can take advantage of the wizards available to attach policies to Web services and clients.

    \n
  • \n
  • \n

    Post-deployment using Oracle Fusion Middleware and WLST.

    \n
  • \n
\n

In addition, policy sets provide a means to attach policies globally to a range of endpoints of the same type.

\n

For complete details, see \"Attaching Policies\" in Securing Web Services and Managing Policies with Oracle Web Services Manager.

\n
\n
\n\n\n
\n\n
\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/en/cloud/saas/enterprise-performance-management-common/prest/url_structure_overview.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n URL Structure\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n
\n
\n
    \n
  1. REST API for Oracle Enterprise Performance Management Cloud
  2. \n
  3. About the REST APIs for EPM Cloud
  4. \n
  5. URL Structure
  6. \n
\n \n \n

URL Structure

\n
\n
\n \n
\n

This topic summarizes the URL structures to use for the EPM REST APIs.

\n
\n \n

For the URL structure to use, see the topic for the REST API:

\n \n \n \n

In the description and examples of all REST APIs, this guide uses\n <BASE-URL>. The\n <BASE-URL> is the first part of your service URL,\n before the context. For example, if your service URL is\n https://epm-acme.epm.us-phoenix-1.ocs.oraclecloud.com/epmcloud,\n your <BASE-URL> is\n https://epm-acme.epm.us-phoenix-1.ocs.oraclecloud.com.\n Similarly, if your service URL is\n https://epm2-acme.epm.us6.oraclecloud.com/epmcloud, your\n <BASE-URL> is\n https://epm2-acme.epm.us6.oraclecloud.com.\n

\n \n
\n

Note:

\n \n

Oracle does not authorize or support the use of REST APIs with the path token\n \"/internal/\" in the URL.\n

\n \n
\n \n

For details on the URL structure, see Differences Between Classic and OCI EPM Cloud\n Environments. To learn about accessing Oracle Cloud and Oracle Enterprise\n Performance Management Cloud, see Getting Started with Oracle Cloud and Getting Started with Oracle Enterprise Performance Management Cloud for\n Administrators.\n

\n \n
\n \n
\n
\n
\n \n" + }, + { + "url": "http://docs.oracle.com/cd/E11882_01/server.112/e10839/whatsnew.htm", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\nWhat's New in Oracle Database 11g Release 2 (11.2)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGo to main content
\n
\n
\n\n4/20 \n
\n

What's New in Oracle Database 11g Release 2 (11.2)

\n

This section describes new features that are available with Oracle Database 11g Release 2 (11.2).

\n
\n

Oracle Database 11g Release 2 (11.2.0.3) New Features

\n

The following new feature is provided with Oracle Database 11g Release 2 (11.2.0.3).

\n
\n

TDE Hardware Acceleration for Oracle Solaris

\n

Starting with Oracle Database 11g Release 2 (11.2.0.3), cryptographic hardware acceleration for TDE also works with Solaris 11 running on SPARC T3, SPARC T4, and Intel Xeon with AES-NI.

\n
\n
\n\n
\n

Oracle Database 11g Release 2 (11.2.0.2) New Features

\n

The following new feature is provided with Oracle Database 11g Release 2 (11.2.0.2).

\n
\n

TDE Hardware Acceleration for Linux

\n

Transparent Data Encryption (TDE) can automatically detect whether the database host system includes specialized cryptographic silicon that accelerates encryption or decryption processing. When detected, TDE uses the specialized silicon for cryptographic processing, significantly accelerating overall cryptographic performance.

\n

In Oracle Database 11g Release 2 (11.2.0.2), cryptographic hardware acceleration for TDE is only available on Intel Xeon with AES-NI, and for Linux x86 and Linux x86-64 on Oracle Linux 5 and Red Hat Enterprise Linux 5.

\n
\n
\n
\n
\n\n\n
\n\n
\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E26505_01/html/E29492/etmjx.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nWhat's New in Shutting Down and Booting a System - Oracle Solaris Administration: Basic Administration\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n

What's New in Shutting Down and Booting a System

\n\n\n

This section describes new boot features in the Oracle Solaris release. For a complete listing of new features and a description of Oracle Solaris releases, see Oracle Solaris 10 1/13 What’s New. The following is a list of new features:

\n\n\n\n\n\n

Support for Fast Reboot on the SPARC Platform

\n\n\n

The Fast Reboot feature of Oracle Solaris is now supported on the\nSPARC platform. The integration of Fast Reboot on the SPARC platform enables the\n-f option to be used with the reboot command to accelerate the boot\nprocess by skipping certain POST tests.

\n\n

The Fast Reboot feature of Oracle Solaris is managed through SMF and implemented\nthrough a boot configuration service, svc:/system/boot-config. The boot-config service provides a means\nfor setting or changing the default boot configuration parameters. When the config/fastreboot_default property\nis set to true, the system performs a fast reboot automatically, without the\nneed to use the reboot -f command. By default, this property value is\nset to false on the SPARC platform.

\n

Note - On the SPARC platform, the boot-config service also requires the solaris.system.shutdown authorization as the\naction_authorization and value_authorization.

\n
\n\n\n

To make Fast Reboot the default behavior on the SPARC platform, use the\nsvccfg and svcadm commands.

\n\n
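The section does not show the commands themselves; on a SPARC system they would look roughly like the following sketch (Solaris-only svccfg/svcadm invocations, shown for illustration, not runnable elsewhere):

```shell
# Sketch: make Fast Reboot the default on the SPARC platform.
# Requires appropriate privileges (see the authorization note above).
svccfg -s svc:/system/boot-config:default \
    setprop config/fastreboot_default=true
svcadm refresh svc:/system/boot-config:default
```

With the property set to true, a plain reboot performs a fast reboot without the -f option.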

For task-related information, see Accelerating the Reboot Process on the SPARC Platform (Task Map).

\n\n\n\n

Oracle Solaris Auto Registration Introduced

\n\n\n

Oracle Solaris 10 9/10: For information on auto registration, see Chapter 17, Working With Oracle Configuration Manager.

\n\n\n\n

Automatic Boot Archive Recovery

\n\n\n

Oracle Solaris 10 9/10: Starting with this release, boot archive recovery on the SPARC platform is\nautomatic.

\n\n

To support auto-recovery of the boot archives on the x86 platform, a new\nauto-reboot-safe property has been added to the boot configuration service, svc:/system/boot-config:default. By default, the\nproperty's value is set to false to ensure that the system does not\nautomatically reboot to an unknown boot device. If the system is configured\nto automatically point to the BIOS boot device and GRUB menu entry that\nOracle Solaris 10 is installed on, you can set the property's value to\ntrue. Setting the value to true enables an automatic reboot of the system\nfor the purpose of recovering an out-of-date boot archive.

\n\n

To set or change this property's value, use the svccfg and svcadm\ncommands. See the svccfg(1M) and svcadm(1M) man pages.

\n\n

For general information about this enhancement, see the boot(1M) man page.

\n\n

For step-by-step instructions, see x86: How to Clear Automatic Boot Archive Update Failures by Using the auto-reboot-safe Property.

\n\n\n\n
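As with the Fast Reboot property, the svccfg and svcadm invocations are not shown above; a rough Solaris-only sketch (illustrative, not runnable on other platforms):

```shell
# Sketch: permit automatic reboot during boot archive recovery on x86.
# Only appropriate when the BIOS boot device and GRUB entry point at
# this Oracle Solaris 10 instance (see the caveat above).
svccfg -s svc:/system/boot-config:default \
    setprop config/auto-reboot-safe=true
svcadm refresh svc:/system/boot-config:default
```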

SPARC Support for Install-Time Updates

\n\n\n

Oracle Solaris 10 9/10: Starting with this release, the itu utility has been modified to support booting\na SPARC based system with Install-Time Updates (ITUs). Third-party vendors can now deliver\ndriver updates on floppy disk, CD or DVD, and USB storage. In addition,\nnew tools that enable you to modify the Oracle Solaris installation media with\nnew packages and patches have been introduced. These tools can be used to\ndeliver software updates for hardware platforms and to produce customized installation media. For\ntask-related information, see SPARC: How to Boot a System With a Newly Created ITU.

\n

See also the following man pages:

\n\n\n\n\n\n

Two-Terabyte Disk Support for Installing and Booting Oracle Solaris 10

\n\n\n

Solaris 10 10/09: In previous releases, you could not install and boot the Solaris OS\nfrom a disk that was greater than 1 Tbyte in size. Starting with\nthis release, you can install and boot the Oracle Solaris OS from a\ndisk that is up to 2 Tbytes in size. In previous releases, you\nalso had to use an EFI label for a disk that was\nlarger than 1 Tbyte. In this release, you can use the VTOC label\non any size disk. However, the addressable space by the VTOC label is\nlimited to 2 Tbytes.

\n\n

For more information, see What’s New in Disk Management? in System Administration Guide: Devices and File Systems.

\n\n\n\n

Oracle Solaris ZFS Boot Support

\n\n\n

Solaris 10 10/08: This release includes Oracle Solaris ZFS installation, as well as\nZFS boot support. You can now install and boot from a ZFS\nroot file system. This enhancement applies to both the SPARC and x86 based\nplatforms. Booting, system operations, and installation procedures have been modified to support this change.

\n\n

For more information, see Booting From an Oracle Solaris ZFS Root File System.

\n\n\n\n

x86: findroot Command

\n\n\n

All Oracle Solaris installation methods, including Solaris Live Upgrade, now use the findroot command for specifying which disk slice on an x86 based system to boot. This implementation supports booting systems with Oracle Solaris ZFS roots, as well as UFS roots. Previously, the root command, root (hd0,0,a), was used to explicitly specify which disk slice to boot. This information is located in the menu.lst file that is used by GRUB.

\n\n

The most common form of the GRUB menu.lst entry is now as follows:

findroot (rootfs0,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
\n\n

In some Oracle Solaris 10 releases, the entry is as follows:

findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
\n\n

For more information, see x86: Implementation of the findroot Command.

\n\n\n\n

Support for Specifying Platform by Using bootadm Command

\n\n\n

A new -p option has been added to the bootadm command.

\n\n

This option enables you to specify the platform or machine hardware class of\na client system in situations where the client platform differs from the server\nplatform, for example when administering diskless clients.

\n

Note - The -p option must be used with the -R option.

\n
\n
# bootadm -p platform -R [altroot]
\n

The specified platform must be one of the following:

\n
  • i86pc
  • sun4u
  • sun4v
\n\n\n

For more information, see the bootadm(1M) man page.

\n\n\n\n

Redesign of SPARC Bootstrap Process

\n\n\n

The Oracle Solaris SPARC bootstrap process has been redesigned to increase commonality with\nthe x86 boot architecture.

\n\n

Other enhancements include an improved boot architecture that supports booting a system from\nadditional file system types, for example an Oracle Solaris ZFS file system, or\na single miniroot, for installation, as well as booting from DVD, NFS, or\nHTTP. These enhancements increase flexibility and reduce maintenance requirements on SPARC based systems.

\n\n

As part of this redesign, the boot archives and the bootadm command, previously\nonly available on the x86 based platform, are now an integral part of\nthe SPARC boot architecture.

\n\n

The primary difference between the SPARC and x86 boot architectures is how the\nboot device and file are selected at boot time. The SPARC based platform\ncontinues to use the OpenBoot PROM (OBP) as the primary administrative interface, with\nboot options selected by using OBP commands. On x86 based systems, these options\nare selected through the BIOS and the GRand Unified Bootloader (GRUB) menu.

\n

Note - Although the SPARC boot process has changed, no administrative procedures for\nbooting a SPARC based system have been impacted. Boot tasks performed by the\nsystem administrator remain the same as they were prior to the boot architecture\nredesign.

\n
\n\n\n

For more information, see the boot(1M) and bootadm(1M) man pages.

\n\n

For more information in this document, see Understanding the New SPARC Boot Architecture.

\n\n\n\n

x86: Support for Using Power Button to Initiate System Shutdown

\n\n\n

Pressing and releasing the power button on x86 based systems initiates a clean\nsystem shutdown and turns the system off. This functionality is equivalent to using\nthe init 5 command to shut down a system. On some x86 based\nsystems, the BIOS configuration might prevent the power button from initiating shutdown. To\nenable use of the power button to perform a clean system shutdown, reconfigure\nthe BIOS.

\n

Note - On certain x86 based systems that were manufactured before 1999 and are running\nan older release, pressing the power button immediately turns off system power without\nsafely shutting it down. This same behavior occurs when you press the power\nbutton on systems running with ACPI support that is disabled through the use\nof acpi-user-options.

For more information about acpi-user-options, see the eeprom(1M) man page.

\n
\n\n
\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t
\n\t\t\t\t
\n\t\t\t\t\tCopyright © 1998, 2013, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t
\n\t\t\t\t\"Previous\"\n\t\t\t\t\"Next\"\n\t\t\t
\n\t\n
\n\n\n\n\n\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19225-01/820-5594/ahtel/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Configuring a WebSphere Data Source for Identity Manager (Sun Identity Manager 8.1 Installation)

Sun Identity Manager 8.1 Installation

Configuring a WebSphere Data Source for Identity Manager

\n

Use the following information to configure a WebSphere Data Source for Identity Manager.

\n

This section includes:

\n\n

Servlet 2.3 Data Sources

\n

As of the Identity Manager 6.0 release, the deployment descriptor in\nthe WEB-INF/web.xml file refers to Servlet 2.3. Because\nof this, the Identity Manager web application can no longer be used with a\nWebSphere application server version 4 data source.

\n
Note –

Due to interoperability issues between WebSphere data sources\nand Oracle JDBC drivers, Oracle customers who want to use a WebSphere data\nsource with Identity Manager must use Oracle 10g R2 and the corresponding JDBC\ndriver. (The Oracle 9 JDBC driver will not work with a WebSphere data source\nand Identity Manager.) If you have a version of Oracle prior to 10g R2 and\ncannot upgrade Oracle to 10g R2, then configure the Identity Manager repository\nso that it connects to the Oracle database using Oracle’s JDBC Driver\nManager (and not a WebSphere data source).

\n
\n

\n\"Procedure\"To Configure a WebSphere Data Source for Identity Manager

\n
  1. Configure a JDBC provider.
  2. Configure a WebSphere JDBC Data Source.
  3. Point the repository to the data source.

These steps are discussed next.
\n

Configuring a JDBC Provider

\n

\n\"Procedure\"To Configure a JDBC Provider

\n
Before You Begin

Use WebSphere’s administration console to configure a new JDBC\nProvider.

\n
    \n\n
  1. \n

    Click the Resources tab in\nthe left pane to display a list of resource types.

    \n\n
  2. \n\n
  3. \n

    Click JDBC then JDBC Providers to display a table of configured JDBC\nproviders.

    \n\n
  4. \n\n
  5. \n

    Click the New button above\nthe table of configured JDBC providers.

    \n\n
  6. \n\n
  7. \n

    Select from the list of JDBC database types, provider types, and\nimplementation types. Optionally modify the Name and Description fields.

    \n\n

Oracle, Oracle JDBC Driver, and Connection pool Data Source will be used for this example.

    \n\n

    Click Next.

    \n\n
  8. \n\n
  9. \n

    Enter database classpath information. The contents of the Enter database class path information page may vary,\ndepending on what you selected in the previous step.

    \n\n\n
      \n
    • \n

      Specify the path to the JAR that contains the JDBC driver.\nFor example, to specify the Oracle thin driver, specify a path similar to\nthe following:

      \n\n

      \n/usr/WebSphere/AppServer/installedApps/idm/idm.ear/idm.war/WEB-INF/lib/oraclejdbc.jar\n

      \n\n

      Click Next.

      \n\n\n
    • \n
    • \n

      Complete any other fields as required. The selected database,\nprovider, and implementation types determine which fields are displayed. Click Next when you have completed the dialog.

      \n\n\n
    • \n
    • \n

      A summary page is displayed. When you are finished reviewing\nyour selections, click the Finish button\nat the bottom of the table. Click the Save link\nto keep your definition. The right pane should display the provider you added.

      \n\n

      To configure a data source that uses this JDBC provider, see Point the Identity Manager Repository to the Data Source.

      \n\n\n
    • \n
    \n\n
  10. \n
\n

Configuring a WebSphere JDBC Data Source

\n

Use WebSphere’s Administrative Console to define a data source\nwith an existing JDBC Provider.

\n

Before you can finish configuring the data source, you must configure\nauthentication data. These aliases contain credentials that are used to connect\nto the DBMS.

\n

\n\"Procedure\"To Configure the Authentication Data

\n
    \n\n
  1. \n

    Click Security > Secure administration, applications, and infrastructure.

    \n\n
  2. \n\n
  3. \n

Under Authentication, click Java Authentication and Authorization Service configuration > J2C authentication data. The JAAS - J2C authentication data panel is displayed.

    \n\n
  4. \n\n
  5. \n

    Click New.

    \n\n
  6. \n\n
  7. \n

    Enter a unique alias, a valid user ID, a valid password, and a\nshort description (optional). The user ID must be valid on the target database.

    \n\n
  8. \n\n
  9. \n

    Click OK or Apply. No validation for the user ID and password\nis required.

    \n\n
  10. \n\n
  11. \n

    Click Save.

    \n\n
    \nNote – \n

    The newly created entry is visible without restarting the application\nserver process to use in the data source definition. But the entry is only\nin effect after the server is restarted.

    \n\n
    \n\n
  12. \n
\n

\n\"Procedure\"To Configure the Data Source

\n
    \n\n
  1. \n

    Click Resources > JDBC Providers > Your_JDBC_Provider_Name > Data Sources tab in the left pane to display the\nData sources page. The right pane displays a table of data sources configured\nfor use with this JDBC provider. Click the New button\nabove the table of data sources.

    \n\n
  2. \n\n
  3. \n

    Use the wizard provided to configure the general properties for\nthe new data source. Note the following on the Enter basic data source information\npage:

    \n\n\n
      \n
    • \n

      The JNDI Name is the path\nto the DataSource object in the directory service. You must specify this same\nvalue as the -f argument in setRepo -tdbms -iinitCtxFac -ffilepath.

      \n\n\n
    • \n
    • \n

      Select the Component-managed Authentication\nAlias that you created in Configuring a JDBC Provider. These are the credentials that will be used to access\nthe DBMS (to which this DataSource points).

      \n\n

      Click Next when you have configured this panel. The Create\nNew JDBC provider page is displayed.

      \n\n\n
    • \n
    \n\n
  4. \n\n
  5. \n

    Configure the database-specific properties for this data source\nas needed. Refer to the online help for information about the available properties.

    \n\n

    Make sure Use this data source in container-managed\npersistence (CMP) is unchecked. Identity Manager does not use Enterprise\nJava Beans (EJBs). Click Next to go to the summary page.

    \n\n
  6. \n\n
  7. \n

    Click Finish to save your data\nsource.

    \n\n
  8. \n
\n

Configure the Data Source in a WebSphere Cluster

\n

When configuring the data source in clustered WebSphere environments,\nconfigure it at the cell level. This allows the data source to be accessed\nfrom all nodes in the cell.

\n

To configure this, use the -D $propertiesFilePath option, where $propertiesFilePath contains:

\n
java.naming.provider.url=iiop://localhost:jndi_port/
\n

or:

\n
-u iiop://localhost:jndi_port/
\n

\n\"Procedure\"To Determine the JNDI Port to Specify

Examine the WebSphere configuration to determine the JNDI port to specify.

\n\n
    \n\n
  1. \n

    In the WebSphere administration console, navigate to Servers > Application Servers > Your_Server > Ports.

    \n\n
  2. \n\n
  3. \n

    Look at the BOOTSTRAP_ADDRESS property.\nUse the specified port in the java.naming.provider.url property.

    \n\n
    \nNote – \n

    The java.naming.provider.url uses localhost as the hostname. WebSphere replicates a JNDI server on each node\nin the cluster so that each application server has its own JNDI server to\nquery. Specify localhost for the host so\nthat each application server in the cluster is used as the JNDI server that Identity Manager queries\nwhen the DataSource is being located.

    \n\n
    \n\n
  4. \n
\n\n

Point the Identity Manager Repository to the Data\nSource

\n

\n\"Procedure\"To Point the repository to a Newly Created Data Source

\n
    \n\n
  1. \n

    Set the WSHOME environment variable to point to your Identity Manager installation;\nfor example:

    \n\n

    \nexport WSHOME=$WAS_HOME/installedApps/idm.ear/idm.war\n

    \n\n

    where $WAS_HOME is the WebSphere\nhome directory, such as /usr/WebSphere/AppServer\n

    \n\n
  2. \n\n
  3. \n

    Make sure that the JAVA_HOME environment variable is set correctly;\nfor example:

    \n\n

    \nexport JAVA_HOME=$WAS_HOME/java\n

    \n\n
  4. \n\n
  5. \n

    Make sure that the Java executable is in your path; for example:

    \n\n

export PATH=$JAVA_HOME/bin:$PATH

    \n\n
  6. \n\n
  7. \n

Make sure the classpath is pointing to the WebSphere properties directory. For example:

    \n\n

    \nexport CLASSPATH=$WAS_HOME/properties\n

    \n\n
  8. \n\n
  9. \n

    Change to the $WSHOME/bin directory.

    \n\n
  10. \n\n
  11. \n

    (For SQLServer only): Install JTA support:

    \n\n
      \n\n
    1. \n

      Copy the sqljdbc.dll file located in the SQLServer JTA directory to the SQL_SERVER_ROOT/binn directory\nof the SQLServer database server.

      \n\n
      \nNote – \n

      The default location of the SQLServer JTA directory is C:\\Program\nFiles\\Microsoft SQL Server 2000 Driver for JDBC\\SQLServer JTA. The\ndefault location of SQL_SERVER_ROOT/binn is C:\\Program\nFiles\\Microsoft SQL Server\\MSSQL\\Binn.

      \n\n
      \n\n
    2. \n\n
    3. \n

      From the database server, use the ISQL or OSQL utility to run\nthe instjdbc.sql script, which is also found in the SQLServer\nJTA directory. The following examples illustrate the use of these utilities:

      \n\n

isql -Usa -p sa-password -S server-name -i location\instjdbc.sql

osql -E -i location\instjdbc.sql

      \n\n
    4. \n\n
    \n
  12. \n\n
  13. \n

    Archive a copy of the existing ServerRepository.xml file,\nin case you need to revert. By default, this file is located in $WSHOME/WEB-INF (UNIX), or %WSHOME%\\WEB-INF (Windows).

    \n\n
  14. \n\n
  15. \n

    Point the repository to the new location. For example:

    \n\n\n\n\n\n
    \n
    \n
lh -Djava.ext.dirs="$JAVA_HOME/jre/lib:$JAVA_HOME/jre/lib/ext:$WASHOME/lib:$WASHOME/:$WASHOME/runtimes" \
   setRepo -Uusername -Ppassword -toracle \
   -icom.ibm.websphere.naming.WsnInitialContextFactory \
   -fDataSourcePath -n -o
    \n
    \n

    In the above example the DataSourcePath might be jdbc/jndiname.\nThe -Djava.ext.dirs option adds all of the JAR files in\nWebSphere’s lib/ and java/jre/lib/ext/ directories\nto the CLASSPATH. This is necessary in order for the setRepo command to run normally.

    \n\n

    Change the -f location\nflag to match the value you specified for the JNDI\nName field when configuring the data source. See Appendix F, setRepo Reference for more information\nabout this command.

    \n\n
  16. \n\n
  17. \n

    In the RepositoryConfiguration configuration object, set the connectionPoolDisable\nattribute to true.

    \n\n

<RepositoryConfiguration connectionPoolDisable='true'>

    \n\n

    This setting prevents WebSphere from sending extraneous\nwarnings to the SystemOut.log file. For more information,\nsee http://www-1.ibm.com/support/docview.wss?uid=swg21121449\n

    \n\n
  18. \n\n
  19. \n

    Restart WebSphere to pick up changes. (This also restarts the\nsystem.)

    \n\n
  20. \n
\n

Specifying Additional JNDI Properties to the setRepo Command

\n

The setRepo command provides an option that allows\nyou to specify an arbitrary set of properties. The -D $propertiesFilePath option allows you to specify any number of settings, including\nvendor-specific properties not specified by JNDI, by including them in a properties\nfile that you create.

\n

For example, to specify a different JNDI port number, include a line\nlike the following in your properties file:

\n
java.naming.provider.url=iiop://localhost:2909
\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E36784_01/html/E36836/agkaq.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n \nFibre Channel Software Features - \nManaging\nSAN Devices and Multipathing in Oracle® Solaris 11.2\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t
\n\t\t
Updated: December 2014

Fibre Channel Software Features

\n

Solaris I/O multipathing provides the following key features:

\n
    \n
  • \n

    Dynamic storage discovery – The software automatically recognizes devices and any\nmodifications made to device configurations. This feature makes devices available to the system\nwithout requiring you to reboot or manually change information in configuration files.

    \n
  • \n
  • \n

    Persistent device naming – Devices that are configured within the software maintain\ntheir device naming through reboots or reconfiguration. The only exception to this policy are tape\ndevices found in /dev/rmt that will not change unless they are removed and then\nregenerated at a later date.\n\n\n

    \n
  • \n
  • \n

    Fibre Channel Arbitrated Loop (FCAL) support – OpenBoot PROM\n(OBP) commands that are used on servers can access FCAL attached storage for scanning the FC loop.\n\n\n\n\n

    \n
  • \n
  • \n

    Fabric booting – The Solaris OS supports booting from fabric devices as well as\nnon-fabric Fibre Channel devices. Fabric topologies with Fibre Channel switches provide higher\nspeed, more connections, and port isolation.\n\n

    \n
  • \n
  • \n

    FC-HBA library – What was previously known as the Storage Networking Industry\nAssociation Fibre Channel host bus adapter (SNIA FC-HBA) library is now known as the FC-HBA library.\nThe FC-HBA library application programming interface (API) enables management of FC HBAs and\nprovides a standards-based interface for other applications (such as Oracle's\nStorEdge Enterprise Storage Manager) that can be used to gather information\nabout FC HBAs.\n\n\n

    \n

    For more information about common FC-HBA APIs, see \nlibhbaapi\n(3LIB)\n. For additional information about FC specifications, go to:\n\n\nhttp://www.t11.org.

    \n
  • \n
  • \n

    Fibre Channel virtualization – N Port ID Virtualization (NPIV) is an extension to the\nFibre Channel standard, which enables one Fibre Channel port to simulate many ports on the SAN. This\nis useful for virtualization environments such as Oracle VM Server for SPARC or Oracle VM Server 3.0\nfor x86 based systems.

    \n
  • \n
  • \n

    Fibre Channel over Ethernet (FCoE) – A new T11 standard to transport encapsulated Fibre\nChannel frames over Enhanced Ethernet is now available. Solaris FCoE is a software implementation\nthat is designed to work with normal Ethernet controllers.

    \n
  • \n
\n\n\n\t\t\t\t\n\t\t\t
\n\n\t\t\t
\n\t\t\t\t
Copyright © 2009, 2014, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t\t
\t\n\t\t
\n\t
\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19879-01/820-6740/aeoeu/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00310-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Purging a Physical Destination (Sun Java System Message Queue 4.3 Administration Guide)

Sun Java System Message Queue 4.3 Administration Guide

Purging a Physical Destination

\n

\nPurging a physical destination deletes all messages\nit is currently holding. You might want to do this when a destination’s\naccumulated messages are taking up too much of the system’s resources,\nsuch as when a queue is receiving messages but has no registered consumers\nto which to deliver them, or when a topic’s durable subscribers remain\ninactive for long periods of time.

\n

To purge a physical destination, use the imqcmd purge dst subcommand:

\n
\n   imqcmd purge dst  -t destType  -n destName\n

For example, the following command purges all accumulated messages from\nthe topic destination hotTopic:

\n
\n   imqcmd purge dst  -t t  -n hotTopic  -u admin\n

Note –

\nIn a broker cluster, since\neach broker in the cluster has its own instance of each physical destination,\nyou must purge each such instance individually.

\n
\n
Tip –

When restarting a broker that has been shut down, you can use the\nBroker utility’s -reset messages option to clear out\nits stale messages: for example,

\n
\n   imqbrokerd -reset messages  -u admin\n

This saves you the trouble\nof purging physical destinations after restarting the broker.

\n
\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/Scalar-Subquery-Expressions.html", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n Scalar Subquery Expressions\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n

Scalar Subquery Expressions

\n
\n
\n
\n
\n

A scalar subquery expression is a subquery that returns exactly one column value from one row. The value of the scalar subquery expression is the value of the select list item of the subquery. If the subquery returns 0 rows, then the value of the scalar subquery expression is NULL. If the subquery returns more than one row, then Oracle returns an error.\n

\n

You can use a scalar subquery expression in most syntax that calls for an expression (expr). In all cases, a scalar subquery must be enclosed in its own parentheses, even if its syntactic location already positions it within parentheses (for example, when the scalar subquery is used as the argument to a built-in function).\n

\n
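For illustration, a minimal sketch using HR-style sample tables (employees and departments are assumed names, not part of this reference): the parenthesized subquery yields one value per outer row, and NULL for departments with no matching employees. Note the required extra parentheses even though the expression already sits in the select list.

```sql
-- Hypothetical schema: departments(department_id, department_name),
-- employees(employee_id, department_id, salary).
SELECT d.department_name,
       (SELECT MAX(e.salary)
          FROM employees e
         WHERE e.department_id = d.department_id) AS top_salary
  FROM departments d;
```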

Scalar subqueries are not valid expressions in the following places:

\n
  • As default values for columns
  • As hash expressions for clusters
  • In the RETURNING clause of DML statements
  • As the basis of a function-based index
  • In CHECK constraints
  • In GROUP BY clauses
  • In statements that are unrelated to queries, such as CREATE PROFILE
\n
\n \n
\n
\n
\n \n" + }, + { + "url": "https://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/pagehelp/JMSjmssubscribersjmsdurablesubscriberdetailtitle.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\r\nAdministration Console Online Help\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n

Administration Console Online Help

\r\n\r\n\r\n\r\n\r\n
\"Previous\" \"Next\" \"\" \"Open
\r\n\"Content\r\n
\r\n

JMS Topic: Monitoring: Durable Subscribers: View Durable\r\n Subscriber Details

\r\n

\r\nConfiguration Options     Related Tasks     Related Topics\r\n

\r\n

Use this page to display the configuration details of a durable\r\n subscriber running on a JMS topic.

\r\n

\r\nConfiguration Options\r\n

\r\n
\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
Name Description
Subscription Name\r\n

The subscription name for this durable subscriber. This name \r\nmust be unique for each client ID.

\r\n

Valid durable subscription names cannot include the following characters: comma \",\", equals \"=\", colon \":\", asterisk \"*\", percent \"%\", or question mark \"?\".

\r\n

MBean Attribute: \r\n\t\t\t\t\t
\r\nJMSDurableSubscriberRuntimeMBean.SubscriptionName\r\n

\r\n
Client ID\r\n

A unique client identifier for this durable subscriber.

\r\n

\r\nNote: The client ID is not necessarily equivalent to the \r\nWebLogic Server username; that is, a name used to authenticate a \r\nuser in the WebLogic security realm. You can set the client ID to \r\nthe WebLogic Server username if it is appropriate for your JMS \r\napplication.

\r\n

MBean Attribute: \r\n\t\t\t\t\t
\r\nJMSDurableSubscriberRuntimeMBean.ClientID\r\n

\r\n
Client ID Policy\r\n

The policy for the client identifier for this durable subscriber.

\r\n

\r\nNote: The client ID policy is either \"Restricted\" or \"Unrestricted\".

\r\n

\r\n
Message Selector\r\n

The message selector defined for this durable subscriber.

\r\n

MBean Attribute: \r\n\t\t\t\t\t
\r\nJMSDurableSubscriberRuntimeMBean.Selector\r\n

\r\n
No Local Messages\r\n

Specifies whether this durable subscriber receives local \r\nmessages that it has published.

\r\n

To prevent this, set the noLocal parameter to \r\ntrue.

\r\n

MBean Attribute: \r\n\t\t\t\t\t
\r\nJMSDurableSubscriberRuntimeMBean.NoLocal\r\n

\r\n
\r\n
\r\n

\r\nRelated Tasks\r\n

\r\n\r\n

\r\nRelated Topics\r\n

\r\n\r\n
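The subscription-name restriction described above can be checked programmatically. A minimal Python sketch (hypothetical helper, not part of the Administration Console or any WebLogic API):

```python
# Characters WebLogic disallows in durable subscription names, per the
# restriction above: comma, equals, colon, asterisk, percent, question mark.
FORBIDDEN_CHARS = set(',=:*%?')

def is_valid_subscription_name(name: str) -> bool:
    '''Return True if the name is non-empty and contains no forbidden characters.'''
    return bool(name) and not (set(name) & FORBIDDEN_CHARS)
```

For example, a name like 'mySubscription' passes, while 'reports:daily' is rejected because of the colon.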
\r\n
\r\n\r\n\r\n\r\n\r\n
\"Back\r\n\r\n
\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/en/cloud/paas/integration-cloud/mapper-user/mappings.html", + "domain": "docs.oracle.com", + "file_source": "part-00089-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n About Mappings\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n Previous\n Next\n JavaScript must be enabled to correctly display this content\n \n
\n
\n
\n
    \n
  1. Using the Oracle Mapper with Oracle Integration Generation 2
  2. \n
  3. Get Started with the Mapper
  4. \n
  5. About Mappings
  6. \n
\n \n \n

About Mappings

\n
\n
\n
\n

One of the key tasks to any integration is defining how data is transferred, or mapped, between two applications.\n

\n

In most cases, the messages you want to transfer between the applications in an\n integration have different data structures. A visual mapper enables you to map element\n nodes between applications by dragging source element nodes onto target element nodes.\n When you open the mapper for a request or response message in an integration, the data\n structures are automatically populated with the information pulled from the source and\n target connections. You can expand and load data structure levels on demand to display\n additional levels. There is no limit on the levels of display.
\"Description
Description of the illustration about_mapping.png

\n

The maps you create are called transformation maps, and use the eXtensible Stylesheet Language (XSL) to describe the data mappings, which let you perform complex data manipulation and transformation. A standard set of XSLT constructs are provided (for example, xsl:if, xsl:for-each, and others). A specialized function is also provided for you to reference lookups directly from the mapper.\n
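The idea of a transformation map can be illustrated outside XSLT. A hypothetical Python sketch (the element paths are invented for illustration; the real mapper generates XSLT, not Python):

```python
# A transformation map relates source element paths to target element paths.
# These paths are invented examples, not real adapter schemas.
TRANSFORMATION_MAP = {
    'Customer/Name': 'Account/DisplayName',
    'Customer/Email': 'Account/Contact/Email',
}

def transform(source: dict) -> dict:
    # Copy each mapped source value to its corresponding target element path;
    # unmapped source elements are simply dropped.
    return {TRANSFORMATION_MAP[path]: value
            for path, value in source.items()
            if path in TRANSFORMATION_MAP}
```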

\n
\n

Note:

The mapper supports XSL version 2.0. Version 1.0 is not supported.\n
\n

The mapper supports both qualified and unqualified schemas (that is, schemas without elementFormDefault=\"qualified\"). Elements and attributes with and without namespace prefixes are also supported.\n

\n

Substitution groups in schemas are supported. You can see all the substitutable elements in a base element in the mapper, and select the one to use.

\n

Extended data types are also supported.

\n

Elements and attributes for which mapping is required are identified by a blue asterisk (*) to the left of their names. To display only required fields, click the Filter icon in the mapper toolbar, select Required Fields, and click Apply.\n

\n

You can also right-click elements and attributes and select Node Info to show specific schema details such as the data type, if mapping is required, and so on.
\"Description
Description of the illustration element_details.png

\n

Additional custom annotations can also be displayed. These annotations are currently only available with the Oracle Sales Cloud Adapter. The adapter obtains this information from the applications and annotates it in the integration WSDL. This information is then read and made visible as annotations in the mapper (for example, title and description). This information can help you better understand what data is being mapped.\n

\n

The mapper toolbar provides the following functionality.

\n
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
ElementDescription
\"Designer\n

Click to return to the mapping canvas when you are inside the Code, Test, or Recommend page.

\n
\"Code\n

You can view the XSLT code being created as you design your mappings.

\n
\"Test\n

Once you complete designing your mappings, you can test them by entering sample content of the message to process in the mapping tester.

\n
\"Recommend\n

If you enable the recommendations engine, you can accept the target element recommendations of the engine when creating mappings. This eliminates the need to analyze and perform each individual source-to-target mapping.

\n
\"DeveloperClick to disable user-friendly, source and target\n element names in the mapper. By default, user-friendly element names are\n shown.
\"XSLT\n

Click to show the XSLT functions.

\n
\"View\n

You can select the following options:

\n
    \n
  • \n

    Select to show the namespace prefixes on source and target element nodes.

    \n
  • \n
  • \n

    Select to show the types (prefixes and data types) on source and target element nodes.

    \n
  • \n
\n
\"Filter\n

You can filter the display of element nodes, error messages, and warnings in the source or target data structures.

\n
\"Undo\n

You can select to undo the previous action performed in the mapper. For example, if you perform a mapping, then press this button, the mapping is removed. The link is disabled when all actions have been undone.

\n
\"Redo\n

You can redo the action that was undone.

\n
\"Maximize\n

You can maximize the size of the mapper. This is useful when working with large schemas.

\n
\"Function\n

You can add functions, operators, and XSLT expressions to your mappings.

\n
\n
\n \n
\n
\n
\n \n" + }, + { + "url": "https://docs.oracle.com/cd/B40099_02/books/Hospitality/Hospitality_RevenueMgt2.html", + "domain": "docs.oracle.com", + "file_source": "part-00662-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\t\r\n\r\n\tBookshelf v8.0: Hospitality Revenue Management\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
  Bookshelf Home | Contents | Index | PDF   
\r\n\r\n\r\n\r\n
\r\n

Siebel Hospitality Guide > Revenue Management >

\r\n

Hospitality Revenue Management

\r\n
\r\n

The Revenue Management Evaluation process occurs after an opportunity is captured and property-specific quotes are generated. The sales representative continues to work with the customer to update the quote. During this process, the sales representative may need to modify quote information and submit the quote to a revenue manager who reviews and approves the changes. Typical conditions that may trigger the revenue management evaluation process include:

    \r\n
  • The profit variance of a quote is negative.\r\n
  • The sales representative wants to reduce the sleeping room rates below the negotiation floor.\r\n
  • The sales representative wants to overbook some functions. \r\n
\r\n
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Hospitality GuideCopyright © 2009, Oracle and/or its affiliates. All rights reserved. Legal Notices.
 
\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E19253-01/816-5138/advanced-18/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "

Solaris 64-bit Developer's Guide

What Does EOVERFLOW Mean?

\n

The EOVERFLOW return value is returned from a system call\nwhenever one or more fields of the data structure used to pass information out of\nthe kernel are too small to hold the value.

\n

A number of 32-bit system calls now return EOVERFLOW when\nfaced with large objects on the 64-bit kernel. While this was already true when dealing\nwith large files, the fact that daddr_t, dev_t, time_t, and its derivative types struct timeval and timespec_t now contain 64-bit quantities might mean more EOVERFLOW return\nvalues are observed by 32-bit applications.
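The underlying condition can be sketched in Python (illustrative only; the actual check happens in the kernel, and the field is assumed here to be a signed 32-bit quantity):

```python
# EOVERFLOW arises when a 64-bit kernel value cannot be represented in the
# 32-bit field the caller supplied (signed 32-bit range assumed here).
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def fits_in_32_bits(value: int) -> bool:
    return INT32_MIN <= value <= INT32_MAX

# A time_t or daddr_t value outside this range makes the 32-bit system call
# fail with EOVERFLOW rather than silently truncate the value.
```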

\n
\n
    \n \n
\n
\n
" + }, + { + "url": "http://docs.oracle.com/cd/E16655_01/server.121/e17615/refrn30394.htm", + "domain": "docs.oracle.com", + "file_source": "part-00310-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nPage Moved\n\n\n

Page Moved

\n

This page has moved to
\n    http://docs.oracle.com/database/121/REFRN/refrn30394.htm

\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E36784_01/html/E37628/glmln.html", + "domain": "docs.oracle.com", + "file_source": "part-00352-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n \nHow to Create an Archive of the System Image on a Network Device - \nCreating and Using Oracle® Solaris Zones\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t
\n\t\t
oracle home
\n\t\t
\nCreating and Using Oracle® Solaris Zones
\n\n\t\t
\n\t\t\t

Exit Print View

\n\t\t
\n\n\t\t\n\t\t
\n\t\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t
 
\n\t\t\t\t\n\t\t\t\t\n\t\t\t
\n\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t
Search Scope:
\n\t\t\t\t\t
\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t
\t\t\t\t\n\t\t\t\t
\n\t\t\t
\n\t\t\t
\n\t\t
\n\t
\n\n\t\n\t\n\t
 
\n\t
 
\n\n\t
\n\t\t\n\t
\n\t\n\t
\n\n\t\t
\n\t\t\t
\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t
\n\n\t\t\t\n\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t
\n\t\t\t
\n\t\t\t \n\t\n\t\t\t
\n\t\t\t\t

How to Create an Archive of the System Image on a Network Device

Archive the file systems in the global zone. Verify that no non-global zones are installed on\nthe source system. The examples in this section use the zfs\nsend command for creating archives. The examples assume the root pool is named\nrpool.

\n
    \n
  1. Become an administrator.

    For more information, see Using Your Assigned Administrative Rights in Securing Users and Processes in Oracle Solaris 11.2\n.

    \n
  2. \n
  3. \nCreate a snapshot of the entire root pool, named rpool@p2v in this\nprocedure.\n
    source# zfs snapshot -r rpool@p2v
    \n
  4. \n
  5. \nDestroy the snapshots associated with swap and dump devices, because these snapshots are not\nneeded on the target system.\n
    source# zfs destroy rpool/swap@p2v
    \n
    source# zfs destroy rpool/dump@p2v
    \n
  6. \n
  7. \nArchive the system.\n
      \n
    • \nGenerate a ZFS replication stream archive that is compressed with gzip, and\nstored on a remote NFS server.\n
      source# zfs send -R rpool@p2v | gzip > /net/somehost/p2v/s11-zfs.gz
      \n
    • \n
    • \nYou can avoid saving intermediate snapshots and thus reduce the size of the archive by using\nthe following alternative command.\n
      source# zfs send -rc rpool@p2v
      \n
    • \n
    \n
  8. \n

See also

For more information, see the zfs(1M) and archiveadm(1M) man pages.

\n\n\n\t\t\t\t\n\t\t\t
\n\n\t\t\t
\n\t\t\t\t
Copyright © 2004, 2015, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t
Previous
\n\t\t\t\t\t
Next
\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t
\t\n\t\t
\n\t
\n\n" + }, + { + "url": "http://docs.oracle.com/cd/E28280_01/doc.1111/e10624/img_text/tree_server.htm", + "domain": "docs.oracle.com", + "file_source": "part-00089-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\nDescription of the illustration tree_server.gif\n\n\n

Screenshot of the integration hierarchy in Windows Explorer for an Oracle WebCenter Content Server 11gR1 instance running the latest Desktop 11gR1 components and folders feature. The server connection is called 'My Server' and it has the following main nodes: My Content Server, New Check-In, Browse Content, and Search Results. The My Content Server node has these subnodes: My Checked-Out Content, My Workflow Assignments, My Saved Queries, and My Folders. The New Check-In node has this subnode: Standard. The Browse Content node has these subfolders: Accounting, Marketing, Sales, and Users.

\n

A more elaborate description follows this image.

\n\n\n" + }, + { + "url": "http://docs.oracle.com/cd/B31104_02/books/SSSE/SSSE_Configuring3.html", + "domain": "docs.oracle.com", + "file_source": "part-00662-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\t\r\n\r\n\tBookshelf v7.8: About Tuning SSSE for Optimal Performance\r\n\t\r\n\t\r\n\t\r\n\t \r\n\r\n\r\n\r\n\r\n\r\n
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
  Bookshelf Home | Contents | Index | PDF   
\r\n\r\n\r\n\r\n
\r\n

Siebel Server Sync Guide > Configuring Siebel Server Sync >

\r\n

About Tuning SSSE for Optimal Performance

\r\n
\r\n

In general, SSSE works well when you configure your Siebel implementation using the scalability and sizing recommendations that are discussed in Performance Tuning Guide. For example, you can set parameters such as MaxMTServers and MinMTServers for the PIMSI Engine component to the values that suit your implementation as a whole, without especially considering SSSE performance issues. However, this topic discusses some configuration recommendations that are particularly important for ensuring optimum performance for SSSE.

In an SSSE implementation, two key parameters are MaxUsers and MaxTasks (also called Maximum Tasks). MaxUsers is an Exchange Connector parameter; MaxTasks is a PIMSI Engine server component parameter.

MaxUsers

\r\n

The MaxUsers parameter defines the maximum number of users that can be assigned to each instance of the Exchange Connector. You may want to modify the value of this setting depending on whether your Siebel implementation will be adding many new SSSE users in the near future:

    \r\n
  • If you need to add many new SSSE users, it is recommended that you set the MaxUsers parameter to a very low value, perhaps even the minimum value of 1. This is because all users of an Exchange Connector instance compete for the instance's single communication channel to the Exchange Server. Ongoing synchronizations rarely take long enough to cause problematic delays for other users of the same Connector instance, but initial extracts can monopolize a Connector's communication channel for significant time periods. To prevent any user's synchronization from ever being delayed while another user's initial extract completes, you can set MaxUsers to a value of 1, meaning that there is a dedicated Exchange Connector instance for each user being synchronized. \r\n

    NOTE:  This configuration uses a comparatively large amount of memory on the computers that run Exchange Connector instances—each instance uses about 25 MB of memory.

    \r\n
  • If you add few new SSSE users, you can set the MaxUsers parameter to a higher value. This uses less memory for Connector instances, at the risk of occasional synchronization delays when an initial extract is in progress.\r\n
\r\n

MaxTasks

\r\n

The MaxTasks setting controls the total number of users who are synchronized at one time. The value you choose for MaxTasks can limit the total number of Connector instances that run at any one time, preventing the Connector computers from becoming overloaded.

Selecting and Testing MaxUsers and MaxTasks Values

\r\n

The exact settings to use for your implementation depend on the hardware you use. Each Connector instance uses about 25 MB of memory. As an example, if you have between 1 and 2 GB of memory available for Connector instances to use, you could set MaxUsers to 1 and set MaxTasks to 50.

Perform some initial extract test runs to assess the amount of CPU and memory that are used with your initial parameter settings. Check the Connector computers and the Siebel Servers where your PIMSI Engine components are running. If you have ample CPU and memory still available, you can try increasing the value of MaxTasks.

At a later time, if initial extracts become comparatively rare, you may find that you obtain better performance by setting MaxUsers to a higher value.
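The sizing arithmetic above can be sketched as follows (illustrative Python; the 25 MB per instance figure comes from the text, everything else is a worked example):

```python
# Each Exchange Connector instance uses roughly 25 MB of memory.
MB_PER_CONNECTOR_INSTANCE = 25

def connector_memory_mb(max_tasks: int, max_users: int = 1) -> int:
    # MaxTasks bounds how many users synchronize at once; with MaxUsers
    # users per connector instance, the number of simultaneous instances
    # is the ceiling of MaxTasks / MaxUsers.
    instances = -(-max_tasks // max_users)
    return instances * MB_PER_CONNECTOR_INSTANCE

# The example in the text: MaxUsers=1 and MaxTasks=50 needs about 1250 MB,
# which fits a memory budget of 1 to 2 GB.
```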

Choosing an Appropriate Dispatcher Repeat Interval

\r\n

When you are satisfied with your MaxUsers and MaxTasks setting values, you might want to measure how quickly SSSE can process your entire population of ongoing SSSE users. You can use the information to adjust the values of the Repeating job settings for the PIMSI Dispatcher job, so that the Dispatcher runs at an interval of your choosing. Dispatcher jobs must not overlap. To ensure that this does not occur, set the Repeat From field of the repeating job to End. This ensures that each PIMSI Dispatcher cycle does not start until a specified interval after the previous PIMSI Dispatcher cycle has completed.

For more information about setting up PIMSI Dispatcher jobs, see Configuring and Starting PIMSI Dispatcher.

\r\n
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Server Sync Guide
 
\r\n\r\n\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E19879-01/820-6740/6ni21mjsh/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00310-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "JMX Configuration (Sun Java System Message Queue 4.3 Administration Guide)

Sun Java System Message Queue 4.3 Administration Guide

JMX Configuration

\n

Broker configuration properties that support JMX are listed in Table 16–12. These properties can be\nset in the broker's instance configuration file (config.properties)\nor at broker startup with the -D option of the Broker utility\n(imqbrokerd). None of these properties can be set dynamically\nwith the Command utility (imqcmd). In addition, as described\nbelow, some of these properties can be set with corresponding imqbrokerd options.

\n

This section discusses several JMX configuration topics:

\n\n

RMI Registry Configuration

\n

You can configure the broker to do any of the following:

\n
    \n
  • \n

    \nStart an RMI registry (imq.jmx.rmiregistry.start=true)

    \n\n

    If the broker is configured\nto start an RMI registry, then the broker will do the following:

    \n\n\n
      \n
    • \n

      Start an RMI registry in the broker process. The RMI registry\nwill remain operational during the lifetime of the broker.

      \n\n\n
    • \n
    • \n

Store the JMX connector stub for its connectors in this RMI\nregistry.

      \n\n\n
    • \n
    • \n

      Advertise a static JMX Service URL that points to the relevant\nJMX connector stub in this registry.

      \n\n\n
    • \n
    • \n

      Shut down the RMI registry as part of the broker shutdown\nprocess.

      \n\n\n
    • \n
    \n\n\n
  • \n
  • \n

    \nUse an existing RMI registry (imq.jmx.rmiregistry.use=true)

    \n\n

    If the broker is configured\nto use an existing RMI registry on the local host, then the broker will do\nthe following:

    \n\n\n
      \n
    • \n

Expect an RMI registry to be running on the same host (at\na port which can also be specified).

      \n\n\n
    • \n
    • \n

Store the JMX connector stub for its connectors in this externally\nmanaged RMI registry.

      \n\n\n
    • \n
    • \n

      Advertise a static JMX Service URL that points to the relevant\nJMX connector stub in this registry. This means the registry must remain operational\nduring the lifetime of the broker.

      \n\n\n
    • \n
    • \n

      Not shut down the RMI registry as part of the broker shutdown\nprocess.

      \n\n\n
    • \n
    \n\n\n
  • \n
  • \n

    \nNot use a registry at all (both imq.jmx.rmiregistry.start and imq.jmx.rmiregistry.use are\nset to false).

    \n\n

    If the broker is configured\nto not use a registry, then the broker will advertise a dynamic JMX Service\nURL that contains the JMX connector stub as a serialized object.

    \n\n\n
  • \n
\n

The choice of using or not using an RMI registry depends upon whether\nyou want a static or dynamic JMX Service URL, respectively. The advantages\nand disadvantages of using an RMI registry are shown in the following table.

\nTable D–1 Advantages and Disadvantages of Using\nan RMI Registry\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\n
\n

Scenario 

\n\n
\n

Broker Configuration 

\n\n
\n

Advantages 

\n\n
\n

Disadvantages 

\n\n
\n

Using a Registry 

\n

(Static JMX Service URL) 

\n\n
\n

Configuration Properties: 

\n

\nimq.jmx.rmiregistry.start\n

\n

\nimq.jmx.rmiregistry.use\n

\n

\nimq.jmx.rmiregistry.port\n

\n\n
\n

The value of the JMX Service URL is constant across broker restarts. 

\n\n
\n

Broker depends on an RMI registry, either one it starts or one that\nis externally available. There is therefore one more port to worry about with\nregard to port conflicts or firewall configurations. 

\n\n
\n

Not Using a Registry 

\n

(Dynamic JMX Service URL) 

\n\n
\n

Default 

\n\n
\n

Broker does not start up an RMI registry. There is therefore one less\nport to worry about with regard to port conflicts or firewall configurations. 

\n\n
\n

The value of the JMX Service URL changes at every broker startup. JMX\napplications need to be provided a new URL every time the broker restarts.\n(This is not an issue with JMX client applications that use the AdminConnectionFactory class.)

\n\n
\n

\n

If a registry is being used, the imq.jmx.rmiregistry.port property\nspecifies the port number for the RMI registry. For convenience, you can also\nspecify these RMI registry related properties by using equivalent Broker utility\n(imqbrokerd) options at broker startup: -startRmiRegistry, -useRmiRegistry, and -rmiRegistryPort,\nrespectively (see Table 15–1).

\n

Static JMX Service URL: Using an RMI Registry

\n

When using an RMI Registry to store a JMX connector stub, the urlpath portion of the JMX service URL (see The JMX Service URL) does not change across broker\nstartups and has the following form:

\n

\n/jndi/rmi://brokerHost[:rmiPort]/brokerHost/portMapperPort/connectorName\n

\n

This path consists of two segments:

\n
    \n
  • \n

    \n/jndi/rmi://brokerHost[:rmiPort] \n

    \n\n

Specifies the RMI registry host\nand port at which the JMX connector stub is obtained by performing a JNDI\nlookup. The default port is 1099.

    \n\n\n
  • \n
  • \n

    \n/brokerHost/portMapperPort/connectorName \n

    \n\n

    Specifies\nthe location within the RMI registry where the JMX connector stub is stored.

    \n\n\n
  • \n
\n
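To make the two path segments concrete, here is an illustrative Python sketch that assembles the static service URL (the broker generates this URL itself; this function is not a product API):

```python
def static_jmx_service_url(broker_host, portmapper_port,
                           connector_name, rmi_port=1099):
    # /jndi/rmi://host:port -> registry where the stub is looked up via JNDI
    # /host/portMapperPort/connectorName -> where the stub is stored
    return ('service:jmx:rmi://%s/jndi/rmi://%s:%d/%s/%d/%s'
            % (broker_host, broker_host, rmi_port,
               broker_host, portmapper_port, connector_name))
```

Calling static_jmx_service_url('yourhost', 7676, 'jmxrmi', 1098) reproduces the URL shown in Example D–1 below.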
Example D–1 JMX Service URL When Using an RMI Registry

The following example shows the JMX service URL for the default jmxrmi connector in the case where an RMI registry is started on port\n1098 on a host called yourhost:

\n

\n# imqbrokerd -startRmiRegistry -rmiRegistryPort 1098\n

\n\n\n\n\n
\n
\n
% imqcmd list jmx -u admin -passfile /myDir/psswds\nListing JMX Connectors on the broker specified by:\n\n-------------------------\nHost         Primary Port\n-------------------------\nlocalhost    7676\n\nName      Active URL\njmxrmi    true   service:jmx:rmi://yourhost/jndi/rmi://yourhost:1098\n                 /yourhost/7676/jmxrmi\nssljmxrmi false\n\nSuccessfully listed JMX Connectors.
\n

\n

The JMX service URL could potentially contain a hostname and port three\nseparate times, indicating the location of the JMX connector, the RMI registry,\nand the broker, respectively.

\n

Dynamic JMX Service URL: Not Using an RMI Registry

\n

When not using an RMI Registry to store a JMX connector stub, the urlpath portion of the JMX service URL is dynamically generated\nat broker startup and has the following form:

\n

\n/stub/rO0ABdmVyLlJlpIDJyGvQkwAAAARod97VdgAEAeA==\n

\n

where the string following /stub/ is the serialized\nJMX connector stub encoded in BASE64 (shortened above for legibility).

\n
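The shape of this dynamic urlpath can be sketched in Python (illustrative only; the real stub is a serialized Java object produced by the broker):

```python
import base64

def dynamic_urlpath(stub_bytes: bytes) -> str:
    # The broker serializes the RMI connector stub and BASE64-encodes it
    # directly into the urlpath, so no registry lookup is needed.
    return '/stub/' + base64.b64encode(stub_bytes).decode('ascii')
```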
Example D–2 JMX Service URL When Not Using an RMI Registry

The following example shows the JMX service URL for the default jmxrmi connector when no RMI registry is started by the broker and no\nexisting registry is used.

\n

\n# imqbrokerd\n

\n
\n\n\n\n\n
\n
\n
% imqcmd list jmx -u admin -passfile /myDir/psswds\nListing JMX Connectors on the broker specified by:\n\n-------------------------\nHost         Primary Port\n-------------------------\nlocalhost    7676\n\nName      Active  URL\njmxrmi    true    service:jmx:rmi://yourhost/stub/rO0ABdmVyLlJlpIDJy==\n\nssljmxrmi false\n\nSuccessfully listed JMX Connectors.
\n

SSL-Based JMX Connections

\n

If you need to have secure, encrypted connections between a JMX client\nand the broker's MBean server, then you need to configure both sides of the\nconnection accordingly.

\n

Broker Side SSL Configuration

\n

As mentioned in JMX Connection Infrastructure,\na broker is configured by default for non-secure communication using the preconfigured jmxrmi connector. Applications wishing to use the Secure Socket\nLayer (SSL) for secure communication must activate the alternate ssljmxrmi connector. The ssljmxrmi connector is preconfigured\nwith imq.jmx.connector.RMIconnectorName.useSSL=true.

\n

\n\"Procedure\"To Activate the SSL-Based JMX connector

    \n\n
  1. \n

    Obtain and install a signed certificate.

    \n\n

    The procedure\nis the same as for the ssljms, ssladmin,\nor cluster connection service, as described under Using Signed Certificates.

    \n\n
  2. \n\n
  3. \n

    Install the root certification authority certificate in the trust\nstore if necessary.

    \n\n
  4. \n\n
  5. \n

    Add the ssljmxrmi connector to the list of JMX connectors to be activated at broker startup:

    \n\n
    \n   imq.jmx.connector.activelist=jmxrmi,ssljmxrmi\n
    \n
  6. \n\n
  7. \n

    Start the broker.

    \n\n

Use the Broker utility (imqbrokerd), either passing it the keystore password in a passfile or typing\nit at the command line when prompted.

    \n\n
  8. \n\n
  9. \n

    Disable validation of certificates if desired.

    \n\n

    By\ndefault, the ssljmxrmi connector (or any other SSL-based\nconnector) is configured to validate all broker SSL certificates presented\nto it. Validation will fail if the signer of the certificate is not in the\nclient's trust store. To avoid this validation (for instance, when using self-signed\ncertificates during software testing), set the broker property imq.jmx.connector.ssljmxrmi.brokerHostTrusted to true.

    \n\n
  10. \n
\n\n

JMX Client Side SSL Configuration

\n

On the client side, if the AdminConnectionFactory class\nis being used to obtain a JMX connector, the AdminConnectionFactory object\nmust be configured with a URL specifying the ssljmxrmi connector:

\n
AdminConnectionFactory  acf = new AdminConnectionFactory();\n   acf.setProperty(AdminConnectionConfiguration.imqAddress,\n                   \"mq://myhost:7676/ssljmxrmi\");
\n

In addition, if the JMX client needs to access the trust store, use\nthe system properties javax.net.ssl.trustStore and javax.net.ssl.trustStorePassword to point\nthe JMX client to the trust store. For example:

\n

\njava -Djavax.net.ssl.trustStore=/tmp/myTrustStore -Djavax.net.ssl.trustStorePassword=myTrustword\nMyApp\n

\n

JMX Connections Through a Firewall

\n

If a JMX client application needs to connect to a broker that is located\nbehind a firewall, the broker must be configured to use fixed JMX ports so\nthe firewall can, in turn, be configured to allow traffic on these ports.\nThe relevant ports are the following:

\n
    \n
  • \n

    The port used by the JMX connector. The property used to configure\nthis port is imq.jmx.connector.connectorName.port, where connectorName can be jmxrmi or ssljmxrmi.

    \n\n\n
  • \n
  • \n

    The port used by the RMI registry, if any. The property used\nto configure this port is imq.jmx.rmiregistry.port. The\nequivalent command line option for imqbrokerd is -rmiRegistryPort.

    \n\n\n
  • \n
\n

Once these ports are specified, configure the firewall to allow traffic\non these ports.
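A small Python sketch of collecting the ports to open, given the two property names above (the property dictionary is hypothetical; real values live in config.properties or on the imqbrokerd command line):

```python
def jmx_firewall_ports(props: dict) -> list:
    # Fixed connector ports:  imq.jmx.connector.<name>.port
    # RMI registry port (if a registry is used): imq.jmx.rmiregistry.port
    ports = set()
    for key, value in props.items():
        if key.startswith('imq.jmx.connector.') and key.endswith('.port'):
            ports.add(int(value))
        elif key == 'imq.jmx.rmiregistry.port':
            ports.add(int(value))
    return sorted(ports)
```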

\n
Example D–3 JMX Configuration for Firewall When Not Using a\nRMI Registry

The following example starts a broker with no RMI registry and a jmxrmi connector on port 5656 on a host called yourhost,\nas follows:

\n

\n# imqbrokerd -Dimq.jmx.connector.jmxrmi.port=5656\n

\n

The resulting JMX service URL is:

\n\n\n\n\n
\n
\n
service:jmx:rmi://yourhost:5656/stub/rO0ABdmVyLlJlpIDJy==
\n

The JMX service URL shows the connector port. In this case, you need\nto configure the firewall to allow traffic only on port 5656.

\n
\n
Example D–4 JMX Configuration for Firewall When Using an RMI\nRegistry

The following example starts a broker with an RMI registry on port 1098\nand a jmxrmi connector on port 5656 on a host called yourhost, as follows:

\n

\n# imqbrokerd -startRmiRegistry -rmiRegistryPort 1098 -Dimq.jmx.connector.jmxrmi.port=5656\n

\n

The resulting JMX service URL is:

\n\n\n\n\n
\n
\n
 service:jmx:rmi://yourhost:5656/jndi/rmi://yourhost:1098\n                 /yourhost/7676/jmxrmi
\n

The JMX service URL shows both these ports. You need to configure the\nfirewall to allow traffic on ports 1098 and 5656.

\n
\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/en/middleware/webcenter/portal/12.2.1.4/admin/managing-rss.html#WCADM2328", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n Managing RSS\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n Previous\n Next\n JavaScript must be enabled to correctly display this content\n \n
\n
\n
\n
    \n
  1. Administering Oracle WebCenter Portal
  2. \n
  3. Administering Tools and Services
  4. \n
  5. Managing RSS
  6. \n
\n \n \n

14 Managing RSS\n

\n
\n
\n
\n
Configure and manage RSS functionality for WebCenter Portal.
\n
\n

Note:

\n

Oracle WebCenter Portal has deprecated the support for Jive features (announcements and discussions). If you have upgraded from a prior release to Release 12c (12.2.1.4.0), Jive features remain available in your upgraded instance but Oracle support is not provided for these features. In the next release, Jive features will not be available even in the upgraded instances.\n

\n
\n
\n

Permissions:

\n

To perform the tasks in this chapter, you must be granted the WebLogic Server Admin role through the Oracle WebLogic Server Administration Console and the Administrator role granted through WebCenter Portal Administration.\n

\n

For more information about roles and permissions, see Understanding Administrative Operations, Roles, and Tools.\n

\n
\n
\n

Topics:

\n
\n \n \n

About RSS

\n
\n

The RSS functionality encompasses a RSS Viewer and RSS service that shows news feeds from various WebCenter Portal tools and services. The RSS Viewer enables users to view external news feeds from different web sites inside WebCenter Portal. RSS also delivers content update information from various portal resources including discussions, lists, and announcements.\n

\n
\n

RSS Prerequisites

\n
\n

RSS functionality does not require any back-end server. You do not need to set up a connection to use it. However, depending on your network configuration, you may need to set up a proxy server to enable WebCenter Portal to display content from external RSS news feeds.\n

\n
\n

Setting Up a Proxy Server

\n
\n
\n

A proxy server is required if you want to enable external RSS news feeds and external links in activity stream task flows in WebCenter Portal. The RSS service and the activity stream service share the same proxy server settings.\n

\n

You can set up a proxy server using Fusion Middleware Control or WLST.

\n

This section includes the following subsections:

\n \n
\n \n

Setting Up a Proxy Server for External RSS News Feeds

\n
\n

To enable external RSS news feeds in WebCenter Portal, you must set up a proxy server. \n

\n

A proxy server is also required if you want to display external links in Activity Stream task flows. Both RSS and the activity stream share the same proxy server settings.

\n

You can configure a proxy server by using either Fusion Middleware Control or WLST.

\n
\n

Setting Up a Proxy Server Using Fusion Middleware Control

\n
\n
\n

To set up a proxy server using Fusion Middleware Control:

\n
\n \n
    \n
  1. Log on to Fusion Middleware Control and navigate to the home page for WebCenter Portal.
  2. \n
  3. From the WebCenter Portal menu, select Settings > Application Configuration.
  4. \n
  5. In the Proxy Server section, enter the host name and the port number of the proxy server. For details, see Table 14-1.
    \n
    \n

    Table 14-1 RSS Proxy Server Details

    \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
    FieldDescription
    \n

    Proxy Host

    \n
    \n

    Enter the host name of the proxy server.

    \n
    \n

    Proxy Port

    \n
    \n

    Enter the port number on which the proxy server is running.

    \n
    \n
    \n \n
    \n
  6. \n
  7. Click Apply to save this connection.
  8. \n
  9. Restart the managed server to which your application is deployed.
  10. \n
\n
\n

Setting Up a Proxy Server Using WLST

\n
\n
\n

Use the WLST command setWebCenterProxyConfig to specify the proxy host and port number used by RSS news feeds and activity stream task flows. For example:\n

setWebCenterProxyConfig(appName='webcenter', proxyHost='www-proxy.example.com', proxyPort='80')\n

For command syntax and examples, see setWebCenterProxyConfig in WebCenter WLST Command Reference.\n

\n

For information about how to run WLST commands, see Running Oracle WebLogic Scripting Tool (WLST) Commands.\n

\n
\n

Note:

\n

To start using new proxy details, you must restart the managed server in which your application is deployed.

\n
\n

Use the getWebCenterProxyConfig command to find out the current proxy host and port used by RSS and activity stream task flows. For example:\n

getWebCenterProxyConfig(appName='webcenter')\n

If you want to delete the current proxy host and port settings, use the unsetWebCenterProxyConfig command. For example: \n

unsetWebCenterProxyConfig(appName='webcenter')\n
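Because WLST commands are Jython calls, the set, get, and unset calls above can be prepared programmatically. The following is a minimal sketch that only validates the inputs and formats the calls as strings for review before pasting them into a WLST session; the three command names come from this page, while the helper itself is illustrative.

```python
# Sketch only: format the WLST proxy commands shown above from validated
# inputs. Only the three *WebCenterProxyConfig command names come from the
# documentation; this helper is an assumption for illustration.
def format_proxy_call(app_name, proxy_host=None, proxy_port=None, action='set'):
    if action == 'set':
        # setWebCenterProxyConfig needs both a host and a numeric port
        if not proxy_host or not str(proxy_port).isdigit():
            raise ValueError('a proxy host and a numeric port are required')
        return ('setWebCenterProxyConfig(appName=%r, proxyHost=%r, proxyPort=%r)'
                % (app_name, proxy_host, str(proxy_port)))
    if action == 'get':
        return 'getWebCenterProxyConfig(appName=%r)' % app_name
    if action == 'unset':
        return 'unsetWebCenterProxyConfig(appName=%r)' % app_name
    raise ValueError('unknown action: %r' % action)

print(format_proxy_call('webcenter', 'www-proxy.example.com', 80))
```

Remember that after changing the proxy settings, the managed server must still be restarted for the new details to take effect.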
\n \n
\n
\n

Testing External RSS News Feed Connections

\n
\n
\n

After setting up the proxy server for the RSS Viewer, you can test the connection to make sure you can access external RSS feeds. To test the RSS connection, you need to add the RSS task flow to a portal page and set the URL to an external RSS feed. If the RSS feed displays correctly, proxy configuration is set up properly. For information about adding the RSS task flow and editing the URL, see Adding RSS News Feeds to a Portal in Building Portals with Oracle\nWebCenter Portal.\n

\n
\n \n
\n
\n
\n
\n \n" + }, + { + "url": "https://docs.oracle.com/cd/E19636-01/819-6769/gctcf/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00310-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Install and Configure Access Manager (Technical Note: Deploying Access Manager to an Application Server Cluster)

Technical Note: Deploying Access Manager to an Application Server Cluster

Install and Configure Access Manager

\n

Perform the following steps on all of the servers:

\n
    \n
  1. \n

    Install Access Manager using the Java ES installer\nwith the Configure Later option. Access Manager requires Sun Java\nSystem Directory Server. Either install Directory Server before you\ninstall Access Manager, or use an existing Directory Server.

    \n\n\n
  2. \n
  3. \n

    Start all instances of Application Server to verify\nthat they are installed properly.

    \n\n\n
  4. \n
  5. \n

    Create an amsamplesilent file on\nthe DAS machine (host-server1) and set the following\nattributes:

    \n\n
    SERVER_NAME=host-server1\nSERVER_HOST=$SERVER_NAME.example.com\nSERVER_PORT=8082\nADMIN_PORT=4849\nDS_HOST=qa-host-server1.example.com\nDS_DIRMGRPASSWD=password\nROOT_SUFFIX=\"dc=example,dc=com\"\nADMINPASSWD=password\nAMLDAPUSERPASSWD=password\nCOOKIE_DOMAIN=.example.com\nAM_ENC_PWD=\"\"\nNEW_OWNER=root\nNEW_GROUP=other\nPAM_SERVICE_NAME=other\nWEB_CONTAINER=AS8\n\nAS81_HOST=host-server1.example.com\nAS81_INSTANCE=amcluster
    \n\n
  6. \n
  7. \n

    Save and deploy the amsamplesilent file.\nFor example:

    \n\n
    amconfig -s amsamplesilent
    \n\n
  8. \n
  9. \n

    Restart Access Manager and the cluster instance.

    \n\n\n
  10. \n
  11. \n

    Log in to Access Manager as amadmin on\nthe DAS machine and add the additional server instances to the Platform\nserver list. For more information, see Add Instances to the Platform Server List and Realm/DNS Aliases.

    \n\n\n
  12. \n
  13. \n

    Copy the amsamplesilent file to the /usr/tmp directory on both server1 and server2.

    \n\n\n
  14. \n
  15. \n

    On the DAS machine, locate the AM_ENC_PWD property\nin amconfig.properties (located in /etc/opt/SUNWam/config):

    \n\n
    am.encryption.pwd=RrO0vsw+sg8D1+3ldZ6imu9yhjhyksS2
    \n\n
  16. \n
  17. \n

    Copy the value of the AM_ENC_PWD from\nthe DAS machine and replace the value in the amsamplesilent file\nfor both server1 and server2:\n

    \n\n
    SERVER_NAME=server1\nAM_ENC_PWD=RrO0vsw+sg8D1+3ldZ6imu9yhjhyksS2\n\nSERVER_NAME=server2\nAM_ENC_PWD=RrO0vsw+sg8D1+3ldZ6imu9yhjhyksS2
    \n\n
  18. \n
  19. \n

    Deploy the configuration file on server1 and server2. The applications will produce error messages when\nattempting to deploy, because the applications are already in the\ndomain. The configuration files need to be created and the command\nline utilities need to be defined. To do so, run the amconfig command\nfor each installed instance. For example:

    \n\n\n
    amconfig -s /usr/tmp/amsamplesilent
    \n\n

    For more information, see the Access Manager Administration\nGuide.

    \n\n
    \nNote – \n

    Certain error messages will be sent, but these are expected\nand can be ignored.

    \n\n\n
      \n
    • \n

      Access Manager 7 errors:

      \n\n\n\n\n\n
      \n
      \n
      Directory Server is already loaded with Access Manager DIT.\nCLI171 Command deploy failed : Application amserver is already deployed on other targets. Ple\nase use create-application-ref command to create reference to the specified target; requested\n operation cannot be completed\nFailed to deploy /amserver\ncp: cannot access /var/opt/SUNWappserver/domains/domain1/config/domain.xml\ncp: cannot access /var/opt/SUNWappserver/domains/domain1/config/server.policy\nCLI167 Could not create the following jvm options. Options exist:\n-Djava.protocol.handler.pkgs=com.iplanet.services.com\n-DLOG_COMPATMODE=Off\n-Ds1is.java.util.logging.config.class=com.sun.identity.log.s1is.LogConfigReader\n-Dcom.iplanet.am.serverMode=true\nCLI137 Command create-jvm-options failed.
      \n
      \n\n
    • \n
    \n\n
    \n\n\n
  20. \n
  21. \n

    Repeat these steps for other instances in the cluster.

    \n\n\n
  22. \n
\n
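The steps above copy the am.encryption.pwd value from amconfig.properties on the DAS machine into the AM_ENC_PWD property of each server's amsamplesilent file. A minimal Python sketch of that propagation, working on lists of lines (the property and file names come from the procedure; the helper itself is illustrative):

```python
# Sketch only: carry am.encryption.pwd from amconfig.properties into an
# amsamplesilent file, as done by hand in the steps above. Property names
# follow the text; the helper is an assumption for illustration.
def propagate_enc_pwd(properties_lines, silent_lines):
    enc_pwd = None
    for line in properties_lines:
        if line.startswith('am.encryption.pwd='):
            enc_pwd = line.split('=', 1)[1].strip()
    if enc_pwd is None:
        raise ValueError('am.encryption.pwd not found in amconfig.properties')
    # Rewrite every AM_ENC_PWD line; all other lines pass through unchanged.
    return [('AM_ENC_PWD=' + enc_pwd) if line.startswith('AM_ENC_PWD=')
            else line for line in silent_lines]
```

The same value must be used on every instance in the cluster, which is why the procedure copies it from the DAS machine rather than generating it per server.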
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/en-us/iaas/Content/Functions/Tasks/functionsconfigureocicli.htm", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\n\n\n\n\n\n\n\nCreating a Profile in the Oracle Cloud Infrastructure CLI Configuration File\n\n\n\n\n\n
\n
\n
\n\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n\n
\n
\n
\n
\n
\n
\n\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\nSkip to main content\n
\n
\n
\n
\n
\n
\n

Creating a Profile in the Oracle Cloud Infrastructure CLI Configuration File

\n

Find out how to create a profile in the Oracle Cloud Infrastructure CLI configuration file for use with OCI Functions.

\n

Before using OCI Functions, you must have an Oracle Cloud Infrastructure CLI configuration file that contains the credentials of the user account that you will be using to create and deploy functions. These user account credentials are referred to as a 'profile'. By default, the Oracle Cloud Infrastructure CLI configuration file is located at ~/.oci/config.

\n

If you are using Cloud Shell as your development environment, an ~/.oci/config file with a suitable profile has already been created for you.

\n

If you are using a local host as your development environment, follow the instructions in the Functions QuickStart on Local Host to create a profile in the ~/.oci/config file by copying and pasting a configuration file snippet. Note the following:

\n
    \n
  • You might already have a configuration file as a result of installing the Oracle Cloud Infrastructure CLI. However, you don't need to have installed the Oracle Cloud Infrastructure CLI in order to use OCI Functions.
  • \n
  • The ~/.oci/config file can contain several profiles. If you already have an ~/.oci/config file containing one or more profiles, you have to add a new profile to the existing file for the Oracle Cloud Infrastructure user who will be using OCI Functions to create and deploy functions.
  • \n
  • Change the name of the new profile in the configuration file snippet that you paste into the ~/.oci/config file, from DEFAULT to a name of your choosing. Note that the ~/.oci/config file cannot contain two profiles with the same name.
  • \n
\n

This topic assumes you have already obtained an API signing key, as described in Setting up an Oracle Cloud Infrastructure API Signing Key for Use with OCI Functions. When you have created a profile in the ~/.oci/config file, go on to Installing Docker for Use with OCI Functions.

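Since the ~/.oci/config file uses an INI-style layout with one [PROFILE] section per profile, adding a profile can be scripted. A hedged sketch using Python's configparser, enforcing the rule above that two profiles cannot share a name (the section layout follows the documented config file format; the helper itself is illustrative and not part of the OCI CLI):

```python
# Sketch only: append a named profile section to an OCI CLI-style config
# file, refusing duplicate profile names. Illustrative, not an OCI tool.
import configparser

def add_profile(path, name, **settings):
    config = configparser.ConfigParser()
    config.read(path)  # a missing file simply yields an empty config
    # DEFAULT is configparser's special defaults section, so treat it as taken
    if name == 'DEFAULT' or config.has_section(name):
        raise ValueError('profile %r already exists in %s' % (name, path))
    config[name] = settings
    with open(path, 'w') as f:
        config.write(f)
```

Values passed as settings should be strings (for example, region='us-phoenix-1'), matching the key=value entries the configuration file snippet contains.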
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n
\n\n\n \n \n\n" + }, + { + "url": "https://docs.oracle.com/javame/dev-tools/jme-sdk-3.0-win/html-helpset/z400013f1293783.html", + "domain": "docs.oracle.com", + "file_source": "part-00352-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\r\nAdding the LWUIT Library - Java Platform Micro Edition Software Development Kit\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n
\r\n\t\r\n\t\t\r\n\t\t\t\r\n\t\t\r\n\r\n\t\t\r\n\t\t\r\n\t\t\t\r\n\r\n\t\t\t\r\n\t\t\r\n\t
\r\n\t\t\t\tExit Print View\r\n\t\t\t

Java Platform Micro Edition Software Development Kit Version 3.0

\r\n\t\t\t
\r\n\t\t\t
\r\n
\r\n\r\n\r\n
\r\n\t\r\n\t\t\r\n\t\t\t\r\n\t\t\t\r\n\t\t\t\r\n\t\t\r\n\t
\r\n\t\t\t\t\r\n\t\t\t\t\t\"Get\r\n\t\t\t\t\r\n\r\n\t\t\t\t\"Print\r\n\r\n\t\t\t\t Lightweight UI Toolkit\r\n\t\t\t\t\"Previous\"\r\n\t\t\t\t\"Next\"\r\n\t\t\t
\r\n
\r\n\r\n\r\n\r\n\r\n\r\n
\r\n\t\r\n\r\n\r\n\r\n

Adding the LWUIT Library

\r\n\r\n\r\n

The LWUIT library can be added to any MIDP project.

\r\n
  1. \r\n\r\n

    Right-click on a project and select Properties.

  2. \r\n
  3. \r\n\r\n

    In the Build category, select Libraries & Resources, and click the Add Library... button.

  4. \r\n
  5. \r\n\r\n

    In the Add Libraries window, select LWUIT and click Add Library.

    \r\n\r\n

    You can see the package under Libraries and Resources.

\r\n\r\n
\r\n\r\n
\r\n\t\r\n\t\t\r\n\t\t\t\r\n\t\t\t\r\n\t\t\r\n\t
\r\n\t\t\t\t\r\n\t\t\t\r\n\t\t\t\t\"Previous\"\r\n\t\t\t\t\"Next\"\r\n\t\t\t
\r\n\t\r\n
\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n\r\n\r\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19082-01/819-3620/chp-aggs-7/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\nPrinting Aggregations (Solaris Dynamic Tracing Guide)

Solaris Dynamic Tracing Guide

Printing Aggregations

\n

\nBy default, multiple aggregations are displayed in the order they are introduced in the D program. You can override this behavior using the printa() function to print the aggregations. The printa() function also enables you to precisely format the aggregation data using a format string, as described in Chapter 12, Output Formatting.

\n

If an aggregation is not formatted with a printa() statement in your D program, the dtrace command will snapshot the aggregation data and print the results once after tracing has completed using the default aggregation format. If a given aggregation is formatted using a printa() statement, the default behavior is disabled. You can achieve equivalent results by adding the statement printa(@aggregation-name) to a dtrace:::END probe clause in your program. The default output format for the avg(), count(), min(), max(), and sum() aggregating functions displays an integer decimal value corresponding to the aggregated value for each tuple. The default output format for the lquantize() and quantize() aggregating functions displays an ASCII table of the results. Aggregation tuples are printed as if trace() had been applied to each tuple element.

\n
\n
    \n \n
\n
\n
\n" + }, + { + "url": "https://docs.oracle.com/cd/E19261-01/820-2762/repagreee/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Migration of Replication Agreements (Sun Java System Directory Server Enterprise Edition 6.3 Migration Guide)

Sun Java System Directory Server Enterprise Edition 6.3 Migration Guide

Migration of Replication Agreements

\n

If possible, you should migrate replicated servers to the same host\nname and port number. If you must change the host name\nor port number of a replicated server, all replication agreements that point\nto that server must be updated manually to point to the new server. For example,\nif you migrate a consumer server from red.example.com:1389 to blue.example.com:1389, the replication agreements on all masters\nthat point to red.example.com:1389 must be updated manually\nto point to blue.example.com:1389.

\n

Replication agreements from the migrated master\nto consumers in the topology are managed by the dsmig migration\ntool. If your topology does not support automated migration, these replication\nagreements must also be updated manually.

\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/database/121/RACAD/GUID-4DE433AE-AC47-4591-92A7-866DBE77A930.htm", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\n\nEnabling and Disabling Services with SRVCTL\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGo to main content
\n
\n

Enabling and Disabling Services with SRVCTL

\n
\n

If you disable a service, then Oracle Clusterware does not consider the service for automatic startup, failover, or restart. You might disable a service when performing application maintenance, to ensure the service is not accidentally restarted by Oracle Clusterware until your maintenance operations are complete. To make the service available for normal operation again, you enable the service.

\n

Use the following SRVCTL syntax from the command line to enable and disable services:

\n
$ srvctl enable service -db db_unique_name -service service_name_list\n    [-instance inst_name]
\n
$ srvctl disable service -db db_unique_name -service service_name_list\n    [-instance inst_name]
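The syntax above can also be assembled programmatically when scripting maintenance windows. A minimal sketch that builds the command line (the enable/disable actions and the -db, -service, and -instance options come from the syntax shown; the builder itself is illustrative):

```python
# Sketch only: assemble the SRVCTL service enable/disable command line
# shown above. The option names come from the documented syntax; the
# builder is an assumption for illustration.
def srvctl_service_cmd(action, db_unique_name, services, instance=None):
    if action not in ('enable', 'disable'):
        raise ValueError('action must be enable or disable')
    parts = ['srvctl', action, 'service',
             '-db', db_unique_name,
             '-service', ','.join(services)]  # service_name_list is comma-separated
    if instance:
        parts += ['-instance', instance]
    return ' '.join(parts)

print(srvctl_service_cmd('disable', 'orcl', ['sales', 'hr']))
```

Passing an instance name restricts the operation to that instance, mirroring the optional -instance clause in the syntax.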
\n
\n\n
\n\n
\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19253-01/820-5238/ggfsf/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "

Solaris 10 10/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Creating a Boot Environment From a Source Other Than\nthe Currently Running System

\n

\nIf you have an existing\nZFS root pool or UFS boot environment that is not currently used as the active\nboot environment, you can use the following example to create a new ZFS boot\nenvironment from this boot environment. After the new ZFS boot environment\nis created, this new boot environment can be upgraded and activated at your\nconvenience.

\n

If you are creating a boot environment from a source other than the\ncurrently running system, you must use the lucreate command\nwith the -s option. The -s option works the\nsame as for a UFS file system. The -s option provides the\npath to the alternate root (/) file system. This alternate\nroot (/) file system is the source for the creation of\nthe new ZFS root pool. The alternate root can be either a UFS (/)\nroot file system or a ZFS root pool. The copy process might take time, depending\non your system.

\n

The following example shows how the -s option is used\nwhen creating a boot environment on another ZFS root pool.

\n
Example 13–4 How to Create a Boot Environment From a Source\nOther Than the Currently Running System

The following command creates a new ZFS root pool from an existing ZFS\nroot pool. The -n option assigns the name to the boot environment\nto be created, new-zfsBE. The -s option\nspecifies the boot environment, rpool3, to be used as the\nsource of the copy instead of the currently running boot environment. The\n -p option specifies to place the new boot environment in rpool2.

\n\n\n\n\n
\n
\n
# lucreate -n new-zfsBE -s rpool3 -p rpool2\n# lustatus\nboot environment   Is        Active  Active     Can\t    Copy \nName               Complete  Now\t OnReboot   Delete\t Status \n------------------------------------------------------------------------ \nzfsBE               yes      yes     yes        no         - \nzfsBE2              yes      no      no         yes        -\nzfsBE3              yes      no      no         yes        -\nnew-zfsBE           yes      no      no         yes        -\n\n# zfs list\nNAME                            USED    AVAIL   REFER   MOUNTPOINT \nrpool2                         9.29G    57.6G     20K   /rpool2 \nrpool2/ROOT/                   5.38G    57.6G     18K   /rpool2/ROOT \nrpool2/ROOT/new-zfsBE          5.38G    57.6G    551M   /tmp/.new.luupdall.109859\nrpool2/dump                    3.99G        -   3.99G   - \nrpool2/swap                    3.99G        -   3.99G   - \nrpool3                         9.29G    57.6G     20K   /rpool2 \nrpool3/ROOT/                   5.38G    57.6G     18K   /rpool2/ROOT \nrpool3/ROOT/zfsBE3             5.38G    57.6G   551M    /tmp/.new.luupdall.109859\nrpool3/dump                    3.99G        -   3.99G   - \nrpool3/swap                    3.99G        -   3.99G   - \nrpool                          9.29G    57.6G     20K   /.new.lulib.rs.109262\nrpool/ROOT                     5.46G    57.6G     18K   legacy\nrpool/ROOT/zfsBE               5.46G    57.6G   551M  \nrpool/dump                     3.99G        -   3.99G   - \nrpool/swap                     3.99G        -   3.99G   -
\n

You can now upgrade and activate the new boot environment.

\n
\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E36784_01/html/E36842/overview-33.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n \nLabel Availability in Trusted Extensions Sessions - \nTrusted Extensions Label Administration\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t
\n\t\t
oracle home
\n\t\t
\nTrusted Extensions Label Administration
\n\n\t\t
\n\t\t\t

Exit Print View

\n\t\t
\n\n\t\t\n\t\t
\n\t\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t
 
\n\t\t\t\t\n\t\t\t\t\n\t\t\t
\n\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t
Search Scope:
\n\t\t\t\t\t
\n\t\t\t\t\t\t\n\t\t\t\t\t\t\n\t\t\t\t\t
\t\t\t\t\n\t\t\t\t
\n\t\t\t
\n\t\t\t
\n\t\t
\n\t
\n\n\t\n\t\n\t
 
\n\t
 
\n\n\t
\n\t\t\n\t
\n\t\n\t
\n\n\t\t
\n\t\t\t
\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t
\n\n\t\t\t\n\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t\n\t\t\t\t
\n\t\t\t
\n\t\t\t \n\t\n\t\t\t
\n\t\t\t\t

Label Availability in Trusted Extensions Sessions

\n\n

The following figure summarizes the progressive eliminations of available\nlabels. The eliminated labels are shown with a line through them in the range\nwhere they are filtered out. The filtered out labels are not shown in subsequent\nranges.

Figure 1-8  Cumulative Effect of Constraints on a\nSession Range

\"image:Graphic\n\n\n

The following set of examples show session label limitations and availability\nbased on users' session choices. These examples continue the example from Figure 1–8.

Example 1-2  Effect of Using Default Label in Multilevel\nSession

At initial login, the user is assigned the lowest label in the user's\naccount. If the lowest label is CONFIDENTIAL, the\nuser's first workspace is labeled CONFIDENTIAL. The user\ncan then open workspaces at CONFIDENTIAL A B and SECRET\nA B, the session clearance.

\nExample 1-3  Effect on Multilevel Session of User\nChoosing Higher Label at Initial Login

If the user chooses a higher label at initial login, such as CONFIDENTIAL\nA B, the user's first workspace is labeled CONFIDENTIAL\nA B. The user can open a higher-labeled workspace SECRET\nA B, but cannot open a CONFIDENTIAL workspace\nduring that session.

\nExample 1-4  Label Availability in Single-Level Session

At initial login, the user must choose a label. The available labels\nare within the user's label range up to the session clearance. If the clearance\nis SECRET A B, the labels that are available for the user\nto choose are CONFIDENTIAL, CONFIDENTIAL A B, SECRET A B. After choosing a label like SECRET A B,\nthe user's first workspace is labeled SECRET A B. Because\nthe session is single-level, the only available workspace label is SECRET\nA B.

\n\n\n\t\t\t\t\n\t\t\t
\n\n\t\t\t
\n\t\t\t\t
Copyright © 1997, 2014, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t\t\t
Previous
\n\t\t\t\t\t
Next
\n\t\t\t\t
\n\t\t\t\t
\n\t\t\t
\t\n\t\t
\n\t
\n\n" + }, + { + "url": "https://docs.oracle.com/en/middleware/soa-suite/soa/12.2.1.4/develop/img_text/soa-policy-binding-compon.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n \n \n \n \n Description of the illustration soa-policy-binding-compon.png\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n

This image shows the Configure SOA WS Policies dialog. In the upper right are four icons. Below this are the following sections,\n each of which has Add, Delete, Enable, and Disable icons to the right: MTOM, Reliability, Addressing, Security, and Management.\n Security and Management each have an additional edit icon. Below this are Help, OK, and Cancel buttons. The Security section\n has a value. To the right of the value is a check box, which is selected.\n

\n
\n \n \n" + }, + { + "url": "https://docs.oracle.com/cd/B31104_02/books/ConSecSIA/ConSecSIADeductions24.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\tBookshelf v7.8: Reviewing Deduction Literature (End User)\r\n\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\r\n\t\t\t\t\r\n\t\r\n
  Bookshelf Home | Contents | Index | PDF   
\r\n\r\n\r\n\r\n
\r\n

Siebel Consumer Goods Guide > Deductions >

\r\n

Reviewing Deduction Literature (End User)

\r\n
\r\n

Key account managers and customer financial services representatives and managers can view literature attached to deductions.

This task is a step in Process of Managing Promotions Deductions.

To review deduction literature

\r\n
    \r\n
  1. Navigate to the Deductions screen > Deduction List view.\r\n
  2. Drill down on the ID field hyperlink for a selected deduction, and click the Literature view tab.\r\n

    Literature attached to the selected deduction appears in the Literature list.

    \r\n
  3. Click the Name field hyperlink in the Literature list to open a literature attachment.\r\n
\r\n
\r\n\r\n\r\n\t\r\n\r\n\t\t\r\n\t\t\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Consumer Goods Guide
 
\r\n\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E19893-01/819-4439/acrmu/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Limitations to the LDAP Data Cache Solution (Sun Java Communications Suite 5 Deployment Planning Guide)

Sun Java Communications Suite 5 Deployment Planning Guide

Limitations to the LDAP Data Cache Solution

\n

The LDAP data cache does not provide for:

\n
    \n
  • \n

    Reading the cache for searches where a list of entries is\nexpected, for example, searching for attendees for a meeting. This type of\nsearch is subject to any LDAP delay. For instance, a newly created calendar\nwill not appear in a calendar search if the LDAP search option is active and\nthe search is performed within the delay period following the creation of\na new calendar.

    \n\n\n
  • \n
  • \n

    Reading and writing of the cache across multiple front-end\nservers. Each front-end server has its own cache, which is not aware of data\nin other caches.

    \n\n\n
  • \n
  • \n

    The capability to handle a user who doesn’t always log\ninto the same server. Because each server has its own LDAP cache, within the\ndelay period, one cache will not know about the user's activities while the\nuser is logged in to another server.

    \n\n\n
  • \n
\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E20815_01/html/E20840/sfmsm.gjvej.html", + "domain": "docs.oracle.com", + "file_source": "part-00662-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nHow to Backup Motherboard FRUID Information - Sun Server X2-8 Product Documentation\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\n\n\t\t\n\t\t\t\n\t\t\n\n\t\t\n\t\t\n\t\t\t\n\n\t\t\t\n\t\t\n\t
\n\t\t\t\tSkip Navigation Links\n\t\t\t
\n\t\t\t\tExit Print View\n\t\t\t
\"Link\n\t\t\tSun Server X2-8 Product Documentation     Sun Server X2-8 (formerly Sun Fire X4800 M2) Documentation Library\n\t\t\t
\n
\n\n\n
\n
\n
\n \n \"Oracle\n \n
\n\n
\n \n \"Library\"\n \n
\n
\n \n
\n\n
\n\t\t \n \n \"PDF\"\n \n\t\t \n
\n \n
\n \n \"Print\n \n
\n \n \n
\n \n \"Feedback\"\n \n
\n\n
\n\n\n
\n
\n
\n \n\n
\n
\n \"search\n
\n\n
\n \n \n
\n\n
\n \"search\n
\n
\n\n
\n
\n
\n
\n
 Search Scope:\n
\n
    \n
  • \n
  • \n
\n
\n
\n
\n
\n
\n\t\t\t \n
\n
\n
\n \n\n
\n\n\n
\n\t

Document Information

\n

Using This Documentation

\n

Hardware Installation

\n

Operating System Installation

\n

Administration, Diagnostics, and Service

\n

Service Manual

\n

Sun Server X2-8 Service Manual Overview

\n

Server and Component Overview

\n

Sun Server X2-8 Overview

\n

Sun Server X2-8 Chassis Overview

\n

About Sun Server X2-8 CPU Modules and CMOD Filler Modules

\n

Sun Server X2-8 Subassembly Module Overview

\n

Component Overviews

\n

Front Panel Indicator Module Overview

\n

Power Supply Overview

\n

Hard Drive and XL Bracket Assembly Overview

\n

Drive Backplane Overview

\n

DIMM Overview

\n

RAID Expansion Module Overview

\n

Fabric Expansion Module Overview

\n

System Battery Overview

\n

CPU and Heatsink Assembly Overview

\n

Internal USB Port Overview

\n

Fan Module Overview

\n

Network Express Module Overview

\n

Service Processor (SP) Module Overview

\n

SP Module Multiport Cable Overview

\n

PCIe ExpressModule Overview

\n

AC Power Block Overview

\n

Fan Module Controller Board (FB) Overview

\n

Midplane Overview

\n

Component Removal and Installation Levers

\n

Power and Reset

\n

Power Modes

\n

Power Button and Power OK LED

\n

How to Power On the Server

\n

How to Power Off the Server

\n

Controlling Server Power Remotely

\n

How to Use the Oracle ILOM Command-Line Interface to Control Power

\n

How to Use the Oracle ILOM Web Interface to Control Power

\n

How to Use IPMItool to Control Power

\n

Removing and Installing Components

\n

Preparing for Service and Operation

\n

Service Task Table

\n

CRU and FRU List

\n

Hot-Plug Components

\n

Required Service Tools

\n

Important Safety Information

\n

Antistatic Precautions and Procedures

\n

How to Prepare for Service

\n

How to Prepare for Operation

\n

Removal and Installation Procedures

\n

Removing and Installing a Power Supply (CRU)

\n

Power Supply Designations

\n

How to Remove a Power Supply

\n

How to Install a Power Supply

\n

How to Remove and Install AC Power Cables

\n

Removing and Installing a Hard Drive (CRU)

\n

Hard Drive Designation

\n

How to Remove a Hard Drive

\n

How to Install a Hard Drive

\n

Removing and Installing a Hard Drive Filler

\n

How to Remove a Hard Drive Filler

\n

How to Install a Hard Drive Filler

\n

Replacing the Hard Drive Backplane (FRU)

\n

Hard Drive Backplane Cable Routing and Designations

\n

How to Remove the Hard Drive Backplane

\n

How to Install the Hard Drive Backplane

\n

Adding, Removing, and Installing a CMOD (CRU)

\n

How to Add a CPU Module (CMOD) to the Server Using the Upgrade Kit

\n

How to Remove a CPU Module (CMOD)

\n

How to Remove the CPU Module (CMOD) Cover

\n

How to Install the CPU Module (CMOD) Cover

\n

How to Install a CPU Module (CMOD)

\n

How to Verify CPLD Versions

\n

CPU Module (CMOD) Designation and Population Rules

\n

CPU Module (CMOD) Designations

\n

CPU Module (CMOD) Population Rules

\n

Removing and Installing a CMOD Filler

\n

How to Remove a CMOD Filler

\n

How to Install a CMOD Filler

\n

How to Replace the CMOD Battery (CRU)

\n

Removing and Installing DIMMs (CRU)

\n

How to Remove a DIMM

\n

How to Install a DIMM

\n

DIMM Population Rules

\n

Population Rules

\n

Supported Memory Configurations

\n

Unsupported DIMMs

\n

Removing and Installing a RAID Expansion Module (CRU)

\n

How to Remove a RAID Expansion Module

\n

How to Install a RAID Expansion Module

\n

Removing and Installing a Fabric Expansion Module (CRU)

\n

How to Remove a Fabric Expansion Module

\n

How to Install a Fabric Expansion Module

\n

Replacing a CPU and Heatsink Assembly (FRU)

\n

CPU Designations

\n

How to Remove a CPU and Heatsink Assembly (FRU)

\n

How to Install a CPU and Heatsink Assembly (FRU)

\n

Removing and Installing a Fan Module (CRU)

\n

Fan Module Designations

\n

How to Remove a Fan Module

\n

How to Install a Fan Module

\n

Replacing a Fan Module Controller Board (FRU)

\n

Fan Module Controller Board (FB) Designations

\n

How to Remove a Fan Module Controller Board

\n

How to Install a Fan Module Controller Board

\n

Removing and Installing a NEM or a NEM Filler (CRU)

\n

Network Express Module Designations and Assignments

\n

How to Remove a NEM or a NEM Filler

\n

How to Install a NEM or a NEM Filler

\n

Removing and Installing a PCIe EM (CRU)

\n

PCIe EM Designations, Slot Numbering, and Population Rules

\n

Preparing a PCIe EM for Hot Plug Removal and Installation

\n

How to Remove a PCIe EM

\n

How to Install a PCIe EM or PCIe EM Filler

\n

Removing and Installing the Service Processor Module (CRU)

\n

How to Remove the SP Module

\n

How to Install the SP Module

\n

Removing and Installing the Multiport Cable

\n

How to Remove the Multiport Cable

\n

How to Install the Multiport Cable

\n

Removing and Installing the Subassembly Module (FRU)

\n

How to Remove the Subassembly Module

\n

How to Install the Subassembly Module

\n

Sun Server X2-8 Service Procedures and Information

\n

Managing FRUID Information

\n\n

How to Restore Motherboard FRUID Information

\n

How to Update Motherboard FRUID Information

\n

Firmware Information and Procedures

\n

How to Update Firmware

\n

How to Use the Tools and Drivers CD

\n

SP Module and CMOD Firmware Compatibility Considerations

\n

How to Access the BIOS Setup Utility

\n

BIOS Setup Utility Screens

\n

Main

\n

Advanced

\n

PCIPnP

\n

Boot

\n

Security

\n

RC Settings

\n

Chipset

\n

Exit

\n

Component LED Locations and Functions

\n

Front Indicator Panel LEDs

\n

CPU Module (CMOD) LEDs

\n

Front Panel Power Supply LEDs

\n

Front Panel Hard Drive LEDs

\n

Back Panel Network Express Module LEDs

\n

Back Panel SP Module LEDs

\n

Back Panel PCIe EM LEDs

\n

Thermal Sensor Locations

\n

Power Supply Inlet Thermal Sensors

\n

Power Supply Outlet Thermal Sensors

\n

CMOD Inlet Thermal Sensors

\n

NEM Thermal Sensors

\n

Sun Server X2-8 Specifications

\n

Physical Specifications

\n

Power Specifications

\n

Environmental Specifications

\n

Requesting Physical Media

\n

How to Request Physical Media Online

\n

How to Request Physical Media by Phone

\n

Index

\n\n
\n\n\n
\n\t\n\n\n\n

How to Backup Motherboard FRUID Information

\n

Note - This information is for Oracle Service personnel only.

\n
\n\n\n

The server supports TLI and stores the TLI records in two places: the motherboard (MB)\nFRUID container, called PRIMARY, and a file on the service processor (SP)\nfilesystem, called BACKUP1. After adding or replacing a component in your server, you\ncan use this procedure to back up the latest TLI record information from the MB\nFRUID PRIMARY container to BACKUP1 on the SP filesystem.

    \n
  1. Enter Service mode.
  2. To back up the PRIMARY container to the SP filesystem, enter the following command:

    copypsnc PRIMARY BACKUP1

See Also

\n\n

How to Restore Motherboard FRUID Information

\n
\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t
\n\t\t\t\t
\n\t\t\t\t\tCopyright © 2011, 2014, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t
\n\t\n
\n\n\n\n\n\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/B40099_02/books/UpgradeDB2390/UpgradeDB2390_FileGenerate5.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\t\r\n\r\n\tBookshelf v8.0: Preparing the Additive Schema and JCL Files on the z/OS Host\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
  Bookshelf Home | Contents | Index | PDF   
\r\n\r\n\r\n\r\n
\r\n

Siebel Database Upgrade Guide for DB2 UDB for z/OS > Generating the Siebel Upgrade Files >

\r\n

Preparing the Additive Schema and JCL Files on the z/OS Host

\r\n
\r\n

Upgrades: All upgrades.

Environments: All environments.

This topic is part of an upgrade process. See How to Perform a Siebel Database Upgrade.

When you restart the Upgrade Wizard after creating the staging database, it generates SQL and JCL templates for the file generation process. Specifically, the following files are generated:

    \r\n
  • Additive upgrade files\r\n
  • Pret files\r\n
  • Data migration files \r\n
\r\n

When the Upgrade Wizard stops at Pause # 1, you must transfer these files to datasets on the z/OS host. These datasets are then used to create the SQL and JCL templates used to perform the upgrade. These tasks are described in this topic.

Perform the following procedures:

\r\n
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Database Upgrade Guide for DB2 UDB for z/OSCopyright © 2012, Oracle and/or its affiliates. All rights reserved. Legal Notices.
 
\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E14004_01/books/SiebInstUNIX/SiebInstCOM_Requirements19.html", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\t\r\n\r\n\tBookshelf v8.1/8.2: Installing Siebel Business Applications from a Remote Server Computer\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
  Bookshelf Home | Contents | Index | PDF   
\r\n\r\n\r\n\r\n
\r\n

Siebel Installation Guide for UNIX > Requirements for Installing and Configuring Siebel Business Applications > Planning Your Siebel Deployment >

\r\n

Installing Siebel Business Applications from a Remote Server Computer

\r\n
\r\n

This topic is part of Planning Your Siebel Deployment.

If you are installing server-based Siebel modules in GUI mode from a remote server computer, then you must set the DISPLAY variable to display the Java installer user interface on your local computer. For example, depending on your shell:

export DISPLAY=mycomputer:0.0

\r\n

or:

setenv DISPLAY mycomputer:0.0

\r\n

In this command, mycomputer is the computer name or IP address of your local workstation computer.

You might be able to test that your display works correctly on your local computer by entering:

xclock

\r\n

You can also verify the IP address by entering:

echo $DISPLAY

\r\n

If the clock does not appear on your local computer, then issue the following command on your local computer:

xhost +

\r\n

If you are using X-connectivity software to access the UNIX computer from a Windows computer, then the installer might fail or stop responding. If this is the case, then try using a later version of the X-connectivity software.

NOTE:  For additional information, refer to vendor or other documentation for your UNIX operating system, or contact your system administrator for assistance.

\r\n
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Installation Guide for UNIXCopyright © 2014, Oracle and/or its affiliates. All rights reserved. Legal Notices.
 
\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E22289_01/html/821-1272/directory-deployments.html", + "domain": "docs.oracle.com", + "file_source": "part-00656-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\nExample Deployments Using the Directory Server - Oracle Fusion Middleware Deployment Planning Guide for Oracle Unified Directory\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\"Link\n\t\t\tOracle Fusion Middleware Deployment Planning Guide for Oracle Unified Directory 11g Release 1 (11.1.1)\n\t\t\t
\n
\n\tChapter 6

Example Deployments Using the Directory Server

\n\n

This chapter provides sample configurations for a replicated topology including multiple instances of\nthe Oracle Unified Directory directory server.

\n

This chapter covers the following topics:

\n\n\n\n

For a complete understanding of how replication works in Oracle Unified Directory, see\nChapter 5, Directory Server Replication, in Oracle Fusion Middleware Architecture Reference for Oracle Unified Directory.

\n
\n\n
\n\t\n\t\t\n\t\t\t\n\t\t\t\n\t\t\n\t
\n\t\t\t\t
\n\t\t\t\t\tCopyright © 2006, 2011, Oracle and/or its affiliates. All rights reserved. Legal Notices\n\t\t\t\t
\n\t\t\t
\n\n\n\n\n\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19566-01/819-4654/gcmyj/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00514-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "2.8 Set Up a Front End-Back End Deployment Screen for\nCalendar Server 6.3 (Sun Java System Calendar Server 6.3 Administration Guide)

Sun Java System Calendar Server 6.3 Administration Guide

2.8 Set Up a Front End-Back End Deployment Screen for\nCalendar Server 6.3

\n

Choose whether to configure this server as a single server deployment,\nor a front-end, back-end deployment. If you choose to have a single server\ninstance of Calendar Server, then do not select the checkbox on this screen.\nIf you want to put your Calendar Server databases on one or more servers,\nwhile keeping the processes that communicate with the client on a different\nserver, select the checkbox.

\n

This section covers the following topics:

\n\n

2.8.1 Single Server Deployment for Calendar Server 6.3

\n
Figure 2–7 Single Server Deployment

\"This

Do not change any part of this screen if you want a single server deployment\nwhere both the administrative processes and the databases reside on one server.\nClick Next to continue.

\n

If you wish to deploy separate Front End and Back End machines, click\nthe checkbox labeled: Setup a Front End/Back End deployment. The screen will\nchange and you will be allowed to configure the front-end and back-end servers\nseparately, as shown in the following two screen shots.

\n

2.8.2 Front-End and Back-End Deployment for Calendar Server\n6.3

\n
Figure 2–8 Set Up a Front-End and Back-End Server

\"The

To complete this screen, perform the following steps:

\n
    \n
  1. \n

    To configure the back-end server, that is, the server on which\nto store calendar databases, you need only specify the service port.

    \n\n

    The service port entry box is pre-filled with the port named in the ics.conf parameters service.dwp.server.hostname.port and service.dwp.port.

    \n\n

    If you want to change the port number,\nenter the desired port number in the Service Port entry box.

    \n\n\n
  2. \n

    To configure the front-end server, click Add a Host and then\nenter the host name and IP address of the server you are configuring.

    \n\n
    \nNote – \n

    Add only the server you are currently configuring to the list.\nIf you plan to configure other front-end servers, add them at the time you\nconfigure them. (You must run the configuration program on each server you\nadd to your configuration.)

    \n\n
    \n\n\n
  3. \n

    If this server is the default front-end server, select the\nDefault checkbox.

    \n\n\n
  4. \n

    Click Next.

    \n\n
    \nNote – \n

    You may also remove servers from this list by clicking Remove\nSelected Host.

    \n\n
    \n\n\n
\n
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/cd/E85660_01/html/E87281/z40000081394727.html", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n \nFacility Power Requirements - \nOracle® Rack Cabinet 1242 Power Distribution Units User's Guide\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\tGo to main content\n\t

Facility Power Requirements

\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n

To prevent catastrophic failures, design your input power sources to ensure that\n adequate power is provided to the PDUs. Use dedicated AC breaker panels for all\n power circuits that supply power to the PDU.

\n

Oracle PDUs are designed to measure the incoming line-to-neutral voltage. There are many power system types which, when connected to the PDU, ensure that the PDU outlet groups provide the required output voltage. However, the PDUs cannot accurately measure the voltage when connected to certain power system types. Oracle PDUs measure the correct voltage when connected to a wye TN power system. Using any other power system type might lead to incorrect voltage measurements.

\n

Oracle PDUs are designed to be used with TN, TT or IT power systems. However, TN\n systems are recommended for accurate measuring of environmental variables. For TN\n power systems, the PDUs must be provided with protective earthing (grounding) from\n the installation facility through the earth conductor of the PDU power cord.\n Additional earthing (grounding) is recommended to ensure continued safe operation of\n your racked product. Refer to Attaching a Grounding Cable in Oracle Rack Cabinet 1242 User’s Guide\n for instructions.

\n

Note - \n For TT power systems, the power source has no earth connection and so the\n earth connection must be made through the rack. Refer to\n Attaching a Grounding Cable in Oracle Rack Cabinet 1242 User’s Guide for\n instructions.\n
\n

Note - \n All Oracle PDUs are suitable for connection to TN and TT power systems. All\n PDUs with a maximum of a 240V phase-to-phase rating, fuses in each line, and\n neutral conductors are suitable for IT power systems that supply a maximum of\n 240V phase-to-phase. If you are installing an Oracle Engineered System, refer to\n its documentation to determine if the system is suitable for connection to IT\n power systems.\n
\n

While the PDU power cords are 4m (13.12 ft.) long, 1 to 1.5m (3.3 to 4.9 ft.) of\n the cords will be routed within the rack cabinet. The installation site's AC power\n receptacle must be within 2m (6.6 ft.) of the rack.

\n

Electrical work and installations must comply with applicable local, state, or\n national electrical codes. Contact your facilities manager or a qualified\n electrician to determine what type of power is supplied to your building.

\n

When planning for power distribution requirements for installed rack\n equipment:

\n
    \n
  • \n

    Balance the power load between available AC supply branch circuits.

    \n
  • \n
  • \n

    In the United States and Canada, ensure that the overall system AC input\n current load does not exceed 80% of the branch circuit AC current\n rating.

    \n
  • \n
\n \n \n\t\t\t\t\n\t\t\t
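The 80% continuous-load rule above amounts to a simple derating calculation. The following sketch uses hypothetical example values (the breaker rating and system load are assumptions, not values from this guide):

```python
# North American continuous-load rule of thumb: the overall system AC input
# current should not exceed 80% of the branch circuit AC current rating.
breaker_rating_amps = 30.0   # hypothetical branch circuit AC current rating
system_load_amps = 22.0      # hypothetical overall system AC input current

allowed_amps = 0.80 * breaker_rating_amps   # maximum continuous load
within_limit = system_load_amps <= allowed_amps
print(allowed_amps, within_limit)  # 24.0 True
```

A qualified electrician should confirm the actual circuit ratings; this check only illustrates the arithmetic.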
\n\n\t\t\t
\n\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/database/121/DWHSG/ch2logdes.htm", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\n\nData Warehousing Logical Design\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGo to main content
\n
\n

2 Data Warehousing Logical Design

\n\n\n
\n

Logical Versus Physical Design in Data Warehouses

\n
\n

Your organization has decided to build an enterprise data warehouse. You have defined the business requirements and agreed upon the scope of your business goals, and created a conceptual design. Now you need to translate your requirements into a system deliverable. To do so, you create the logical and physical design for the data warehouse. You then define:

\n
    \n
  • \n

    The specific data content

    \n
  • \n
  • \n

    Relationships within and between groups of data

    \n
  • \n
  • \n

    The system environment supporting your data warehouse

    \n
  • \n
  • \n

    The data transformations required

    \n
  • \n
  • \n

    The frequency with which data is refreshed

    \n
  • \n
\n

The logical design is more conceptual and abstract than the physical design. In the logical design, you look at the logical relationships among the objects. In the physical design, you look at the most effective way of storing and retrieving the objects as well as handling them from a transportation and backup/recovery perspective.

\n

Orient your design toward the needs of the end users. End users typically want to perform analysis and look at aggregated data, rather than at individual transactions. However, end users might not know what they need until they see it. In addition, a well-planned design allows for growth and changes as the needs of users change and evolve.

\n

By beginning with the logical design, you focus on the information requirements and save the implementation details for later.

\n
\n
\n\n
\n

Creating a Logical Design

\n
\n

A logical design is conceptual and abstract. You do not deal with the physical implementation details yet. You deal only with defining the types of information that you need.

\n

One technique you can use to model your organization's logical information requirements is entity-relationship modeling. Entity-relationship modeling involves identifying the things of importance (entities), the properties of these things (attributes), and how they are related to one another (relationships).

\n

The process of logical design involves arranging data into a series of logical relationships called entities and attributes. An entity represents a chunk of information. In relational databases, an entity often maps to a table. An attribute is a component of an entity that helps define the uniqueness of the entity. In relational databases, an attribute maps to a column.

\n

To ensure that your data is consistent, you must use unique identifiers. A unique identifier is something you add to tables so that you can differentiate between the same item when it appears in different places. In a physical design, this is usually a primary key.

\n
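As a small sketch of this mapping (the table and data are hypothetical; SQLite is used only for illustration), an entity becomes a table, its attributes become columns, and the unique identifier becomes the primary key in the physical design:

```python
import sqlite3

# Hypothetical PRODUCT entity: the entity maps to a table, its attributes
# (name, size, color) map to columns, and the unique identifier becomes
# the primary key.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product (
        product_id INTEGER PRIMARY KEY,   -- unique identifier
        name       TEXT NOT NULL,
        size       TEXT,
        color      TEXT
    )
""")
conn.execute("INSERT INTO product VALUES (1, 'Widget', 'L', 'blue')")
conn.execute("INSERT INTO product VALUES (2, 'Widget', 'L', 'blue')")
# Two otherwise identical items remain distinguishable via the identifier.
rows = conn.execute(
    "SELECT product_id, name FROM product ORDER BY product_id").fetchall()
print(rows)  # [(1, 'Widget'), (2, 'Widget')]
```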

Entity-relationship modeling is purely logical and applies to both OLTP and data warehousing systems. It is also applicable to the various common physical schema modeling techniques found in data warehousing environments, namely normalized (3NF) schemas in Enterprise Data Warehousing environments, star or snowflake schemas in data marts, or hybrid schemas with components of both of these classical modeling techniques.

\n
\n

See Also:

\n\n
\n
\n\n
\n

What is a Schema?

\n
\n

A schema is a collection of database objects, including tables, views, indexes, and synonyms. You can arrange schema objects in the schema models designed for data warehousing in a variety of ways. Most data warehouses use a dimensional model.

\n

The model of your source data and the requirements of your users help you design the data warehouse schema. You can sometimes get the source model from your company's enterprise data model and reverse-engineer the logical data model for the data warehouse from this. The physical implementation of the logical data warehouse model may require some changes to adapt it to your system parameters—size of computer, number of users, storage capacity, type of network, and software. A key part of designing the schema is whether to use a third normal form, star, or snowflake schema, and these are discussed later.

\n
\n
\n
\n\n
\n

About Third Normal Form Schemas

\n
\n

Third Normal Form design seeks to minimize data redundancy and avoid anomalies in data insertion, updates, and deletion. 3NF design has a long heritage in online transaction processing (OLTP) systems. OLTP systems must maximize performance and accuracy when inserting, updating, and deleting data. Transactions must be handled as quickly as possible or the business may be unable to handle the flow of events, perhaps losing sales or incurring other costs. Therefore, 3NF designs avoid redundant data manipulation and minimize table locks, both of which can slow inserts, updates, and deletes. 3NF designs also work well to abstract the data from specific application needs. If new types of data are added to the environment, you can extend the data model with relative ease and minimal impact to existing applications. Likewise, if you have completely new types of analyses to perform in your data warehouse, a well-designed 3NF schema will be able to handle them without requiring redesigned data structures.

\n

3NF designs have great flexibility, but it comes at a cost. 3NF databases use many tables, and this requires complex queries with many joins. For full-scale enterprise models built in 3NF form, over one thousand tables are commonly encountered in the schema. With the kinds of queries involved in data warehousing, which often need access to many rows from many tables, this design imposes understanding and performance penalties. It can be complex for query builders, whether they are humans or business intelligence tools and applications, to choose and join the tables needed for a given piece of data when there are very large numbers of tables available. Even when the tables are readily chosen by the query generator, the 3NF schema often requires that a large number of tables be used in a single query. More tables in a query mean more potential data access paths, which makes the database query optimizer's job harder. The end result can be slow query performance.

\n

The issue of slow query performance in a 3NF system is not necessarily limited to the core queries used to create reports and analyses. It can also show up in the simpler task of users browsing subsets of data to understand the contents. Similarly, the complexity of a 3NF schema may impact generating the pick-lists of data used to constrain queries and reports. Although these may seem relatively minor issues, speedy response time for such processes makes a big impact on user satisfaction.

\n

Figure 2-1 presents a tiny fragment of a 3NF Schema. Note how order information is broken into order and order items to avoid redundant data storage. The \"crow's feet\" markings on the relationship between tables indicate one-to-many relationships among the entities. Thus, one order may have multiple order items, a single customer may have many orders, and a single product may be found in many order items. Although this diagram shows a very small case, you can see that minimizing data redundancy can lead to many tables in the schema.

\n
\n

Figure 2-1 Fragment of a Third Normal Form Schema

\n\"Description
\nDescription of \"Figure 2-1 Fragment of a Third Normal Form Schema\"
\n\n\n
\n\n
\n

About Normalization

\n
\n

Normalization is a data design process that has a high-level goal of keeping each fact in just one place to avoid data redundancy and insert, update, and delete anomalies. There are multiple levels of normalization, and this section describes the first three of them. Considering how fundamental the term third normal form (3NF) is, it only makes sense to see how 3NF is reached.

\n

Consider a situation where you are tracking sales. The core entity you track is sales orders, where each sales order contains details about each item purchased (referred to as a line item): its name, price, quantity, and so on. The order also holds the name and address of the customer and more. Some orders have many different line items, and some orders have just one.

\n

In first normal form (1NF), there are no repeating groups of data and no duplicate rows. Every intersection of a row and column (a field) contains just one value, and there are no groups of columns that contain the same facts. To avoid duplicate rows, there is a primary key. For sales orders in first normal form, the multiple line items of each sales order are not stored in a single field of the table, nor are they spread across multiple columns of line items.

\n

Then comes second normal form (2NF), where the design is in first normal form and every non-key column is dependent on the complete primary key. Thus, the line items are broken out into a table of sales order line items where each row represents one line item of one order. You can look at the line item table and see that the names of the items sold are not dependent on the primary key of the line items table: the sales item is its own entity. Therefore, you move the sales item to its own table showing the item name. Prices charged for each item can vary by order (for instance, due to discounts) so these remain in the line items table. In the case of sales order, the name and address of the customer is not dependent on the primary key of the sales order: customer is its own entity. Thus, you move the customer name and address columns out into their own table of customer information.

\n

Next is third normal form, where the goal is to ensure that there are no dependencies on non-key attributes. So the goal is to take columns that do not directly relate to the subject of the row (the primary key), and put them in their own table. So details about customers, such as customer name or customer city, should be put in a separate table, and then a customer foreign key added into the orders table.

\n

Another example of how a 2NF table differs from a 3NF table would be a table of the winners of tennis tournaments that contained columns of tournament, year, winner, and winner's date of birth. In this case, the winner's date of birth is vulnerable to inconsistencies, as the same person could be shown with different dates of birth in different records. The way to avoid this potential problem is to break the table into one for tournament winners, and another for the player dates of birth.

\n
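The tennis example above can be sketched in a few lines (the names and dates are hypothetical illustration data):

```python
# A 2NF-style table: (tournament, year) is the key, but the winner's date
# of birth depends on the winner, not on the key, so the same person could
# be recorded with inconsistent dates across rows.
winners_2nf = [
    ("Indoor", 1998, "A. Fredrickson", "1960-07-21"),
    ("Open",   1999, "A. Fredrickson", "1960-07-21"),
]

# 3NF decomposition: tournament winners in one table, player dates of
# birth in another, so each date of birth is stored exactly once.
tournament_winners = [(t, y, w) for (t, y, w, _) in winners_2nf]
player_dob = {w: dob for (_, _, w, dob) in winners_2nf}

print(tournament_winners)
print(player_dob)  # {'A. Fredrickson': '1960-07-21'}
```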
\n
\n\n
\n

Design Concepts for 3NF Schemas

\n
\n

The following section discusses some basic concepts when modeling for a data warehousing environment using a 3NF schema approach. The intent is not to discuss the theoretical foundation for 3NF modeling (or even higher levels of normalization), but to highlight some key components relevant for data warehousing.

\n

Some key 3NF schema design concepts that are relevant to data warehousing are as follows:

\n\n
\n\n
\n

Identifying Candidate Primary Keys

\n
\n

A primary key is an attribute that uniquely identifies a specific record in a table. Primary keys can consist of a single column or of multiple columns. It is normally preferable to achieve unique identification through as few columns as possible - ideally one or two - and to use columns that are unlikely to be updated or changed in bulk. If your data model does not lead to simple unique identification through its attributes, if too many attributes would be required to uniquely identify a single record, or if the data is prone to change, then using a surrogate key is highly recommended.

\n

Specifically, 3NF schemas rely on proper and simple unique identification, since queries tend to have many table joins and all columns necessary to uniquely identify a record are needed as join conditions to avoid row duplication through the join.

\n
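A minimal sketch of a surrogate key (the table and data are hypothetical; SQLite is used only for illustration): when the natural attributes do not identify a record simply and stably, a generated key takes over:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical customer records whose natural attributes (name, city) are
# neither unique nor stable -- so a generated surrogate key identifies them.
conn.execute("""
    CREATE TABLE customer (
        customer_sk INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
        name TEXT,
        city TEXT
    )
""")
conn.executemany("INSERT INTO customer (name, city) VALUES (?, ?)",
                 [("Pat Lee", "Austin"), ("Pat Lee", "Austin")])
keys = [r[0] for r in conn.execute("SELECT customer_sk FROM customer")]
print(keys)  # [1, 2]: identical natural attributes, distinct identities
```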
\n
\n\n
\n

Foreign Key Relationships and Referential Integrity Constraints

\n
\n

3NF schemas in data warehousing environments often resemble the data model of their OLTP source systems, in which the logical consistency between data entities is expressed and enforced through primary key - foreign key relationships, also known as parent-child relationships. A foreign key resolves a 1-to-many relationship in a relational system and ensures logical consistency: for example, you cannot have an order line item without an order header, or an employee working for a non-existent department.

\n

While such referential integrity constraints are always enforced in OLTP systems, data warehousing systems often implement them as declarative, non-enforced conditions, relying on the ETL process to ensure data consistency. Whenever possible, foreign keys and referential integrity constraints should be defined as non-enforced conditions, since this enables better query optimization and cardinality estimates.

\n
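The enforced-versus-declarative distinction can be sketched with SQLite, whose foreign-key enforcement is off by default - loosely mirroring a declared but non-enforced constraint (the tables are hypothetical; this is not the Oracle mechanism itself):

```python
import sqlite3

# Autocommit mode so the PRAGMA below takes effect immediately.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY);
    CREATE TABLE order_items (
        item_id  INTEGER PRIMARY KEY,
        order_id INTEGER REFERENCES orders(order_id)  -- parent-child link
    );
""")

# Enforcement off (SQLite's default): the declared constraint documents the
# relationship, but an orphan line item is accepted -- here an ETL process
# would be responsible for consistency.
conn.execute("INSERT INTO order_items VALUES (1, 999)")

# Enforcement on (OLTP-style behavior): the same orphan insert is rejected.
conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO order_items VALUES (2, 999)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```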
\n
\n\n
\n

Denormalization

\n
\n

Proper normalized modeling tends to decompose logical entities - such as a customer, a product, or an order - into many physical tables, making even the retrieval of seemingly simple information require joining many tables. While this is not a problem from a query processing perspective, it can put some unnecessary burden on both the application developer (who must write the code) and the database (which must join information that is always used together). It is not uncommon to see some sensible level of denormalization in 3NF data warehousing models, either in a logical form as views or in a physical form through slightly denormalized tables.

\n

Care must be taken with physical denormalization to preserve the subject-neutral shape, and therefore the flexibility, of the physical implementation of the 3NF schema.

\n
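One common, low-risk form of the logical denormalization mentioned above is a view that pre-joins entities that are always used together (hypothetical tables; SQLite is used only for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalized base tables stay untouched.
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(customer_id),
        total       REAL
    );
    -- Logical denormalization: a view pre-joins the entities, so queries
    -- see one "wide" shape without physically duplicating any data.
    CREATE VIEW order_summary AS
        SELECT o.order_id, c.name AS customer_name, o.total
        FROM orders o JOIN customer c ON o.customer_id = c.customer_id;
    INSERT INTO customer VALUES (1, 'Acme');
    INSERT INTO orders   VALUES (100, 1, 42.0);
""")
row = conn.execute(
    "SELECT customer_name, total FROM order_summary").fetchone()
print(row)  # ('Acme', 42.0)
```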
\n
\n
\n
\n\n
\n

About Star Schemas

\n
\n

Star schemas are often found in data warehousing systems with embedded logical or physical data marts. The term star schema is another way of referring to a \"dimensional modeling\" approach to defining your data model. Most descriptions of dimensional modeling use terminology drawn from the work of Ralph Kimball, the pioneering consultant and writer in this field. Dimensional modeling creates multiple star schemas, each based on a business process such as sales tracking or shipments. Each star schema can be considered a data mart, and perhaps as few as 20 data marts can cover the business intelligence needs of an enterprise. Compared to 3NF designs, the number of tables involved in dimensional modeling is a tiny fraction. Many star schemas will have under a dozen tables. The star schemas are knit together through conformed dimensions and conformed facts. Thus, users are able to get data from multiple star schemas with minimal effort.

\n

The goal for star schemas is structural simplicity and high-performance data retrieval. Because most queries in the modern era are generated by reporting tools and applications, it is vital to make query generation convenient and reliable for those tools and applications. In fact, many business intelligence tools and applications are designed with the expectation that a star schema representation will be available to them.

\n

Discussions of star schemas are less abstracted from the physical database than 3NF descriptions. This is due to the pragmatic emphasis of dimensional modeling on the needs of business intelligence users.

\n

Note how different the dimensional modeling style is from the 3NF approach, which minimizes data redundancy and the risks of update/insert/delete anomalies. The star schema accepts data redundancy (denormalization) in its dimension tables for the sake of easy user understanding and better data retrieval performance. A common criticism of star schemas is that they limit analysis flexibility compared to 3NF designs. However, a well-designed dimensional model can be extended to enable new types of analysis, and star schemas have been successful for many years at the largest enterprises.

\n

As noted earlier, the modern approach to data warehousing does not pit star schemas and 3NF against each other. Rather, both techniques are used, with a foundation layer of 3NF - the Enterprise Data Warehouse - acting as the bedrock data, and star schemas as a central part of an access and performance optimization layer.

\n\n
\n\n
\n

About Facts and Dimensions in Star Schemas

\n
\n

Star schemas divide data into facts and dimensions. Facts are the measurements of some event such as a sale and are typically numbers. Dimensions are the categories you use to identify facts, such as date, location, and product.

\n

The name \"star schema\" comes from the fact that the diagrams of the schemas typically show a central fact table with lines joining it to the dimension tables, so the graphic impression is similar to a star. Figure 2-2 is a simple example with sales as the fact table and products, times, customers, and channels as the dimension table.

\n
\n

Figure 2-2 Star Schema

\n\"Description
\nDescription of \"Figure 2-2 Star Schema\"
\n\n\n
\n\n
\n

About Fact Tables in Data Warehouses

\n
\n

Fact tables have measurement data. They have many rows but typically not many columns. Fact tables for a large enterprise can easily hold billions of rows. For many star schemas, the fact table will represent well over 90 percent of the total storage space. A fact table has a composite key made up of the primary keys of the dimension tables of the schema.
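
To make the composite-key structure concrete, here is a minimal sketch of a sales fact table keyed by its dimensions. The table and column names are illustrative assumptions, not taken from the text:

```sql
-- Hypothetical sales fact table: its primary key is composed of the
-- primary keys of the four dimension tables of the schema.
CREATE TABLE sales (
  prod_id       NUMBER NOT NULL REFERENCES products (prod_id),
  time_id       DATE   NOT NULL REFERENCES times (time_id),
  cust_id       NUMBER NOT NULL REFERENCES customers (cust_id),
  channel_id    NUMBER NOT NULL REFERENCES channels (channel_id),
  quantity_sold NUMBER(10,2),   -- measurement columns: few, numeric
  amount_sold   NUMBER(10,2),
  CONSTRAINT sales_pk
    PRIMARY KEY (prod_id, time_id, cust_id, channel_id)
);
```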

\n

A fact table contains either detail-level facts or facts that have been aggregated. Fact tables that contain aggregated facts are often called summary tables. A fact table usually contains facts with the same level of aggregation. Though most facts are additive, they can also be semi-additive or non-additive. Additive facts can be aggregated by simple arithmetical addition. A common example of this is sales. Non-additive facts cannot be added at all. An example of this is averages. Semi-additive facts can be aggregated along some of the dimensions and not along others. An example of this is inventory levels stored in physical warehouses, where you may be able to add across a dimension of warehouse sites, but you cannot aggregate across time.
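
The three additivity classes can be sketched with queries against hypothetical `sales` and `inventory` tables (names assumed for illustration):

```sql
-- Additive fact: sales amounts can be summed across any dimension.
SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id;

-- Semi-additive fact: inventory on hand can be summed across warehouse
-- sites for a single day, but must never be summed across time.
SELECT SUM(quantity_on_hand)
FROM   inventory
WHERE  time_id = DATE '2014-01-31';

-- Non-additive fact: a stored average cannot be re-added; instead,
-- recompute it from its additive components.
SELECT SUM(amount_sold) / SUM(quantity_sold) AS avg_unit_price
FROM   sales;
```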

\n

In terms of adding rows to data in a fact table, there are three main approaches:

\n
    \n
  • \n

    Transaction-based

    \n

    Shows a row for the finest level of detail in a transaction. A row is entered only if a transaction has occurred for a given combination of dimension values. This is the most common type of fact table.

    \n
  • \n
  • \n

    Periodic Snapshot

    \n

    Shows data as of the end of a regular time interval, such as daily or weekly. If a row for the snapshot exists in a prior period, a row is entered for it in the new period even if no activity related to it has occurred in the latest interval. This type of fact table is useful in complex business processes where it is difficult to compute snapshot values from individual transaction rows.

    \n
  • \n
  • \n

    Accumulating Snapshot

    \n

    Shows one row for each occurrence of a short-lived process. The rows contain multiple dates tracking major milestones of a short-lived process. Unlike the other two types of fact tables, rows in an accumulating snapshot are updated multiple times as the tracked process moves forward.

    \n
  • \n
\n
\n
\n\n
\n

About Dimension Tables in Data Warehouses

\n
\n

Dimension tables provide category data to give context to the fact data. For instance, a star schema for sales data will have dimension tables for product, date, sales location, promotion and more. Dimension tables act as lookup or reference tables because their information lets you choose the values used to constrain your queries. The values in many dimension tables may change infrequently. As an example, a dimension of geographies showing cities may be fairly static. But when dimension values do change, it is vital to update them quickly and reliably. Of course, there are situations where data warehouse dimension values change frequently. The customer dimension for an enterprise will certainly be subject to a frequent stream of updates and deletions.

\n

A key aspect of dimension tables is the hierarchy information they provide. Dimension data typically has rows for the lowest level of detail plus rows for aggregated dimension values. These natural rollups or aggregations within a dimension table are called hierarchies and add great value for analyses. For instance, if you want to calculate the share of sales that a specific product represents within its specific product category, it is far easier and more reliable to have a predefined hierarchy for product aggregation than to specify all the elements of the product category in each query. Because hierarchy information is so valuable, it is common to find multiple hierarchies reflected in a dimension table.
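
The share-of-category example can be sketched as a single grouped query, assuming hypothetical `sales` and `products` tables where the products dimension carries a `prod_category` hierarchy column:

```sql
-- The predefined category column in the dimension makes the share
-- calculation a one-query operation; no per-query member lists needed.
SELECT d.prod_name,
       d.prod_category,
       SUM(s.amount_sold) AS prod_sales,
       SUM(s.amount_sold) /
         SUM(SUM(s.amount_sold)) OVER (PARTITION BY d.prod_category)
         AS share_of_category
FROM   sales s JOIN products d ON s.prod_id = d.prod_id
GROUP  BY d.prod_name, d.prod_category;
```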

\n

Dimension tables are usually textual and descriptive, and you will use their values as the row headers, column headers and page headers of the reports generated by your queries. While dimension tables have far fewer rows than fact tables, they can be quite wide, with dozens of columns. A location dimension table might have columns indicating every level of its rollup hierarchy, and may show multiple hierarchies reflected in the table. The location dimension table could have columns for its geographic rollup, such as street address, postal code, city, state/province, and country. The same table could include a rollup hierarchy set up for the sales organization, with columns for sales district, sales territory, sales region, and similar characteristics.

\n
\n

See Also:

\n

Dimensions for further information regarding dimensions

\n
\n
\n
\n
\n\n
\n

Design Concepts in Star Schemas

\n
\n

Here we touch on some of the key terms used in star schemas. This is by no means a full set, but is intended to highlight some of the areas worth your consideration.

\n
\n

Data Grain

\n

One of the most important tasks when designing your model is to consider the level of detail it will provide, referred to as the grain of the data. Consider a sales schema: will the grain be very fine, storing every single item purchased by each customer? Or will it be a coarse grain, storing only the daily totals of sales for each product at each store? In modern data warehousing there is a strong emphasis on providing the finest grain data possible, because this allows for maximum analytic power. Dimensional modeling experts generally recommend that each fact table store just one grain level. Presenting fact data in single-grain tables supports more reliable querying and table maintenance, because there is no ambiguity about the scope of any row in a fact table.

\n
\n\n
\n

Working with Multiple Star Schemas

\n

Because the star schema design approach is intended to chunk data into distinct processes, you need reliable and performant ways to traverse the schemas when queries span multiple schemas. One term for this ability is a data warehouse bus architecture. A data warehouse bus architecture can be achieved with conformed dimensions and conformed facts.

\n
\n\n
\n

Conformed Dimensions

\n

Conformed dimensions are dimensions that are designed identically across the various star schemas. Conformed dimensions use the same values, column names and data types consistently across multiple stars. The conformed dimensions do not have to contain the same number of rows in each schema's copy of the dimension table, as long as the rows in the shorter tables are a true subset of the larger tables.
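
A conformed dimension is what makes "drill-across" queries between stars reliable. The sketch below combines a sales star and an inventory star through a shared times dimension; all table and column names, including the month-level key, are assumptions for illustration:

```sql
-- Both stars reference the same conformed times dimension, so their
-- aggregates can be joined on its keys.
SELECT t.calendar_month, s.sales_amt, i.avg_on_hand
FROM   (SELECT time_month_id, SUM(amount_sold) AS sales_amt
        FROM   sales GROUP BY time_month_id) s
JOIN   (SELECT time_month_id, AVG(quantity_on_hand) AS avg_on_hand
        FROM   inventory GROUP BY time_month_id) i
       ON s.time_month_id = i.time_month_id
JOIN   times t ON t.time_month_id = s.time_month_id;
```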

\n
\n\n
\n

Conformed Facts

\n

If the fact columns in multiple fact tables have exactly the same meaning, then they are considered conformed facts. Such facts can be used together reliably in calculations even though they are from different tables. Conformed facts should have the same column names to indicate their conformed status. Facts that are not conformed should always have different names to highlight their different meanings.

\n
\n\n
\n

Surrogate Keys

\n

Surrogate or artificial keys, usually sequential integers, are recommended for dimension tables. By using surrogate keys, the data is insulated from operational changes. Also, compact integer keys may allow for better performance than large and complex alphanumeric keys.
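
A minimal sketch of the surrogate-key pattern, with hypothetical names: the sequence supplies the warehouse key, while the operational ("natural") key is kept as an ordinary attribute so the dimension is insulated from source-system changes.

```sql
CREATE SEQUENCE customers_seq;

CREATE TABLE customers (
  cust_id     NUMBER PRIMARY KEY,   -- surrogate key: compact integer
  cust_src_id VARCHAR2(20),         -- natural key from the source system
  cust_name   VARCHAR2(100),
  country_id  VARCHAR2(2)
);

INSERT INTO customers (cust_id, cust_src_id, cust_name, country_id)
VALUES (customers_seq.NEXTVAL, 'C-000417', 'Example Customer', 'FR');
```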

\n
\n\n
\n

Degenerate Dimensions

\n

Degenerate dimensions are dimension columns in fact tables that do not join to a dimension table. They are typically items such as order numbers and invoice numbers. You will see them when the grain of a fact table is at the level of an order line-item or a single transaction.
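
As a sketch (hypothetical names), an order number in a line-item-grain fact table is simply a column with no dimension table behind it:

```sql
CREATE TABLE sales_line_items (
  order_number NUMBER,                          -- degenerate dimension:
                                                -- no orders table to join to
  prod_id      NUMBER REFERENCES products (prod_id),
  time_id      DATE   REFERENCES times (time_id),
  amount_sold  NUMBER(10,2)
);
```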

\n
\n\n
\n

Junk Dimensions

\n

Junk dimensions are abstract dimension tables used to hold text lookup values for flags and codes in fact tables. These dimensions are referred to as junk, not because they have low value, but because they hold an assortment of columns for convenience, analogous to the idea of a \"junk drawer\" in your home. The number of distinct values (cardinality) of each column in a junk dimension table is typically small.

\n
\n\n
\n

Embedded Hierarchy

\n

Classic dimensional modeling with star schemas advocates that each table contain data at a single grain. However, there are situations where designers choose to have multiple grains in a table, and these commonly represent a rollup hierarchy. A single sales fact table, for instance, might contain transaction-level data, a day-level rollup by product, and a month-level rollup by product. In such cases, the fact table will need to contain a level column indicating the hierarchy level applying to each row, and queries against the table will need to include a level predicate.
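
The level-predicate requirement can be sketched as follows, assuming a hypothetical multi-grain table with a `grain_level` column:

```sql
-- Every query must filter on the level column; without the predicate,
-- transaction rows and their rollups would be double counted.
SELECT prod_id, SUM(amount_sold)
FROM   sales_multi_grain
WHERE  grain_level = 'MONTH'
GROUP  BY prod_id;
```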

\n
\n\n
\n

Factless Fact Tables

\n

Factless fact tables do not contain measures such as sales price or quantity sold. Instead, the rows of a factless fact table are used to show events not represented by other fact tables. Another use for factless tables is as a \"coverage table\" which holds all the possible events that could have occurred in a given situation, such as all the products that were part of a sales promotion and might have been sold at the promotional price.

\n
\n\n
\n

Slowly Changing Dimensions

\n

One of the certainties of data warehousing is that the way data is categorized will change. Product names and category names will change. Characteristics of a store will change. The areas included in sales territories will change. The timing and extent of these changes will not always be predictable. How can these slowly changing dimensions be handled? Star schemas treat these in three main ways:

\n
    \n
  • \n

    Type 1

    \n

    The dimension values that change are simply overwritten, with no history kept. This creates a problem for time-based analyses. Also, it invalidates any existing aggregates that depended on the old value of the dimension.

    \n
  • \n
  • \n

    Type 2

    \n

    When a dimension value changes, a new dimension row showing the new value and having a new surrogate key is created. You may choose to include date columns in your dimension showing when the new row is valid and when it expires. No changes need be made to the fact table.

    \n
  • \n
  • \n

    Type 3

    \n

    When a dimension value is changed, the prior value is stored in a different column of the same row. This enables easy query generation if you want to compare results using the current and prior value of the column.

    \n
  • \n
\n

In practice, Type 2 is the most common treatment for slowly changing dimensions.
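
A Type 2 change can be sketched as an expire-then-insert pair. All names here, including the `customers_seq` sequence and the `valid_from`/`valid_to` effective-date columns, are hypothetical:

```sql
-- Expire the current row for the changed customer...
UPDATE customers
SET    valid_to = DATE '2014-06-30'
WHERE  cust_src_id = 'C-000417' AND valid_to IS NULL;

-- ...then insert a replacement row with a new surrogate key.
-- Existing fact rows keep pointing at the old surrogate key, which is
-- why no fact table changes are needed.
INSERT INTO customers (cust_id, cust_src_id, cust_name, country_id,
                       valid_from, valid_to)
VALUES (customers_seq.NEXTVAL, 'C-000417', 'Example Customer', 'DE',
        DATE '2014-07-01', NULL);
```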

\n
\n
\n
\n\n
\n

About Snowflake Schemas

\n
\n

The snowflake schema is a more complex data warehouse model than a star schema, and is a type of star schema. It is called a snowflake schema because the diagram of the schema resembles a snowflake.

\n

Snowflake schemas normalize dimensions to eliminate redundancy. That is, the dimension data has been grouped into multiple tables instead of one large table. For example, a product dimension table in a star schema might be normalized into a products table, a product_category table, and a product_manufacturer table in a snowflake schema. While this saves space, it increases the number of dimension tables and requires more foreign key joins. The result is more complex queries and reduced query performance. Figure 2-3 presents a graphical representation of a snowflake schema.
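
The product example can be sketched in DDL (names are illustrative): the single star-schema products table splits into three, each joined by a foreign key.

```sql
CREATE TABLE product_manufacturer (
  manufacturer_id   NUMBER PRIMARY KEY,
  manufacturer_name VARCHAR2(100)
);

CREATE TABLE product_category (
  category_id   NUMBER PRIMARY KEY,
  category_name VARCHAR2(100)
);

-- The snowflaked dimension: each extra table means one more join
-- in every query that needs category or manufacturer attributes.
CREATE TABLE products (
  prod_id         NUMBER PRIMARY KEY,
  prod_name       VARCHAR2(100),
  category_id     NUMBER REFERENCES product_category (category_id),
  manufacturer_id NUMBER REFERENCES product_manufacturer (manufacturer_id)
);
```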

\n
\n

Figure 2-3 Snowflake Schema

\nDescription of \"Figure 2-3 Snowflake Schema\"
\n
\n
\n
\n\n
\n

About the Oracle In-Memory Column Store

\n
\n
\n

Note:

\n

This feature is available starting with Oracle Database 12c Release 1 (12.1.0.2).

\n
\n

Traditional analytics has certain limitations or requirements that need to be managed to obtain good performance for analytic queries. You need to know user access patterns and then customize your data structures to provide optimal performance for these access patterns. Existing indexes, materialized views, and OLAP cubes need to be tuned. Certain data marts and reporting databases have complex ETL and thus need specialized tuning. Additionally, you need to strike a balance between performing analytics on stale data and slowing down OLTP operations on the production databases.

\n

The Oracle In-Memory Column Store (IM column store) within the Oracle Database provides improved performance for both ad-hoc queries and analytics on live data. The live transactional database is used to provide instant answers to queries, thus enabling you to seamlessly use the same database for OLTP transactions and data warehouse analytics.

\n

The IM column store is an optional area in the SGA that stores copies of tables, table partitions, and individual columns in a compressed columnar format that is optimized for rapid scans. Columnar format lends itself easily to vector processing, making aggregations, joins, and certain types of data retrieval faster than the traditional on-disk formats. The columnar format exists only in memory and does not replace the on-disk or buffer cache format. Instead, it supplements the buffer cache and provides an additional, transaction-consistent, copy of the table that is independent of the disk format.

\n
\n

See Also:

\n\n
\n
\n

Configuring the Oracle In-Memory Column Store

\n
\n\n
\n

Configuring the IM column store is simple. You set the INMEMORY_SIZE initialization parameter, and then use DDL to specify the tablespaces, tables, partitions, or columns to be populated into the IM column store.

\n
\n

See Also:

\n

Oracle Database Administrator's Guide for information about configuring the IM column store

\n
\n
\n\n
\n

Populating the Oracle In-Memory Column Store

\n
\n\n
\n

You can specify that the database populates data into the IM column store from row storage either at database instance startup or when the data is accessed for the first time.

\n
\n

See Also:

\n

Oracle Database Concepts for detailed information about how the IM column store is populated

\n
\n
\n\n
\n

In-Memory Columnar Compression

\n
\n\n
\n

To store data in the IM column store, the Oracle Database uses special compression formats that are optimized for access speed rather than storage reduction. You can select different compression options for each table, partition, or column.

\n\n
\n
\n\n
\n

Benefits of Using the Oracle In-Memory Column Store

\n
\n

The IM column store enables the Oracle Database to perform scans, joins, and aggregates much faster than when it uses the on-disk format exclusively. Business applications, ad-hoc analytic queries, and data warehouse workloads benefit most. Pure OLTP databases that perform short transactions using index lookups benefit less.

\n

The IM column store seamlessly integrates with the Oracle Database. All existing database features, including High Availability features, are supported with no application changes required. Therefore, by configuring the IM column store, you can instantly improve the performance of existing analytic workloads and ad-hoc queries.

\n

The Oracle Optimizer is aware of the IM column store, making it possible for the Oracle Database to seamlessly send analytic queries to the IM column store while OLTP queries and DML are sent to the row store.

\n

The advantages offered by the IM column store for data warehousing environments are:

\n
    \n
  • \n

    Faster scanning of large numbers of rows and application of filters that use operators such as =, <, >, and IN.

    \n
  • \n
  • \n

    Faster querying of a subset of columns in a table, for example, selecting 5 of 100 columns. See \"Faster Performance for Analytic Queries\".

    \n
  • \n
  • \n

    Enhanced performance for joins by converting predicates on small dimension tables to filters on a large fact table. See \"Enhanced Join Performance Using Vector Joins\".

    \n
  • \n
  • \n

    Efficient aggregation by using VECTOR GROUP BY transformation and vector array processing. See \"Enhanced Aggregation Using VECTOR GROUP BY Transformations\".

    \n
  • \n
  • \n

    Reduced storage space and significantly less processing overhead because fewer indexes, materialized views, and OLAP cubes are required when IM column store is used.

    \n
  • \n
\n
\n

See Also:

\n

Oracle Database Concepts for information about the other advantages of using IM column store

\n
\n
\n

Faster Performance for Analytic Queries

\n

Storing data in memory using columnar format provides fast throughput for analyzing large amounts of data. This is useful for ad-hoc queries with unanticipated access patterns. Columnar format uses fixed-width columns for most numeric and short string data types. This enables very fast vector processing that answers queries faster. Only the columns necessary for the specific data analysis task are scanned instead of entire rows of data.

\n

Data can be analyzed in real time, enabling you to explore different possibilities and iterate. With the IM column store, fewer OLAP cubes need to be created to obtain query results.

\n

For example, suppose you need to find the number of sales in the state of California this year. This data is stored in the SALES table. When this table is stored in the IM column store, the database needs to scan only the state column and count the occurrences of California.
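
The California example might look like the query below. The `state` column name and the date filter are assumptions for illustration; the point is that the columnar format touches only the columns the query references.

```sql
-- Only the state and time_id columns are scanned in the IM column
-- store; the rest of each SALES row is never read.
SELECT COUNT(*)
FROM   sales
WHERE  state = 'California'
AND    time_id >= DATE '2014-01-01';
```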

\n
\n\n
\n

Enhanced Join Performance Using Vector Joins

\n

IM column store takes advantage of vector joins. Vector joins speed up joins by converting predicates on small dimension tables to filters on large fact tables. This is useful when performing a join of multiple dimensions with one large fact table. The dimension keys on fact tables have many repeated values, and the combination of scan performance and repeat-value optimization speeds up joins.

\n\n
\n\n
\n

Enhanced Aggregation Using VECTOR GROUP BY Transformations

\n

An important aspect of analytics is to determine patterns and trends by aggregating data. Aggregations and complex SQL queries run faster when data is stored in the IM column store.

\n

VECTOR GROUP BY transformations enable efficient in-memory array-based aggregation. During a fact table scan, aggregate values are accumulated into in-memory arrays and efficient algorithms are used to perform aggregation. Joins based on primary key and foreign key relationships are optimized for both star schemas and snowflake schemas.

\n
\n

See Also:

\n

\"Using In-Memory Aggregation\"

\n
\n
\n
\n
\n\n
\n

Using the Oracle In-Memory Column Store

\n
\n
\n

You can store data using columnar format in the IM column store for an existing database or for a new database that you plan to create. IM column store is simple to configure and does not impact existing applications. Depending on the requirement, you can configure one or more tablespaces, tables, materialized views, or partitions to be stored in memory.

\n
\n

See Also:

\n\n
\n
\n\n
\n

To store data in the IM column store:

\n
\n\n
    \n
  1. Configure the INMEMORY_SIZE initialization parameter to specify the amount of memory that must be assigned to the IM column store.\n
    \n
    INMEMORY_SIZE = 100 GB\n
    \n
    \n

    See Also:

    \n

    Oracle Database Reference for more information about the INMEMORY_SIZE parameter

    \n
    \n
    \n
  2. \n
  3. Specify the database objects that must be stored in memory. Objects can include tablespaces, tables, materialized views, or partitions. Any queries on these objects will run faster than when the objects are stored on disk.\n
    \n

    For existing tablespaces, tables, or table partitions, use the ALTER command to store them in memory.

    \n
    ALTER TABLESPACE tbs1 INMEMORY;\nALTER TABLE my_table MODIFY PARTITION p1 INMEMORY;\n
    \n

    While creating new tablespaces or tables, use the INMEMORY clause to specify that these objects must be stored in memory.

    \n
    CREATE TABLE my_table (id NUMBER, tname VARCHAR2(45)) INMEMORY;\n
    \n
    \n

    See Also:

    \n

    Oracle Database Administrator's Guide for information about enabling objects to be stored in memory

    \n
    \n
    \n
  4. \n
  5. Optionally, drop the indexes that were created solely to speed up analytic queries; scans against the IM column store take their place. OLTP operations can then run faster because fewer indexes must be maintained during DML.\n\n
  6. \n
\n
\n
\n
\n

Using Vector Joins to Enhance Join Performance

\n
\n
\n

Joins are an integral part of data warehousing workloads. IM column store enhances the performance of joins when the tables being joined are stored in memory. Simple joins that use bloom filters and complex joins between multiple tables benefit by using the IM column store. In a data warehousing environment, the most frequently-used joins are ones in which one or more dimension tables are joined with a fact table.

\n

The following types of joins run faster when the tables being joined are stored in the IM column store:

\n
\n\n
\n
    \n
  • \n

    Joins that are amenable to using bloom filters

    \n
  • \n
  • \n

    Joins of multiple small dimension tables with one fact table

    \n
  • \n
  • \n

    Joins between two tables that have a PK-FK relationship

    \n
  • \n
\n

The IM column store runs queries that contain joins more efficiently and quickly by using vector joins. Vector joins allow the Oracle Database to take advantage of the fast scanning and vector processing capability of the IM column store. A vector join transforms a join between a dimension table and a fact table into a filter that can be applied as part of the scan of the fact table. This join conversion is performed with the use of bloom filters, which enhance hash join performance in the Oracle Database. Although bloom filters are independent of IM column store, they can be applied very efficiently to data stored in memory through SIMD vector processing.

\n

Consider the following query that performs a join of the CUSTOMERS dimension table with the SALES fact table:

\n
SELECT c.customer_id, s.quantity_sold, s.amount_sold \nFROM CUSTOMERS c, SALES s\nWHERE c.customer_id = s.customer_id AND c.country_id = 'FR';\n
\n

When both these tables are stored in the IM column store, SIMD vector processing is used to quickly scan the data and apply filters. Figure 2-4 displays a graphical representation of how vector joins are used to implement the query. The predicate on the CUSTOMERS table, c.country_id='FR', is converted into a filter on the SALES fact table. The filter is country_id='FR'. Because the SALES table is stored in memory using columnar format, just one column needs to be scanned to determine the result of this query.

\n
\n

Figure 2-4 Vector Joins Using Oracle In-Memory Column Store

\nDescription of \"Figure 2-4 Vector Joins Using Oracle In-Memory Column Store\"
\n
\n
\n
\n
\n\n
\n

Automatic Big Table Caching to Improve the Performance of In-Memory Parallel Queries

\n
\n

Automatic big table caching enhances the in-memory query capabilities of Oracle Database. When a table does not fit in memory, the database decides which buffers to cache based on access patterns. This provides efficient caching for large tables, even if they do not fully fit in the buffer cache.

\n

An optional section of the buffer cache, called the big table cache, is used to store data for table scans. The big table cache is integrated with the buffer cache and uses a temperature-based, object-level replacement algorithm to manage the big table cache contents. This is different from the access-based, block-level LRU algorithm used by the buffer cache.

\n
\n

Note:

\n

The automatic big table caching feature is available starting with Oracle Database 12c Release 1 (12.1.0.2).

\n
\n

Typical data warehousing workloads scan multiple tables. Performance may be impacted if the combined size of these tables is greater than the combined size of the buffer cache. With automatic big table caching, the scanned tables are stored in the big table cache instead of the buffer cache. The temperature-based, object-level replacement algorithm used by the big table cache can provide enhanced performance for data warehousing workloads by:

\n
    \n
  • \n

    Selectively caching the \"hot\" objects

    \n

    Each time an object is accessed, Oracle Database increments the temperature of that object. An object in the big table cache can be replaced only by another object whose temperature is higher than its own temperature.

    \n
  • \n
  • \n

    Avoiding thrashing

    \n

    Partial objects are cached when objects cannot be fully cached.

    \n
  • \n
\n

In Oracle Real Application Clusters (Oracle RAC) environments, automatic big table caching is supported only for parallel queries. In single instance environments, this functionality is supported for both serial and parallel queries.

\n

To use automatic big table caching, you must enable the big table cache. To use automatic big table caching for serial queries, you must set the DB_BIG_TABLE_CACHE_PERCENT_TARGET initialization parameter to a nonzero value. To use automatic big table caching for parallel queries, you must set PARALLEL_DEGREE_POLICY to AUTO or ADAPTIVE and DB_BIG_TABLE_CACHE_PERCENT_TARGET to a nonzero value.
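
As a sketch, enabling the big table cache for both serial and parallel queries comes down to two parameter settings (the 40 percent value is an arbitrary example, not a recommendation):

```sql
-- Reserve a target of 40% of the buffer cache for the big table cache.
ALTER SYSTEM SET DB_BIG_TABLE_CACHE_PERCENT_TARGET = 40;

-- Required for parallel queries to use automatic big table caching.
ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = AUTO;
```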

\n
\n

See Also:

\n

Oracle Database VLDB and Partitioning Guide for more information about the big table cache and how it can be used

\n
\n
\n
\n\n
\n

About In-Memory Aggregation

\n
\n

In-memory aggregation uses the VECTOR GROUP BY operation to enhance the performance of queries that aggregate data and join one or more relatively small tables to a larger table, as often occurs in a star query. VECTOR GROUP BY will be chosen by the SQL optimizer based on cost estimates. This will occur more often when the query selects from in-memory columnar tables and the tables include unique or numeric join keys (regardless of whether the uniqueness is forced by a primary key, unique constraint or schema design).

\n
\n

See Also:

\n

Oracle Database SQL Tuning Guide for details about how In-memory aggregation works

\n
\n

VECTOR GROUP BY aggregation will only be chosen for GROUP BY. It will not be chosen for GROUP BY ROLLUP, GROUPING SETS, or CUBE.

\n
\n

Note:

\n

This feature is available starting with Oracle Database 12c Release 1 (12.1.0.2).

\n
\n

The following topics provide additional information about in-memory aggregation:

\n\n
\n\n
\n

VECTOR GROUP BY Aggregation and the Oracle In-Memory Column Store

\n
\n

Although using the IM column store is not a requirement for using VECTOR GROUP BY aggregation, it is strongly recommended that you use both features together. Storing tables in memory using columnar format provides the foundation storage that VECTOR GROUP BY aggregation leverages to provide transactionally consistent results immediately after a schema is updated without the need to wait until the data marts are populated.

\n
\n
\n\n
\n

When to Use VECTOR GROUP BY Aggregation

\n
\n

Not all queries and scenarios benefit from the use of VECTOR GROUP BY aggregation. The following sections provide guidelines about the situations in which using this aggregation can be beneficial.

\n
\n

Situations Where VECTOR GROUP BY Aggregation Is Useful

\n

VECTOR GROUP BY aggregation provides benefits in the following scenarios:

\n
    \n
  • \n

    The schema contains \"mostly\" unique keys or numeric keys for the columns that are used to join the fact and dimensions. The uniqueness can be enforced using a primary key, unique constraint or by schema design.

    \n
  • \n
  • \n

    The fact table is at least 10 times larger than the dimensions.

    \n
  • \n
  • \n

    The IM column store is used to store the dimensions and fact table in memory.

    \n
  • \n
\n
\n\n
\n

Situations Where VECTOR GROUP BY Aggregation Is Not Advantageous

\n

Using VECTOR GROUP BY aggregation does not provide substantial performance benefits in the following scenarios:

\n
    \n
  • \n

    Joins are performed between two very large tables

    \n

    By default, the VECTOR GROUP BY transformation is used only if the fact table is at least 10 times larger than the dimensions.

    \n
  • \n
  • \n

    Dimensions contain more than 2 billion rows

    \n

    The VECTOR GROUP BY transformation is not used if a dimension contains more than 2 billion rows.

    \n
  • \n
  • \n

    The system does not have sufficient memory resources

    \n

    Most systems that use the IM column store will be able to benefit from using the VECTOR GROUP BY transformation.

    \n
  • \n
\n
\n
\n
\n\n
\n

When Is VECTOR GROUP BY Aggregation Used to Process Analytic Queries?

\n
\n

VECTOR GROUP BY aggregation is integrated with the Oracle Optimizer and no new SQL or initialization parameters are required to enable the use of this transformation. It also does not need additional indexes, foreign keys, or dimensions.

\n

By default, Oracle Database decides whether or not to use VECTOR GROUP BY aggregation for a query based on the cost, relative to other execution plans that are determined for this query. However, you can direct the database to use VECTOR GROUP BY aggregation for a query by using query block hints or table hints.
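
A hinted query might look like the sketch below; the table and column names are illustrative assumptions, and the hint simply requests the transformation rather than guaranteeing it.

```sql
-- Request the VECTOR GROUP BY transformation for this query block.
SELECT /*+ VECTOR_TRANSFORM */
       t.calendar_year, p.prod_category, SUM(s.amount_sold)
FROM   sales s, times t, products p
WHERE  s.time_id = t.time_id
AND    s.prod_id = p.prod_id
GROUP  BY t.calendar_year, p.prod_category;
```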

\n

VECTOR GROUP BY aggregation can be used to process a query that uses a fact view that is derived from multiple fact tables.

\n

Oracle Database uses VECTOR GROUP BY aggregation to perform data aggregation when the following conditions are met:

\n
    \n
  • \n

    The queries or subqueries aggregate data from a fact table and join the fact table to one or more dimensions.

    \n

    Multiple fact tables joined to the same dimensions are also supported, assuming that these fact tables are connected only through joins to the dimensions. In this case, VECTOR GROUP BY aggregates each fact table separately and then joins the results on the grouping keys.

    \n
  • \n
  • \n

    The dimensions and fact table are connected to each other only through join columns.

    \n

    Specifically, the query must not have any other predicates that refer to columns across multiple dimensions or from both a dimension and the fact table. If a query performs a join between two or more tables and then joins the result to the fact, then VECTOR GROUP BY aggregation treats the multiple dimensions as a single dimension.

    \n
  • \n
\n

The best performance for VECTOR GROUP BY aggregation is obtained when the tables being joined are stored in the IM column store.

\n

VECTOR GROUP BY aggregation does not support the following:

\n
    \n
  • \n

    Semi- and anti-joins across multiple dimensions or between a dimension and the fact table

    \n
  • \n
  • \n

    Equi-joins across multiple dimensions.

    \n
  • \n
  • \n

    Aggregations performed using DISTINCT

    \n
  • \n
  • \n

    Bloom filters

    \n

    VECTOR GROUP BY aggregation and bloom filters are mutually exclusive.

    \n

    If bloom filters are used to perform joins for a query, then VECTOR GROUP BY aggregation is not applicable to the processing of this query.

    \n
  • \n
\n\n
\n
\n
\n
\n\n
\n\n
\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19528-01/819-0997/write_transformations/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00656-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Write Transformations (Sun Java System Directory Server Enterprise Edition 6.0 Reference)

Sun Java System Directory Server Enterprise Edition 6.0 Reference

Write Transformations

\n

\nA write transformation is applied during the request, but not\nduring the response. A write transformation changes the physical\ndata in storage.

\n

The following diagram illustrates the principles of a write transformation.

\n
Figure 23–3 Write Transformation


A write transformation is defined on a data view, by using the dpconf command as follows:

\n
$ dpconf add-virtual-transformation -h host -p port -D bindDN /\nview-name write action attr-name [parameters]
\n
Example 23–2 When Would You Use a Write Transformation

Imagine an organization has a legacy application whose function is to\nadd person entries to a data source. The application adds the entries without\nthe telephoneNumber attribute. The physical data source\nhas been upgraded and the telephoneNumber is now a mandatory\nattribute for person entries. The transformation required here is to add the telephoneNumber attribute during the add request. This transformation\nchanges the entry that is written to the database. No reverse transformation\nis required.

" + }, + { + "url": "https://docs.oracle.com/cd/E17904_01/bi.1111/b32122/orbr_condhigh002.htm", + "domain": "docs.oracle.com", + "file_source": "part-00089-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n22.2 Create a Basic Tabular Report\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
\n
\n

22.2 Create a Basic Tabular Report

\n

The steps in this section will show you how to use the Report Wizard to build a simple tabular report.

\n

To create a tabular report:

\n
  1. Launch Reports Builder (or, if already open, choose File > New > Report).

  2. In the Welcome or New Report dialog box, select Use the Report Wizard, then click OK.

  3. If the Welcome page displays, click Next.

  4. On the Report Type page, select Create Paper Layout Only, then click Next.

  5. On the Style page, type a Title for your report, select Tabular, then click Next.

  6. On the Data Source page, click SQL Query, then click Next.

  7. On the Data page, enter the following SELECT statement in the Data Source definition field:

     SELECT ALL EMPLOYEES.FIRST_NAME, EMPLOYEES.LAST_NAME,
     EMPLOYEES.EMPLOYEE_ID, EMPLOYEES.SALARY
     FROM HR.EMPLOYEES

     Note: You can enter this query in any of the following ways:

     • Copy and paste the code from the provided text file called condhigh_code.txt into the Data Source definition field.
     • Click Query Builder to build the query without entering any code manually.
     • Type the code in the Data Source definition field.

  8. Click Next.

     Note: If you are not already connected to a database, you will be prompted to connect to the database when you click Query Builder or Next. Ensure that you connect to a database that has the appropriate schema for this example. Section 22.1, \"Prerequisites for This Example\" describes the sample schema requirements for this example.

  9. On the Fields page, click the right arrow (>) to move the following fields to the Displayed Fields list, then click Next. Make sure you move them in the following order:

     • EMPLOYEE_ID
     • FIRST_NAME
     • LAST_NAME
     • SALARY

  10. On the Totals page, click Next.

  11. On the Labels page, click Next.

  12. On the Template page, select Predefined Template and click Beige, then click Finish to preview your report output in the Paper Design view. It should look something like this:

      Figure 22-2 Paper Design view for the initial tabular report

Now, let us format the data to make it more meaningful.

\n

To format the report:

\n
  1. In the Paper Design view, select the Salary column by clicking once on the column of data.

  2. Click the Currency button in the toolbar. A currency symbol immediately appears next to all of the values.

  3. Click the Commas button.

  4. Click the Add Decimal button twice. Two decimal places are added to the right of the decimal point.

     The Salary column of your report should now look like this:

     Figure 22-3 Formatted salary column

  5. Change the alignment of your columns by doing the following:

     a. Click the Salary column heading, then click the Align Center button in the toolbar.

     b. Click the Salary column once, then click the Align Right button in the toolbar.

     c. While the Salary column is selected, Shift-click to select the column heading too, then click and drag one of the right black squares to size the column and column heading.

     d. Click the Employee Id column heading, Shift-click to select the Employee Id column too, then click the Align Center button in the toolbar.

  6. The Paper Design view of your report should now look like this:

     Figure 22-4 Paper Design view of the tabular report

  7. Save your report as condhigh_your_initials.rdf.
\n
\n\n\n
\n\n
\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E21764_01/admin.1111/e12321/isvacontexttemplates.htm", + "domain": "docs.oracle.com", + "file_source": "part-00662-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n4.1 About Context Templates\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n
\n
\n

4.1 About Context Templates

\n

Context templates are created so that contexts can be derived from them. Only domain administrators can create context templates, although both domain administrators and domain managers can create contexts based on them.

\n

Two context templates are supplied at installation: \"Export\" and \"Standard\". These can be copied to create new context templates, and then modified as required.

\n

A context template provides the features and functionality of the contexts that are derived from it. When changes are made to a context template, the changes are applied to the contexts that were created from the template.

\n

Context templates have a set of roles assigned to them. The roles are used in contexts to create rights for users and groups.

\n
\n

Caution:

\nIt is strongly recommended that you do not delete context templates, and that you do not modify them other than to add new roles. In particular, do not remove roles from context templates. If a context template becomes unsuitable, it is best to deactivate it and create a new one to use in its place.
\n

You cannot delete a context template if any contexts derived from it are in use.

\n

You cannot remove a role from a context template if the role is currently assigned to a user within a context created from that context template.

\n

See also Section 3, \"Working with Roles\", Section 5, \"Working with Contexts\", and Section 6, \"Working with Rights\".

\n
\n
\n\n\n
\n\n
\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/raise_statement.htm", + "domain": "docs.oracle.com", + "file_source": "part-00656-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\nRAISE Statement\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGo to main content\n
\n
\n
\n
\n

RAISE Statement

\n

The RAISE statement stops normal execution of a PL/SQL block or subprogram and transfers control to an exception handler.

\n

RAISE statements can raise predefined exceptions, such as ZERO_DIVIDE or NO_DATA_FOUND, or user-defined exceptions whose names you decide.

\n

Syntax

\n

raise_statement ::= RAISE [ exception_name ] ;
\n
\n

Keyword and Parameter Descriptions

\n

exception_name

\n

A predefined or user-defined exception. For a list of the predefined exceptions, see Predefined PL/SQL Exceptions.

\n

Usage Notes

\n

Raise an exception in a PL/SQL block or subprogram only when an error makes it impractical to continue processing. You can code a RAISE statement for a given exception anywhere within the scope of that exception.

\n

When an exception is raised, if PL/SQL cannot find a handler for it in the current block, the exception propagates to successive enclosing blocks, until a handler is found or there are no more blocks to search. If no handler is found, PL/SQL returns an unhandled exception error to the host environment.

\n

In an exception handler, you can omit the exception name in a RAISE statement, which raises the current exception again. This technique enables you to take some initial corrective action (perhaps just logging the problem), then pass control to another handler that does more extensive correction. When an exception is reraised, the first block searched is the enclosing block, not the current block.

\n

Examples

\n\n
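The original example from this page was not captured in this extract. The following minimal PL/SQL sketch (names are illustrative, not the manual's example) shows raising a user-defined exception and then reraising it from a handler:

```sql
DECLARE
  out_of_stock  EXCEPTION;   -- user-defined exception
  quantity      NUMBER := 0;
BEGIN
  IF quantity < 1 THEN
    RAISE out_of_stock;      -- stop normal execution of this block
  END IF;
EXCEPTION
  WHEN out_of_stock THEN
    DBMS_OUTPUT.PUT_LINE('Out of stock; passing the error along.');
    RAISE;                   -- reraise; the enclosing block is searched next
END;
/
```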

Related Topics

\n\n
\n
\n\n\n
\n\n
\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cloud/latest/related-docs/OSNUG/oa_coll.htm", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n\n\n\n\n\nUsing Collections in the Add-In for Outlook\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGo to main content
\n
\n

Using Collections in the Add-In for Outlook

\n\n\n
\n

What are collections?

\n
\n

As your lists of Conversations, Social Objects, people, and groups grow, it might be useful to organize those that focus on a common topic, project, or issue into an easy-access list, called a collection. Use collections to create your own lists of related Conversations, collections, profile and group walls, and Social Objects.

\n

You can give the collection a meaningful name so it's easy to identify its purpose. You can add Conversations and walls (including people's walls and group walls) to a collection. You can also add Social Objects and other collections to a collection to create a hierarchy. There is no limit to the levels of hierarchy you can create.

\n

In the add-in for Outlook navigation bar, click the Collections icon to open the list of collections.

\n
\"Collections
\n\n

For more information, see How do I create a collection?

\n
\n
\n\n
\n

How do I create a collection?

\n
\n
  1. On the navigation bar, click Collections.
  2. Click Create to open the New Collection dialog.
  3. In the Name field, enter a name for the collection.
  4. Select check boxes to enable collection options:
     • Make publicly available—Make this a public collection that anyone with an Oracle Social Network account can join.
     • Add to favorites—Add this collection to your Favorites list.
  5. Click OK.
\n
\n
\n\n
\n

How do I add members to a collection?

\n
\n
  1. On the navigation bar, click Collections.
  2. Double-click a listed collection to open it.
  3. Click Members to open the collection's list of members.
     Tip: To see how many members a collection has, mouse over the Members tab.
  4. Click Add to open the Select Members dialog.
  5. In the Select Members dialog, select groups and people you want to add as members. Use Ctrl+Click to select more than one person or group.
     Double-click a name to add it to the Select list at the bottom of the dialog, or select names, then click Select to add all selections.
     By default, you see a list of contacts. Click a tab to change your view:
     • Contacts—View a list of your contacts.
     • Groups—View a list of your groups.
     • Current Members—View a list of current members.
     Alternatively, you can enter part of a person's or group's name, and select them from the list that pops up.
  6. Click OK to add your selection to the collection's membership.
\n
\n
\n\n
\n

How do I remove members from a collection?

\n
\n
  1. On the navigation bar, click Collections.
  2. Double-click a listed collection to open it.
  3. Click Members to open the collection's list of members.
  4. Right-click the person or group, and select Remove from Membership.
  5. In the confirmation dialog, click Yes.
\n
\n
\n\n
\n

How do I add an existing item to a collection?

\n
\n
  1. On the navigation bar, click Collections.
  2. Double-click a listed collection to open it.
  3. Click Related to show the items in this collection.
  4. Click Add.
  5. In the Browse Items dialog, select one or more items to add to the collection, then click OK. Use Ctrl+Click to select more than one item.
     By default, you see a list of recent items. Click a tab to change your view:
     • Recent Items—Select from items you have visited recently.
     • Favorites—Select from items you have marked as a favorite.
     Alternatively, you can enter part of an item's name, and select it from the list that pops up. When you select from the search's pop-up results, the dialog closes and your selection is added to the collection.
\n
\n
\n\n
\n

How do I add a new item to a collection?

\n
\n
  1. On the navigation bar, click Collections.
  2. Double-click a listed collection to open it.
  3. Click Related to show the related items in this collection.
  4. Click Add, and select:
     • Add New Conversation—To create a new Conversation and add it to the collection.
     • New Collection—To create a new collection and add it to the current collection.
  5. Enter a name for the item in the Name field.
  6. Select the options you want to apply to the new item:
     • Make publicly available—Make this a publicly available item that anyone with an Oracle Social Network account can join.
     • Add to favorites—Add this item to your favorites list.
  7. Click OK to add the new item to the collection.
\n
\n
\n\n
\n

How do I remove an item from a collection?

\n
\n
\n

When you remove an item from a collection, there's no confirmation dialog—the item is simply removed. If you decide removal was a mistake, you can always add it back (see How do I add an existing item to a collection?).

\n
\n\n
  1. On the navigation bar, click Collections.
  2. Double-click a listed collection to open it.
  3. Click Related to show the related items in this collection.
  4. Right-click the item, and select Remove Item from the Collection.
\n
\n
\n\n
\n

How can I rearrange a list of collections?

\n
\n
\n

You can select the type of collections you want to appear on the collections list (Yours, Available, Favorites, Muted), then arrange that view so that it appears in the order you find the most useful.

\n
\n\n
  1. On the navigation bar, click Collections.
  2. On the Collections banner, click Options, then select the type of collection you want to see.
  3. Click Arrange By, then select your preferred sort order.
\n
\n

Tip: In a collection, you can choose to see different types of content in the panel. Use the tabs at the top of a collection to navigate to each type:

\n
  • Posts—View all messages and documents added to the collection's wall.
  • Documents—View just the documents added to the collection's wall.
  • Members—View and manage the collection's membership.
  • Related Items—View a list of the Conversations, profile and group walls, Social Objects, and other collections in this collection.
  • Referring Items—View the Social Objects that include this collection in their lists of related items.
\n
\n
\n\n
\n

How do I hide membership messages in a collection?

\n
\n

When a user is added to or removed from an item, a membership message is posted to the item (for example, This wall is now accessible to: Jane Doe). If you'd prefer not to show those messages, you can hide them. See How do I hide membership messages?

\n
\n
\n\n
\n

How do I rename a collection?

\n
\n
  1. On the navigation bar, click Collections.
  2. Right-click a listed collection, and select Modify.
  3. In the Modify dialog, rename the collection and click OK.
\n
\n
\n
\n\n
\n\n
\n\n\n\n" + }, + { + "url": "https://docs.oracle.com/cd/E19636-01/819-4770/6n6ta7669/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00131-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Preparing to Install Agent for Apache HTTP Server on Solaris Systems (Sun Java System Access Manager Policy Agent 2.2 Guide for Apache HTTP Server 2.0.54)

Sun Java System Access Manager Policy Agent 2.2 Guide for Apache HTTP Server 2.0.54

Preparing to Install Agent for Apache HTTP Server on Solaris Systems

\n
\"CautionCaution –

Do not use the version of Apache HTTP Server that comes bundled with the Solaris 9 Operating System or with the Solaris 10 Operating System. The bundled Apache HTTP Server package is incomplete. Any attempt to install Agent for Apache HTTP Server on a bundled version of Apache HTTP Server is likely to fail.

\n

Therefore, download the desired version of Apache HTTP Server from\nthe Apache web site at http://www.apache.org/ before\nattempting to install the agent.

\n

Follow the specific steps outlined in this section before you install\nthe web agent to reduce the chance of complications occurring during and after\nthe installation.

\n

\n\"Procedure\"To Prepare to Install Policy Agent 2.2 for Apache HTTP Server on\nSolaris Systems

\n
Note –

You must have Java Runtime Environment (JRE)\n1.3.1 or higher installed or available on a shared file system in order to\nrun the graphical user interface (GUI) of the web agent installation program.\nCurrently, JRE 1.3.1 or any version higher is certified for use with the web\nagent installation program.

\n
\n

Perform the following pre-installation tasks:

\n\n
  1. Ensure that Policy Agent 2.2 for Apache HTTP Server is supported on the desired platform as listed in Supported Platforms and Compatibility of Agent for Apache HTTP Server.

  2. Install Apache HTTP Server if not already installed.

     Refer to the Apache HTTP Server documentation for details on how best to install and configure this server for your platform.

  3. Ensure that Apache HTTP Server has the latest patches available.

  4. Set your JAVAHOME environment variable to a JDK version 1.3.1_04 or higher.

     The installation requires that your JAVAHOME variable be set correctly. If you have set it incorrectly, the setup script prompts you for the correct value:

     Please enter JAVAHOME path to pick up java:
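For example, in a Bourne-compatible shell the variable might be set as follows (the JDK path is a placeholder for your actual installation directory):

```shell
# The path below is a placeholder -- substitute your actual
# JDK 1.3.1_04 (or higher) installation directory.
JAVAHOME=/usr/java1.3.1
export JAVAHOME
echo "JAVAHOME is set to $JAVAHOME"
```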
\n
    \n \n
\n
\n
" + }, + { + "url": "https://docs.oracle.com/en/cloud/paas/integration-cloud/user-processes/img_text/event-snippet.html", + "domain": "docs.oracle.com", + "file_source": "part-00310-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\n\n \n \n \n \n Description of illustration event-snippet.png\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n
\n

This image shows an event definition window after the Extract Snippet button is clicked. It displays a name field on top, with the text snippetName entered in it. There are OK, Cancel, Undo, and Redo buttons next to the name field. Below the name field, there are a couple of event actions defined. Next to each action block,\n there is a toggle button, which you can turn off to exclude an action block from your extracted snippet.\n

\n
\n \n
\n \n \n" + }, + { + "url": "https://docs.oracle.com/cd/B31104_02/books/LSPortalSIS/LSPortalSISConsPort8.html", + "domain": "docs.oracle.com", + "file_source": "part-00273-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\tBookshelf v7.8: Making Solutions and FAQs Available\r\n\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\r\n\t\t\t\t\r\n\t\r\n
\r\n\r\n\r\n\r\n
\r\n

Siebel Life Sciences Portals Guide > Consumer Portal >

\r\n

Making Solutions and FAQs Available

\r\n
\r\n

Solutions are answers to frequently asked questions (FAQs). Solutions are documented by service agents and call center administrators and are stored in the database. Agents and customers can frequently resolve service issues by searching this knowledge base of solutions.

You can associate a solution with related solutions and resolution documents through the Solutions Administration screen.

A solution can be published internally or externally. Solutions published internally are accessible only by your employees. Solutions published externally are available to Consumer Portal users.

\r\n
\r\n\r\n\r\n\t\r\n\r\n\t\t\r\n\t\t\t\t\r\n\t\r\n
   
\r\n\r\n\r\n\t\r\n\t\t\r\n\t\r\n\t\r\n\t\t\r\n\t\r\n
Siebel Life Sciences Portals Guide
 
\r\n\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/F21615_01/oroms/pdf/190/cws_help/ch1013005.htm", + "domain": "docs.oracle.com", + "file_source": "part-00331-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\r\nBackorder Report by Vendor\r\n\r\n
\r\n\r\n\r\n\r\n\r\n\r\n

Backorder Report by Vendor

\r\n

How to print: Select the By\r\nvendor field on the Print Backorder\r\nReport Selection Prompt Screen and select Accept.\r\nThe system submits the program BORVND.

\r\n

Important: A\r\nbackordered item is included on this report only if a primary vendor\r\nhas been defined for the item.

\r\n

For more information: See\r\nthe sample\r\nreport in PDF format.

\r\n

Contents:

\r\n

• Item number
• Item description
• Vendor number
• Vendor description
• Quantity on backorder (cumulative total of all backordered SKUs for the item)
• Quantity on purchase order (cumulative total of all SKUs being ordered from your suppliers)
• Quantity on hand (cumulative total of all SKUs currently available across all warehouses)
• Quantity on reserve (cumulative total of all SKUs already reserved to existing orders)
• Subtotals for each vendor
• Final totals for all vendors

\r\n\r\n

\r\n\r\n\r\n\r\n

CS06_01r OMSCS 19.0 December 2019 OHC \r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E19082-01/819-6990/gfxdh/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\nNICs That Are Missing at Boot (System Administration Guide: Network Interfaces and Network Virtualization)


System Administration Guide: Network Interfaces and Network Virtualization

NICs That Are Missing at Boot

\n

NICs that are not present at system boot represent a special instance\nof failure detection. At boot time, the startup scripts track any interfaces\nwith /etc/hostname.interface files.\n Any data addresses in such an interface's /etc/hostname.interface file are automatically configured on the corresponding\nIPMP interface for the group. However, if the interfaces themselves cannot\nbe plumbed because they are missing, then error messages similar to the following\nare displayed:

\n\n\n\n\n
\n
\n
moving addresses from missing IPv4 interfaces: hme0 (moved to ipmp0)\nmoving addresses from missing IPv6 interfaces: hme0 (moved to ipmp0)
\n

Note –

In this instance of failure detection, only data addresses that\nare explicitly specified in the missing interface's /etc/hostname.interface file are moved to the IPMP interface.

\n
\n

If an interface with the same name as another interface that was missing\nat system boot is reattached using DR, the Reconfiguration Coordination Manager\n(RCM) automatically plumbs the interface. Then, RCM configures the interface\naccording to the contents of the interface's /etc/hostname.interface file. However, data addresses, which are addresses\nwithout the NOFAILOVER flag, that are in the /etc/hostname.interface file are ignored. This mechanism adheres\nto the rule that data addresses should be in the /etc/hostname.ipmp-interface file, and only test addresses should\nbe in the underlying interface's /etc/hostname.interface file.\nIssuing the ifconfig group command causes that interface\nto again become part of the group. Thus, the final network configuration is\nidentical to the configuration that would have been made if the system had\nbeen booted with the interface present.

\n

For more information about missing interfaces, see About Missing Interfaces at System Boot.

\n
\n
    \n \n
\n
\n
\n" + }, + { + "url": "https://docs.oracle.com/cd/B40099_02/books/Hospitality/Hospitality_Diary7.html", + "domain": "docs.oracle.com", + "file_source": "part-00310-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "\r\n\r\n\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\t\r\n\r\n\t\r\n\r\n\tBookshelf v8.0: Viewing Function and Booking Details in the Function Space Diary\r\n\t\r\n\t\r\n\t\r\n\r\n\r\n\r\n\r\n\r\n\t\r\n\t\t\r\n\t\t\r\n\t\t\r\n\t\r\n
\r\n\r\n\r\n\r\n
\r\n

Siebel Hospitality Guide > Using the Function Space Diary > Function Space Diary Overview >

\r\n

Viewing Function and Booking Details in the Function Space Diary

\r\n
\r\n

By moving the mouse over elements in the Function Space Diary, users can view additional information in the ToolTips that appear. Icons and colored indicator bars are also used to indicate different types of bookings and so on. See Icons in the Function Space Diary for more information on the icons used in the diary.

Viewing ToolTip Information in the Function Space Diary

\r\n

Users can view a summary of function details through the ToolTip feature. The information displayed in the ToolTip depends on the option selected from the Display drop-down list. For example, if users select Function Name, the function name appears on the booking indicator bar and not in the ToolTip.

If a special event, holiday, or sell note has been activated and attached to a date in the Function Space Diary, the ToolTip for that date displays description information for the item.

To view a summary of function details

\r\n
  • Position the mouse pointer over the booking in the diary, and in the ToolTip that appears, review details such as:
    • Function name
    • Function type
    • Function status
    • Start time
    • End time
    • Setup style
    • Expected number of attendees

NOTE:  If a sales or event manager has been nominated for the function, these details can also be included in the ToolTip.

\r\n

To review setup and tear-down information

\r\n
  • Position the mouse pointer over the setup or tear-down time, and in the ToolTip that appears, review the start and end date and time periods.
    • If the Turn Times check box is selected, the time periods for each booking's setup and tear-down are displayed according to the pattern predefined by the administrator.

Active and Backup Bookings

\r\n

If certain function spaces require backup spaces, an icon is displayed next to the appropriate function space names on the y-axis. Icons are also used to indicate other booking attributes, for example, bookings that are on 24-hour hold, affiliate bookings, noisy bookings, and so on. For more information about icon usage in the diary, see Icons in the Function Space Diary.

Active and backup bookings are differentiated through the use of a colored border. For example, the administrator may define blue as the active booking color and yellow as the backup booking color. When displayed in the diary, the active booking is shown with a blue border while the backup booking is shown with a yellow border.

Temporarily Assigned Category Spaces and Specific Spaces

\r\n

Temporarily Assigned category spaces are shown against specific spaces in a color predefined by the administrator.

Dependency Bookings

\r\n

Dependency bookings are displayed in the diary to indicate bookings that impact other function spaces. For example, if a combined space (A and B and C) is booked, then individual spaces A, B, and C have dependency bars. If, however, AB is booked, then the combined space ABC has dependency bars. The color for the dependency bars is defined by the administrator. Users cannot change the dependency bookings; a ToolTip on the dependency booking shows the function causing the dependency along with the start and end time. In many cases, dependency bookings may be the result of multiple bookings. In this situation, the ToolTip information lists the function space bookings that cause the dependency booking.

Overbookings and Optioned Bookings

\r\n

When an overbooking or optioned booking occurs, the indicator bar is split vertically. Each booking can be displayed with a separate color if defined by the administrator. The predefined colors and patterns assigned to each bar represent the function's inventory status and setup and tear-down information.

Out of Order and Unavailable Function Spaces

\r\n

When a function space is set to Out of Order or Unavailable, users can view this information in the Function Space Diary. The color for an Out of Order and Unavailable function space is configured by the administrator. The label shows the booking type (Unavailable or Out of Order) with no setup and tear-down time.

Reserved Beyond Capacity Bookings

\r\n

Functions that are reserved beyond capacity are differentiated from other bookings through the use of a specific color as defined by the administrator. A function space can be reserved beyond capacity by setting an authorization level which is above the capacity level. For example, the capacity level for Salon A is 2 but the authorization level is set to 3. If the function space is reserved three times, it appears as Reserved Beyond Capacity.

Time-Shifted Bookings

\r\n

Reserved bookings that have been shifted to a different time period or date are indicated by the addition of the word Shifted to the function name on the indicator bar. The color of the indicator bar also changes to the defined color for Reserved Shifted. The original booking also still appears in the diary. If the booking cannot be reserved after shifting, it does not appear in the diary.

Function Bookings in Suites


If functions are scheduled to take place in reserved suites, both the suite booking and the function booking are displayed in line with the appropriate suite name. The indicator bar for the particular suite is halved horizontally, and an additional indicator bar appears showing the start and end times for the function in the suite. A separate ToolTip appears for the reserved suite booking and the scheduled function, providing relevant information for each.

Effective Start and End Times


If effective start and end times have been added for any function spaces or categories, these can be viewed in the diary. The ToolTip shows the effective start and end times for the particular function space. The color for an Ineffective function space is configured by the administrator.

Categories and specific spaces are typically removed from the diary view if they are not effective for any of the dates currently displayed. The exception to this is if the dates shown in the diary are within the diary's cached date range. The diary caches information about a certain range of dates beyond what is currently shown and may begin showing a category or specific space up to approximately a month before it becomes effective. Whether such cached date ranges appear depends on the Time Scale option that users are using to view the information. For an explanation of the Time Scale option, see Using the Information Display Controls in the Function Space Diary.

Siebel Hospitality Guide. Copyright © 2009, Oracle and/or its affiliates. All rights reserved. Legal Notices.
\r\n\r\n\r\n\r\n" + }, + { + "url": "https://docs.oracle.com/cd/E19044-01/sol.containers/817-1592/z.inst.ov-1/index.html", + "domain": "docs.oracle.com", + "file_source": "part-00943-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "Chapter 19 About Installing, Halting,\nCloning, and Uninstalling Non-Global Zones (Overview) (System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones)

System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones

Chapter 19 About Installing, Halting, Cloning, and Uninstalling Non-Global Zones (Overview)

This chapter discusses zone installation on your Solaris system. It also describes the two processes that manage the virtual platform and the application environment, zoneadmd and zsched. Information about halting, rebooting, cloning, and uninstalling zones is also provided.

The following topics are addressed in this chapter:

To clone a non-global zone, install and boot a non-global zone, or to halt or uninstall a non-global zone, see Chapter 20, Installing, Booting, Halting, Uninstalling, and Cloning Non-Global Zones (Tasks).

For information about lx branded zone installation, see Chapter 34, About Installing, Booting, Halting, Cloning, and Uninstalling lx Branded Zones (Overview) and Chapter 35, Installing, Booting, Halting, Uninstalling and Cloning lx Branded Zones (Tasks).

What's New in This Chapter?

Solaris 10 11/06: The ability to clone a non-global zone is now available. See Solaris 10 11/06: Cloning a Non-Global Zone on the Same System.

Solaris 10 8/07: Information on boot arguments has also been added. See Solaris 10 8/07: Zone Boot Arguments.

Solaris 10 5/09: ZFS clone has been implemented. When the source zonepath and the target zonepath both reside on ZFS and are in the same pool, the zoneadm clone command automatically uses ZFS to clone the zone. If both zonepaths are non-ZFS, or if one is ZFS and the other non-ZFS, the code will use the existing copy technique.

Zone Installation and Administration Concepts

The zoneadm command described in the zoneadm(1M) man page is the primary tool used to install and administer non-global zones. Operations using the zoneadm command must be run from the global zone. The following tasks can be performed using the zoneadm command:

  • Verify a zone
  • Install a zone
  • Boot a zone, which is similar to booting a regular Solaris system
  • Display information about a running zone
  • Halt a zone
  • Reboot a zone
  • Uninstall a zone
  • Relocate a zone from one point on a system to another point on the same system
  • Provision a new zone based on the configuration of an existing zone on the same system
  • Migrate a zone, used with the zonecfg command
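The tasks above map onto zoneadm subcommands. A minimal sketch of a typical lifecycle follows; my-zone is a placeholder zone name, and all commands are run as root in the global zone.

```shell
# All commands run in the global zone; "my-zone" is a placeholder name.
zoneadm -z my-zone verify      # check the configuration against this system
zoneadm -z my-zone install     # populate the zone's root file system
zoneadm -z my-zone boot        # boot the zone, similar to booting a Solaris system
zoneadm list -cv               # display all configured zones and their states
zoneadm -z my-zone reboot      # halt and then boot the zone again
zoneadm -z my-zone halt        # return the zone to the installed state
zoneadm -z my-zone uninstall   # remove the files under the zone's root path
```

See the zoneadm(1M) man page for the full set of subcommands and options.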

For zone installation and verification procedures, see Chapter 20, Installing, Booting, Halting, Uninstalling, and Cloning Non-Global Zones (Tasks) and the zoneadm(1M) man page. Also refer to the zoneadm(1M) man page for supported options to the zoneadm list command. For zone configuration procedures, see Chapter 18, Planning and Configuring Non-Global Zones (Tasks) and the zonecfg(1M) man page. Zone states are described in Non-Global Zone State Model.

If you plan to produce Solaris auditing records for zones, read Using Solaris Auditing in Zones before you install non-global zones.

Zone Construction

This section applies to initial zone construction, and not to the cloning of existing zones.

After you have configured a non-global zone, you should verify that the zone can be installed safely on your system's configuration. You can then install the zone. The files needed for the zone's root file system are installed by the system under the zone's root path.

A non-global zone is installed with the open networking configuration (generic_open.xml). Network configuration types are described in Chapter 19, Managing Services (Tasks), in System Administration Guide: Basic Administration. The zone administrator can switch the zone to the limited networking configuration (generic_limited_net.xml) by using the netservices command. Specific services can be enabled or disabled by using SMF commands.
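As a sketch of the switch described above, run from inside the non-global zone as the zone administrator; the service FMRIs shown are illustrative examples, assuming default Solaris 10 service names:

```shell
# Switch the zone to the limited networking configuration.
netservices limited

# Inspect SMF service states inside the zone.
svcs -a | grep network

# Enable or disable individual services with SMF commands
# (these FMRIs are examples, not prescribed by this guide).
svcadm enable  svc:/network/ssh:default
svcadm disable svc:/network/finger:default
```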

A successfully installed zone is ready for initial login and booting.

The method used to initially install packages in a Solaris installation is also the method used to populate a non-global zone.

The global zone must contain all the data necessary to populate a non-global zone. Populating a zone includes creating directories, copying files, and providing configuration information.

Only the information or data that was created in the global zone from packages is used to populate the zone from the global zone. For more information, see the pkgparam(1) and pkginfo(4) man pages.

Data from the following are not referenced or copied when a zone is installed:

  • Non-installed packages
  • Patches
  • Data on CDs and DVDs
  • Network installation images
  • Any prototype or other instance of a zone

In addition, the following types of information, if present in the global zone, are not copied into a zone that is being installed:

  • New or changed users in the /etc/passwd file
  • New or changed groups in the /etc/group file
  • Configurations for networking services such as DHCP address assignment, UUCP, or sendmail
  • Configurations for network services such as naming services
  • New or changed crontab, printer, and mail files
  • System log, message, and accounting files

If Solaris auditing is used, modifications to auditing files copied from the global zone might be required. For more information, see Using Solaris Auditing in Zones.

The following features cannot be configured in a non-global zone:

  • Solaris Live Upgrade boot environments
  • Solaris Volume Manager metadevices
  • DHCP address assignment in a shared-IP zone
  • SSL proxy server

The resources specified in the configuration file are added when the zone transitions from installed to ready. A unique zone ID is assigned by the system. File systems are mounted, network interfaces are set up, and devices are configured. Transitioning into the ready state prepares the virtual platform to begin running user processes. In the ready state, the zsched and zoneadmd processes are started to manage the virtual platform.

  • zsched, a system scheduling process similar to sched, is used to track kernel resources associated with the zone.
  • zoneadmd is the zones administration daemon.

A zone in the ready state does not have any user processes executing in it. The primary difference between a ready zone and a running zone is that at least one process is executing in a running zone. See the init(1M) man page for more information.

The zoneadmd Daemon

The zones administration daemon, zoneadmd, is the primary process for managing the zone's virtual platform. The daemon is also responsible for managing zone booting and shutting down. There is one zoneadmd process running for each active (ready, running, or shutting down) zone on the system.

The zoneadmd daemon sets up the zone as specified in the zone configuration. This process includes the following actions:

  • Allocating the zone ID and starting the zsched system process.
  • Setting zone-wide resource controls.
  • Preparing the zone's devices as specified in the zone configuration. For more information, see the devfsadmd(1M) man page.
  • Setting up virtual network interfaces.
  • Mounting loopback and conventional file systems.
  • Instantiating and initializing the zone console device.

Unless the zoneadmd daemon is already running, it is automatically started by zoneadm. Thus, if the daemon is not running for any reason, any invocation of zoneadm to administer the zone will restart zoneadmd.

The man page for the zoneadmd daemon is zoneadmd(1M).

The zsched Zone Scheduler

An active zone is a zone that is in the ready state, the running state, or the shutting down state. Every active zone has an associated kernel process, zsched. Kernel threads doing work on behalf of the zone are owned by zsched. The zsched process enables the zones subsystem to keep track of per-zone kernel threads.

Zone Application Environment

The zoneadm command is used to create the zone application environment.

Before a non-global zone is booted for the first time, the internal configuration of the zone must be created. The internal configuration specifies a naming service to use, the default locale and time zone, the zone's root password, and other aspects of the application environment. The application environment is established by responding to a series of prompts that appear on the zone console, as explained in Internal Zone Configuration. Note that the default locale and time zone for a zone can be configured independently of the global settings.

About Halting, Rebooting, and Uninstalling Zones

This section provides an overview of the procedures for halting, rebooting, and uninstalling zones. Troubleshooting tips for zones that fail to halt when requested are also provided.

Halting a Zone

The zoneadm halt command is used to remove both the application environment and the virtual platform for a zone. The zone is then brought back to the installed state. All processes are killed, devices are unconfigured, network interfaces are destroyed, file systems are unmounted, and the kernel data structures are destroyed.

The halt command does not run any shutdown scripts within the zone. To shut down a zone, see How to Use zlogin to Shut Down a Zone.
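A minimal sketch contrasting the two approaches; my-zone is a placeholder, and the shutdown flags shown assume the conventional shutdown(1M) options (-y for no confirmation, -g0 for no grace period, -i0 for init state 0):

```shell
# From the global zone: abrupt removal of the zone's application
# environment and virtual platform (no shutdown scripts are run).
zoneadm -z my-zone halt

# Orderly alternative: run the zone's own shutdown sequence via zlogin.
zlogin my-zone shutdown -y -g0 -i0
```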

If the halt operation fails, see Zone Does not Halt.

Rebooting a Zone

The zoneadm reboot command is used to reboot a zone. The zone is halted and then booted again. The zone ID will change when the zone is rebooted.

Solaris 10 8/07: Zone Boot Arguments

Zones support the following boot arguments used with the zoneadm boot and reboot commands:

  • -i altinit
  • -m smf_options
  • -s

The following definitions apply:

-i altinit

    Selects an alternative executable to be the first process. altinit must be a valid path to an executable. The default first process is described in init(1M).

-m smf_options

    Controls the boot behavior of SMF. There are two categories of options: recovery options and message options. Message options determine the type and number of messages that display during boot. Recovery options determine the services that are used to boot the system.

    Recovery options include the following:

    debug
        Prints standard per-service output and all svc.startd messages to log.

    milestone=milestone
        Boot to the subgraph defined by the given milestone. Legitimate milestones are none, single-user, multi-user, multi-user-server, and all.

    Message options include the following:

    quiet
        Prints standard per-service output and error messages requiring administrative intervention.

    verbose
        Prints standard per-service output and messages providing more information.

-s

    Boots only to milestone svc:/milestone/single-user:default. This milestone is equivalent to init level s.
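For illustration, the boot arguments above are passed to zoneadm after a -- separator; a sketch with my-zone as a placeholder zone name:

```shell
# Boot the zone to the single-user milestone (equivalent to init level s).
zoneadm -z my-zone boot -- -s

# Reboot the zone with verbose SMF messages.
zoneadm -z my-zone reboot -- -m verbose

# Boot only to a specific SMF milestone.
zoneadm -z my-zone boot -- -m milestone=multi-user
```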

For usage examples, see How to Boot a Zone and How to Boot a Zone in Single-User Mode.

For information on the Solaris service management facility (SMF) and init, see Chapter 18, Managing Services (Overview), in System Administration Guide: Basic Administration, svc.startd(1M), and init(1M).

Zone autoboot

If you set the autoboot resource property in a zone's configuration to true, that zone is automatically booted when the global zone is booted. The default setting is false.

Note that for the zones to autoboot, the zones service svc:/system/zones:default must also be enabled.
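A short sketch of enabling autoboot from the global zone, assuming zonecfg's command-line mode; my-zone is a placeholder zone name:

```shell
# Set the autoboot property so my-zone boots with the global zone.
zonecfg -z my-zone set autoboot=true

# The zones service must also be enabled for autoboot to take effect.
svcadm enable svc:/system/zones:default
svcs svc:/system/zones:default   # verify that the service is online
```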

Uninstalling a Zone

The zoneadm uninstall command is used to uninstall all of the files under the zone's root file system. Before proceeding, the command prompts you to confirm the action, unless the -F (force) option is also used. Use the uninstall command with caution, because the action is irreversible.

Solaris 10 11/06 and Later: About Cloning Non-Global Zones

Cloning allows you to copy an existing configured and installed zone on your system to rapidly provision a new zone on the same system. Note that at a minimum, you must reset properties and resources for the components that cannot be identical for different zones. Thus, the zonepath must always be changed. In addition, for a shared-IP zone, the IP addresses in any net resources must be different. For an exclusive-IP zone, the physical property of any net resources must be different.
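The constraints above suggest the following outline of a clone operation; zone names, the zonepath, and the exported-file location are placeholders, not taken from this guide:

```shell
# From the global zone. "zone1" is the source, "zone2" the new zone.
# 1. Export the source configuration and edit the properties that must
#    differ: zonepath always; net addresses for a shared-IP zone;
#    the physical NIC property for an exclusive-IP zone.
zonecfg -z zone1 export -f /export/zone1.cfg
# ...edit /export/zone1.cfg (zonepath, net resources)...
zonecfg -z zone2 -f /export/zone1.cfg

# 2. The source zone must be halted before it can be cloned.
zoneadm -z zone1 halt

# 3. Clone the source zone into the new configuration.
zoneadm -z zone2 clone zone1
```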

  • Cloning a zone is a faster way to install a zone.
  • The new zone will include any changes that have been made to customize the source zone, such as added packages or file modifications.

Solaris 10 5/09: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool, the zoneadm clone command automatically uses ZFS to clone the zone. When using ZFS clone, the data is not actually copied until it is modified. Thus, the initial clone takes very little time. The zoneadm command takes a ZFS snapshot of the source zonepath, and sets up the target zonepath. The system names the snapshot SUNWzoneX, where X is a unique ID used to distinguish between multiple snapshots. The zonepath of the destination zone is used to name the ZFS clone. A software inventory is performed so that a snapshot used at a future time can be validated by the system. To clone a source zone multiple times, the zoneadm command allows you to specify that an existing snapshot should be used. The system validates that the existing snapshot is usable on the target.

You cannot use manual snapshots, such as the type described in Creating and Destroying ZFS Snapshots in Oracle Solaris ZFS Administration Guide. This type of snapshot lacks the data to perform a validation.

You might want to clone a source zone many times but not want to have a new snapshot for each clone. The -s parameter to the clone subcommand allows you to specify that an existing snapshot taken from a previous clone should be used. See Solaris 10 5/09: How to Clone a Zone from an Existing Snapshot.

Because the contents of a snapshot represent a zone from a point in the past, it is possible that the system has been updated in some way, such as by patching or upgrading, since the snapshot was taken. The fact that the zone was upgraded could render the snapshot invalid for use as a zone on the present-day system.

Note – You can specify that a ZFS zonepath be copied instead of ZFS cloned, even though the source could be cloned in this way.

See Solaris 10 11/06: Cloning a Non-Global Zone on the Same System for more information.

" + } +] \ No newline at end of file