- front-door-tls
- full-stack-tls
Note: For information about the required additions to the base
kustomization.yaml for using cert-manager as the certificate generator, see the
README file at $deploy/sas-bases/examples/security/README.md
(for
Markdown format) or at $deploy/sas-bases/docs/configure_network_security_and_encryption_using_sas_security_certificate_framework.htm
(for HTML format).
SAS strongly recommends that you safeguard your passwords and data by securing
network communication. You can choose not to use TLS, but communication
between the pods in your deployment will be unsecured. If you accept that risk or
want to conduct experiments using fake data and credentials, you can eliminate
network security by deleting the following lines from the example
kustomization.yaml (a sketch of where these entries sit in the file follows the list):
- From the resources block:
  - site-config/security/openssl-generated-ingress-certificate.yaml
- From the components block:
  - sas-bases/components/security/core/base/full-stack-tls
  - sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls
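For reference, here is a minimal sketch of how those entries typically appear in the
example kustomization.yaml before you remove them; the surrounding entries are
placeholders, and your file will contain additional entries.

   resources:
     # ...other resources...
     - site-config/security/openssl-generated-ingress-certificate.yaml

   components:
     # ...other components...
     - sas-bases/components/security/core/base/full-stack-tls
     - sas-bases/components/security/network/networking.k8s.io/ingress/nginx.ingress.kubernetes.io/full-stack-tls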
For more information about TLS and your SAS Viya platform deployment, see the
README file at $deploy/sas-bases/examples/security/README.md
(for
Markdown format) or at $deploy/sas-bases/docs/configure_network_security_and_encryption_using_sas_security_certificate_framework.htm
(for HTML format).
Add Forward Proxy Settings
Note: If you are not using a forward proxy while deploying your software, skip this
section.
SAS Viya Platform Deployment Operator
When you use the SAS Viya Platform Deployment Operator, proxy settings can be
set for all SAS Viya platform deployments in a cluster or for individual SAS Viya
platform deployments within the cluster.
To configure proxy settings for all operator-based Viya deployments in the cluster,
add environment variables to the deployment operator manifest file. For details, see
"Configure Proxy Information" on page 17.
To configure proxy settings for a deployment, add proxy information to the
SASDeployment Custom Resource. For more information, see “Revise the Custom
Resource for Proxy Information” on page 84. If the settings for the cluster-wide
proxy are different than the settings for the deployment proxy, the values for the
deployment proxy are used.
sas-orchestration Command and
Kubernetes Commands
If you are using a proxy and deploying with the sas-orchestration tool or with
Kubernetes commands, ensure that the appropriate environment variables are set
for the Update Checker. For details, see "(Optional) Define Proxy Environment
Variables" on page 41.
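As a minimal sketch, the conventional proxy variables can be exported in the shell
that runs sas-orchestration or kubectl; the host names and port here are
placeholders, and the authoritative list of variables read by the Update Checker is in
the referenced section.

   export HTTP_PROXY=http://proxy.example.com:3128
   export HTTPS_PROXY=http://proxy.example.com:3128
   export NO_PROXY=localhost,127.0.0.1,.example.com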
Configure Container Security Settings
Note: If you are deploying on Red Hat OpenShift and have completed the steps in
“Additional Security” on page 27, you have performed the necessary steps and
should skip this section.
You can change the default container security settings (such as removing, adding,
or updating settings in the podSpecs) in a SAS Viya platform deployment. SAS has
provided example and overlay files to manage the fsGroup field and the secure
computing mode (seccomp) profile for your deployment.
- The fsGroup field defines a special supplemental group that assigns a group ID
  (GID) for all containers in the pod.
- Seccomp is a Linux kernel feature used to restrict actions available within a
  container.
There are many reasons why an administrator might want to modify these settings.
However, if you are deploying on Red Hat OpenShift, they must be modified in order
to take advantage of OpenShift's built-in security context constraints. For more
information about these settings, see the README file at $deploy/sas-bases/examples/security/container-security/README.md
(for Markdown format) or $deploy/sas-bases/docs/modify_container_security_settings.htm
(for HTML
format).
Enable Multi-tenancy
By default, your SAS Viya platform deployment is not multi-tenant. To make your
SAS Viya platform deployment multi-tenant, follow the instructions in the README
file at $deploy/sas-bases/examples/multi-tenant/README.md
(for Markdown
format) or at $deploy/sas-bases/docs/multi-tenant_deployment.htm
(for HTML
format).
Note: The decision to enable multi-tenancy must be made before deployment. You
cannot change the multi-tenancy status of your deployment after the software has
been deployed. The only way to change the status of multi-tenancy in a deployment
is to re-deploy the software.
Configure SAS Image Staging
By default, SAS Image Staging starts pods on nodes via a daemonset at
approximately two-minute intervals to ensure that relevant images have been pulled
to hosts. While this behavior accomplishes the goal of pulling images to nodes and
decreasing start-up times, some users may want more intelligent and specific
control with less churn in Kubernetes. To accomplish these goals, configure SAS
Image Staging to take advantage of a node list to further decrease start-up times
and target specific nodes for pulling.
For information about both methods of using SAS Image Staging, including a
comparison of their relative advantages and disadvantages, see the README file at $deploy/sas-bases/examples/sas-prepull/README.md
(for Markdown format) or
at $deploy/sas-bases/docs/sas_image_staging_configuration_option.htm
(for
HTML format).
Deploy SAS Startup Sequencer
Although the SAS Viya platform comprises components that are designed to start in
any order, in some scenarios it is more efficient for the components to start in an
ordered sequence. SAS Startup Sequencer inserts an Init Container into the pods
within the Deployments and StatefulSets within the SAS Viya platform. The Init
Container ensures that a predetermined, ordered start-up sequence is respected by
forcing a pod's start-up to wait until that particular pod can be efficiently started
relative to the other pods. The Init Container gracefully exits when it detects the
appropriate time to start its accompanying pod, allowing the pod to start. This design
ensures that certain components start before others and allows Kubernetes to pull
container images in a priority-based sequence. This design also provides a degree
of resource optimization, in that resources are more efficiently spent during the SAS
Viya platform start-up with a priority given to starting essential components first.
If you prefer to not use SAS Startup Sequencer, see the README file at $deploy/sas-bases/overlays/startup/README.md
(for Markdown format) or $deploy/sas-bases/docs/disabling_the_sas_viya_start-up_sequencer.htm
(for HTML
format).
Configure High Availability
The SAS Viya platform can be deployed as a High Availability (HA) system. In this
mode, the SAS Viya platform has redundant stateless and stateful services to
handle service outages, such as an errant Kubernetes node. A Kustomize
transformer enables HA in the SAS Viya platform among the stateless
microservices. Stateful services, with the exception of SMP and OpenSearch, are
enabled as HA at initial deployment.
To enable HA, add a reference to the enable-ha-transformer.yaml file to the base
kustomization.yaml file:

   ...
   transformers:
   ...
   - sas-bases/overlays/scaling/ha/enable-ha-transformer.yaml
   ...
Note: To enable HA for OpenSearch, see the README file located at $deploy/sas-bases/examples/configure-elasticsearch/internal/topology/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_a_default_topology_for_opensearch.htm
(for HTML format).
For more information about this transformer file and disabling HA, see the README
file at $deploy/sas-bases/examples/scaling/ha/README.md
(for Markdown) or at $deploy/sas-bases/examples/docs/high_availability_ha_in_the_sas_viya_platform.htm
(for HTML).
Note: The instructions in this section increase the number of replicas of your SAS
Viya platform deployment, making it more resilient to pod or node failure and
increasing its availability. However, your SAS Viya platform environment probably
has dependencies on software that is running in other namespaces in the same
cluster. For example, software like ingress-nginx and the SAS Viya Platform
Monitoring for Kubernetes solution might be critical to the availability of the SAS
Viya platform and may have been deployed by default with unique replicas, making
them less highly available than the SAS Viya platform itself. If you want to increase
the availability of other software, consult the documentation for that software for
more information.
Furthermore, an over-tainting of the nodes for the SAS Viya platform can result in
third-party software being locked out of the cluster in spite of available spare
capacity. In order to achieve maximum overall availability, either dedicate some
nodes to this software or add tolerations to it so it can more easily run on the same
nodes as the SAS Viya platform.
Configure PostgreSQL
Based on your decision about your PostgreSQL instance (see “Internal versus
External PostgreSQL Instances” in System Requirements for the SAS Viya
Platform ), you must perform steps to deploy PostgreSQL (internal) or connect to an
existing PostgreSQL instance (external).
Internal Instance of PostgreSQL
If you are using an internal instance of PostgreSQL:
1. Go to the base kustomization.yaml file. In the resources block of that file, add the
   following lines:

   resources:
   - sas-bases/overlays/crunchydata/postgres-operator
   - sas-bases/overlays/postgres/platform-postgres

2. In the components block of the base kustomization.yaml file, add the following
   content. The new line should be listed before any entries that do not relate to
   Crunchy Data, such as TLS.

   components:
   - sas-bases/components/crunchydata/internal-platform-postgres
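Taken together, and with placeholder comments standing in for the rest of your file,
the relevant portions of the base kustomization.yaml look roughly like this:

   resources:
     # ...other resources...
     - sas-bases/overlays/crunchydata/postgres-operator
     - sas-bases/overlays/postgres/platform-postgres

   components:
     # the Crunchy Data component is listed before non-Crunchy entries such as TLS
     - sas-bases/components/crunchydata/internal-platform-postgres
     # ...other components...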
The kustomization.yaml file from “Initial kustomization.yaml File” on page 32
includes these references. For additional information, see the README file located
at $deploy/sas-bases/examples/postgres/README.md
(for Markdown format) or $deploy/sas-bases/docs/configure_postgresql.htm
(for HTML format).
External Instance of PostgreSQL
For the steps to include an external instance of PostgreSQL in your
deployment, see the README file located at
$deploy/sas-bases/examples/postgres/README.md (for Markdown format) or
$deploy/sas-bases/docs/configure_postgresql.htm
(for HTML format).
Configure CDS PostgreSQL
Several offerings on the SAS Viya platform include a second instance of
PostgreSQL referred to as CDS PostgreSQL. The CDS PostgreSQL instance is
used because the character of the data used by those offerings is hierarchically
different than the data generally stored in the platform PostgreSQL database. The
separation into two different databases allows them to be tuned individually, in turn
enhancing the performance of both.
The list of software offerings that include CDS PostgreSQL is located at “SAS
Common Data Store Requirements” in System Requirements for the SAS Viya
Platform . If your software order includes at least one of these offerings, CDS
PostgreSQL must be configured as well. To configure an instance of CDS
PostgreSQL, see the README file located at
$deploy/sas-bases/examples/postgres/README.md (for Markdown format) or
$deploy/sas-bases/docs/configure_postgresql.htm
(for HTML format).
Specify SMP or MPP CAS
Your deployment of SAS Cloud Analytic Services (CAS) consists of either a single
node (SMP) or a set of nodes that include one controller, optionally one backup
controller, and multiple workers (MPP). (Although a one-worker MPP configuration is
supported, it is not an efficient allocation of resources.) The base
kustomization.yaml file from "Initial kustomization.yaml File" on page 32 includes the
reference that is required for deploying SMP or MPP CAS. If you do not make any
changes to the files in $deploy/sas-bases/overlays/cas-server, CAS is deployed
as SMP.
To deploy MPP CAS, follow the instructions in the README file at $deploy/sas-bases/overlays/cas-server/README.md
(for Markdown) or $deploy/sas-bases/docs/cas_server_for_the_sas_viya_platform.htm
(for HTML) to modify the
appropriate files.
Note: Deployments that enable multi-tenancy should use SMP CAS. When
additional tenants are onboarded, the decision whether to use SMP or MPP CAS
should be made for each tenant.
Configure CAS Settings
Mount persistentVolumeClaims and Data
Connectors for the CAS Server
Note: This section describes using hostPath mounts as an option for this task.
Using a hostPath volume for storage is supported, but it potentially entails multiple
security risks. SAS recommends that you use hostPath volumes only if you are
knowledgeable of those risks. For more information, see the "hostPath" section of
the Kubernetes Storage volume documentation .
Data storage in containers is ephemeral. When a container is deleted, the stored
data is lost. For durable storage, data should be maintained in persistent volumes
outside the Kubernetes cluster. Data remains intact, regardless of whether the
containers that the storage is connected to are terminated.
To connect data storage outside the cluster to your SAS Viya deployment:
1. Decide if you are going to mount NFS or non-NFS persistentVolumeClaims.

2. Copy the appropriate transformer file to your site-config directory. If you are
   mounting an NFS volume, copy
   $deploy/sas-bases/examples/cas/configure/cas-add-nfs-mount.yaml to your
   /site-config directory. If you are mounting a non-NFS volume, copy
   $deploy/sas-bases/examples/cas/configure/cas-add-host-mount.yaml to your
   /site-config directory.

   Note: For more information about the /site-config directory and its structure, see
   "$deploy/site-config Directory" on page 28.

3. In the new transformer file, replace the variables with actual values. Variables
   are enclosed in braces ({ }) and spaces. When replacing a variable with an
   actual value, ensure that the braces, spaces, and the hyphenated variable name
   are removed. (A brief illustration of this replacement appears after these steps.)
4. Save and close the new transformer file.

5. In the base kustomization.yaml file, add the path to your new transformer file to
   the transformers block. Here is an example using an NFS volume:

   ...
   transformers:
   - site-config/{{ DIRECTORY-PATH }}/cas-add-nfs-mount.yaml
   ...

6. Save and close the kustomization.yaml file.
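As a brief, hypothetical illustration of the variable-replacement convention (the
variable names and values below are placeholders, not the contents of the shipped
example files), a fragment such as

   server: {{ NFS-SERVER-ADDRESS }}
   path: {{ NFS-EXPORTED-DIRECTORY }}

would be edited to read

   server: nfs.example.com
   path: /exports/casdata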
Note: You need a version of one of the two transformer files described in this topic
for each persistent volume that you want to use. Repeat the steps in this section for
each persistent volume. Use a different name for the .yaml file each time. Try to use
a name that indicates the purpose of the file.
Change accessMode
The default accessMode for the cas-default-data (referred to as “CASDATADIR” in
SAS Viya 3.X) and cas-default-permstore persistentVolumeClaims is
ReadWriteMany, because it is required for any backup controllers for CAS. It is not
required for deployments with SMP CAS, but changing the access mode
complicates a possible transition from SMP to MPP in the future.
To change the access mode for either cas-default-data or cas-default-permstore:
1. Copy $deploy/sas-bases/examples/cas/configure/cas-storage-access-modes.yaml
   to your /site-config directory.

   Note: For more information about the /site-config directory and its structure, see
   "$deploy/site-config Directory" on page 28.

2. In the new cas-storage-access-modes.yaml file, replace the variables with actual
   values. Variables are enclosed in braces ({ }) and spaces. To replace a variable
   with an actual value, ensure that the braces, spaces, and hyphenated variable
   name are removed.

3. Save and close the new cas-storage-access-modes.yaml file.

4. In the base kustomization.yaml file, add the path to your new
   cas-storage-access-modes.yaml file to the transformers block:

   ...
   transformers:
   - site-config/{{ DIRECTORY-PATH }}/cas-storage-access-modes.yaml
   ...

5. If you are using the initial kustomization.yaml file, go to the patches block.
   Remove sas-cas-operator from the parenthetical list in the annotationSelector
   value.

   Note: For more information about the initial kustomization.yaml file, see "Initial
   kustomization.yaml File" on page 32.

6. Save and close the kustomization.yaml file.
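After the deployment is applied, you can confirm the resulting access modes with a
generic kubectl query; the namespace name below is a placeholder.

   kubectl -n name-of-namespace get pvc cas-default-data cas-default-permstore \
     -o custom-columns=NAME:.metadata.name,ACCESSMODES:.spec.accessModes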
Adjust RAM and CPU Resources for CAS
Servers
If you use the initial kustomization.yaml file, the CAS operator applies auto-
resourcing by default in order to manage the RAM and CPU resources of the nodes
where CAS is running. If you instead want to allocate node resources manually, you
can disable auto-resourcing and modify the resource requests yourself. For
example, you might want to configure guaranteed QoS for CAS server pods (a
conceptual sketch appears at the end of this section). If you allocate resources
manually, you must set both the RAM and CPU resources.
Note: For auto-resourcing to work appropriately, you must have set labels on your
node. See “Plan the Workload Placement” on page 4 for more information.
If you prefer to set your own RAM and CPU resources, perform the following steps.
1. Copy $deploy/sas-bases/examples/cas/configure/cas-manage-cpu-and-memory.yaml
   to your /site-config directory.

   Note: For more information about the /site-config directory and its structure, see
   "$deploy/site-config Directory" on page 28.

2. In the new cas-manage-cpu-and-memory.yaml file, replace the variables with
   actual values. Variables are enclosed in braces ({ }) and spaces. To replace a
   variable, ensure that the braces, spaces, and hyphenated variable name are
   removed.

3. Save and close the new cas-manage-cpu-and-memory.yaml file.

4. In the base kustomization.yaml file, remove
   - sas-bases/overlays/cas-server/auto-resources
   from the resources block. Also remove
   - sas-bases/overlays/cas-server/auto-resources/remove-resources.yaml
   from the transformers block.

5. In the base kustomization.yaml file, add the path to your new
   cas-manage-cpu-and-memory.yaml file to the transformers block:

   ...
   transformers:
   - site-config/{{ DIRECTORY-PATH }}/cas-manage-cpu-and-memory.yaml
   ...

6. Save and close the kustomization.yaml file.
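For context, guaranteed QoS in Kubernetes means that each container's memory
and CPU requests equal its limits. The fragment below is a generic illustration of
that shape only; the actual fields to edit for CAS are defined in
cas-manage-cpu-and-memory.yaml, and the sizes shown are placeholders.

   resources:
     requests:
       memory: 96Gi
       cpu: "14"
     limits:
       memory: 96Gi
       cpu: "14"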
Change the Number of Workers for MPP
CAS
Note: This customization can be performed only for deployments enabling MPP
CAS.
By default, MPP CAS has two workers. Perform the following steps to change the
number of workers before or after the initial deployment of your SAS Viya platform.
Note: If you want to change the number of workers after the initial deployment of
the SAS Viya platform, adding workers and having them join the grid does not
require a restart. However, existing SAS sessions will not reallocate or load balance
to use the new workers. New sessions should take advantage of the new workers.
Removing workers after the initial deployment requires deleting the CAS
deployment, modifying the YAML file, restarting the CAS server, reloading your data,
and starting new SAS sessions.
1. Copy $deploy/sas-bases/examples/cas/configure/cas-manage-workers.yaml to
   your /site-config directory.

   Note: For more information about the /site-config directory and its structure, see
   "$deploy/site-config Directory" on page 28.

2. In the new cas-manage-workers.yaml file, replace the variables with actual
   values. Variables are enclosed in braces ({ }) and spaces. To replace a variable
   with a value, ensure that the braces, spaces, and hyphenated variable name are
   removed.

3. Save and close the new cas-manage-workers.yaml file.

4. In the base kustomization.yaml file, add the path to your new
   cas-manage-workers.yaml file to the transformers block:

   ...
   transformers:
   - site-config/{{ DIRECTORY-PATH }}/cas-manage-workers.yaml
   ...

5. Save and close the kustomization.yaml file.
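After the change is applied, one quick way to confirm the worker count is to list the
CAS pods; the command is generic kubectl, and the exact pod names depend on
your server instance name (for the default instance, pods are typically named
sas-cas-server-default-*).

   kubectl -n name-of-namespace get pods | grep sas-cas-server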
Add a Backup Controller for MPP CAS
Note: This customization can be performed only for deployments with CAS in MPP
mode.
1. Copy $deploy/sas-bases/examples/cas/configure/cas-manage-backup.yaml to
   your /site-config directory.

   Note: For more information about the /site-config directory and its structure, see
   "$deploy/site-config Directory" on page 28.

2. In the new cas-manage-backup.yaml file, replace the variable with the value 0 or
   1. The value 0 indicates that you do not want a backup controller, and the value
   1 indicates that you want a backup controller.

3. Save and close the new cas-manage-backup.yaml file.

4. In the base kustomization.yaml file, add the path to your new
   cas-manage-backup.yaml file to the transformers block:

   ...
   transformers:
   - site-config/{{ DIRECTORY-PATH }}/cas-manage-backup.yaml
   ...

5. Save and close the kustomization.yaml file.
Tune CAS_DISK_CACHE
About CAS_DISK_CACHE
The CAS server uses the directory or directories referred to as the CAS Disk Cache
as a scratch area. It is associated with the environment variable
CASENV_CAS_DISK_CACHE and has two primary purposes:
1. As data is loaded into memory, it is organized in blocks. Each time a block
   reaches the default block size of 16 MB, the block is copied to the CAS Disk
   Cache. The copied block can be re-read back into memory quickly if memory
   use becomes high and the original data must be freed from memory.

2. For a distributed CAS server (MPP), copies of the blocks are transferred to
   another CAS worker pod for fault tolerance. Those copies are also stored in the
   CAS Disk Cache of the receiving CAS worker.
A secondary use of the cache is for files that are uploaded to the server. By default,
a copy of the file is temporarily stored on the CAS controller in its CAS Disk Cache.
To specify a different location, see “Storage Location for Uploaded Files” on page
57.
About the Default Configuration
By default, the server is configured to use a directory that is named /cas/cache
on
each controller and worker pod. This directory is provisioned as a Kubernetes
emptyDir and uses disk space from the root volume of the Kubernetes node.
The default configuration is acceptable for testing and evaluation, but not for
production workloads. If disk space in the root volume of the node becomes low,
then Kubernetes begins evicting pods. The pod is unlikely to be rescheduled.
When the server stores a block in the cache, the server uses a technique
that involves opening a file, deleting the file, and then holding the handle to the
deleted file. The negative consequence to this technique is that Kubernetes cannot
monitor the disk use in the cache.
Choose the Best Storage
The server uses memory mapped I/O for the blocks in the cache. The best
performance is provided by using disks that are local to the node for each controller
and worker pod. If possible, use disks that provide high data transfer rates such as
NVMe or SSD.
If you follow the best practices for workload placement, then no other pods are
scheduled on a node that is used by CAS. Even if the root volume is sufficiently
large, it is likely that the performance yielded by the root volume will be lower than
that of an ephemeral drive, assuming one is available to the node.
A better strategy is to use a disk that is attached to the node. If the server fills the
disk with blocks, the server logs an error rather than Kubernetes evicting the pod.
An end user receives the following message when the server runs out of disk space
used for the cache on any node:

   Cloud Analytic Services failed writing to system disk space. Please contact your
   administrator.
Note: The disk that is used does not need to persist beyond the duration of the
pod and does not need to be backed up. Ephemeral storage is ideal.
Use a hostPath for CAS Disk Cache
Most cloud providers offer virtual machines that include a temporary disk for
ephemeral storage. Typically, the disk is available at /dev/sdb1
or a similarly named
device. Some cloud providers automatically mount the device on the /mnt
directory
for the VM.
In order to leverage those alternate disks, you can use a Kubernetes hostPath
instead of an emptyDir. The SAS Viya platform deployment requires that those
temporary disks are already mounted and available on the CAS nodes and that the
path is identical on all nodes.
Single Disk for CAS Disk Cache
1. In your $deploy/site-config/ directory, create a file named
   cas_disk_cache-config.yaml.

2. Use the following content in the cas_disk_cache-config.yaml file. Replace the
   variables in the brackets, and the brackets themselves, with values that match
   your environment.

   # this defines the volume and volumeMount for the CAS DISK CACHE location
   ---
   apiVersion: builtin
   kind: PatchTransformer
   metadata:
     name: cas-cache-hostpath
   patch: |-
     - op: add
       path: /spec/controllerTemplate/spec/volumes/-
       value:
         name: cas-cache-nvme0
         hostPath:
           # hostPath is the path on the host, outside the pod
           path: {{/mnt-nvme0}}
     - op: add
       path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
       value:
         name: cas-cache-nvme0
         # mountPath is the path inside the pod that CAS will reference
         mountPath: /cas/cache-nvme0
     - op: add
       path: /spec/controllerTemplate/spec/containers/0/env/-
       value:
         name: CASENV_CAS_DISK_CACHE
         # This has to match the value that is inside the pod
         value: "/cas/cache-nvme0"
   target:
     version: v1alpha1
     group: viya.sas.com
     kind: CASDeployment
     # Target filtering: choose/uncomment one of these options:
     # To target only the default CAS server (cas-shared-default):
     labelSelector: "sas.com/cas-server-default"
     # To target only a single CAS server (e.g. MyCAS) other than default:
     # name: {{MyCAS}}
     # To target all CAS servers:
     # name: .*
3. In the base kustomization.yaml file, add the path to your new
   cas_disk_cache-config.yaml file to the transformers block:

   ...
   transformers:
   ...
   - site-config/cas_disk_cache-config.yaml
   ...
Microsoft Azure and other cloud providers offer VMs with NVMe storage. Make sure
the volume is formatted with an xfs or ext4 file system and is mounted by the VM.
Multiple Disks for CAS Disk Cache
If you use nodes with more than one high-performance disk, you can use more than
one disk for the CAS Disk Cache. The server uses a round-robin algorithm for
storing blocks on multiple disks.
1. In your $deploy/site-config/ directory, create a file named
   cas_disk_cache-config.yaml.

2. Use the following content in the cas_disk_cache-config.yaml file. Replace the
   variables in the brackets, and the brackets themselves, with values that match
   your environment.

   # this defines the volumes and volumeMounts for the CAS DISK CACHE locations
   ---
   apiVersion: builtin
   kind: PatchTransformer
   metadata:
     name: cas-cache-hostpath
   patch: |-
     - op: add
       path: /spec/controllerTemplate/spec/volumes/-
       value:
         name: cas-cache-nvme0
         hostPath:
           # hostPath is the path on the host, outside the pod
           path: {{/mnt-nvme0}}
     - op: add
       path: /spec/controllerTemplate/spec/volumes/-
       value:
         name: cas-cache-nvme1
         hostPath:
           # hostPath is the path on the host, outside the pod
           path: {{/mnt-nvme1}}
     - op: add
       path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
       value:
         name: cas-cache-nvme0
         # mountPath is the path inside the pod that CAS will reference
         mountPath: /cas/cache-nvme0
     - op: add
       path: /spec/controllerTemplate/spec/containers/0/volumeMounts/-
       value:
         name: cas-cache-nvme1
         # mountPath is the path inside the pod that CAS will reference
         mountPath: /cas/cache-nvme1
     - op: add
       path: /spec/controllerTemplate/spec/containers/0/env/-
       value:
         name: CASENV_CAS_DISK_CACHE
         # This has to match the values that are inside the pod
         value: "/cas/cache-nvme0:/cas/cache-nvme1"
   target:
     version: v1alpha1
     group: viya.sas.com
     kind: CASDeployment
     # Target filtering: choose/uncomment one of these options:
     # To target only the default CAS server (cas-shared-default):
     labelSelector: "sas.com/cas-server-default"
     # To target only a single CAS server (e.g. MyCAS) other than default:
     # name: {{MyCAS}}
     # To target all CAS servers:
     # name: .*
3. In the base kustomization.yaml file, add the path to your new
   cas_disk_cache-config.yaml file to the transformers block:

   ...
   transformers:
   ...
   - site-config/cas_disk_cache-config.yaml
   ...
The preceding sample suggests that two NVMe disks are mounted on the node at
/mnt-nvme0 and /mnt-nvme1. Steps to perform that action are not shown in this
documentation.
Configure Block Size
By default, the server uses a 16 MB block size. If the site accesses very large tables
exclusively, you can configure a larger block size to reduce the chance of running
out of file handles. Set the CASCFG_MAXTABLEMEM environment variable to the
preferred value by adding the following block of code to the end of the patch block
of your cas_disk_cache-config.yaml file:

   - op: add
     path: /spec/controllerTemplate/spec/containers/0/env/-
     value:
       name: CASCFG_MAXTABLEMEM
       value: {{ BLOCKSIZE }}
The value for {{ BLOCKSIZE }}
should be a numerical value followed by units
(K=kilobytes, M=megabytes, or G=gigabytes). The default is 16M.
If a variety of table sizes is used, then individual users can set the MAXTABLEMEM
session option on a case-by-case basis.
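For example, to set a 64 MB block size (the value is illustrative only; choose a size
appropriate for your tables):

   - op: add
     path: /spec/controllerTemplate/spec/containers/0/env/-
     value:
       name: CASCFG_MAXTABLEMEM
       value: "64M"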
Storage Location for Uploaded Files
An upload is a data transfer of an entire file to the server, such as a SAS data set in
SAS7BDAT format or a CSV file. The client, such as SAS, Python, or a web
browser, performs no processing on the file. The server performs any processing
that is needed, such as parsing records from a CSV file. To specify a different
location, add the following block of code:

   - op: add
     path: /spec/controllerTemplate/spec/containers/0/env/-
     value:
       name: CASENV_CAS_CONTROLLER_TEMP
       value: {{ MOUNT-PATH-TO-VOLUME }}
Ensure that the path you use for {{ MOUNT-PATH-TO-VOLUME }} is enclosed by
double quotation marks, such as "/cas/cache-nvme0".
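A filled-in version of the patch would therefore look like the following; the mount
path shown is the example value from above.

   - op: add
     path: /spec/controllerTemplate/spec/containers/0/env/-
     value:
       name: CASENV_CAS_CONTROLLER_TEMP
       value: "/cas/cache-nvme0"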
Configure External Access to CAS
Overview of CAS Connectivity
By default, a single CAS server is configured during the deployment process and is
accessible to SAS services and web applications that are deployed in the
Kubernetes cluster. For example, SAS Visual Analytics, SAS Studio, and other SAS
software can work with CAS and do not require any additional configuration.
In addition, an HTTP Ingress is enabled that provides access to CAS from outside
the cluster to clients that use REST. This Ingress can be used with clients such as
Python SWAT.
The Ingress Controller that is configured for your cluster enables connectivity to
CAS at an HTTP path like https://www.example.com/cas-shared-default-http/ .
Note: Use the path as shown for clients such as Python SWAT. For curl, use a path
such as /cas-shared-default-http/cas. This document shows the path that is
appropriate for Python SWAT.
Note: The default instance of the CAS server is referenced in this example and the
rest of this topic. If you add more than one server, then the Ingress or Service name
uses the server instance name instead of the word “default”.
Optional Connectivity
There are two uses of CAS that require additional configuration:
- Connections from SAS 9.4, SAS Viya 3.5, or other binary clients. If you want
  to connect to CAS from SAS Viya 3.5, SAS 9.4, or use a binary connection with
  open programming clients such as Python, R, and Java, you can enable a binary
  connection.

- Connections to CAS from SAS Data Connectors. For information about
  enabling connectivity for SAS/ACCESS and data connectors, see the README
  file $deploy/sas-bases/examples/data-access/README.md (for Markdown) or
  $deploy/sas-bases/docs/configuring_sasaccess_and_data_connectors_for_sas_viya_4.htm
  (for HTML).
About Binary Connectivity
Most clients can use a binary connection to the CAS server. Typically, performance
is better than HTTP because the data stream is more compact than REST.
If you want to connect from SAS Viya 3.5 or SAS 9.4, then you must enable binary
communication. You can use the node port or load balancer as described here or
you can configure a custom Ingress to proxy TCP port 5570. Configuring a custom
Ingress is not described in this documentation.
Optional Binary and HTTP Services
You can enable two services that provide external access to CAS for programmers.
One service provides binary communication and the other service provides HTTP
communication for REST. The HTTP service is an alternative to using the HTTP
Ingress that is enabled by default.
The binary communication provides better performance and can be used by SAS
Viya 3.5 or SAS 9.4. Open source clients such as Python SWAT require C language
libraries to use the binary connection. Refer to the documentation for the open
source client for information about the libraries.
If you enable either of these services, they are enabled as NodePorts by default. To
use the services as LoadBalancers, you must specify LoadBalancer as the type.
You can also restrict traffic by setting ranges of IP addresses for the load balancers
to accept traffic on.
Note: The CAS operator supports setting the binary and HTTP services to either
NodePort or LoadBalancer. Setting a combination of service types is not supported
by the operator. In addition, the DC and EPCS services that are part of
SAS/ACCESS and Data Connectors are also affected.
Configuration
1. Copy the $deploy/sas-bases/examples/cas/configure/cas-enable-external-services.yaml
   file to your $deploy/site-config directory.

2. In the copied file, set the publishBinaryService key to true to enable binary
   communication for clients from outside the Kubernetes cluster:

   - op: replace
     path: /spec/publishBinaryService
     value: true

3. If you want to enable the HTTP service, set the publishHTTPService key to true.
   This enables a service for REST access from outside the Kubernetes cluster. Be
   aware that REST access is enabled by default through a Kubernetes Ingress. If
   you have access through the Ingress, then enabling this HTTP service is
   redundant.

   - op: replace
     path: /spec/publishHTTPService
     value: true

4. The services are configured as NodePort by default. For deployments in
   Microsoft Azure or Amazon Web Services (AWS), NodePort is not supported and
   you must configure the services as LoadBalancer services.

   To configure them as LoadBalancer services, uncomment the serviceTemplate.
   Setting source ranges is optional. Delete the lines if you do not want them. Here
   is an example:

   - op: add
     path: /spec/serviceTemplate
     value:
       spec:
         type: LoadBalancer
         loadBalancerSourceRanges:
         - 192.168.0.0/16
         - 10.0.0.0/8
Note: SAS supports setting the type and loadBalancerSourceRanges keys in
the service specification. Adding any other key such as port or selector can
result in poor performance or prevent connectivity.
5. Set the publishExtHostnameSuffix key if you set the service to use
   LoadBalancer, your deployment is in Microsoft Azure or in AWS, and you meet
   either of these conditions:

   - you are using the DC or EPCS service.
   - the deployment is configured to use TLS.

   When you set the key, the CAS Operator adds a subject alternative name (SAN)
   for each service to the certificate that is created by sas-certframe. The operator
   also adds a DNS label annotation to the service.

   - op: add
     path: /spec/publishExtHostnameSuffix
     value: "-unique-name.subdomain-name"

   For Microsoft Azure, replace subdomain-name with your Azure region name,
   such as "eastus2.cloudapp.azure.com". The text in the value, up to the first
   period, is appended to the service name to create a unique DNS name. For
   example, the default value for the binary service is sas-cas-server-default-bin. If
   -orion.eastus2.cloudapp.azure.com is specified, then the operator creates the
   following annotation and publishes a DNS record for it.

   apiVersion: v1
   kind: Service
   metadata:
     annotations:
       service.beta.kubernetes.io/azure-dns-label-name: sas-cas-server-default-bin-orion
   ...
For the example, the DNS record is sas-cas-server-default-bin-
orion.eastus2.cloudapp.azure.com.
For AWS, replace subdomain-name with your subdomain of choice, such as
"viya.acme.com". For example the value you supply for
publishExtHostnameSuffix could be "-pisces.viya.acme.com". The service.beta.kubernetes.io/azure-dns-label-name
annotation will be
added to the deployment, but will be ignored by AWS. No DNS record will be
generated. The administrator must create a DNS alias/CNAME record for each
external service, including each node of the SAS Data Connect Accelerators,
after deployment. See “Configure External Access to Amazon Web Services
CAS Services” on page 108 for details.
6. In the base kustomization.yaml file, add the path to your new
   cas-enable-external-services.yaml file to the transformers block:

   transformers:
   ...
   - site-config/{{ DIRECTORY-PATH }}/cas-enable-external-services.yaml
   ...

7. Save and close the kustomization.yaml file.
Note: If you configure direct access to the CAS server via HTTP or binary and are
using full-stack TLS, the subject alternative names (SAN) in the certificate
generated by cert-manager must include the host name or IP address being used to
access that service. For the steps to include the host name or IP address, see “Add
the External Host Name or IP Address to the SAN in the Signed Certificate” in SAS
Viya Platform Encryption: Data in Motion .
If you are making these changes after the initial deployment, the binary and HTTP
services do not require that you restart CAS. For other services related to CAS,
refer to the documentation to determine if a restart is required.
Use one of the next two sections to identify the connection information that
programmers need to connect to CAS from outside the Kubernetes cluster.
Connection Information for Programmers: NodePort
You can use the following commands to identify the network port that maps to the
service:

   kubectl -n name-of-namespace get svc sas-cas-server-default-http
   kubectl -n name-of-namespace get svc sas-cas-server-default-bin

For a NodePort, find the network port that programmers connect to:

   NAME                         TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)
   sas-cas-server-default-bin   NodePort   10.0.5.236   <none>        5570:31066/TCP

Programmers need to know the host name of one of the Kubernetes nodes. You can
use the following command to list the node names:

   kubectl -n name-of-namespace get nodes

   NAME                    STATUS   ROLES    AGE   VERSION
   host02398.example.com   Ready    <none>   24d   v1.18.4
   host02483.example.com   Ready    <none>   24d   v1.18.4
   host02656.example.com   Ready    master   24d   v1.18.4
   host02795.example.com   Ready    <none>   24d   v1.18.4
   host02854.example.com   Ready    <none>   24d   v1.18.4

To connect from SAS Viya 3.5 or SAS 9.4 to the NodePort, run a CAS statement like
the following example:

   options CASHOST="host02398.example.com" CASPORT=31066;
   cas casauto;

For the sas-cas-server-default-http service, a REST client connects to one of the
Kubernetes nodes, such as host02398.example.com, and the port that is mapped to
8777:

   NAME                          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)
   sas-cas-server-default-http   NodePort   10.107.219.118   <none>        8777:31535/TCP

A REST client can connect to a resource such as
https://host02398.example.com:31535/cas-shared-default-http/.
Connection Information for Programmers:
LoadBalancer
You can use the following commands to identify the external IP address for the load
balancer:

   kubectl -n name-of-namespace get svc sas-cas-server-default-http
   kubectl -n name-of-namespace get svc sas-cas-server-default-bin

The output includes the IP address of the load balancer, and programmers connect
to the native port, 5570:

   NAME                         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
   sas-cas-server-default-bin   LoadBalancer   10.0.44.57   52.247.0.1    5570:32215/TCP

To connect from SAS Viya 3.5 or SAS 9.4 to the LoadBalancer, run a CAS
statement like the following example:

   options CASHOST="sas-cas-server-default-bin-orion.eastus2.cloudapp.azure.com";
   options CASPORT=5570;
   cas casauto;

Substitute your deployment-specific information for the sample unique value, orion,
and the sample region, eastus2.

For the sas-cas-server-default-http service, a REST client connects to the load
balancer on port 80 for HTTP or port 443 for HTTPS if TLS is configured. Only one
of the two ports is operational, depending on whether TLS is configured:

   NAME                          TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)
   sas-cas-server-default-http   LoadBalancer   10.0.61.68   52.247.0.168   8777:31979/TCP,80:30707/TCP,443:30032/TCP

A REST client can connect to a resource such as
https://sas-cas-server-default-http-orion.eastus2.cloudapp.azure.com:443/cas-shared-default-http/.
SAS Data Connect Accelerators
The SAS Data Connect Accelerators enable parallel data transfer between a
distributed CAS server (MPP) and some data sources such as Teradata and
Hadoop. For information about enabling connectivity for SAS/ACCESS and Data
Connectors, see the README file at $deploy/sas-bases/examples/data-access/README.md
(for Markdown) or $deploy/sas-bases/docs/configuring_sasaccess_and_data_connectors_for_sas_viya_4.htm
(for HTML).
More Documentation
For more information about the services, see “Kubernetes Services for CAS” in SAS
Viya Platform Operations: Servers and Services .
Enable Host Launch
By default, CAS cannot launch sessions under a user's host identity. All sessions
run under the cas service account instead. CAS can be configured to allow for host
identity launches by including a patch transformer in the kustomization.yaml file. To
enable host launch for CAS, see the “Enable Host Launch in the CAS Server”
section of the README file located at $deploy/sas-bases/examples/cas/configure/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_cas.htm
(for HTML format).
Enable State Transfer for CAS Servers
Note: If you are not using the SAS Viya Platform Deployment Operator or sas-
orchestration deploy to manage your deployment, skip this section.
Enabling state transfers preserves the sessions, tables, and state of a running CAS
server for a new CAS server instance that is being started as part of a CAS server
update. To enable the state transfer of CAS servers, see the README at $deploy/sas-bases/overlays/cas-server/state-transfer/README.md
(for Markdown
format) or $deploy/sas-bases/docs/state_transfer_for_cas_server_for_the_sas_viya_platform.htm
(for HTML
format).
Note: You cannot enable state transfer and CAS auto-restart in the same SAS Viya
platform deployment. If you want to enable state transfer for a deployment that
already has CAS auto-restart enabled, you must first disable CAS auto-restart
before enabling state transfer.
Enable CAS Auto-Restart After Updates
Note: If you are not using the SAS Viya Platform Deployment Operator or sas-
orchestration deploy to manage your deployment, skip this section.
By default, CAS does not automatically restart during version updates performed by
the SAS Viya Platform Deployment Operator or sas-orchestration deploy. To change
the default to enable auto-restart, see the “CAS Auto-Restart During Version
Updates" section of the README file located at
$deploy/sas-bases/overlays/cas-server/README.md (for Markdown format) or
$deploy/sas-bases/docs/cas_server_for_the_sas_viya_platform.htm (for HTML format).
Note: You cannot enable CAS auto-restart and state transfer in the same SAS Viya
platform deployment.
Configure GPUs for CAS
The SAS GPU Reservation Service aids SAS processes in resource sharing and
utilization of the Graphic Processing Units (GPUs) that are available in a Kubernetes
Pod. It is required in every SAS Cloud Analytic Services (CAS) Pod that is GPU-
enabled. For information about implementing the SAS GPU Reservation Service,
see the README located at $deploy/sas-bases/examples/gpu/README.md
(for
Markdown format) or at $deploy/sas-bases/docs/sas_gpu_reservation_service.htm
(for HTML format).
Note: This README only describes how to use the SAS GPU Reservation Service
for CAS. To learn about how other products use GPUs, consult the READMEs for
each product.
Create a Personal CAS Server
For development purposes in applications such as SAS Studio, you might need to
allow data scientists the ability to work with a CAS server that is local to their SAS
Compute session. This personal CAS server is just like a regular (shared) CAS
server except it is simpler, relatively short-lived, and is only for one person.
To set up a personal CAS server, see the README file at $deploy/sas-bases/overlays/sas-programming-environment/personal-cas-server/README.md
(for
Markdown format) or at $deploy/sas-bases/docs/configuring_sas_compute_server_to_use_a_personal_cas_server.htm
(for
HTML format).
To set up a personal CAS server that uses a GPU, see the README file at $deploy/sas-bases/overlays/sas-programming-environment/personal-cas-server-with-gpu/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configuring_sas_compute_server_to_use_a_personal_cas_server_with_gpu.htm
(for HTML format).
To configure a personal CAS server with or without a GPU, see the README file at $deploy/sas-bases/examples/sas-programming-environment/personal-cas-server/README.md
(for Markdown format) or at configuration_settings_for_the_personal_cas_server.htm
(for HTML format).
Use Kerberos Connections to Connect to the CAS
Server
If you want to connect to the CAS Server from external clients through the binary or
REST ports, you must also configure the CAS Server to accept direct Kerberos
connections. That connection can use either System Security Services Daemon
(SSSD) or nss_wrapper. Unlike SSSD, nss_wrapper does not require running in a
privilege-elevated container. For more information about the difference between
SSSD and nss_wrapper, see the "Kerberos Connections" section of the README
file located at $deploy/sas-bases/examples/kerberos/sas-servers/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configuring_sas_servers_for_kerberos_in_sas_viya_platform.htm
(for HTML
format).
Configure SSSD
Enable SSSD
The configuration of SSSD is, by default, performed automatically for you. The
automatic process uses the configuration service LDAP settings (if they exist) to
construct an sssd.conf file. However, until that configuration is enabled, SSSD will
not be available to your SAS Viya platform deployment. To enable that SSSD
configuration:
1. Add a reference to the sas-bases/overlays/cas-server/cas-sssd-sidecar.yaml file
   to the transformers block in the base kustomization.yaml file. The new line must
   precede any lines for TLS transformers and the line for required transformers.
   Here is an example:

   transformers:
   - sas-bases/overlays/cas-server/cas-sssd-sidecar.yaml
   ...
   - sas-bases/overlays/required/transformers.yaml
   ...

2. Follow the steps described in the "Disable Cloud Native Mode" section of the
   "Configuration Settings for CAS" README file. The README is located at
   $deploy/sas-bases/examples/cas/configure/README.md (for Markdown
   format) and $deploy/sas-bases/docs/configuration_settings_for_cas.htm
   (for HTML format).

3. Because SSSD requires host authentication, follow the steps described at
   "Enable Host Launch" on page 63.

4. Save and close the kustomization.yaml file.
Add a Custom Configuration for SSSD
If you would prefer to use a custom configuration for SSSD instead of the default,
after completing the steps in “Enable SSSD” on page 65, perform the following
steps:
1. Copy the $deploy/sas-bases/examples/cas/configure/cas-sssd-example.yaml
   file to the location of your CAS server overlay, such as
   $deploy/site-config/cas-server/cas-sssd-example.yaml.

2. Add the location of the copied cas-sssd-example.yaml to the transformers block
   of the base kustomization.yaml file. The new line should go after the required
   transformers line. Here is an example based on the example used in step 1:

   transformers:
   ...
   - sas-bases/overlays/required/transformers.yaml
   - site-config/cas-server/cas-sssd-example.yaml
   ...

3. Create your sssd.conf file and add your custom SSSD configuration to it. SAS
   recommends putting the sssd.conf file in the $deploy/site-config directory.

4. Add the following code to the secretGenerator block of the base
   kustomization.yaml file using the path to the sssd.conf file you created in step 3.
   Here is an example using $deploy/site-config/cas-server/sssd.conf as that path:

   secretGenerator:
   ...
   - name: sas-sssd-config
     files:
       - SSSD_CONF=site-config/cas-server/sssd.conf
     type: Opaque
   ...

5. Save and close the kustomization.yaml file.
Configure nss_wrapper
To configure nss_wrapper, add the following to the transformers block of the base
kustomization.yaml file. The reference to the nss_wrapper must come before the
sas-bases/overlays/required/transformers.yaml reference.

   transformers:
   ...
   - sas-bases/overlays/kerberos/nss_wrapper/add-nss-wrapper-transformer.yaml
   - sas-bases/overlays/required/transformers.yaml
Configure OpenSearch
Note: The SAS Viya Programming offering does not include OpenSearch. If your
SAS Viya platform order contains SAS Viya Programming, skip this section.
Based on your decision about your OpenSearch instance (see “Internal versus
External OpenSearch Instances” in System Requirements for the SAS Viya
Platform ), you must perform steps to deploy OpenSearch (internal) or connect to an
existing OpenSearch instance (external).
Internal Instances of OpenSearch
Initial Customizations
OpenSearch is an Apache 2.0-licensed distribution with enterprise security. The
SAS Viya platform includes OpenSearch and uses its distributed search cluster in
infrastructure and solution services. Some additions to the base kustomization.yaml
file must be made to configure OpenSearch.
Note: The example kustomization.yaml file, located at “Initial kustomization.yaml
File” on page 32, includes these customizations.
1. Add the following line to the resources block of the base kustomization.yaml file:

   resources:
   ...
   - sas-bases/overlays/internal-elasticsearch
   ...

2. Add the following line to the transformers block of the base kustomization.yaml
   file:

   transformers:
   ...
   - sas-bases/overlays/internal-elasticsearch/internal-elasticsearch-transformer.yaml
   ...
Configure Default Virtual Memory Resources
Note: If you are deploying on Red Hat OpenShift and have completed the steps in
“Security Context Constraints and Service Accounts” on page 21, you have
performed the necessary steps and should skip this section.
The OpenSearch pods require additional virtual memory resources. In order to
provide these memory resources, a transformer uses a privileged container to set
the virtual memory for the mmapfs directory to the required level. Therefore,
privileged containers must be permitted by your Pod security policies. For more
information about Pod security policies, see
https://kubernetes.io/docs/concepts/policy/pod-security-policy/.
You have three options:
- If privileged containers are enabled, add a reference to the
  sysctl-transformer.yaml file to the transformers block of the base
  kustomization.yaml file. This transformer must be included after any TLS
  transformers and before the sas-bases/overlays/required/transformers.yaml
  transformer.

  Note: The sysctl-transformer.yaml transformer uses a privileged container to set
  vm.max_map_count. If privileged containers are not allowed in your deployment,
  do not add this line.

  Here is an example:

  transformers:
  ...
  - sas-bases/overlays/network/ingress/security/transformers/...
  - sas-bases/overlays/internal-elasticsearch/sysctl-transformer.yaml
  - sas-bases/overlays/required/transformers.yaml
  ...

  Note: Using this option requires modifying the OpenShift SCC for sas-opendistro
  to allow it. For more information, see the README file at
  $deploy/sas-bases/examples/configure-elasticsearch/internal/openshift/README.md
  (for Markdown format) or $deploy/sas-bases/docs/opensearch_on_red_hat_openshift.htm
  (for HTML format).
- If privileged containers are not allowed in your environment, a Kubernetes
  administrator with elevated permissions can set the virtual memory manually
  before performing the SAS Viya platform deployment. All nodes that run
  workloads in a class that is tolerated by the stateful workload class are affected
  by this requirement.

  To configure the virtual memory settings for mmapfs manually:

  1. Log on to the first stateful node as root or with a sudoers account.

  2. Set the virtual memory using the appropriate method:

     - To set the value permanently, use your preferred text editor to modify
       /etc/sysctl.conf or the equivalent in your environment. Update the
       vm.max_map_count setting to 262144 and save the file.

     - To set the value temporarily, run the following command:

       sysctl -w vm.max_map_count=262144

  3. (Optional) Verify the modified setting:

     sysctl vm.max_map_count

  4. Repeat the previous steps on each node that is labeled for stateful
     workloads.
If you are using a managed Kubernetes cluster, your cloud provider probably
provisions the nodes dynamically. In this instance, be aware that manual
modifications do not persist after a restart of a Kubernetes node. The cluster
administrator must use an alternative method to save the vm.max_map_count
setting.
- You can disable the use of mmap at a cost of performance and memory usage.
  To disable mmap, include a reference to the disable-mmap-transformer.yaml
  overlay in the transformers block of the base kustomization.yaml file.

  transformers:
  ...
  - sas-bases/overlays/internal-elasticsearch/disable-mmap-transformer.yaml
Configure a StorageClass
Deploying OpenSearch requires a StorageClass that provides block storage (such
as virtual disks) or a local file system mount to store the search indices. For the
instructions to configure such a StorageClass for all cloud providers, see the
"Configure a Default StorageClass for OpenSearch" README, located at $deploy/sas-bases/examples/configure-elasticsearch/internal/storage/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configure_a_default_storageclass_for_opensearch.htm
(for HTML format).
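As a generic illustration only (the class name is a placeholder, the provisioner and
parameters shown are for Azure managed disks, and the provider-specific guidance
in the referenced README is authoritative), a block-storage StorageClass might
look like this:

   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: sas-opensearch-storage
   provisioner: disk.csi.azure.com       # use your provider's CSI provisioner
   parameters:
     skuName: Premium_LRS                # premium SSD-backed block storage
   reclaimPolicy: Delete
   volumeBindingMode: WaitForFirstConsumer
   allowVolumeExpansion: true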
Configure High Availability
To enable HA for OpenSearch, see the README file located at $deploy/sas-bases/examples/configure-elasticsearch/internal/topology/README.md
(for
Markdown format) or at $deploy/sas-bases/docs/configure_a_default_topology_for_opensearch.htm
(for HTML format).
Configure a Run User
A fixed user ID (UID) is required so that files that are written to storage for the
search indices can be read after subsequent restarts. In a default deployment of the
SAS Viya platform, the OpenSearch JVM process runs under the fixed UID of 1000.
However, on some environments, using a UID of 1000 can lead to a conflict
between users within the container and those on the host. At the initial deployment,
you can select a new run user for the OpenSearch pods.
Note: This task can only be performed at the initial deployment.
To configure a UID other than 1000, see the README file located at $deploy/sas-bases/examples/configure-elasticsearch/internal/run-user/README.md
(for
Markdown format) or at $deploy/sas-bases/docs/configure_a_run_user_for_opensearch.htm
(for HTML format).
External Instances of OpenSearch
For the steps to configure an external instance of OpenSearch in your deployment,
see the README file located at $deploy/sas-bases/examples/configure-elasticsearch/external/README.md
(for Markdown format ) or $deploy/sas-bases/docs/configure_an_external_opensearch_instance.htm
(for HTML
format).
Additional Configuration for FIPS
Compliance
Starting with 2023.06, the SAS Viya platform supports deployments in a FIPS-
compliant environment. However, neither the internal nor the external instance of
OpenSearch supports FIPS at this time. In order to enable OpenSearch to start and
run in a FIPS-enabled environment, you must apply a transformer to your
deployment manifest.
1Create a transformer named opendistro-disable-fips-transformer.yaml in the
directory $deploy/site-config with the following contents:---apiVersion: builtinkind: PatchTransformermetadata: name: sas-opendistro-disable-fips-transformerpatch: |- - op: add path: /spec/config/jvm/- value: -Dcom.redhat.fips=falsetarget: kind: OpenDistroCluster name: sas-opendistro
Note: The formatting for this file is important. Be sure to copy and paste the
content exactly as it appears here.
2Add the opendistro-disable-fips-transformer.yaml file to the transformers block of
the base kustomization.yaml file. Here is an example:transformers:...- site-config/opendistro-disable-fips-transformer.yaml
Configure SAS/CONNECT Settings
Support External Sign-on
To enable NodePort or LoadBalancer, see the README file at $deploy/sas-bases/examples/sas-connect-spawner/README.md
(for Markdown format) or $deploy/sas-bases/docs/configure_sasconnect_spawner_in_the_sas_viya_platform.htm
(for HTML).
Note: In managed environments like Microsoft Azure, you cannot access the
NodePort service from a client outside of the cluster.
Spawn SAS/CONNECT Servers Within the
Spawner Pod
By default, SAS/CONNECT servers cannot be spawned within the spawner pod.
Instead they are spawned in the SAS/CONNECT server pods. However, SAS clients
at 9.4 M6 or older and 9.4 M7 clients that do not have the hot fix linked to SAS Note
68611 cannot reach launched SAS/CONNECT server pods. Those clients must
enable spawning SAS/CONNECT servers within the spawner pods by applying the
security settings in the enable-spawned-servers.yaml example file. For details, see
the "Allow the Ability to Spawn Servers within the Spawner Pod" section of the
"Configure SAS/CONNECT Spawner in SAS Viya" README file located at $deploy/sas-bases/examples/sas-connect-spawner/README.md
(for Markdown
format) and at $deploy/sas-bases/docs/configure_sasconnect_spawner_in_the_sas_viya_platform.htm
(for HTML
format).
Connection Information for Programmers:
NodePort
To sign on when a NodePort is specified:
1Get the NodePort value that is mapped to the service port.
kubectl describe service/sas-connect-spawner-nodeport
or
kubectl get service/sas-connect-spawner-nodeport -o yaml
The relevant part of the output looks like this:
- name: service
  nodePort: 24133 // port that is exposed externally
  port: 17551
  protocol: TCP
2Determine the host name of one of the Kubernetes nodes. If you are using no
TLS or using TLS with self-signed certificates with the nodes in the DNS list or
using a wildcard to match the nodes, you can use the following command to list
the node names.
kubectl -n name-of-namespace get nodes
NAME                    STATUS   ROLES    AGE   VERSION
host02398.example.com   Ready    <none>   24d   v1.18.4
host02483.example.com   Ready    <none>   24d   v1.18.4
host02656.example.com   Ready    master   24d   v1.18.4
host02795.example.com   Ready    <none>   24d   v1.18.4
host02854.example.com   Ready    <none>   24d   v1.18.4
If you are using TLS with a cert-manager (such as sas-viya-issuer), sas-
certframe adds the node name that the pod is running on to the certificate for the
service. Use this command to find the log entry describing the addition of the
node name to the certificate:
Note: Because of the length of the command and the margin of the page, this
command appears as more than one line. The command should be entered as a
single line.
kubectl -n name-of-namespace logs deployment/sas-connect-spawner sas-certframe | grep KUBE_NODE_NAME
In the output, the node name is listed and can be provided to programmers. Here
is an example:
2020-09-03 23:11:45 - [INFO] - Adding KUBE_NODE_NAME host02398.example.com to SAS_CERTIFICATE_SAN_DNS
3Sign on from an external client machine.
%let rem=node-name-from-step-2 nodeport-from-step-1;
signon rem user='user-ID' password='password';
Using the examples from steps 1 and 2, the command would look like this:
%let rem=host02398.example.com 24133;
signon rem user='myuserid' password='mypassword';
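If you prefer to script the lookup in step 1, the NodePort value can also be extracted directly with a JSONPath query. This is a standard kubectl feature rather than a SAS-documented step, and the port index assumes the service exposes the port shown above first:
kubectl -n name-of-namespace get service/sas-connect-spawner-nodeport \
  -o jsonpath='{.spec.ports[0].nodePort}'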
Connection Information for Programmers:
LoadBalancer
1Determine the DNS name for the IP address that was provided by the load
balancer. If you have not already registered a DNS name, you should do so now.
The requirement to register a DNS name is described at “Kubernetes Cluster
Requirements” in System Requirements for the SAS Viya Platform .
Note: For information about DNS names while using Azure, see Apply a DNS
label to the service .
2Sign on from an external client machine.
%let rem=DNS-name-from-step-1 17551;
signon rem user='user-ID' password='password';
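As a convenience, if your load balancer publishes an external IP address, you can often retrieve it with a JSONPath query similar to the following sketch. The service name sas-connect-spawner-loadbalancer is an assumption for illustration and might differ in your deployment; register the DNS name for whatever address is returned:
kubectl -n name-of-namespace get service/sas-connect-spawner-loadbalancer \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'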
Configure SAS Programming Run-Time
Environment
External Storage Class for SAS
Programming Run-Time Environment
All SAS Viya Platform Servers
The Batch Server, Compute Server, and SAS/CONNECT Server are SAS Viya
platform servers that use the SAS Programming Run-time Environment. They create
a number of temporary files for run-time information in a location that is local to the
sas-programming-environment pod. By default, these pods are backed by an
emptyDir volume named viya, which is mounted automatically. However, using the
default emptyDir volume is not recommended because SAS programming
components can consume large amounts of storage quickly and cause nodes to
shut down.
To configure different storage classes for the viya volume, see the README file at $deploy/sas-bases/examples/sas-programming-environment/storage/README.md
(for Markdown format) or $deploy/sas-bases/docs/sas_programming_environment_storage_tasks.htm
(for HTML format).
Batch Server Only
If you want the Batch Server to have storage that is different than the Compute
Server and the SAS/CONNECT Server, such as using persistent storage rather than
ephemeral storage, see the README file at $deploy/sas-bases/examples/sas-batch-server/storage/README.md
(for Markdown format) or $deploy/sas-bases/docs/sas_batch_server_storage_task_for_checkpoint_restart.htm
(for HTML
format).
GPUs for SAS Programming Run-Time
Environment
For large amounts of data, some procedures for SAS Programming Run-Time
Environment run faster on a graphics processing unit (GPU) than on a CPU with
multiple threads. The SAS GPU Reservation Service aids SAS processes in
resource sharing and utilization of the GPUs that are available in a Kubernetes pod.
The SAS Programming Environment container image makes this service available,
but it must be enabled in order to take advantage of the GPUs in your cluster. To
enable the SAS GPU Reservation Service for the SAS Programming Run-time
Environment, see the README file at $deploy/sas-bases/overlays/sas-programming-environment/gpu/README.md
(for Markdown format) or $deploy/sas-bases/docs/sas_gpu_reservation_service_for_sas_programming_environment.htm
(for
HTML format).
Note: This README only describes how to use the SAS GPU Reservation Service
for the SAS Programming Run-Time Environment. To learn about how other
products use GPUs, consult the READMEs for each product.
Configure SAS Workload Orchestrator
The SAS Workload Orchestrator Service is used to manage workload started on
demand through the launcher service. The service has manager pods in a stateful
set and server pods in a daemon set.
The SAS Workload Orchestrator is deployed by default. For the instructions to
disable SAS Workload Orchestrator, see the README at $deploy/sas-bases/examples/sas-workload-orchestrator/enable-disable/README.md
(for
Markdown format) or at $deploy/sas-bases/docs/disabling_and_enabling_the_sas_workload_orchestrator_service.htm
(for
HTML format).
The SAS Workload Orchestrator daemons require information about resources on
the nodes that can be used to run jobs. In order to obtain accurate resource
information, you must add a ClusterRole and a ClusterRoleBinding to the SAS
Workload Orchestrator service account. For more information about the required
ClusterRoles, see the README file located at $deploy/sas-bases/overlays/sas-workload-orchestrator/README.md
(for Markdown format) or at $deploy/sas-bases/docs/cluster_privileges_for_sas_workload_orchestrator_service.htm
(for HTML
format).
For information about configuring SAS Workload Orchestrator, see the README file
located at $deploy/sas-bases/examples/sas-workload-orchestrator/configure/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_sas_workload_orchestrator_service.htm
(for
HTML format).
Configure Redis
The SAS Viya platform uses Redis to provide a distributed cache technology in its
deployments. For information about configuring Redis, see the README file at $deploy/sas-bases/examples/redis/server/README.md
(for Markdown format) or
at $deploy/sas-bases/docs/configuration_settings_for_redis.htm
(for HTML
format).
Set Default SAS LOCALE and ENCODING in SAS
Launcher Service
Setting the default locale and encoding for the SAS Launcher Service controls the
default SAS LOCALE and ENCODING for SAS Compute Server, SAS/CONNECT ,
and SAS Batch Server, unless overridden by another specification. In order to set or
modify these settings, see the “Locale and Encoding Defaults” section of the
README file at $deploy/sas-bases/examples/sas-launcher/configure/README.md
(for Markdown format) or at $deploy/sas-bases/docs/configuration_settings_for_sas_launcher_service.htm
(for HTML format).
Change the Location of the NFS Server
SAS provides a transformer that allows you to change the location of the NFS
server hosting the user's home directories. For information about using the
transformer, see the “NFS Server Location” section of the README file located at $deploy/sas-bases/examples/sas-launcher/configure/README.md
(for
Markdown format) or $deploy/sas-bases/docs/configuration_settings_for_sas_launcher_service.htm
(for HTML format).
Enable Access Methods Through LOCKDOWN
System Option
The SAS Viya platform uses the LOCKDOWN system option to limit access to files
and features. By default, the following methods cannot be used to access files and
specific SAS features for a SAS session that is executing in batch mode or server
processing mode:
nEMAIL
nFTP
nHADOOP
nHTTP
nPYTHON
nPYTHON_EMBED
nSOCKET
nTCPIP
nURL
To enable any of these access methods, see the README file at $deploy/sas-bases/examples/sas-programming-environment/lockdown/README.md
(for
Markdown format) or $deploy/sas-bases/docs/lockdown_settings_for_the_sas_programming_environment.htm
(for HTML
format). For more information about the LOCKDOWN system option, see
“LOCKDOWN System Option” in SAS Viya Platform: Programming Run-Time
Servers .
Specify PersistentVolumeClaims to Use
ReadWriteMany StorageClass
The manifest file that the base kustomization.yaml creates must have information
about which PVCs in your deployment should take advantage of the StorageClass
you created for your cloud provider.
1In the $deploy/site-config directory, create a file named storageclass.yaml. Use the following content in that file.
kind: RWXStorageClass
metadata:
  name: wildcard
spec:
  storageClassName: {{ RWX-STORAGE-CLASS }}
Replace {{ RWX-STORAGE-CLASS }}
with the name of your cluster’s
StorageClass that provides ReadWriteMany (RWX) access.
2In the base kustomization.yaml file, add a patches block with the following
content.
Note: The annotationSelector
line in the following code is too long for the
width of the page. A line break has been added to address the issue. If you copy
this code for use, be sure to remove the line break.
patches:
- path: site-config/storageclass.yaml
  target:
    kind: PersistentVolumeClaim
    annotationSelector: sas.com/component-name in (sas-backup-job,sas-data-quality-services,sas-commonfiles,sas-cas-operator,sas-pyconfig)
Note: If you are using the example kustomization.yaml file included at “Initial
kustomization.yaml File” on page 32, the patches block is already present.
Depending on the software that you are deploying, you might have to add more
content to the annotationSelector
line.
nIf your order contains SAS Model Risk Management, add sas-risk-cirrus-search to the parenthetical list in the annotationSelector value.
nIf your order includes SAS Risk Modeling, add sas-risk-modeling-core to the parenthetical list in the annotationSelector value.
Note: If you have changed the accessMode for CAS per the instructions at “Change accessMode” on page 50, remove sas-cas-operator from the parenthetical list in the annotationSelector value.
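As an illustration only, here is a sketch of how the storageclass.yaml file from step 1 might look after substitution, written as a shell here-document. The class name sas-rwx is a hypothetical placeholder for whatever ReadWriteMany-capable StorageClass exists in your cluster:
# Run from the $deploy directory so that the file lands in $deploy/site-config.
cat > site-config/storageclass.yaml <<'EOF'
kind: RWXStorageClass
metadata:
  name: wildcard
spec:
  storageClassName: sas-rwx
EOF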
Configure Open Source Integration Points
The SAS Viya platform integrates with open source programming languages such
as Python and R in both directions, from the SAS Viya platform to open source and
back. With this integration, you can call out to open-source engines from within the
SAS Viya platform interfaces to leverage code that was previously written in other
environments. You can also write open-source code to access powerful SAS
Analytics from your coding interfaces of choice, including Jupyter Notebooks and RStudio. You can use Python or R directly with SAS or integrate SAS into applications
using REST APIs to process operations more efficiently on a multithreaded, in-
memory, massively parallel processing engine.
For a high-level list of the steps to install, configure, and deploy Python and R to
enable integration in the SAS Viya platform, see the README located at $deploy/sas-bases/examples/sas-open-source-config/README.md
(for Markdown format)
or at $deploy/sas-bases/docs/configure_python_and_r_integration_with_sas_viya.htm
(for HTML format).
Configure SAS/ACCESS
To configure and deploy your SAS/ACCESS products, see the “Configuring
SAS/ACCESS and Data Connectors for Viya 4” README file at $deploy/sas-bases/examples/data-access/README.md
(for Markdown) or $deploy/sas-bases/docs/configuring_sasaccess_and_data_connectors_for_sas_viya_4.htm
(for
HTML).
Install the Orchestration Tool
The sas-orchestration tool is required whether you deploy with the sas-orchestration command or with the SAS Viya Platform Deployment Operator. If you have not already deployed the orchestration tool, do it now. Follow the instructions in the “Prerequisites” section of
the README file at $deploy/sas-bases/examples/kubernetes-tools/README.md
(for Markdown format) or $deploy/sas-bases/docs/using_kubernetes_tools_from_the_sas-orchestration_image.htm
(for HTML
format).
Create the SASDeployment Custom
Resource
Note: This section is required only if you are using the SAS Viya Platform
Deployment Operator.
Add an imagePullSecret for the SAS Viya Platform
Namespace
If your SAS Viya platform content has been mirrored and the mirror requires
authentication, you must create an imagePullSecret for the namespace in which you
are deploying the SAS Viya platform. The imagePullSecret must be named sas-orchestration-secret. For more information about the command to create imagePullSecrets, see Pull an Image from a Private Registry.
All Cloud Providers
Use the following command to add the sas-orchestration-secret.
kubectl -n name-of-namespace \
  create secret generic sas-orchestration-secret \
  --type=kubernetes.io/dockerconfigjson \
  --from-file=.dockerconfigjson=file-with-secret-content
For example, if you are deploying the SAS Viya platform into a namespace called
viya, and the secret content is in a file named site-config/image-pull-secret.json, the
command would look like this:
kubectl -n viya \
  create secret generic sas-orchestration-secret \
  --type=kubernetes.io/dockerconfigjson \
  --from-file=.dockerconfigjson=site-config/image-pull-secret.json
Red Hat OpenShift Alternative
If you are deploying on Red Hat Openshift, you can create the secret from the
existing secret.
1Find the name of the secret:
kubectl -n name-of-namespace get secret | grep default-dockercfg
The output looks like this:
default-dockercfg-#####
2Create a file that contains the contents of the secret:
kubectl -n name-of-namespace get secret output-from-step-1 --output="jsonpath={.data.\.dockercfg}" | base64 --decode > name-of-namespace.default.dockercfg.json
3Create the secret from this file:
kubectl -n name-of-namespace \
  create secret generic sas-orchestration-secret \
  --type=kubernetes.io/dockercfg \
  --from-file=.dockercfg=name-of-namespace.name-of-secret.json
The name-of-secret is a name you choose. It should have meaning to you or
your organization and help to identify the secret.
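Assembled end to end, the three steps above can be run as a short shell sequence. The commands are the same ones shown in the steps; only the shell variables are added for illustration:
NS=name-of-namespace
# Step 1: find the default dockercfg secret name.
DOCKERCFG_SECRET=$(kubectl -n "$NS" get secret | grep default-dockercfg | awk '{print $1}')
# Step 2: write its contents to a file.
kubectl -n "$NS" get secret "$DOCKERCFG_SECRET" \
  --output="jsonpath={.data.\.dockercfg}" | base64 --decode > "$NS".default.dockercfg.json
# Step 3: create the sas-orchestration-secret from that file.
kubectl -n "$NS" create secret generic sas-orchestration-secret \
  --type=kubernetes.io/dockercfg \
  --from-file=.dockercfg="$NS".default.dockercfg.json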
Run the create sas-deployment-cr Command
Note: The container to create the custom resource runs under the sas ID and
group. Ensure that the $(pwd)
directory, specified in the create command, has
permissions that can accommodate the sas ID and group.
As an administrator with local cluster permissions, run the following command to
create the SASDeployment custom resource and to name it $deploy-sasdeployment.yaml. Make sure that the command is run from the parent directory
of the $license and $deploy directories. Here is the command format:
docker run --rm \
  -v $(pwd):mount-for-working-directory-inside-container \
  sas-orchestration \
  create sas-deployment-cr \
  --deployment-data certificates-information \
  --license license-information \
  --user-content location-of-deployment-files \
  --cadence-name stable-or-lts \
  --cadence-version cadence-version-number \
  [--cadence-release cadence-release-number \]
  [--image-registry mirror-registry-location \]
  [--repository-warehouse repository-warehouse-location \]
  > $deploy-sasdeployment.yaml
Here is a description of the values to be substituted for the variables in the
command:
Note: For information about all the flags available for the create sas-deployment-cr
command, use the help flag:
docker run --rm \
  sas-orchestration \
  create sas-deployment-cr \
  --help
mount-for-working-directory-inside-container
The path at which the current working directory should be mounted inside the
container.
certificates-information
The location of the *-certs.zip file. It can be a directory path, which includes the
mount for the working directory, or a go-getter URL.
Note: If you use a go-getter URL for any of the values in this command, that go-
getter should reference a Git repository. If it is a private Git repository, the URL must
include the user ID and the personal access token (PAT). If it is a public Git
repository, the user ID and PAT are not required. The URL must use the git::https
protocol.
Do not use a go-getter URL to refer to local files.
license-information
The location of the license, which can be a directory path, including the mount
for the working directory, or a go-getter URL.
location-of-deployment-files
The location of the $deploy directory. This can be a directory path, including the
mount for the working directory and the $deploy directory name, or a go-getter
URL.
stable-or-lts
Use stable for software in the Stable cadence, or use lts for the Long-Term Support cadence.
cadence-version-number
The cadence version number of the software to be deployed (for example,
2020.1.4).
[cadence-release-number ] (optional)
The latest cadence release or a specific cadence release of the cadence version
number of the software to be deployed. See the important note that follows.
Note: Because the orchestration tool generates an internal sas-bases folder based
on the information in this command, you can ensure that the data is consistent by
reviewing the $deploy/sas-bases/.orchestration/cadence.yaml file. Ensure that the cadence-type, cadence-version-number, and cadence-release-number flags in the command match the name, version, and release fields, respectively, of the cadence.yaml file.
[mirror-registry-location ] (optional)
The URL for the docker image registry (for example, registry.example.com). This flag is required if you are deploying with a mirror registry.
Note: If you are deploying on Red Hat OpenShift, the URL must be in the following format: service-name.name-of-registry-namespace.svc:port/platform-namespace. For example, image-registry.openshift-image-registry.svc:5000/myviya. Use the same value you used to replace {{ MIRROR-HOST }} in the mirror-flattened.yaml file (see step 2 of “Configure the Mirror Registry” on page 14).
[repository-warehouse-location ] (optional)
The URL for the warehouse describing what should be deployed. This flag is
needed if you are managing a dark environment.
$deploy
Precede the name of the sasdeployment.yaml file with the name of the directory that the software is being deployed from. For example, if you use viya1 as $deploy, the file should be named viya1-sasdeployment.yaml.
Note: The files being pulled into the custom resource must be text files. If you must use a binary file, it should be added to the custom resource by using a go-getter URL. For information about go-getter URLs, see https://github.com/hashicorp/go-getter.
IMPORTANT If you specify the cadence release, its specification affects
how the operator reconciles the custom resource. Consider the following
when deciding how to use the --cadence-release option.
nIf you are using a mirror registry, the cadence-type, cadence-version-number, and cadence-release-number must match the --deployment-assets, --cadence, and --release flags used to populate that mirror.
nIf you are using a mirror registry and did not use the --deployment-assets, --cadence, or --release flags to populate that mirror, then the cadence-type and cadence-version-number must be the latest available at the time the mirror was created. The cadence-release-number must be either an empty string ("") or the latest release available at the time the mirror was created.
nSet the value to "" in order to force the operator to use the latest release for the requested cadence-version-number. A consequence of using this value is that processes that seem unrelated to software updates, such as renewing your license or making configuration changes, might also spawn updates to the running software.
nSet the value to a specific cadence release value to use that cadence
release in the custom resource. Specifying the cadence release of the
currently running deployment can be used to make configuration changes
without introducing updates. If the specified cadence release does not
exist, the user is presented with an error.
nIf no cadence release is specified, as in the examples that follow, the
operator uses the latest cadence release when the custom resource is
initially introduced to a namespace. The operator automatically assigns
the chosen release value in the cluster representation of this custom
resource. If the user later applies an updated custom resource, without a
cadence release specified, Kubernetes preserves the previously assigned
value of this field and therefore the versions of any software already
deployed into that namespace. To change the version of software in an
existing namespace, assign a cadence release as described above.
Here is an example of the command with the following values.
nThe directory should be mounted in /cwd/ in the container.
nThe *-certs.zip file is located at /cwd/license/SASViyaV4_69SWC4_certs.zip.
nThe license file from SAS is located at /cwd/license/SASViyaV4_69SWC4_lts_2021_license_2020-09-08T105930.jwt.
nThe $deploy directory is /cwd/viya1.
nThe software being deployed is Long-Term Support 2021.1.
docker run --rm \
  -v $(pwd):/cwd/ \
  sas-orchestration \
  create sas-deployment-cr \
  --deployment-data /cwd/license/SASViyaV4_69SWC4_certs.zip \
  --license /cwd/license/SASViyaV4_69SWC4_lts_2021_license_2020-09-08T105930.jwt \
  --user-content /cwd/viya1 \
  --cadence-name lts \
  --cadence-version 2021.1 \
  > viya1-sasdeployment.yaml
|
Here is an excerpt of the generated custom resource:
...
---
apiVersion: orchestration.sas.com/v1alpha1
kind: SASDeployment
metadata:
  name: sas-viya
spec:
  cadenceName: lts
  cadenceVersion: "2021.1"
  license:
    secretKeyRef:
      name: sas-viya
      key: license
  clientCertificate:
    secretKeyRef:
      name: sas-viya
      key: cert
  caCertificate:
    secretKeyRef:
      name: sas-viya
      key: cacert
  userContent:
    files:
      kustomization.yaml: |
        resources:
        - sas-bases/base
        - sas-bases/overlays/cert-manager-issuer
...
Notice that the values that were entered in the command are included in the custom
resource. The custom resource also includes a transcription of the contents of the
base kustomization.yaml file.
Here is an example of the command that uses references (in the form of go-getter
URLs) to the locations for the values:
nThe directory should be mounted in /cwd/deploy in the container.
nThe *-certs.zip file is located at https://example.com/SASViyaV4_69SWC4_certs.zip.
nThe license file from SAS is located at https://example.com/SASViyaV4_69SWC4_lts_2021_license_2020-09-08T105930.jwt.
nThe $deploy directory is git::https://user:token@git.example.com/repository.git//viya1.
nThe software that is being deployed is Long-Term Support 2021.1.
Note: When fetching from a Git repository, in order for the content to be cloned locally by the operator before being used, you must use the annotation environment.orchestration.sas.com/readOnlyRootFilesystem: "false".
docker run --rm \
  -v $(pwd):/cwd/deploy \
  sas-orchestration \
  create sas-deployment-cr \
  --deployment-data https://example.com/SASViyaV4_69SWC4_certs.zip \
  --license https://example.com/SASViyaV4_69SWC4_lts_2021_license_2020-09-08T105930.jwt \
  --user-content git::https://user:token@git.example.com/repository.git//viya1 \
  --cadence-name lts \
  --cadence-version 2020.1 \
  > viya1-sasdeployment.yaml
|
The generated custom resource would include this content:
...
---
apiVersion: orchestration.sas.com/v1alpha1
kind: SASDeployment
metadata:
  annotations:
    environment.orchestration.sas.com/readOnlyRootFilesystem: "false"
  creationTimestamp: null
  name: sas-viya
spec:
  caCertificate:
    url: https://example.com/SAS_CA_Certificate.pem
  cadenceName: lts
  cadenceVersion: "2020.1"
  clientCertificate:
    url: https://example.com/entitlement_certificate.pem
  license:
    url: https://example.com/SASViyaV4_69SWC4_lts_2021_license_2020-09-08T105930.jwt
  repositoryWarehouse: {}
  userContent:
    url: git::https://user:token@git.example.com/repository.git//viya1
status:
...
Notice that the custom resource contains the information that is included in the
command.
Note: For more information about the fields in the SASDeployment custom
resource, see “Fields in the SASDeployment Custom Resource” on page 156.
Revise the Custom Resource for Proxy Information
Note: If you are not using a forward proxy for your cluster, skip this section. If you
would like to use a proxy for all SAS Viya platform deployments in a cluster, you
must modify the deployment operator manifest file. For details, see “Configure Proxy
Information” on page 17.
To define the proxy for a single SAS Viya platform deployment, add the following
lines to the metadata/annotations
block of the custom resource:
environment.orchestration.sas.com/HTTP_PROXY: proxy-URL-for-HTTP-requests
environment.orchestration.sas.com/HTTPS_PROXY: proxy-URL-for-HTTPS-requests
Additionally, you can add a line that defines which requests should not go through
the proxy:
environment.orchestration.sas.com/NO_PROXY: do-not-proxy-list
The do-not-proxy-list is a comma-separated list of host names, fully qualified host
names, and IP addresses that the proxy should ignore.
Here is an example of a custom resource that includes proxy information:
...
---
apiVersion: orchestration.sas.com/v1alpha1
kind: SASDeployment
metadata:
  annotations:
    environment.orchestration.sas.com/readOnlyRootFilesystem: "false"
    environment.orchestration.sas.com/HTTP_PROXY: http://webproxy.example.com:5000
    environment.orchestration.sas.com/HTTPS_PROXY: http://webproxy.example.com:5000
    environment.orchestration.sas.com/NO_PROXY: localhost,noproxy.example.com,kubernetes.default.svc,10.96.0.1
  creationTimestamp: null
  name: sas-viya
...
Note: The following values must be included in the list of values for the
NO_PROXY variable:
nkubernetes.default.svc (the Kubernetes API server)
nthe value of the KUBERNETES_SERVICE_HOST environment variable for the
cluster
Revise the Custom Resource for Red Hat
OpenShift
Note: If your deployment is not running on Red Hat OpenShift, you should skip this
section.
If your deployment is running on Red Hat OpenShift, you must add an annotation to
the SAS Deployment custom resource. In the metadata/annotations
block of the
custom resource, add the following line:
environment.orchestration.sas.com/FLATTENED_IMAGE_REGISTRY: "true"
Here is an example:
...
---
apiVersion: orchestration.sas.com/v1alpha1
kind: SASDeployment
metadata:
  annotations:
    environment.orchestration.sas.com/readOnlyRootFilesystem: "false"
    environment.orchestration.sas.com/FLATTENED_IMAGE_REGISTRY: "true"
  creationTimestamp: null
  name: sas-viya
...
Deploy the Software
Deployment Using the SAS Viya Platform
Deployment Operator
Command and Output
Because the operator is actually running as a result of the last command that you
performed in “Apply the SAS Viya Platform Deployment Operator Resources to the
Cluster” on page 18, the operator responds to any changes to the SASDeployment
custom resource by applying those changes. Therefore, to perform the initial
deployment, run the following command as an administrator with local cluster
permissions to apply the SASDeployment custom resource:
kubectl -n name-of-namespace apply -f $deploy-sasdeployment.yaml
Note: Because $deploy/sas-bases
is restricted from modification, the operator
generates a sas-bases folder based on the cadence information you supplied. This
folder plus user-supplied content (such as the base kustomization.yaml file) is used
for deploying or updating your software.
To determine the status of the deployment, run the following command:
kubectl -n name-of-namespace get sasdeployment
Here is an example of the output:
NAME    STATE       CADENCENAME   CADENCEVERSION   CADENCERELEASE           AGE
viya1   SUCCEEDED   stable        2020.1.3         20210304.1614817334881   130m
The STATE field cycles through several values. The field value starts with PENDING, then RECONCILING, and finishes in either SUCCEEDED or FAILED. For more information
about communications from the SAS Viya Platform Deployment Operator, see
“Communications from the Operator” on page 162.
Note: SAS recommends that you save a copy of the SASDeployment custom
resource locally or to Git as a backup.
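If you prefer to follow the STATE transitions as they happen rather than rerunning the command, the standard kubectl watch flag works here as well:
kubectl -n name-of-namespace get sasdeployment --watch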
Initial Troubleshooting
When the SAS Viya Platform Deployment Operator is not working as expected,
three different sources can be used to diagnose problems. If you need to contact
SAS Technical Support for help, be sure to share the output from all three of these
sources.
Note: After the deployment, the log from the SAS Viya Platform Deployment
Operator Reconcile Job might contain the following message:
Warning: 'vars' is deprecated. Please use 'replacements' instead. [EXPERIMENTAL] Run 'kustomize edit fix' to update your Kustomization automatically.
If this message is displayed, it can safely be ignored.
Log from the SAS Viya Platform Deployment
Operator Pod
The log from the SAS Viya Platform Deployment Operator pod can be useful in
diagnosing problems that might be preventing the SAS Viya Platform Deployment
Operator from deploying the SAS Viya platform. By default, that pod is named sas-deployment-operator-hash. The Kustomize tool appends the hash value during the deployment of the SAS Viya Platform Deployment Operator. An example pod name is sas-deployment-operator-57f567f7bc-drg5z.
Use the following command to generate log output:
kubectl \
  logs \
  -n name-of-deployment-operator-namespace \
  deployment-operator-pod-name
SASDeployment Custom Resource
The .status field of a SASDeployment custom resource contains information about the last attempt to deploy the SAS Viya platform. For complete details about this field, see “Communications from the Operator” on page 162. Specifically, the .status.messages field contains all the messages from the last Reconcile Job that was started by the SAS Viya Platform Deployment Operator. These messages relate to fetching URLs, running Kustomize, and running kubectl.
Use the following command to generate output for the entire SASDeployment
custom resource:
kubectl \
  get sasdeployments \
  -n name-of-SAS-Viya-namespace \
  -o yaml
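If you want only the messages rather than the full resource, a JSONPath query against the .status.messages field described above can be used; the resource name viya1 is an example:
kubectl -n name-of-SAS-Viya-namespace get sasdeployment viya1 \
  -o jsonpath='{.status.messages}'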
Log from the Reconcile Job
The log from the SAS Viya Platform Deployment Operator Reconcile Job can be
useful in diagnosing problems with deploying a particular SASDeployment custom
resource. By default, that Job is named sas-deployment-operator-reconcile-hash. A unique Job is associated with each deployment attempt. All these Jobs are
located in the same namespace as the SASDeployment custom resource that they
are deploying, providing a historical record of those attempts. The Jobs are removed
automatically after the associated SASDeployment custom resource is removed
from Kubernetes.
Depending on cluster settings, the pod that is run by the Job might not be available
after the process exits. However, if the pod remains, use the following command to
generate output for its Job log:
kubectl \
  logs \
  -n name-of-SAS-Viya-namespace \
  reconcile-Job-pod-name
Remediation
If an issue prevents the successful deployment of your software and one of the
sources described above indicates the issue is associated with content in $deploy/site-config
or the base kustomization.yaml file, take the following steps to
address the issue before contacting SAS Technical Support:
1Make corrections for the error. Debugging can include reviewing example files
for formatting, file names, or path specifications. The base kustomization.yaml
file can also be reviewed to ensure it was revised as necessary. To help with
debugging, refer to the appropriate documentation, including README files.
2Rebuild the SASDeployment custom resource using the instructions at “Run the
create sas-deployment-cr Command” on page 79.
3Apply the custom resource using the instructions at “Command and Output” on
page 85.
Deployment Using the sas-orchestration Command
Install the Orchestration Tool
Before you can issue the command to deploy your software, you must first install the
orchestration tool. Follow the instructions at “Install the Orchestration Tool” on page
77.
Command
After the orchestration tool is installed, run the following command to deploy your
software:
docker run --rm \
  -v $(pwd):mount-for-working-directory-inside-container \
  -v "mount-for-kubeconfig-file-location-inside-container" \
  -e "KUBECONFIG=assignment-of-kubeconfig-file-within-container" \
  [-e FLATTENED_IMAGE_REGISTRY=true \]
  sas-orchestration \
  deploy \
  --namespace name-of-namespace \
  --deployment-data certificates-information \
  --license license-information \
  --user-content location-of-deployment-files \
  --cadence-name stable-or-lts \
  --cadence-version cadence-version-number \
  [--cadence-release cadence-release-number \]
  [--image-registry mirror-registry-location \]
  [--repository-warehouse repository-warehouse-location \]
Note: The -e FLATTENED_IMAGE_REGISTRY=true
option should only be used if you
are deploying from an image registry on Red Hat OpenShift.
Here is a description of the values to be substituted for the variables in the
command:
Note: Because $deploy/sas-bases
is restricted from modification, the
orchestration tool generates a sas-bases folder based on the cadence information
you supplied. This folder plus user-supplied content (such as the base
kustomization.yaml file) is used for deploying or updating your software.
Note: For information about all the flags available for the deploy
command, use the
help flag:
docker run --rm \
  sas-orchestration \
  deploy \
  --help
mount-for-working-directory-inside-container
The path at which the current working directory should be mounted inside the
container.
mount-for-kubeconfig-file-location-inside-container
The mounted location of the cluster's configuration file.
assignment-of-kubeconfig-file-within-container
The KUBECONFIG
environment variable pointing to the location of the kubeconfig
file within the container.
name-of-namespace
The namespace where the software is to be deployed.
certificates-information
The location of the *-certs.zip file. It can be a directory path, which includes the
mount for the working directory, or a go-getter URL.
Note: If you use a go-getter URL for any of the values in this command, that go-
getter should reference a Git repository. If it is a private Git repository, the URL must
include the user ID and the personal access token (PAT). If it is a public Git
repository, the user ID and PAT are not required. The URL must use the git::https
protocol.
Do not use a go-getter URL to refer to local files.
license-information
The location of the license, which can be a directory path, including the mount
for the working directory, or a go-getter URL.
location-of-deployment-files
The location of the $deploy directory. This can be a directory path, including the
mount for the working directory, or a go-getter URL.
|
The location of the $deploy directory. This can be a directory path, including the
mount for the working directory, or a go-getter URL.
stable-or-lts
Use stable for software in the Stable cadence, or use lts for the Long-Term Support cadence.
cadence-version-number
The cadence version number of the software to be deployed (for example,
2020.1.4).
[cadence-release-number ] (optional)
The latest cadence release or a specific cadence release of the cadence version
number of the software to be deployed.
Note: Because the orchestration tool generates an internal sas-bases folder based
on the information in this command, you can ensure that the data is consistent by
reviewing the $deploy/sas-bases/.orchestration/cadence.yaml
file. Ensure that
the cadence-type, cadence-version-number, and cadence-release-number flags in the command match the name, version, and release fields, respectively, of the cadence.yaml file.
[mirror-registry-location ] (optional)
The URL for the docker image registry (for example, registry.example.com). This flag is needed if you are deploying with a mirror registry.
Note: If you are deploying on Red Hat OpenShift, the URL must be in the following format: service-name.name-of-registry-namespace.svc:port/platform-namespace (for example, image-registry.openshift-image-registry.svc:5000/myviya). Use the same value you used in the configMapGenerator block of the base kustomization.yaml file (see step 2 of “Using the sas-orchestration Tool on Red Hat OpenShift” on page 39).
[repository-warehouse-location ] (optional)
The URL for the warehouse describing what should be deployed. This flag is
needed if you are managing a dark environment.
Note: The files that are pulled in to deploy the software must be text files. If you must use a binary file, it should be added to the custom resource by using a go-getter URL. For information about go-getter URLs, see https://github.com/hashicorp/go-getter.
Example
Here is an example of the sas-orchestration deploy command that includes the following values.
nThe directory should be mounted in /cwd/ in the container.
nThe software is being deployed in the viya1 namespace.
nThe *-certs.zip file is located at /cwd/SASViyaV4_69SWC4_certs.zip.
nThe license file from SAS is located at /cwd/SASViyaV4_69SWC4_stable_2022.12_license_2022-12-08T105930.jwt.
nThe $deploy directory is /cwd/deploy.
nThe software being deployed is Stable 2022.12.
docker run --rm \
  -v $(pwd):/cwd/ \
  -v "/home/user/.kube/config:/kube/config" \
  -e "KUBECONFIG=/kube/config" \
  sas-orchestration \
  deploy \
  --namespace viya1 \
  --deployment-data /cwd/SASViyaV4_69SWC4_certs.zip \
  --license /cwd/SASViyaV4_69SWC4_stable_2022.12_license_2022-12-08T105930.jwt \
  --user-content /cwd/deploy \
  --cadence-name stable \
  --cadence-version 2022.12
IMPORTANT Provider-specific code has been removed from the open
source Kubernetes code base in Kubernetes 1.26. With Google Kubernetes
Engine (GKE) 1.26, Google-specific artifacts are required for deployment.
The sas-orchestration deploy command can only be used with GKE 1.26 by
mounting the provider-specific artifacts to the sas-orchestration container.
Here is the command format for Google cloud (the provider-specific artifacts
are highlighted):
docker run --rm \
  -v $(pwd):mount-for-working-directory-inside-container \
  --user $(id -u):$(id -g) \
  -v location-of-the-GKE-gcloud-auth-plugin:location-in-the-container \
  -v location-of-authentication-files-used-by-Google-CLI:location-in-the-container \
  -v "mount-for-kubeconfig-file-location-inside-container" \
  -e "KUBECONFIG=assignment-of-kubeconfig-file-within-container" \
  -e "PATH=append-location-of-the-GKE-gcloud-auth-plugin-to-existing-path" \
  sas-orchestration \
  deploy \
...
Here is an example of the command:
docker run --rm \
  -v $(pwd):/cwd/ \
  --user $(id -u):$(id -g) \
  -v /install/google-cloud-sdk:/usr/lib64/google-cloud-sdk \
  -v "$HOME"/.config/gcloud:/.config/gcloud \
  -v "/home/user/.kube/config:/kube/config" \
  -e "KUBECONFIG=/kube/config" \
  -e "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/sas/viya/home/bin/:/usr/lib64/google-cloud-sdk/bin" \
  sas-orchestration \
  deploy \
...
Deployment Using Kubernetes Commands
Note: If you have deployed the SAS Viya Platform Deployment Operator, these
commands are not necessary since the operator deploys your software for you. For
more information, see $deploy/sas-bases/examples/deployment-operator/deploy/README.md
(for Markdown) or $deploy/sas-bases/docs/sas_viya_deployment_operator.htm
(for HTML).
IMPORTANT The following kubectl commands require that the KUBECONFIG environment variable is set. For information about setting that variable, see The KUBECONFIG environment variable. Alternatively, you can add the --kubeconfig=namespace-kubeconfig-file argument to each kubectl command for the command to work properly.
1On the kubectl machine, create the Kubernetes manifest:
kustomize build -o site.yaml
The following message might be displayed:
Warning: 'vars' is deprecated. Please use 'replacements' instead. [EXPERIMENTAL] Run 'kustomize edit fix' to update your Kustomization automatically.
If the message is displayed, it can safely be ignored.
2Apply cluster-api resources to the cluster. As an administrator with cluster
permissions, run
kubectl apply --selector="sas.com/admin=cluster-api" --server-side --force-conflicts -f site.yaml
kubectl wait --for condition=established --timeout=60s -l "sas.com/admin=cluster-api" crd
The kubectl apply command might cause the following messages to be displayed:
error: no objects passed to apply
resource mapping not found for name: "foo" namespace: "<name-of-namespace>" from "site.yaml": no matches for kind "bar" in version "baz"
ensure CRDs are installed first
If either message is displayed, it can safely be ignored.
3As an administrator with cluster permissions, run
kubectl apply --selector="sas.com/admin=cluster-wide" -f site.yaml
4As an administrator with local cluster permissions, run
kubectl apply --selector="sas.com/admin=cluster-local" -f site.yaml --prune
The kubectl apply command might cause the following message to be displayed:
Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release.
If the message is displayed, it can safely be ignored.
5As an administrator with namespace permissions, run
kubectl apply --selector="sas.com/admin=namespace" -f site.yaml --prune
The kubectl apply command might cause any of the following messages to be displayed:
error: error pruning nonNamespaced object
Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release.
Warning: path <API-path-for-URLs> cannot be used with pathType Prefix
If any of these messages are displayed, they can safely be ignored.
6If you are performing an update, as an administrator with namespace
permissions, run the following command to prune additional resources not in the
default set.
kubectl apply --selector="sas.com/admin=namespace" -f site.yaml --prune --prune-whitelist=autoscaling/v2/HorizontalPodAutoscaler
7Wait for Kubernetes to create and start the pods. To determine whether the pods
have started:
kubectl -n name-of-namespace get pods
The output of this command looks like this:
NAMESPACE   NAME                                   READY   STATUS    RESTARTS   AGE
d10006      annotations-66dc4479fd-qfqqr           1/1     Running   0          5s
d10006      appregistry-bbbdfb78c-tcllv            1/1     Running   0          5s
d10006      audit-7c4ff4b8b8-zxg8k                 1/1     Running   0          5s
d10006      authorization-79d4f594b9-t9sbx         1/1     Running   0          5s
d10006      cachelocator-668fcdb544-hcxbs          1/1     Running   0          5s
d10006      cacheserver-7dc898d4bf-8dfgx           1/1     Running   0          5s
d10006      casaccessmanagement-64b5769d8f-mlmjf   1/1     Running   0          5s
d10006      casadministration-747746f94c-j2dm2     1/1     Running   0          5s
During startup some pods restart a number of times until other pods are ready. The value in the Status column is Running or Completed when the pods have either fully started or completed their expected function.
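While you wait, a quick way to list only the pods that are not yet Running or Completed is a standard kubectl field selector; this is a convenience rather than a required step (Completed pods report the Succeeded phase):
kubectl -n name-of-namespace get pods \
  --field-selector=status.phase!=Running,status.phase!=Succeeded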
Save the $deploy Directory
The files in the $deploy directory are used in subsequent administration tasks with
your deployment, such as updating your software and applying a new license.
Therefore, you must not delete the $deploy directory. Should you choose, you can
move it to a GitOps repository or other location for later use.
IMPORTANT If you move the $deploy directory from its original location,
you must notify other potential administrators of the new location.
Readiness Service
The readiness service checks the status of the SAS Viya platform to determine
whether it is ready for use. The service performs all of its checks every 30 seconds.
After the software is deployed, the service should be consulted to determine
whether the deployment is ready for use. The readiness service is also a useful tool
for the administration of the SAS Viya platform throughout its life.
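As a hedged example, one common way to consult the readiness service is to check its pod in the deployment namespace. The app=sas-readiness label selector is an assumption about how the pod is labeled in your environment and might need to be adjusted:
kubectl -n name-of-namespace get pods -l app=sas-readiness
kubectl -n name-of-namespace logs -l app=sas-readiness --tail=5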